Game AI Science Guide: How will it change the future of games?


This article was first published on the NetEase Game Academy APP, contributed by “NetEase Interactive Entertainment AI Lab”, and released by GameRes with authorization.

【Game Balance】

1. Introduction

Could AI become the game industry's next breakthrough?

Recently, the “Report on China’s Game Industry from January to June 2020” was released. It shows that in the first half of this year, actual sales revenue of China’s game industry grew 22.34% year-on-year, achieving growth against the broader trend. However, the report also pointed out that as the game market expands, the demographic dividend is gradually disappearing, growth in the user base is slowing, and competition in the industry is intensifying.

(Data source: China Audiovisual and Digital Publishing Association “China Game Industry Report January-June 2020”)

Alongside the battle for users come increasingly stringent regulatory policies. Since the license (“version number”) freeze in 2018, the number of game licenses issued has continued to decline, as regulators try to steer the industry toward innovation and higher quality by controlling quantity.

In response to this trend, “going overseas” and “cloud gaming” have once again become hot topics. Whichever path they choose, game companies face a survival-of-the-fittest test. AI may give developers more ways to accelerate innovation, build high-quality games, create better experiences, and capture more market share.

2. AI can solve game design problems

AI was in fact already being used to solve game design problems in the early days of the industry, for example in the classic arcade game “Pac-Man” that swept the world in the 1980s.

In this game, the player must evade four differently colored ghosts while eating all the dots in the maze. The four ghosts use different chasing algorithms, so instead of following the player in single file they approach along different paths and in different ways. “Pac-Man” was the first game in which characters displayed such distinct personalities. The game also uses a meta AI to dynamically adjust the difficulty for players of different skill levels, providing a more personalized experience.

(Image from the 1980 “Pac-Man” design document: the behavior table for the four ghosts in different states)

Subsequently, more and more video games began to incorporate AI techniques. In most of them, AI played the role of mobs or bosses, with the main purpose of controlling difficulty and delivering a better game experience.

In recent years, as games and technology have advanced, AI has taken on more roles and functions. In “The Witcher 3”, for example, AI can automatically generate cutscenes from predefined actions and scenes: camera placement, character performances, and action changes are all completed automatically.

In this article, we will focus on how AI assists game design from the perspective of game balance.

3. AI helps enhance game balance

With the continuous development of artificial intelligence, and of reinforcement learning in particular, we are probably most familiar with the AIs that have dominated games in the news: AlphaZero, which shut out Go world champion Ke Jie; “AlphaStar”, which outperformed 99.8% of StarCraft players; the Dota 2 AI “OpenAI Five”, which defeated a world-champion e-sports team; and Pluribus, which beat professional human players in six-player no-limit Texas Hold’em…

However, the purpose of in-game AI is not to crush human players, but to provide well-matched battles, keep difficulty balanced, and let players enjoy competing. The following sections explain how game AI helps maintain balance from three angles: combat bots, test bots, and matchmaking.

Combat bots

Computer-controlled characters such as mobs, bosses, and NPCs are not only necessary to a game; sometimes they become one of its main selling points. The “Souls” series and “Sekiro: Shadows Die Twice” from Japanese developer FromSoftware, for example, are famous for their punishingly difficult combat design, which has sparked extensive discussion among players and stoked many skilled players' appetite for a challenge.



“Sekiro”

In PvP games, automated combat bots have many other uses: bots that fill out matches during off-peak hours, “mercy” bots that give players on a losing streak an easier game, as well as bots for new-player tutorials and sparring partners for advanced practice.

However, building a clever and engaging game bot is not easy. Simply tuning a character's stats to adjust AI strength does not work for every game: an opponent that is too hard or too easy will annoy players and drain the fun out of the game. What we really need is a smart, flexible AI of just the right difficulty. That means developers must design sensible AI behavior logic for every situation that may arise in the game, and express specific in-game behaviors through the character's control interface.

Traditionally, the industry has taken a rule-driven approach to NPCs and bots (state machines, behavior trees, or more complex systems such as utility systems, GOAP, and HTN), in which hand-defined conditions trigger pre-designed behaviors. The more conditions and the more complex the behaviors, the smarter the character appears. But the limitations of rule-driven AI are equally obvious, in two main respects: first, rules can never cover every situation, so the resulting bot tends to behave rigidly; second, designing and implementing complex behavior logic takes a great deal of manpower.
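
To make the rule-driven approach concrete, here is a minimal behavior-tree sketch in Python. The node types mirror the standard Selector/Sequence pattern; the guard NPC, its blackboard keys, and the "attack"/"patrol" actions are invented purely for illustration and do not correspond to any particular engine's API.

```python
# Minimal behavior-tree sketch: a Selector tries children until one succeeds,
# a Sequence requires every child to succeed. All names are illustrative only.

SUCCESS, FAILURE = "success", "failure"

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    def __init__(self, *children): self.children = children
    def tick(self, blackboard):
        for child in self.children:
            if child.tick(blackboard) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    def __init__(self, predicate): self.predicate = predicate
    def tick(self, blackboard):
        return SUCCESS if self.predicate(blackboard) else FAILURE

class Action:
    def __init__(self, effect): self.effect = effect
    def tick(self, blackboard):
        self.effect(blackboard)
        return SUCCESS

# A toy guard NPC: attack if an enemy is visible and in range, otherwise patrol.
guard_ai = Selector(
    Sequence(
        Condition(lambda bb: bb["enemy_visible"]),
        Condition(lambda bb: bb["enemy_distance"] < 5.0),
        Action(lambda bb: bb.update(current_action="attack")),
    ),
    Action(lambda bb: bb.update(current_action="patrol")),
)

blackboard = {"enemy_visible": True, "enemy_distance": 3.2}
guard_ai.tick(blackboard)
print(blackboard["current_action"])  # -> "attack"
```

Even this toy tree shows the scaling problem: every new situation the designer wants the bot to handle means another hand-written condition or branch.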

AI techniques, reinforcement learning in particular, happen to break through both limitations. We can build a neural network that takes the game state as input and outputs character control actions; through training, we shed the dependence on hand-designed rules and obtain a smarter bot at the same time.

UE4’s behavior tree



Typical reinforcement learning interaction process

Reinforcement learning is a family of AI methods that does not rely on a fixed dataset; instead, it discovers the optimal strategy for a given task through constant trial and error while interacting with a specific environment. Unlike deep-learning tasks such as image classification or machine translation, a reinforcement learning model needs no pre-prepared data. The model joins the game as a character, tries different actions (Action) in the various situations (State) that may arise, and adjusts its strategy according to the game's feedback (Reward) until it can reliably complete the designated task. Thanks to the powerful generalization ability of neural networks, reinforcement learning models can be used to build bots and NPCs, and can even handle games as complex as StarCraft, with a theoretical action space of up to 10^26 choices per frame.

DeepMind's illustration of the operational complexity of StarCraft
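
As a rough illustration of the State-Action-Reward loop described above, the sketch below trains a tabular Q-learning agent against a toy turn-based duel. The `ToyDuelEnv` environment, its reward scheme, and all hyperparameters are invented for the example; production game AIs such as AlphaStar replace the table with large neural-network policies and far richer environments.

```python
import random
from collections import defaultdict

# Toy turn-based duel: the agent's state is (own_hp, enemy_hp); actions are
# "attack" (hurt the enemy) or "heal" (recover hp). Purely illustrative.
class ToyDuelEnv:
    def reset(self):
        self.hp, self.enemy_hp = 5, 5
        return (self.hp, self.enemy_hp)

    def step(self, action):
        if action == "attack":
            self.enemy_hp -= 1
        else:                        # "heal"
            self.hp = min(5, self.hp + 1)
        if self.enemy_hp > 0:        # the enemy strikes back while alive
            self.hp -= 1
        done = self.hp <= 0 or self.enemy_hp <= 0
        reward = 1.0 if self.enemy_hp <= 0 else (-1.0 if self.hp <= 0 else 0.0)
        return (self.hp, self.enemy_hp), reward, done

ACTIONS = ["attack", "heal"]
q = defaultdict(float)               # Q[(state, action)] -> expected return
alpha, gamma, epsilon = 0.1, 0.95, 0.1

env = ToyDuelEnv()
for episode in range(5000):
    state = env.reset()
    for _ in range(50):              # cap episode length, just in case
        if random.random() < epsilon:            # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = env.step(action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break

print(max(ACTIONS, key=lambda a: q[((5, 5), a)]))  # the learned opening move
```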

In addition, comparing the two game-AI development workflows in the figures below, we can see that reinforcement learning converts the labor cost of designing game AI into the labor cost of designing a model plus the machine cost of training it. So once the desired game AI is complex enough, the reinforcement learning approach gains a cost advantage.

Traditional AI development process



Reinforcement learning AI development process

Test bots

Keeping a game's combat system balanced has always been one of the hard parts of game development. After adding a new hero to a MOBA, for example, designers must make sure the character's skills and numbers behave as intended, and also that no unbeatable combo emerges when the character is paired with other heroes, which would undermine competitive fairness. This process usually takes a lot of manpower for repeated playtesting and design adjustment.

As noted above, reinforcement learning produces a strong game AI through constant trial and error in the environment. Building on this “continuous trial and error” property, we can turn the reinforcement learning model into a powerful automated game-testing tool.

Specifically, we first train a game AI close to the level of high-end players, then run simulated battles against the AIs of existing characters. This lets us generate realistic match data before the new character goes live, adjust its design accordingly, and evaluate its impact on overall balance. At the same time, if there are vulnerabilities in the game, the model is very likely to discover and exploit them during its massive number of exploration attempts, indirectly helping us dig out bugs that are deeply hidden or only triggered by extreme play.
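
The sketch below shows what the balance-evaluation loop might look like once such an AI exists: run many simulated matches, estimate win rates with a confidence interval, and flag suspicious matchups. The `simulate_match` stub and the hero names are hypothetical stand-ins for a real self-play simulation.

```python
import random
from math import sqrt

def simulate_match(new_hero, opponent):
    """Placeholder for a real self-play simulation between two trained agents.
    Here the outcome is a coin flip; in practice this would run a full game
    between the trained AI for `new_hero` and the AI for `opponent`."""
    return random.random() < 0.5

def estimate_win_rate(new_hero, opponent, n_matches=2000):
    wins = sum(simulate_match(new_hero, opponent) for _ in range(n_matches))
    p = wins / n_matches
    margin = 1.96 * sqrt(p * (1 - p) / n_matches)   # ~95% confidence half-width
    return p, margin

# Flag matchups whose simulated win rate drifts too far from 50%.
existing_roster = ["hero_a", "hero_b", "hero_c"]    # hypothetical hero names
for opponent in existing_roster:
    p, margin = estimate_win_rate("new_hero", opponent)
    if abs(p - 0.5) > 0.05 + margin:
        print(f"new_hero vs {opponent}: win rate {p:.1%}, review balance")
```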

In addition, for relatively simple parts of a game, such as a small dungeon run in an MMORPG, search algorithms or supervised learning on a modest amount of human player data can also quickly produce bots that run through the content for testing.

Skill rating and intelligent matchmaking

Most players have had the experience of being dragged down by teammates or utterly crushed by opponents. With an AI matchmaking mechanism, we can evaluate each player's skill accurately, match them with teammates and opponents of a similar level, keep battles fair, and improve the overall experience.

Compared with traditional methods such as Elo, TrueSkill2 can incorporate prior information (such as purchased equipment and accumulated experience) and posterior information (such as in-match kill counts, disconnections, and early quits). It uses a probabilistic graphical model in which messages propagate through the model, continually revising the assessment of the player's level and converging to an accurate estimate within just a few matches.



TrueSkill2 schematic
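
TrueSkill2 itself is not available as a simple formula, but the core Bayesian idea can be illustrated with the classic TrueSkill one-versus-one update below. This is a simplified sketch that ignores draws and the dynamics factor, and it does not include TrueSkill2's extra conditioning on in-match statistics; the constants are the commonly used defaults.

```python
import math

BETA = 25 / 6          # per-game performance noise (common default)

def phi(x):            # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):            # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def update_1vs1(winner, loser):
    """One Bayesian skill update after a win/loss game (no draws, no dynamics).
    Each player is (mu, sigma): mean skill and uncertainty about it."""
    (mu_w, s_w), (mu_l, s_l) = winner, loser
    c = math.sqrt(2 * BETA ** 2 + s_w ** 2 + s_l ** 2)
    t = (mu_w - mu_l) / c
    v = phi(t) / Phi(t)            # how surprising the observed result was
    w = v * (v + t)                # how much to shrink the uncertainties
    mu_w += s_w ** 2 / c * v
    mu_l -= s_l ** 2 / c * v
    s_w *= math.sqrt(max(1 - s_w ** 2 / c ** 2 * w, 1e-6))
    s_l *= math.sqrt(max(1 - s_l ** 2 / c ** 2 * w, 1e-6))
    return (mu_w, s_w), (mu_l, s_l)

alice, bob = (25.0, 25 / 3), (25.0, 25 / 3)     # everyone starts equally uncertain
alice, bob = update_1vs1(winner=alice, loser=bob)
print(alice, bob)   # alice's mean rises, bob's falls, both sigmas shrink
```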

Beyond skill-based matching, players may also have social needs. By analyzing player profiles with AI algorithms and matching accordingly, we can satisfy these higher-level wishes as well.

That concludes this part of our sharing on how AI can enhance game balance~

【Game Environment】

What pains every FPS player the most? “Knowing the opponent is cheating but being unable to do anything about it” is certainly one answer. In Overwatch, for example, when you run into an aimbot user, no matter how you juke and weave, the cheat still lands headshots on you. No human reaction time can compete, and it is maddening.

There is no doubt that malicious behavior such as cheating and verbal abuse in an otherwise exciting game will ruin the normal experience for players. AI can combat game cheats through image recognition and player-profile analysis, and can filter sensitive content such as profanity using speech and language recognition, making the game environment fairer, cleaner, and more orderly.

1. Combating cheats

Game cheats come in many varieties: map hacks and value modifiers in competitive games, and auto-AFK and auto-farming bots in RPGs. The traditional countermeasure is to apply security patches on the client side, but as the saying goes, however high the wall, the ladder gets higher: cheats keep iterating too, to the misery of every major game company.

Take the wallhack in battle-royale (“chicken dinner”) games: whether the player hides in a building or behind cover, the cheater sees them plainly. In essence, the cheater has illegally modified the client so that the screen displays information they should not have. AI can inspect the game screen periodically, or whenever certain conditions are met (for example, a player lands a pistol headshot on a target outside their line of sight), leaving cheating nowhere to hide.



A wallhack in a battle-royale game

The basic structure behind using AI image recognition to fight cheats is the convolutional neural network (CNN). A CNN is usually a multi-layer network that “observes” different regions of an image through mathematical operations such as convolution. The output computed for each region passes through a non-linear activation function and is aggregated at higher layers, until the target output is produced at the top. For anti-cheat, the CNN's output can be a binary judgment: is this game image suspected of showing a cheat? As long as the AI has seen enough cheat screenshots during training, it can memorize their characteristics and monitor for cheats without blind spots.



Convolutional Neural Network
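
Below is a minimal PyTorch sketch of such a binary screenshot classifier. The network shape, the input size, and the random stand-in batch are illustrative only; a production anti-cheat model would be trained on large volumes of labelled screenshots.

```python
import torch
from torch import nn

class CheatScreenshotCNN(nn.Module):
    """Tiny binary classifier: does a game screenshot look like a cheat view?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)   # one logit: cheat vs. normal

    def forward(self, x):                    # x: (batch, 3, H, W) screenshots
        return self.classifier(self.features(x).flatten(1)).squeeze(1)

model = CheatScreenshotCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: 8 random "screenshots" with fake labels (1 = cheat suspected).
images = torch.rand(8, 3, 128, 128)
labels = torch.randint(0, 2, (8,)).float()

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```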

Similarly, AI can review images uploaded by players, such as avatars, to block pornographic or violent content. The face recognition used in anti-addiction systems for minors works on a similar principle: simply change the input from one picture to two, to verify that the face captured in real time and the registered face belong to the same person.
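
One common way to realize the “two pictures” comparison is to run both faces through the same embedding network and compare the embeddings, as in the illustrative sketch below. The toy embedding network and the similarity threshold are assumptions; a real system would use a pretrained face-recognition model.

```python
import torch
from torch import nn
import torch.nn.functional as F

class FaceEmbedder(nn.Module):
    """Toy embedding network; a real system would use a pretrained face model."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(64, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)   # unit-length face embeddings

def same_person(embedder, face_a, face_b, threshold=0.7):
    # Cosine similarity between the registered face and the live verification face.
    with torch.no_grad():
        sim = (embedder(face_a) * embedder(face_b)).sum(dim=1)
    return sim > threshold

embedder = FaceEmbedder()
registered = torch.rand(1, 3, 112, 112)   # face captured at registration
live = torch.rand(1, 3, 112, 112)         # face captured during verification
print(same_person(embedder, registered, live))
```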

2. Chat content filtering and review

Unmoderated speech on public chat channels brings many risks. Insults, pornography, politics, violence, and advertising poison the game environment, disrupt normal communication between players, and plague many game studios. The traditional solution is keyword filtering, but players' phrasing keeps evolving: simple keyword matching is easily bypassed with inserted characters, homophones, slang, and other tricks, and profit-driven gray-market studios keep inventing new ones.

Here we can use the strong generalization ability of AI models to identify sensitive content, for example classifiers based on TextCNN or the Transformer. TextCNN applies convolutional neural networks to text classification; its structure is simple and fast, making it well suited to chat messages that are short, with obvious violating features and concentrated information.



TextCNN schematic
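
A minimal TextCNN sketch in PyTorch for short chat messages follows; the vocabulary size, filter widths, and the random token batch are illustrative assumptions.

```python
import torch
from torch import nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Convolutions of several widths over word embeddings, max-pooled, classified."""
    def __init__(self, vocab_size=20000, embed_dim=128, num_classes=2,
                 filter_widths=(2, 3, 4), channels=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, channels, kernel_size=w) for w in filter_widths
        )
        self.fc = nn.Linear(channels * len(filter_widths), num_classes)

    def forward(self, token_ids):               # (batch, seq_len) word indices
        x = self.embedding(token_ids).transpose(1, 2)    # -> (batch, embed, seq)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))         # logits: ok vs. violating

model = TextCNN()
fake_chat_batch = torch.randint(0, 20000, (16, 30))      # 16 messages, 30 tokens each
print(model(fake_chat_batch).shape)                      # torch.Size([16, 2])
```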

The Transformer is a network architecture proposed by Google in 2017. Compared with TextCNN it is more complex and more flexible. Its self-attention mechanism automatically adjusts each word's weight according to how relevant the surrounding context is, so that the informative features stand out. Multi-head attention splits the word vector into several segments, each computed with its own attention matrix, allowing the model to capture more diverse contextual information.
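
The self-attention weighting described above fits in a few lines. The sketch below hand-rolls scaled dot-product attention for clarity and then calls PyTorch's built-in multi-head attention module; all shapes are illustrative.

```python
import math
import torch
from torch import nn

def scaled_dot_product_attention(q, k, v):
    """Each position's output is a weighted sum of all values; the weights come
    from how relevant (q . k) the other positions' content is to this position."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = scores.softmax(dim=-1)          # relevance of every token to every token
    return weights @ v, weights

# Illustrative shapes: a batch of 4 chat messages, 30 tokens, 128-dim embeddings.
x = torch.rand(4, 30, 128)
q_proj, k_proj, v_proj = (nn.Linear(128, 128) for _ in range(3))
out, attn = scaled_dot_product_attention(q_proj(x), k_proj(x), v_proj(x))
print(out.shape, attn.shape)                  # (4, 30, 128) and (4, 30, 30)

# Multi-head attention splits the 128 dimensions across several heads so each head
# can focus on different contextual patterns; PyTorch ships this as a module.
mha = nn.MultiheadAttention(embed_dim=128, num_heads=8, batch_first=True)
out_mh, _ = mha(x, x, x)
print(out_mh.shape)                           # (4, 30, 128)
```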

The volume of text and image content in games is huge and varied, and traditional methods struggle to identify and filter sensitive material effectively. Deep learning, with its strong learning ability, can handle it with ease and help build a more orderly game world.

That concludes our sharing of technologies for improving the game environment with AI.

【Graphics Quality】

For players, the graphics are the “first impression” of a game, and to a large extent they determine whether it is “love at first sight” or “no interest at all”. Exquisite graphics have therefore become a guarantee of expressiveness for games of every kind.

Take “Assassin’s Creed: Origins”, a cherished favorite of countless veteran gamers: its excellent graphics have always been one of its biggest draws. Beyond its meticulous depiction of history and culture, every vivid, natural NPC further strengthens the player's sense of immersion.

But polished games usually demand strong art support, and behind those art assets lie long production times and huge costs. A range of AI technologies can reduce the cost and time of animation production, make more exquisite visuals possible, and enhance the player's experience at the visual level, for example voice-driven facial animation and character motion control.

1. AI voice-driven character facial animation technology

When it comes to character facial animation, the most familiar technique is facial motion capture. Traditional facial capture requires professional equipment and actors, and the costs are high. But besides motion capture there is a cheaper, more broadly applicable piece of AI wizardry: voice-driven character facial animation.

Voice-driven character facial animation framework

First, we need to collect a sufficient amount of speech paired with the corresponding facial expression sequences, and train an end-to-end deep neural network on this data. Whether mapping voice to expression and lip-sync labels, or mapping voice directly to the facial expression controllers of a 3D character, the model generalizes quite well: essentially, a single model can handle voice data across different genders, ages, and languages.
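
As a toy illustration of “audio features in, controller values out”, the sketch below maps mel-spectrogram frames to per-frame blendshape weights. The feature dimensions, the 52-blendshape rig, and the GRU architecture are assumptions made for the example; a production model would be trained end to end on the paired speech and expression data described above.

```python
import torch
from torch import nn

class AudioToBlendshapes(nn.Module):
    """Maps a sequence of audio feature frames to per-frame facial controller values."""
    def __init__(self, n_mels=80, hidden=256, n_blendshapes=52):
        super().__init__()
        self.encoder = nn.GRU(input_size=n_mels, hidden_size=hidden,
                              num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_blendshapes)

    def forward(self, mel_frames):              # (batch, time, n_mels)
        features, _ = self.encoder(mel_frames)
        # Sigmoid keeps each blendshape weight in [0, 1] for the facial rig.
        return torch.sigmoid(self.head(features))   # (batch, time, n_blendshapes)

model = AudioToBlendshapes()
mel = torch.rand(1, 200, 80)        # ~2 s of audio features at 100 frames/s
weights = model(mel)
print(weights.shape)                # torch.Size([1, 200, 52])
```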

Purely voice-based facial animation still trails facial motion capture in accuracy and quality, but cost matters: at the scale of today's AAA games, it is impossible to give every NPC the motion-capture treatment. With this technology, on the one hand, every character in the game can get reasonable expression animation at very low cost; on the other hand, in an international market many games ship with multilingual dubbing, and no studio can afford to capture a separate set of lip-sync animations for each language's voice track. Voice-driven animation can quickly generate lip-sync that matches the pronunciation of each language, giving players a more localized experience.

2. AI facial motion capture technology

Besides the voice-driven facial animation described above, we can combine AI with graphics algorithms to build a lightweight facial motion capture technology for more vivid facial expression animation. It needs only a monocular camera (a normal RGB camera or an RGB-D camera) and no facial markers, and it can transfer an actor's expressions to virtual characters in real time, supporting any expression production method (skeletal animation, blendshapes, Live2D, texture animation, and so on).

The mainstream heavyweight facial capture kits in the industry today are built on mesh-based tracking frameworks with similar underlying principles. The core algorithms were proposed back in the 1990s, but the AI of that era had not yet entered the deep-learning age and could not deliver the precision that motion capture requires, so these systems still rely on markers manually placed on the actor's face. This approach yields ultra-high-precision results, but its operating requirements are harsh and it is hard to use. AI facial motion capture is also built on a mesh-based tracking framework, but from the data to the algorithms to the tool chain it realizes a complete, lightweight facial capture system.

In fact, lightweight facial capture is a better fit for many mobile games. On one hand, because of package-size limits, many games only produce facial animation for the protagonist or a handful of key NPCs, so the demand is modest. On the other hand, mobile games have shorter development cycles than PC and console titles, and project teams are reluctant to sink too much effort into facial animation. AI facial motion capture lowers the operating requirements while keeping reasonable accuracy, improves ease of use and speed, raises the efficiency of art production, and can also serve the needs of design and marketing colleagues for virtual-idol livestreams and UGC gameplay.

3. Character motion control technology

The smoothness of character movement and state transitions is another key factor in visual expressiveness. In a traditional state-machine-based animation control system, building a set of motion systems polished enough to produce smooth, natural movement is far more complex than players imagine. In some AAA action games, the protagonist's animation state machine can easily contain thousands of action nodes, and the jump and transition rules between those nodes are all specified by hand based on the developers' experience; the difficulty of developing and maintaining such a system is easy to imagine.

Driven by this pain point, Ubisoft proposed Motion Matching, which frees developers from complex transition rules and lets them focus on the target motion itself. Given a target action and a motion-capture library, Motion Matching automatically searches for an action sequence that transitions naturally from the current pose to the target action, based on the character's current posture, speed, direction of movement, and other information.
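
In spirit, Motion Matching is a weighted nearest-neighbour search over a motion-capture database: build a feature vector from the current pose, velocity, and desired trajectory, then jump to the best-matching frame. The sketch below uses an invented feature layout and random data purely to show the search step.

```python
import numpy as np

# Hypothetical database: one row per mocap frame, columns are a pose/trajectory
# feature vector (e.g. joint positions, root velocity, future facing direction).
rng = np.random.default_rng(0)
motion_db = rng.normal(size=(100_000, 24)).astype(np.float32)

# Weight trajectory features more heavily than pose features (tuned per game).
feature_weights = np.concatenate([np.full(16, 1.0), np.full(8, 2.0)]).astype(np.float32)

def best_matching_frame(query_features):
    """Return the index of the mocap frame whose features best match the query."""
    diffs = (motion_db - query_features) * feature_weights
    costs = np.einsum("ij,ij->i", diffs, diffs)      # weighted squared distance per frame
    return int(np.argmin(costs))

current_state = rng.normal(size=24).astype(np.float32)   # current pose + desired motion
frame = best_matching_frame(current_state)
print("play animation starting at mocap frame", frame)
```

In production, this brute-force scan is typically replaced with an accelerated index and re-run every few frames as the player's input changes.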

Readers who work on algorithms may find this process familiar. Indeed, setting aside the underlying machinery of 3D skeletal animation, character animation is just sequence data describing the translation and rotation of each joint. The motion-transition problem above is therefore an interpolation or completion problem over sequence data, which is exactly the kind of problem AI handles best.



Illustration of AI motion control results

With AI we can design network architectures specialized for skeletal animation data and train them on large amounts of motion-capture data, so that given the character's current state, environmental information, historical states, and the user's control signals, the model can automatically predict the next frame's motion. Although this approach still faces many engineering difficulties inside real games, compared with Motion Matching it clearly opens up far more room for imagination in character animation control. With the powerful generalization of deep neural networks, AI models can abstract the deep logic of motion from massive data, achieving more natural transitions between actions and better adaptation to the environment.
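
A toy version of this learned approach is sketched below: a network that, given the current pose, a short history window, and the player's control signal, predicts the next frame's pose. All dimensions are illustrative assumptions; real systems are considerably more elaborate.

```python
import torch
from torch import nn

class NextPosePredictor(nn.Module):
    """Predicts the next frame's pose from the current pose, a short history window,
    and the user's control signal (e.g. desired velocity and facing)."""
    def __init__(self, pose_dim=66, history_frames=10, control_dim=4, hidden=512):
        super().__init__()
        in_dim = pose_dim + history_frames * pose_dim + control_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),       # next frame's joint rotations/positions
        )

    def forward(self, pose, history, control):
        x = torch.cat([pose, history.flatten(1), control], dim=1)
        return self.net(x)

model = NextPosePredictor()
pose = torch.rand(1, 66)           # current skeleton pose
history = torch.rand(1, 10, 66)    # the last 10 frames of poses
control = torch.rand(1, 4)         # gamepad-derived control signal
print(model(pose, history, control).shape)   # torch.Size([1, 66])
```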

AI-empowered art has still more tricks: automatically generating dance animations from music; quickly producing outfit textures or “refitting” game equipment from existing costume models and reference art; batch-generating a variety of differentiated character looks from a small number of character head models; or quickly setting or restyling game icons.

As AAA blockbusters keep raising the bar, players expect ever better graphics and deeper immersion. For studios that want to improve visual quality while keeping costs down, AI is surely one of the best options.

That's it for our sharing on AI-empowered game art.


Original: https://mp.weixin.qq.com/s/QrwpVXKTgiKp94vafctfAw
