[Interview] How Neural Networks And Machine Learning Are Making Games More Interesting

Written by alexlash | Published 2019/01/04
Tech Story Tags: artificial-intelligence | games | game-development | gamedev | machine-learning


Machine learning and neural networks are hot topics in many tech areas, and game development is one of them. There, these technologies are being used to make games more interesting.

How this is achieved, which companies are leading adoption and research, when we as users will see notable results, and much more is discussed today. We talk to Vladimir Ivanov, an expert in applying machine learning to games.

The first question: what do you mean when you say that games are “not interesting” and that new technology could fix this?

Well, the thing is pretty simple: outside of human vs. human modes, you have to compete with bots, and often this is not as interesting as it could be. These bots are scripted, i.e., their only advantages over a human player are faster reactions and in-depth knowledge of the map being played.

However, after a few games, a human can recognize the algorithm the bot is following and use it to predict the bot's next moves. The bot can't do anything non-standard, which is why you get bored fast.

Human vs. human mode is not always possible, and it would be cool to create bots that act more like humans to enrich the gaming experience. This is why the development of human-like bots is one of the promising research fields in the gaming industry. Machine learning is a perfect tool to apply here.

Ok, so how can we create better bots that are more interesting to play with?

Training a bot by having it play against other bots is one approach. Let's imagine we've created a game where you can move around a map, shoot enemies, and perform some other actions.

We could then send bots with pre-programmed basic skills onto this map. These bots will play against each other, master their skills, and learn new moves. Thousands of played games can potentially teach a bot a lot of new things, and as a result its behavior becomes less predictable and more human-like than what is possible now. A rough sketch of this self-play idea is shown below.
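To make the self-play idea concrete, here is a minimal toy sketch. Everything in it is a placeholder for illustration: the `Bot` class, the rock-paper-scissors "match", and the preference-update rule stand in for a real game and a real learning algorithm; they are not the setup described in the interview.

```python
import random

class Bot:
    """Toy agent: a table of action preferences, nudged toward whatever won."""
    def __init__(self, actions):
        self.actions = actions
        self.prefs = {a: 1.0 for a in actions}

    def act(self):
        # Sample an action in proportion to its learned preference.
        total = sum(self.prefs.values())
        r, acc = random.uniform(0, total), 0.0
        for action, weight in self.prefs.items():
            acc += weight
            if r <= acc:
                return action
        return self.actions[-1]

    def learn(self, action, won):
        # Reinforce actions from won games, dampen actions from lost ones.
        self.prefs[action] *= 1.1 if won else 0.95

def play_round(bot_a, bot_b):
    """Hypothetical match: rock-paper-scissors stands in for a real game."""
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    move_a, move_b = bot_a.act(), bot_b.act()
    if move_a == move_b:
        return None, move_a, move_b
    winner = bot_a if beats[move_a] == move_b else bot_b
    return winner, move_a, move_b

actions = ["rock", "paper", "scissors"]
bot_a, bot_b = Bot(actions), Bot(actions)
for _ in range(10_000):          # thousands of self-play games
    winner, move_a, move_b = play_round(bot_a, bot_b)
    if winner is not None:
        bot_a.learn(move_a, winner is bot_a)
        bot_b.learn(move_b, winner is bot_b)
```

The point of the sketch is only the loop structure: two agents repeatedly play each other and each updates its behavior from the outcome, with no human opponents involved.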

Another interesting approach is interaction-based learning, where bots are sent onto the map to act as a team.

Sounds interesting, but are there any real-world implementations? Are there examples of real games with such intelligent bots?

The main difficulty with ML and neural networks is the large amount of resources you need to process all the possible action variants. For example, to teach a neural network to recognize an elephant in a picture, you have to show it a lot of images with elephants. With games, the system needs to play an enormous number of matches. You only start getting more or less interesting results after several tens of thousands of parameters have been analyzed.

There are games where you can achieve something like this, but they have to be simple. For example, VizDoom is simple enough, and its platform has a very high frame rate of 70 thousand FPS, so game bots can live a lot of lives in a short period of time.
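For readers who want to see what "living many lives quickly" looks like in practice, here is a minimal random-agent loop using the VizDoom Python bindings. The scenario config path and the one-hot action layout are assumptions; they depend on which scenario files you have installed.

```python
import random
import vizdoom as vzd

# Minimal random-agent loop on a VizDoom scenario.
game = vzd.DoomGame()
game.load_config("basic.cfg")       # scenario config shipped with VizDoom (assumed local path)
game.set_window_visible(False)      # headless rendering, so episodes run far faster than real time
game.init()

# One-hot actions: press exactly one of the scenario's available buttons.
n_buttons = game.get_available_buttons_size()
actions = [[i == j for j in range(n_buttons)] for i in range(n_buttons)]

for episode in range(10):
    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()                     # screen buffer + game variables
        reward = game.make_action(random.choice(actions))
    print(episode, game.get_total_reward())

game.close()
```

Replacing the random choice with a learned policy is what turns this loop into actual training; the headless, high-frame-rate setup is what makes running it millions of times feasible.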

As a result, they can be trained faster, which leads to better results. It is much more interesting to play VizDoom against trained bots than against the usual scripted ones. And if you put an ML bot up against a scripted one, the ML version always wins as well.

That said, you need to understand that the bot's actions will still be relatively simple. The bot will be good at reflex movements like dodging bullets, but it will not be able to memorize the map well enough to come up with non-standard ideas (like setting an ambush at the enemies' respawn zone to kill them immediately). This is too complex, and the VizDoom platform lacks the resources to scale the number of experiments to the required level.

What other restrictions prevent broader adoption of ML and neural networks?

Another essential problem is acquiring data for analysis and building hypotheses to test. Sometimes researchers can get such information; for example, Valve provides such data for DOTA. Usually this is done by generating a JSON file which you can parse to understand how the game's world is functioning and to analyze all the essential elements, such as health levels.
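As a small illustration of what working with such an export might look like, here is a sketch that parses one game-state snapshot. The JSON structure below is entirely hypothetical; the real schema depends on whatever export tool the game's developers provide.

```python
import json

# Hypothetical per-tick game-state record (invented for illustration).
sample = """
{
  "tick": 12840,
  "players": [
    {"id": 1, "health": 430, "position": [1200, -340], "gold": 2150},
    {"id": 2, "health": 0,   "position": [900, 100],   "gold": 1780}
  ]
}
"""

snapshot = json.loads(sample)
for player in snapshot["players"]:
    alive = player["health"] > 0
    print(f"player {player['id']}: health={player['health']} alive={alive}")
```

The hard part in practice is not the parsing itself but the volume: thousands of such parameters per tick, for every object in the world, over very long recordings.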

If the developers of a particular game do not provide such information, you have to create your own tools for data sourcing. Given that you need to collect and analyze thousands of parameters for multiple objects, this task can be tricky.

Sometimes there is an intermediate situation where you only have video of a game and need to analyze it. This approach can work well for simple games like VizDoom: you can deconstruct the image and recover the main parameters. For more complex games you need more data, which is hard to collect this way.
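As one illustration of recovering a parameter from raw frames, here is a sketch that estimates a health value by measuring how much of a fixed HUD region is filled with the health-bar color. The row, column range, and color are made up; they would have to be calibrated against the actual game's interface.

```python
import numpy as np

def health_from_frame(frame, bar_row=590, bar_cols=(50, 250),
                      bar_color=(200, 0, 0), tolerance=40):
    """Estimate health as the fraction of a fixed screen region filled with
    the health-bar color. Region and color are hypothetical placeholders."""
    region = frame[bar_row, bar_cols[0]:bar_cols[1], :3].astype(int)
    filled = np.all(np.abs(region - np.array(bar_color)) < tolerance, axis=-1)
    return filled.mean()          # 0.0 (empty bar) .. 1.0 (full bar)

# Synthetic 600x800 RGB frame with a half-filled red bar on row 590.
frame = np.zeros((600, 800, 3), dtype=np.uint8)
frame[590, 50:150] = (200, 0, 0)
print(health_from_frame(frame))   # ~0.5
```

This kind of pixel heuristic is workable for a visually simple game; for a richer game the on-screen information is too varied for hand-written rules like this to scale.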

What companies are the most active players in this market? Do they have any notable results?

I like the work of DICE, the developers of Battlefield. In a modern AAA first-person game, they've managed to reproduce, on a large number of computers, things that were previously done only in the much simpler Doom. What distinguishes Battlefield from other games is its more complex game mechanics and a visually more diverse world, but in other respects it is quite similar to simpler games.

This means that the neural network gets an image as input and has to generate some output. The only difference is that for Doom the image is a 300×400-pixel frame with fewer possible actions, while for Battlefield it is 1920×1080 with more possible activities. Also, since the game world is richer, the neural network needs more than three layers: it has to memorize more things while exploring the world. For example, it should understand that there are not only guns but also grenades, which fly differently from bullets, and that you can climb over low obstacles.
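To make the "image in, actions out" idea concrete, here is an illustrative convolutional policy network in PyTorch. The layer sizes, the pooling trick, and the action counts are placeholders chosen for the sketch; this is not the architecture DICE or anyone else actually uses.

```python
import torch
import torch.nn as nn

class PixelPolicy(nn.Module):
    """Illustrative convolutional policy: raw screen in, action scores out."""
    def __init__(self, n_actions):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((7, 7)),   # keeps the head size fixed regardless of input resolution
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),      # one score per available action
        )

    def forward(self, screen):              # screen: (batch, 3, height, width), values in 0..1
        return self.head(self.conv(screen))

# Same network shape for a small Doom-like frame and a full-HD one;
# only the input resolution and the number of available actions differ.
doom_policy = PixelPolicy(n_actions=8)
battlefield_policy = PixelPolicy(n_actions=30)

doom_scores = doom_policy(torch.rand(1, 3, 300, 400))
battlefield_scores = battlefield_policy(torch.rand(1, 3, 1080, 1920))
print(doom_scores.shape, battlefield_scores.shape)
```

A richer game would in practice also need a deeper or recurrent network to remember things like grenade trajectories and climbable obstacles; the sketch only shows the basic pixels-to-actions shape of the problem.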

Interestingly, before switching to machine learning for Battlefield, the developers created a VizDoom-like system to test the algorithm. Instead of houses it used big cubes, colored boxes represented health and armor, and so on. This made the task more straightforward for the agent: it learns to understand visual information first. Before a bot can flank its enemy, it has to be able to tell a house from a first aid kit. This is why simulators for testing reinforcement learning algorithms are always as visually simple as possible; it simplifies the initial understanding of the world.

The OpenAI team has also published results of its machine learning research. Recently, they conducted a series of experiments. In one of the tests, people played against the bot one on one. Humans struggled to defeat the bot in a face-to-face fight, as it had faster reactions. However, they soon found flaws in the bot's strategy and used them to win.

The second test was conducted in a team-versus-team mode, where former DOTA champions played against self-learning bots. Eventually, the bot team learned how to win.

In the last experiment, this bot team fought the current DOTA champions, and this time the humans won by exploiting another flaw in the bots' strategy. The bots were trained on standard DOTA games, which are almost always about 40 minutes long, while the matches against the human champions were hard-fought and lasted 50–60 minutes. The bots played well for the first 40 minutes, but after this threshold their performance dropped significantly; they could not build strategies over such long horizons.

Not many companies can conduct such research. It requires a lot of resources: you need to fund an R&D department and buy computers to run the tests. And this research can't be commercialized anytime soon, so you are investing in a distant future.

Why are there still no human-like bots?

Bots are weak at generalization, i.e., drawing conclusions from previous experience. Humans are good at it, and researchers still cannot close this gap with machine learning.

So, what can be done about this? Is there any chance the situation improves?

Everything takes time. The specific challenge of using ML and neural networks in game development is that testing a hypothesis means running hundreds of computers for several days. You can simplify the task by using mini-games.

This approach means splitting one big game into several smaller chunks: for example, a game that involves only shooting, only moving, or only purchasing goods. Algorithms for such mini-games are much simpler, and you need fewer resources to debug them; sometimes a mini-game can even be rendered entirely on the GPU. You can then scale the results up to more complex gaming environments, as in the sketch below.
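Here is a toy example of what such a mini-game might look like as code: a tiny environment that isolates a single skill (turning toward a target) so an agent can be trained and debugged cheaply before moving to the full game. The state, reward, and "policy" below are invented purely for illustration.

```python
import random

class AimMiniGame:
    """Toy mini-game isolating one skill: turning to face a target."""
    def __init__(self):
        self.angle_to_target = 0.0

    def reset(self):
        self.angle_to_target = random.uniform(-180.0, 180.0)
        return self.angle_to_target

    def step(self, turn_degrees):
        # The agent turns; reward is higher the closer it gets to facing the target.
        self.angle_to_target -= turn_degrees
        reward = -abs(self.angle_to_target) / 180.0
        done = abs(self.angle_to_target) < 2.0
        return self.angle_to_target, reward, done

env = AimMiniGame()
state = env.reset()
for _ in range(100):
    # Naive proportional "policy": turn halfway toward the target each step.
    state, reward, done = env.step(turn_degrees=state * 0.5)
    if done:
        break
```

Because the whole environment is a few arithmetic operations, millions of episodes can be simulated cheaply, and the learned behavior can later be transferred or re-trained inside the richer, full game.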

Finally, could you give your forecast for the development of ML and neural networks in game dev and other fields?

With games it is more or less clear: over time, there will be more advanced bots that are more interesting to interact with.

However, games are only the tip of the iceberg. I think the creation of intelligent bots can lead to something bigger. Today you have an entirely virtual bot that is good at playing shooters; tomorrow you upload this neural network to a physical robotic mechanism. If the algorithm knows how a hand moves and which muscles are involved in specific actions, it will be able to send control signals to a physical limb. Imagine how productive such robots could be on real production lines.

Everyone has seen the robots from Boston Dynamics, but they do not actively use ML. That company has the resources to bring the best mathematicians on board to create models and algorithms, which is costly. In contrast, applying the approaches to creating human-like behavior developed in game dev allows significant cost reductions and could lead to dozens of new players in the robotics market.


Written by alexlash | journalist, tech entrepreneur
Published by HackerNoon on 2019/01/04