DeepMind’s artificial intelligence has triumphed at StarCraft II in a new feat against top human players. It is just another milestone on its résumé.
The victory at Go was the most prominent milestone, at least in terms of public impact. It was also the first for DeepMind’s Alpha programs. The match, five games in total, was played in 2016.
The year was no accident: it marked 20 years since the first match between Deep Blue and Garry Kasparov, the one the legendary Russian chess player won. The rematch, in 1997, would go to the IBM supercomputer. The media announced it with great fanfare: the machine had beaten the human at a game of thinking, a game whose practice at the highest level was considered an intellectual exercise reserved for the most prodigious minds.
It took two more decades, however, before a machine could beat the world’s best Go player. Go is in fact a far more complex game than chess for a computer to master, yet DeepMind’s program steamrolled its opponent, the South Korean Lee Sedol.
The next step has been to master several disciplines at once. That step came with AlphaZero, a single program capable of winning at chess, shogi and Go alike. The artificial intelligence managed to triumph over the best software in each of these games.
And now DeepMind has conquered StarCraft II, a strategy game long praised for its complexity and for the wide range of options it offers players. It has done so with a program built specifically for this video game: AlphaStar.
Game by game
Victory over professional StarCraft II players, though not the best in the world, is another milestone along DeepMind’s path. AlphaStar won ten games and lost only the last one.
Video games like StarCraft II are more complicated for machines than board games like chess or Go. The computer must react in real time and calculate its next move without having a full picture of the game. In this strategy game, the objective is to build a base, train an army and conquer the enemy’s territory.
One of AlphaStar’s strengths was micromanagement: controlling its units quickly and efficiently. Some experts in the game pointed out, however, that the program had an advantage here. The number of actions it could take per minute had been restricted to match a human’s, but it could see the whole map at a glance, unlike a person, who has to move the camera manually.
Notably, in the one game that a human player won against AlphaStar, the program’s view of the map had been limited to bring it closer to that of its opponent. These are debatable points. In the end, though, the purpose of these matches is not to defeat humans but to sharpen DeepMind’s artificial intelligence and its training methods.
The path followed by the company, owned by Google, is to master different disciplines one by one. Ultimately, this would lead to an artificial general intelligence: no longer a program that is very good at one thing, but software capable of covering a huge variety of disciplines.
Images: Blizzard Entertainment