Google and Blizzard Are Teaching an AI to Beat You at ‘StarCraft 2’
DeepMind is finally about to show 'StarCraft' fans how a real machine plays.
StarCraft II screenshot courtesy of Blizzard Entertainment
These past few days have marked some serious steps forward for AI research in video games. OpenAI's bot just defeated one of the best Dota 2 pros at The International, while Google and Blizzard made an important announcement of their own: the developer is opening StarCraft II as a research environment for Google's DeepMind AI program. That means Blizzard has given AI researchers the tools they need to tackle machine learning in a game that is far more complex than previous AI conquests like chess or poker. Using Blizzard's API, DeepMind's bots will be able to mold themselves into progressively better StarCraft II players.
But DeepMind is a long way from becoming the Terminator of real-time strategy. At present, Google's AI can't win a single match, even against the game's easiest built-in AI opponent. Unlike chess or certain types of poker, StarCraft II is an "imperfect-information game," meaning it requires a higher level of creativity, adaptability, and spontaneous decision-making. Human players are, of course, much better at this—for now. To simulate "past experience" for DeepMind, Blizzard is providing a dataset of 65,000 random matches played in the game that DeepMind bots can learn from, and that dataset will grow by roughly half a million matches every month, according to Wired. At this rate, AI researchers predict a DeepMind bot will be able to defeat a human on equal footing in about five years—and it will only get smarter from there.
AI research and development has a lengthy history with StarCraft games. For years, the Student StarCraft AI Tournament (or SSCAIT) has been held at the Czech Technical University in Prague and at Comenius University in Bratislava. It is a competitive environment for AI and computer science students who submit bots programmed in C++ or Java to play 1v1 StarCraft matches against each other, somewhat similar to the AI matchups made in MUGEN and broadcast by SaltyBet. SSCAIT, however, is all about creating autonomous artificial intelligence through adaptive programming.
"Every year we see an increase in the amount of genuine AI involved in the matches," the official site says. "While most of the older bots have hard-coded plans of what to build when, where to attack and how, newer bots apply a mixture of those methods and adaptive programming. They are able to respond to the adversary's air units by building anti-air weapons or to delay the production of combat units in favour of economic expansion, based on scouting information. Thus, they will have a stronger economy, and a better chance of victory in the long run."
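The adaptive behavior described above—reacting to air units with anti-air, or delaying combat production to expand economically—boils down to conditioning build decisions on scouting information. A toy sketch of that kind of decision step might look like this (all names here, such as `ScoutReport` and `choose_build_order`, are invented for illustration and are not part of any real bot's code):

```python
from dataclasses import dataclass

@dataclass
class ScoutReport:
    """Hypothetical summary of what a scouting unit has seen."""
    enemy_air_units: int    # enemy air units spotted
    enemy_army_supply: int  # rough estimate of enemy combat strength

def choose_build_order(report: ScoutReport) -> str:
    """Pick the next production priority from scouting information."""
    if report.enemy_air_units > 0:
        # Respond to the adversary's air units by building anti-air weapons.
        return "anti_air"
    if report.enemy_army_supply < 10:
        # Little aggression spotted: delay combat units, favor economy.
        return "economic_expansion"
    # Default: keep producing ground combat units.
    return "combat_units"
```

Real tournament bots are vastly more involved, of course, but the principle is the same: the build plan is a function of what the bot has observed, not a fixed script.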
According to SSCAIT, a handful of bots actually engage in learning strategies, like Martin Rooijackers' LetaBot or Dave Churchill's UAlbertaBot. These bots "employ the machine learning algorithms that allow the AI to learn from replays of earlier games, or simply by playing the game a lot. The most successful AI is usually the one with the optimal balance between a good plan, clever heuristics and fastest learning capabilities."
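Learning from replays, in its simplest form, means keeping statistics on which strategies won past games and preferring the ones that performed best. A minimal sketch of that idea, with an invented replay format (a list of `(opening, won)` tuples) purely for illustration:

```python
from collections import defaultdict

def best_opening(replays):
    """Pick the opening strategy with the highest observed win rate.

    `replays` is a list of (opening_name, won) tuples summarizing
    the outcomes of earlier games.
    """
    wins = defaultdict(int)
    games = defaultdict(int)
    for opening, won in replays:
        games[opening] += 1
        if won:
            wins[opening] += 1
    # Rank by win rate; break ties in favor of more-played openings.
    return max(games, key=lambda o: (wins[o] / games[o], games[o]))
```

Bots like the ones SSCAIT describes go much further—learning mid-game tactics, not just openings—but even this crude tally captures the "simply by playing the game a lot" half of the quote.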
This is exactly what DeepMind hopes to achieve, albeit on a larger scale. Training its bots to play StarCraft II will not only create an awesome, nigh-unstoppable real-time strategy machine, but could also result in AI taking on more difficult jobs. Google has already used information from the DeepMind program to reduce cooling bills in company data centers, and there's no telling what kind of work a true machine intelligence may be capable of doing in the future. It might be a while before we even see a bot beat a person at StarCraft, but the implications of what could happen after that are exciting... and unsettling.