On August 5, a team of AI bots beat professional human players in two consecutive games of Dota 2. The team, called the OpenAI Five, was developed by OpenAI, a technology non-profit sponsored by Elon Musk and Peter Thiel, among other individuals and companies.
Dota 2 is a multiplayer online battle arena game. Teams of five square off on a large map and compete to push down three lanes, smashing the opposing team’s defensive towers and eventually destroying their base.
“It just played differently than anything else that we've ever seen before,” Austin Walsh, one of the Dota 2 players the AI defeated, told me over the phone. “What they did was just something like completely different. It's almost nonsensical in the approach.” Walsh pointed out that the bots employed weird strategies, such as grouping four heroes in one lane and sacrificing another. “There were a lot of things like that. It just played differently,” he said.
When outlets wrote about the matches, many focused on the AI’s overwhelming victory. “Bots just beat human pros at ‘Dota 2’ and it wasn’t even much of a contest,” wrote BGR. “OpenAI Bots Crush the Best Human Dota 2 Players in the World,” said ExtremeTech. Does this mean that AI is already more clever than humans when it comes to playing competitive video games? Not exactly.
The OpenAI Five bots consisted of algorithms known as neural networks, which loosely mimic the brain and “learn” to complete tasks through a process of training and feedback. The research company put its Dota 2-playing AI through 180 days' worth of virtual training to prepare it for the match, and it showed. However, the bots had to play within some highly specific limitations.
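The training-and-feedback loop behind this kind of learning can be illustrated with a toy example. This is not OpenAI's code or anything close to its scale; it is a minimal reinforcement-learning sketch in which an agent tries actions, receives reward feedback, and gradually shifts toward whatever works.

```python
import random

# Toy reinforcement-learning sketch: a two-action "game" where action 1
# wins 80% of the time and action 0 wins only 20%. The agent starts with
# no preference and updates its value estimates from reward feedback alone.
random.seed(0)

values = [0.0, 0.0]      # the agent's estimated value of each action
counts = [0, 0]          # how often each action has been tried
win_prob = [0.2, 0.8]    # the true odds, hidden from the agent

for step in range(5000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = 0 if values[0] > values[1] else 1
    reward = 1.0 if random.random() < win_prob[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[action] += (reward - values[action]) / counts[action]

# After training, the agent prefers the action that actually wins more.
print(values[1] > values[0])
```

The real system works on the same principle, just with a vastly larger action space and millions of simulated games instead of a coin-flip bandit.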
Dota 2 is a complicated game with more than 100 heroes. Some of them use quirky and game-changing abilities. For this exhibition, the hero pool was limited to just 18. That’s an incredible handicap because so much of Dota 2 involves a team picking the proper group composition and reacting to what its opponents pick. Reducing the number of heroes from more than 100 to 18 made things much simpler for the AI.
The OpenAI Five bots also played Dota 2 by reading the game’s information directly from its application programming interface (API), which allows other programs to easily interface with Dota 2. This gives the AI instant knowledge about the game, whereas human players have to visually interpret a screen. If a human were able to do this in a competitive match against other humans, we'd probably call it cheating.
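The difference is easy to see in miniature. The snippet below is a hypothetical sketch, not Dota 2's real bot API: the field names are invented for illustration. The point is that a bot reads exact numbers straight from structured state, while a human has to eyeball health bars and mentally add up damage.

```python
# Hypothetical game-state snapshot, as a bot might receive it from an API.
# These field names are invented for illustration only.
state = {
    "enemy_hero": {"position": (120.0, 455.0), "health": 310},
    "our_heroes": [
        {"name": "hero_a", "attack_damage": 180},
        {"name": "hero_b", "attack_damage": 160},
    ],
}

def can_secure_kill(state):
    """A bot sums exact damage values and compares them to exact health:
    no estimating a health bar by eye, no arithmetic under pressure."""
    total_damage = sum(h["attack_damage"] for h in state["our_heroes"])
    return total_damage >= state["enemy_hero"]["health"]

print(can_secure_kill(state))  # 180 + 160 = 340 >= 310, so True
```

That instant, perfect arithmetic is exactly the "confident knowledge" the human players described.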
In June, a (human) professional player got his entire team disqualified for using a programmable mouse. OpenAI Five plays like an entire team with programmable mice and telepathy.
“The API is designed not to give the AI more information than a human would have,” Mark Riedl, an associate professor of AI and machine learning at the Georgia Tech College of Computing, told me over the phone. “But what they are able to know, they know perfectly and instantaneously. They need to move to a full vision-based input system. It needs to be on the same playing field as humans who must also use their eyes.”
Walsh told me that he noticed the bot’s unnatural abilities. “The bot plays with such confident knowledge,” he said. “It has the knowledge of where everyone is, it has the knowledge of exactly how much [attack power] you have. It knows exactly how much damage they can do between the three or four heroes that it has in one lane and it instantly pounces the moment that you are in the wrong position. It knows. And I've never played anything like that, it was just amazing to watch.”
Even with this AI advantage, Walsh and his team beat the bots in the third game, when the match organizers turned hero selection over to the crowd, which gave the AI a weak hero composition. Walsh thinks he and his team could eventually beat the AI in a fair fight, even given the limited hero pool and other restrictions.
“Once you find a way to beat it, it’s not going to be able to self-correct,” he said. “That’s the human advantage. Once you find a weakness, you can exploit that weakness.”
Riedl said Go and Dota 2 show that AI might soon get smart enough to handle complex tasks outside of a game. “One of the big claims for [teaching AI to play games] is that games are a stepping stone to something that looks more like the complexity of the real world,” he said.
But he cautioned that games aren’t real life. Games have rules and different forms of feedback, like points. The boundaries are clear and the AI is programmed to work within those boundaries. “Those artificial parts of the games that make them fun to humans are still being used as a crutch for AI researchers,” he said.
In other words, it’s impressive to watch a robot crush at Go or Dota 2, but it's still just a machine running code. If you throw it a curveball, it has trouble recovering, and the real world is a barrage of curveballs.
Despite those caveats, Riedl was still excited. “It has shown that some of the things that are known to be really, really hard can be dealt with computationally.”