Chipmaker Nvidia says it has trained a machine learning program to recreate Pac-Man from scratch, without a game engine, after having the AI "watch" 50,000 matches of the game.
The AI is called GameGAN, a neural network based on Nvidia’s generative adversarial network (GAN) model. These programs learn by observing large numbers of training inputs, like video of thousands of Pac-Man matches plus the corresponding user inputs, and producing examples that match the original. The AI’s version of Pac-Man is "fully functional" and Nvidia plans to release its code soon.
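To make the setup concrete, here is a hypothetical sketch of the kind of training pairs such a model would learn from: each recorded match couples controller inputs with the frames that followed them. All names here are illustrative assumptions, not Nvidia's actual code.

```python
# Illustrative sketch only: how 50,000 recorded matches could be turned
# into (history, action, next frame) training targets. The generator
# learns to predict the next frame from past frames plus the current
# input; the discriminator learns to tell generated frames from real ones.
from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    action: str   # the player's input at this step, e.g. "UP", "LEFT"
    frame: bytes  # the rendered screen that resulted

@dataclass
class Episode:
    steps: List[Step]  # one recorded Pac-Man match

def training_pairs(ep: Episode):
    """Yield (frame history, action, next frame) targets for the generator."""
    for t in range(1, len(ep.steps)):
        history = [s.frame for s in ep.steps[:t]]
        yield history, ep.steps[t].action, ep.steps[t].frame

demo = Episode(steps=[Step("LEFT", b"f0"), Step("LEFT", b"f1"), Step("UP", b"f2")])
pairs = list(training_pairs(demo))
```

A three-step match yields two supervised targets: everything up to step t, plus the action at t, predicts the frame at t.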
“This is the first research to emulate a game engine using GAN-based neural networks,” Seung-Wook Kim, an Nvidia researcher, said in a press release. “We wanted to see whether the AI could learn the rules of an environment just by looking at the screenplay of an agent moving through the game. And it did.”
According to an Nvidia blog, the AI program generates the game frame-by-frame, keeping track of what's already been generated to maintain consistency. According to Nvidia, the AI managed to reproduce the game's core rules and mechanics. However, Polygon noted, Nvidia VP of simulation technology Rev Lebaredian said in a media briefing this week that the AI actually ended up with a bias towards never letting the player die. Even if it has to bend the rules, it will try to keep the player alive.
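The frame-by-frame pattern the blog describes can be sketched as an autoregressive loop: each new frame is conditioned on a memory of everything generated so far, and is then folded back into that memory. The "generator" below is a deterministic stand-in (it just hashes memory plus action), not Nvidia's network; the point is the loop structure, which is what keeps successive frames consistent.

```python
# Minimal sketch of generation with memory. The stand-in generator is an
# assumption for illustration; a real model would be a learned network.
import hashlib

def fake_generator(memory: bytes, action: str) -> bytes:
    """Stand-in for a learned model: the next frame is a function of
    everything generated so far plus the current player input."""
    return hashlib.sha256(memory + action.encode()).digest()[:8]

def play(actions, seed=b"initial-frame"):
    frames, memory = [], seed
    for a in actions:
        frame = fake_generator(memory, a)  # condition on memory, not just the input
        frames.append(frame)
        memory = memory + frame            # fold the new frame back into memory
    return frames

out = play(["LEFT", "LEFT", "UP"])
```

Because each frame depends on the full history, the same sequence of inputs always reproduces the same sequence of frames, while a different input partway through changes everything after it.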
It’s impressive work, but GANs aren’t new, and people have used them to generate faces that don’t exist, turn pictures of dogs into cats, and transform humans into waifus. Indeed, researchers have used AI to generate elements of video games for a while now, even generating Super Mario Bros. levels, for example. Computer scientists Matthew Guzdial and Mark Riedl even created an algorithm that generates whole video games by watching footage of Kirby, Mega Man, and Super Mario Bros. The games started as reams of code, but Guzdial and Riedl eventually trained the AI to add visuals and sound with minimal input from the user.
Guzdial, who is now an AI researcher and assistant professor at the University of Alberta, said there’s a lot of pre-existing research that focuses on AI generating video games or elements of video games.
“The new stuff here compared to a typical GAN is that they're explicitly giving it ‘memory’ of what it has already output and splitting up the modelling of static and dynamic components of the game,” he told Motherboard in a Twitter DM.
Nvidia’s GameGAN separates the foreground and background elements of Pac-Man, the ghosts and the player character from the maze, and trains each separately. This allowed the team to do some strange things like replace Pac-Man with Mario in the maze.
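The static/dynamic split can be illustrated with a toy compositor: the maze is one component, the moving sprites another, and a frame is the two composited together. Swapping the sprite bank (Pac-Man for Mario) leaves the maze untouched. The arrays and names here are assumptions for illustration, not GameGAN's representation.

```python
# Toy illustration of compositing a dynamic sprite onto a static background.
import numpy as np

def compose(background: np.ndarray, sprite: np.ndarray, pos) -> np.ndarray:
    """Paste a sprite onto a copy of the static background at (row, col)."""
    frame = background.copy()
    y, x = pos
    h, w = sprite.shape
    frame[y:y+h, x:x+w] = sprite
    return frame

maze = np.zeros((8, 8), dtype=np.uint8)      # static component: the maze
pacman = np.full((2, 2), 1, dtype=np.uint8)  # dynamic component: the player
mario = np.full((2, 2), 2, dtype=np.uint8)   # swapped-in sprite

frame_a = compose(maze, pacman, (3, 3))
frame_b = compose(maze, mario, (3, 3))       # same maze, different character
```

Because the two components are modeled separately, changing the character requires no change to the background at all.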
Guzdial was careful to point out that Nvidia’s official research paper isn’t out until May 25, and said he’s seen similar work from other researchers.
“First off, Ha and Schmidhuber and their ‘World Models’ paper from 2018, which was able to get similar looking results over one of the same domains [a very simplified Doom minigame],” he said. “Second off, there's very recent work from Queen Mary University which also looks to try to learn to replicate a game, and notably does it with a ‘Neural Game Engine,’ basically a way of letting neural nets learn game rules more explicitly, instead of learning game rules as big complicated functions of tons of parameters.”
The original Pac-Man ROM is around 14KB. Guzdial said he thinks, though he’s not sure because the research isn’t yet published, that Nvidia’s AI-generated Pac-Man will be several gigabytes. He doesn’t think we’ll be playing AI-generated versions of old-school games anytime soon.
“Even if we could, why play a noisy remake of an original game, when you could just play the original game?” he said. “Despite demonstrating that they can swap out static and dynamic elements, this Nvidia work is still fundamentally going to be an imperfect copy of an existing game.”
According to Nvidia, the goal of the work is more about advances in training AI than video games; teaching an AI the rules of Pac-Man by making it watch hours of footage of the game is a proof of concept.
“Suppose you install a camera on a car. It can record what the road environment looks like or what the driver is doing, like turning the steering wheel or hitting the accelerator,” Nvidia said in a blog about the research. “This data could be used to train a deep learning model that can predict what would happen in the real world if a human driver—or an autonomous car—took an action like slamming the brakes.”
According to Guzdial, the future of AI in video games is about assistance. Game designers may soon use AI to help them sculpt new games out of existing code and footage. “The thing that's most interesting to me for [deep neural network] game engines specifically is how they can be used to help agents learn to play the original games,” he said. He pointed to Go-Explore, an AI that learned how to explore environments by playing the Atari games Montezuma’s Revenge and Pitfall. The point, Guzdial said, is to get the AI working with an imperfect game agent—an AI player—to improve the way the AI makes predictions.
As for Nvidia's project, “Their stated goal is to learn models of real world domains, but I just don't see us getting there very soon given the accuracy these approaches have been demonstrating,” Guzdial said. “Specifically, I don’t expect GAN architectures specialized to games will be how we get there.”
This article originally appeared on VICE US.