A Human Amateur Beat a Top Go-Playing AI Using a Simple Trick

Researchers developed a simple way to exploit a computer's understanding of the game so that even an amateur can win.

In a surprising turn of events, an amateur player has beaten a highly ranked AI system at the board game Go, as first reported by The Financial Times. The win for humans comes years after the groundbreaking 2016 match in which Google DeepMind’s AlphaGo program beat Lee Sedol, one of the strongest Go players in the world, contributing to his decision to retire early. Now, it appears that AI players are not as invincible as we were led to believe.


The player, Kellin Pelrine, is one level below the top amateur ranking. He beat his machine opponent, KataGo, an open-source computer Go program, by following suggestions from another program, designed by the research firm FAR AI to find shortcomings in Go-playing AI systems. Notably, the winning tactic was not to outplay KataGo move for move, but to exploit its nature as a machine and fool it.

FAR AI made a program that played over a million games to find a blind spot that even intermediate players could exploit. Pelrine memorized the tactic it suggested and went on to win 14 out of 15 games. He used the same method to beat Leela Zero, another open-source Go program.

How did he do it? Go is played by two players, one with black stones and one with white, who take turns placing stones on a 19-by-19 grid. The goal is to enclose more territory than your opponent, and stones that are fully surrounded are captured. Pelrine tricked the AI by slowly building a large “loop” of stones to encircle one of its groups, while distracting it with plays in other corners of the board. Pelrine told The Financial Times that the Go-playing bot did not notice its vulnerability even when the encirclement was nearly complete, a flaw that, he said, a human player would have spotted easily.

This accomplishment punctures the reputation Go-playing AI systems have enjoyed for the last seven years, during which many deemed these machines undefeatable and unmatched by human players. As skilled as they may be in regular play, they harbor some shocking vulnerabilities that can be exploited.

In 2015, Google DeepMind designed its computer program AlphaGo to play the board game Go. It was trained on around 30 million moves from some 160,000 human games, then played against itself to sharpen its skills. Prior to this, the best Go programs played only at an amateur level, so when AlphaGo beat Sedol 4-1 in 2016, it marked a notable milestone for AI progress. In 2019, Sedol cited AI as the reason he was retiring early, saying, “Even if I become the number one, there is an entity that cannot be defeated.”

Pelrine’s ability to beat an AI system on par with AlphaGo reveals that these systems are still very much works in progress. They cannot predict every possible outcome of each move and know only what appeared in the games they were trained on. Adam Gleave, the CEO of FAR AI, told The Financial Times that it is common to find flaws in AI systems using “adversarial attacks.” Yet, he said, big AI systems continue to be released without adequate protection against them. Similar vulnerabilities have already surfaced in some of today’s most popular AI tools, such as ChatGPT, which would break when prompted with certain anomalous tokens.