This year marks the twentieth anniversary of the famous chess match between world chess champion Garry Kasparov and IBM supercomputer Deep Blue. The six-game match, played in New York City in 1997, was the first time an artificial intelligence beat a world champion at chess, an event that pioneers of the computer revolution such as Claude Shannon and Alan Turing believed would mark the creation of a true AI.
In hindsight, it's easy to see that this wasn't the case. Deep Blue was only able to beat Kasparov by constantly cycling through millions of possible chess moves to determine an optimal strategy. This brute-force approach to playing chess isn't exactly what most people would describe as intelligence, but the computer's ability to beat a world chess champion in this manner was nevertheless a remarkable feat of computation.
In the two decades since, artificial intelligence has come a long way. Brute force approaches to AI have given way to neural networks, a type of computing architecture inspired by neurons in the human brain. This type of AI certainly seems more "intelligent" than Deep Blue, and has allowed computers to outperform humans in everything from medical diagnoses to the Chinese game of Go, a far more complex task for a computer than winning at chess. At the same time, AI struggles at some of the most basic human abilities, such as recognizing objects in a photo.
In other words, the creation of general AI—that is, a computer that can perform any intellectual task at the same level as a human—is still a long way off. But one thing that hasn't changed over the last few decades is the fear of how this type of AI would negatively affect humanity. Most recently, Mark Zuckerberg and Elon Musk publicly sparred over the need to regulate AI. Musk has long been an evangelist for the "regulate-AI-before-machines-take-over-the-world" camp, while Zuckerberg doesn't seem too fazed by the rapid rise of artificial intelligence.
As the first hyper-intelligent person to get truly pwned by an AI, Kasparov has a unique vantage point on the artificial intelligence debate. I caught up with Kasparov at Def Con last weekend to ask him why he's not worried about the future of AI.
Motherboard: This is the 20th anniversary of your match with Deep Blue. What was it like to be the first chess master to lose to an artificial intelligence?
Kasparov: Deep Blue was anything but artificial intelligence. It was brute force. It's quite ironic that the original expectations of Claude Shannon and Alan Turing were that solving chess would be evidence of artificial intelligence, but Deep Blue was not intelligent.
Why do you think they looked at beating a human at chess as a metric for artificial intelligence?
The game was a symbol of intelligence in western civilization for centuries. It's not surprising that these great minds loved the game of chess, but moreover they saw that machines playing chess could offer some very valuable insights into the way that machines can one day duplicate a human mind.
But they were wrong that cracking chess would lead to intelligence. Do you think Go is a more accurate measure of artificial intelligence?
Go is more like AI since we're talking about deep learning machines. But it's all about the vulnerability of humans. You can be creative as hell, but at the end of the day, humans don't have the level of vigilance and decision-making that is required to fight the machines. That's why human-machine competition is no longer interesting. Not because machines have solved the game, but simply because machines don't make the same mistakes humans do. Humans are not in a position to capitalize on the minor weaknesses in machines.
After losing to Deep Blue in 1997, where did you think AI would end up in twenty years?
Trying to come up with predictions is not a rewarding business—unless you are on the doomsaying side. Then people will listen because doomsday predictions are always very popular. Even if you try to be optimistic, you may easily get things wrong.
Are you worried about AI becoming weaponized?
Yes, there's a danger of AI being on the destructive side, but that comes from the nature of any technology. Technology doesn't mean our lives will be changed for the better. Technology is agnostic. We know from history that, first of all, any new technology is used for destruction. First you have a nuclear bomb before you try to do something positive with nuclear energy. That's why it's not about technology, it's about society. How are we going to make sure this technology upgrades our lives, instead of downgrading them?
Is that an argument for regulation?
What do we even mean, regulating AI? Do we mean America? Do we mean everywhere? Let's not forget the internet is borderless. We could try to stop some process here in the US, but what about Putin? What about North Korea? I don't have answers, but this whole debate about regulating AI in the United States is useless with a borderless internet. The AI could be generated here, but at the end of the day it goes around the world.
The debate between Elon Musk and Mark Zuckerberg is a debate between tycoons in Silicon Valley, rather than statesmen debating about the future of the world. It's about their views on AI, and it will attract attention in these small circles. But it's something that in a few years will affect all our lives.
If the argument about regulating AI is useless, what is the real problem when it comes to the rise of AI?
[There is a] growing gap between the technological revolution and the archaic social structure of our society. We're talking about a new way of regulating the industry. AI is the cherry on the pie. We have no idea if government regulations will play a positive role in the development of AI. The main problem is that politicians in the developed world aren't addressing these issues at all. Politicians are trying to make sure that changes will not affect the vulnerable voters. How can we reconcile that? It's speeding up on one side and slowing on the other.
Deep Blue is often labeled as AI, but you said it was anything but intelligent. So what are we talking about when we talk about artificial intelligence?
It's a very blurry definition. The best I can come up with for artificial intelligence is a machine that can make improvements based on its own playground of patterns. That's the best way I can think of to describe something like [Go-winning computer program] AlphaGo. The moment you move into the 'I'—intelligence—it's the realm of philosophy. I don't know the exact definition of intelligence. The moment you talk about the human mind, many questions remain open. For example, much of human intelligence is connected to our emotions, which go against the odds. I don't think emotions can be quantified.
Why do you think people tend to fear AI?
It's a primal fear, of something you don't know. Another one is just a fear of machines regulating humans, destroying our jobs, and so on. We got used to the fact that machines can replace people in manufacturing jobs, which is happening as we speak. But those are blue-collar jobs, and now machines are coming after people with a college degree [and] political influence, and then it becomes a big story. What's the difference? Machines are helping us quit some repetitive tasks and unleash our creative energy. So let's look for an opportunity to work with the machines and understand how human creativity can still be used.
There are two sides to the AI debate and I think it's time to look at the bright side. Let's stop mourning because the future is a self-fulfilling prophecy.
This interview has been lightly edited for clarity and length.