It wasn't so long ago that artificial intelligence, and specifically one form of AI known as deep learning, was considered a career dead-end for academics. Now, just a few decades later, workable AI exists across a spectrum of complexity and integration into our daily lives: from Siri all the way up to AIs that can "learn" all on their own.
But you already knew all of this, didn't you? You don't need some internet guy to tell you that artificial intelligence is everywhere—public figures like Elon Musk and Stephen Hawking have both weighed in on the technology with varying degrees of apocalypticism, and movies featuring superintelligent AIs are box office smashes. Intelligent machines are inescapable; we are utterly enraptured by these tools of our own making. The reason for this, I think, is that our AIs are starting to act a lot like us, and it's getting harder to ignore.
Consider something Yoshua Bengio, one of a handful of Canadian computer scientists widely credited with making modern deep learning AI what it is today, told me in a recent conversation: artificial intelligence (deep learning in particular) only works as well as it does now because humans have let go of the fine details. We're past the point of stressing over how these systems teach themselves to recognize animals, on a mathematical level, and are now content to let the machines do their thing.
The story of AI is ultimately a human one
The overall effect is something like trying to explain a human's thought process in physiological terms. When a friend, let's say, decides to turn left instead of right while driving a car, is there any use in attempting to suss out how that thought came to be, in terms of the electricity running along a mound of somehow-alive meat? Not really. Neuroscientists are trying to figure it out, but in daily life we often accept that things in the brain just happen. And this is precisely where we are with AI: we have the general principles down pat, but the fine-grained process is beyond us. And, according to many artificial intelligence researchers and computer scientists I've spoken with, that's the way it should be.
None of this means that any one AI is particularly brilliant at the moment; in fact, a recently tested AI barely passed an eighth-grade science test. But a single machine really doesn't need to be able to paint and string together a sentence and play an ancient board game to complicate any notion of the snowflake special-ness of human abilities—it's enough for AIs to do these things at all, even as a dedicated function, and they do.
So here we are, finally, in a world where machines of our own making are handing our asses to us in games we invented thousands of years ago and making us fall in love with them, just a little bit. It's scary. Some people's fear regarding AI is more abstract, the stuff of philosophical treatises and thought experiments about superintelligent AIs wiping us out. Others have more direct worries: how will they put food on the table when a machine takes their job?
Many of these worries, while often well-founded, are misplaced. A technology is never just a dumb tool to be corralled, but neither is it free of internal and external pressures and biases. In the case of AI, although the technology lends itself to certain uses and indeed a certain idea of the future, it is embedded in the confused, writhing mess of humanity that develops it, guides it, and, yes, uses it. Some people are very bad. Others are better. Remember in Pokémon when Ash said there are no such things as bad Pokémon, just bad trainers? With AI systems that need to be trained on huge amounts of data, the case is similar.
The story of AI is ultimately a human one, and that's the story that we are telling this week through a series of excellent reports, features, and stories on our rapidly approaching inhuman future, which, for better or for worse, will be created in our image.
In Our Image is Motherboard's theme week of stories about artificial intelligence.