
How Scared Should I Be of the Singularity?

Maybe extermination by an army of self-aware machines isn't in humanity's future, but that doesn't mean we should be complacent.

Time for "How Scared Should I Be?" the column that quantifies the scariness of everything under the sun and teaches you how to allocate that most precious of natural resources: your fear.

The singularity is a hypothesis from computer scientist and science fiction novelist Vernor Vinge, who predicted in 1993 that technology would soon cause a shift as dramatic as the emergence of life on Earth, and that afterward "the human era will be ended." By this he meant that, for better or worse, computers will be running shit.


Some futurists, like Ray Kurzweil, think that when the singularity hits, it's going to be fucking awesome. Ever-improving machines will start repairing our cells from the inside, thinking for us whenever we don't want to think, and generally making everything better.

But for paranoid sci-fi fans like me, the singularity is the mythical moment when humans will pay for our Promethean technological hubris, probably in the form of a war against evil computers, like in the movie Terminator 2: Judgment Day, which, for the sake of this article, I used as the model for an evil singularity.

I ran this evil singularity concept past some scientists. For the most part, they said a rational person shouldn't be stressed out about it all the time (like I am). "My opinion is that there is no ground for fear, whatsoever," Danko Nikolic, a researcher at the Max Planck Institute for Brain Research and a commentator on artificial intelligence, told me. But artificial intelligence sounds like it might still bring some scary surprises with it.

Phase One: Computers Control Everything

Good Terminator: "Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterward, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online on August 4, 1997. Human decisions are removed from strategic defense."

Thanks to Predator and Reaper drones, it seems like a fair amount of the stuff that led to Judgment Day in Terminator 2 has already come true. But drones hardly have a "perfect operational record," since they have a reputation for killing more civilians than enemy combatants. Granted, that doesn't exactly make them less scary, but for the moment, this aspect of Judgment Day seems very far away.


And while it is a scary thought that, singularity or not, autonomous robots might be on the battlefield soon, it's also worth noting that robot soldiers will most likely suck for a long time, according to Peter Asaro, a philosopher of science and technology at the New School and a spokesperson for the Campaign to Stop Killer Robots.

"Systems are not good at coming up with their own goals," he told me. Yes, we can point to creepy examples of software beating the best human brains at chess, or recognizing our friend's faces on Facebook. But those abilities are the results of mountains of data, supplied by very patient humans. For a computer to find and solve problems autonomously, Asaro said, "we would need pretty much a scientific revolution in computer science on the order of Einstein replacing Newton."

So in short, autonomous, armed robots probably aren't going to replace—say—the police anytime soon, which is comforting since those would be more or less a prerequisite for phase two.

Phase Two: Computers Become Self-Aware and Run Amok

Good Terminator: "Skynet begins to learn at a geometric rate. It becomes self-aware 2:14 AM, Eastern time, August 29. In a panic, they try to pull the plug."

Sarah Connor: "Skynet fights back."

Can computers develop human-like consciousness and "wake up"? For the purposes of this column, I really don't care about that, for two reasons: 1) there's some question as to whether the idea even makes sense, and 2) a machine without consciousness that plans to kill me is still scary as hell.


But regardless of whether robots are conscious per se, the singularity requires artificially intelligent systems to exhibit what Ray Kurzweil calls the Law of Accelerating Returns, or what Asaro calls the "accelerationist model." This imagines a point at which "things will start developing so quickly that [a system] will take over and start innovating itself."
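To give a rough feel for what that accelerationist picture imagines, here's a back-of-the-envelope Python sketch of my own (not anything Kurzweil or Asaro actually wrote down): one system improves by the same fixed amount every year, while the other's yearly gains are proportional to how capable it already is, because it's effectively designing its own upgrades.

```python
# A toy comparison (my own illustration): steady, human-paced improvement
# versus the "accelerationist" case, where each gain feeds the next one.

def linear_progress(years, gain_per_year=1.0):
    """Capability when roughly the same amount of progress is added each year."""
    capability = 1.0
    for _ in range(years):
        capability += gain_per_year
    return capability

def accelerating_progress(years, improvement_factor=1.5):
    """Capability when each year's gain is proportional to current capability."""
    capability = 1.0
    for _ in range(years):
        capability *= improvement_factor  # the system's output feeds its next upgrade
    return capability

for years in (5, 10, 20):
    print(years, "years:", linear_progress(years), "vs", round(accelerating_progress(years), 1))
```

Run it and the second curve leaves the first in the dust within a couple of decades; that runaway gap is the whole scary idea.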

Will that really happen? Maybe, but according to Nikolic, probably not.

It's folly to assume that increases in computer power are the same as increases in sophistication, he explained. "This is like saying that if we get enough paper and pencils, everyone could write Dostoevsky-type masterpieces." Nikolic hinted that a huge improvement in neural networks, the linked computer systems that mimic the way neurons in our brains work, might bring about some kind of rapid acceleration, but he still doubted that self-driven exponential increases in artificial intelligence would ever be possible. By way of explanation, he pointed to some of the limitations of human brains.

We humans can learn stuff, but we can't, after all, rejigger our own brains in order to make ourselves smarter. This would be true of an intelligent neural network as well. "You can make changes to computer software, but you would not know what to change in order to make yourself more intelligent," he told me. In other words, a super intelligent computer could upgrade itself, and that would probably be handy, but it wouldn't necessarily be getting exponentially more intelligent just because it was snapping more and better Pentiums onto its motherboard.
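Here's a toy version of Nikolic's paper-and-pencils analogy that I put together myself (his words, my code): throwing 10 or 100 times more compute at a fixed, dumb strategy just makes it do the dumb thing more often. The quality ceiling is set by the strategy, and the program has no idea which change to itself would raise it.

```python
# A toy illustration (mine, not Nikolic's): more hardware means more attempts,
# not a better idea. The "writer" below scribbles random strings and keeps the
# one with the most vowels, a fixed, crude notion of quality that never
# improves no matter how much compute we pour in.

import random

def guess_a_masterpiece(attempts, length=20):
    """Generate random strings and keep the one with the highest vowel count."""
    best_text, best_score = "", -1
    for _ in range(attempts):
        text = "".join(random.choice("abcdefghijklmnopqrstuvwxyz ") for _ in range(length))
        score = sum(text.count(vowel) for vowel in "aeiou")
        if score > best_score:
            best_text, best_score = text, score
    return best_text, best_score

# More attempts nudge the score up a little, but the ceiling never moves,
# and it never writes Dostoevsky.
for attempts in (1_000, 10_000, 100_000):
    _, score = guess_a_masterpiece(attempts)
    print(attempts, "attempts -> best vowel count:", score)
```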


The Takeaway

So a singularity in which sentient robots with guns march down the post-apocalyptic highways repeating "kill all humans" is probably Hollywood bullshit. But the dawn of any form of what Vinge called "superhuman intelligence" is still scary for other reasons, Asaro told me. "Like any other technology that's in widespread use, we should be concerned about how AI is developing, and the impact it's going to have on our lives."

Even if the intelligent robots that take over our lives are friendly and only ever want to protect us, they might put us in peril—economic and social peril. Sure, Silicon Valley fetishizes so-called disruptive technologies that show up and slap the corded phones out of our hands, toss our CD collections out the window, and annihilate the taxi business. But disruptions have downsides. For instance, the rise of Uber, he pointed out, "economically benefitted one company at the expense of many hundreds or thousands of companies."

So when AI comes, Asaro worries, it could just be one more in a long line of technologies that show up in society and toss aspects of our lives that we value into the dumpster of obsolescence. And after some form of a singularity, if AI itself is guiding innovation and adoption of new technologies, the rapid march of progress that brings about those innovations will become less of a rapid march and more of a tornado, with "no individual and no group of people [able to] really guide it anymore."

The thought is half-scary and half-exhilarating.

Final Verdict: How Scared Should I Be of the Singularity?

3/5: Sweating It