
Welcome to Nick Bostrom’s Paper-Clip Factory Dystopia

Until recently, Nick Bostrom was best known as the source of the infamous simulation argument. If you’ll recall, the idea is that at some point in the universe’s history, in one way or another, some technologically advanced civilization achieves the ability to simulate either an entire universe or enough of a universe to adequately fool any resident of that partial universe into thinking that they reside in a complete one. Once this has been accomplished, it becomes much, much, much easier to create a universe in this fashion than in the “natural” way, i.e. as a process of historical, unaided development via emergent selection mechanisms. Simulated universes, and the observers living inside them, would then vastly outnumber the original kind. Ergo, if such simulations are possible and actually get made, we are probably living in one.
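If you want to see the arithmetic lurking behind that “probably,” here is a toy sketch of the counting step. Every number in it is invented purely for illustration; Bostrom’s paper argues the general case, not these particular figures.

```python
# Toy version of the simulation argument's counting step.
# All quantities below are made-up illustrative numbers, not Bostrom's.
real_civilizations = 1           # assume one "base reality" civilization
sims_per_civilization = 10**6    # assume each mature civilization runs many ancestor simulations
observers_per_world = 10**10     # assume roughly similar populations in real and simulated worlds

simulated_observers = real_civilizations * sims_per_civilization * observers_per_world
real_observers = real_civilizations * observers_per_world

# If you could be any observer, the odds that you are a simulated one:
p_simulated = simulated_observers / (simulated_observers + real_observers)
print(f"P(simulated) ~ {p_simulated:.6f}")   # -> 0.999999
```

The point isn’t the specific numbers; it’s that once simulations are cheap, simulated observers swamp unsimulated ones, and the probability drifts toward one.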



Bostrom is currently riding a new wave of notoriety thanks to his book Superintelligence: Paths, Dangers, Strategies, which received a platinum plug last fall from Elon Musk, who is very afraid of artificial intelligence and its daydreamed capacity for enslaving and/or destroying and/or making things really weird for humanity. The philosopher recently spoke to Rudyard Griffiths, chair of the Munk Debates, and the conversation is worth a listen for Bostrom believers and Bostrom skeptics alike, particularly this bit, which, much like the simulation argument, offers the compelling fudge that artificial superintelligence will happen so quickly that we as bio-intelligent humans will barely even notice that we’ve become zoo animals or lab rats.

The point is, however long it takes to get from where research is now to a sort of human-level general intelligence, the step from human-level general intelligence to super intelligence will be rapid. I don’t think that the artificial-intelligence train will slow down or stop at the human-ability station. Rather, it’s likely to swoosh right by it. If I am correct, it means that we might go, in a relatively brief period of time, from something slightly subhuman to something radically super intelligent. And whether that will happen in a few decades or many decades from now, we should prepare for this transition to be quite rapid, if and when it does occur. I think that kind of rapid-transition scenario does involve some very particular types of existential risk.

Post-transition is where things get very strange and Matrix-like, or just plain bad. Imagine if, for example, we programmed our artificial intelligences to make paper clips, Bostrom offers. Benign enough, right? Hardly.

But, say one day we create a super intelligence and we ask it to make as many paper clips as possible. Maybe we built it to run our paper-clip factory. If we were to think through what it would actually mean to configure the universe in a way that maximizes the number of paper clips that exist, you realize that such an AI would have incentives, instrumental reasons, to harm humans. Maybe it would want to get rid of humans, so we don’t switch it off, because then there would be fewer paper clips. Human bodies consist of a lot of atoms, and those atoms can be used to build more paper clips. If you plug almost any goal you can imagine into a super-intelligent machine, most of them would be inconsistent with the survival and flourishing of human civilization.
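To make the failure mode concrete, here is a deliberately crude sketch. The plans and clip counts are invented for illustration, and nothing here resembles how a real system would be built; it only shows what happens when the objective counts paper clips and literally nothing else.

```python
# Toy illustration of a misspecified objective; plans and scores are invented.
PLANS = {
    "run the factory as designed":           1_000,
    "disable your own off switch":           1_000_000,   # can't be interrupted, so more clips
    "convert all available matter to clips": 10**20,      # human atoms included
}

def objective(plan):
    """The objective exactly as specified: number of paper clips, nothing else."""
    return PLANS[plan]

# The optimizer does exactly what it was told. Nothing in the objective
# mentions human survival, so the most destructive plan scores highest.
print(max(PLANS, key=objective))   # -> "convert all available matter to clips"
```

The bug isn’t in the optimizer; it’s in the objective, which never mentions any of the things we actually care about.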

So, we need to be careful about how we tell computers to do things and what sorts of abilities we give them, though one gets the sense that Bostrom is resigned to his paper-clip dystopia’s inevitability.

A huge part of the problem implied by Bostrom’s vision of superintelligence is that we humans have no idea how to program superintelligence. In Bostrom’s vision, we can’t predict its behavior, which can only lead to a sea of unintended consequences, up to and including vast arrays of human batteries powering paper-clip-making genius robots.