

THE FUTURE OF TECH ISSUE

How to Prepare for the AI Tipping Point

Nick Bostrom wants to ensure that superintelligence doesn’t destroy humanity
Andy Donohoe

This story appeared in the February issue of VICE magazine.

As director of the Future of Humanity Institute at Oxford University, Professor Nick Bostrom works on the big questions facing humanity. The Institute researches the implications of synthetic biology, surveillance technology, and the prospect of "technological maturity", the point at which humans have built everything that we theoretically can. Much of its work, however, including Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies, dwells on artificial intelligence (AI) and on managing the potential risks of creating a machine intelligence greater than our own.


VICE: Can you explain the role of AI within the Institute's work?
Nick Bostrom: AI has been our biggest focus for a number of years now. We are particularly interested in what happens if AI one day succeeds in its original goal, which has always been to create a general intelligence. Not just systems that can automate specific tasks, but an AI that has the same powerful learning and planning ability that we humans have. How would that situation unfold? There are various ways that it could go wrong, and various ideas about how one could prevent it from going wrong.

Unfortunately, a lot of those ideas don't work. There is therefore the challenge of coming up with better ideas for how you could control a machine superintelligence. To be more concrete, you could say that the reason we are interested in AI is that it looks like it might have a determining effect on what happens to Earth-originating intelligent life. If people a million years from now look back on this era and ask, "What were the things we did or didn't do that really made a difference?", a plausible candidate, maybe the only candidate, would be AI.

The concern is that we will create a machine intelligent enough to self-educate and self-perpetuate beyond our control?
That's right. The idea is that intelligence, in particular greater-than-human intelligence, is a very powerful force. Human intelligence is very powerful relative to that of other animals, so it's important for us to get the transition to the machine intelligence era right. I think this transition will happen in this century. In particular, we need to think about scalable control methods. There are all sorts of existing AIs and computer systems that we have a reasonable ability to get to do what we want, but the methods we use to achieve this are not scalable; they will predictably break once the system becomes smart enough.

What would it look like for an AI to reach that point and break away from human control?
There's a wide range of potential outcomes. It could be the last invention that humans ever need to make. If you have something that is radically smarter than us in general intelligence, then one of the things it can apply its intelligence to is doing the inventing, and indeed doing virtually all the other economically productive tasks that human brains currently perform. It's not one more cool technology, not another iteration of the smartphone, but a fundamentally new era in the history of life on Earth. The outcome for humanity depends on the values of this superintelligence and the goals it has.

The possible outcomes range from extremely good to existential catastrophe. You could think of the potential upside as having two parts, the first being getting rid of the negatives: there is huge misery and need in the world, and ameliorating a lot of that would in itself be a massive win. I happen to think there could also be a second, enormous upside. Once you achieve the ability to change some of the fundamental parameters of human capacity (increasing cognition, increasing lifespan, improving our emotional wellbeing), you can not just make the human condition as good as it can be, but go beyond that. It's hard to get an idea of this, because a lot of it might literally be beyond our ability to imagine, an ability conditioned by our three pounds of gooey grey matter. Just as our great ape ancestors, if asked, might have thought that having as many bananas as they wanted would be the upside of becoming human, so too there is a lot outside our own present cognitive horizons.

And the downside?
There are more concrete and more direct downsides: the displacement of humans, human infrastructure, and the biosphere by some structure optimised for realising the goals of the AI. Maybe an Earth covered by nanobot infrastructure used only to produce space-colonising, self-replicating probes that can transform the universe into whatever structure the AI places the highest value on, with humans swept away as a side effect of that effort. We should aim to engineer the AI so that it's an extension of us and of our will, so that it's on our side rather than trying to achieve something at odds with our aims. Ideally we would want to keep it boxed up, but if it's superintelligent it will likely escape that box sooner or later. We want to be sure that even once out of its box, it's trying to help us.