
AI Theorist Says Nuclear War Preferable to Developing Advanced AI

A prominent AI theorist penned an op-ed calling for an end to AI research, to be enforced by airstrikes and the threat of nuclear war.
Image: Trinity test fireball at 16 milliseconds. Los Alamos National Laboratory photo.

An AI researcher has called on the countries of the world to use the threat of nuclear war to prevent the rise of artificial general intelligence. 

In an op-ed for TIME, AI theorist Eliezer Yudkowsky said that pausing research into AI isn’t enough. Yudkowsky said that the world must be willing to destroy the GPU clusters training AI with airstrikes and threaten to nuke countries that won’t stop researching the new technology.


Yudkowsky’s op-ed was a response to an open letter calling for a six-month moratorium on AI development; he goes further, asking the world to shut down AI research entirely. In Yudkowsky’s thinking, a pause isn’t long enough. For him, it is not a matter of if AI will kill humanity, but when. “Not as in ‘maybe possibly some remote chance,’ but as in ‘that is the obvious thing that would happen,’” he wrote.

Yudkowsky believes that the only way to stop the creation of AGI—machine intelligence that matches or surpasses humans—and with it the destruction of the entire human race, is to take actions that would inevitably cause a wider war and lead to millions, or even billions, of deaths.

In his piece for TIME, Yudkowsky said that the world must shut down all the large GPU clusters and put a ceiling on the power draw used by a computing system training AI. “No exceptions for governments and militaries,” he said. “Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.”


Yudkowsky is asking the audience to be more afraid of the hypothetical possibility that AI will kill humans than the likelihood that his prescriptions would cause a nuclear war. “Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature,” he said. “Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.”

The U.S. and China are the relevant examples here because both countries’ militaries and civilian industries are racing to develop AI. These are the countries, in Yudkowsky’s scenario, most likely to airstrike a data center in the heart of their rival.

To be clear, either country conducting that kind of airstrike would start a war, full stop. Tensions between China and the U.S. are already high, and both are armed with nuclear weapons. China can’t float a balloon above the U.S. without the Pentagon shooting it down; it’s hard to imagine Washington responding to an airstrike on a data center in San Antonio with anything less than the full force of its military.

China, likewise, likely wouldn’t tolerate an incursion into its airspace and the bombing of its territory. Yudkowsky made clear that nuclear threats are on the table and that the world’s great powers should be willing to risk a full nuclear exchange to prevent the rise of AI. A full nuclear exchange between China and the U.S. would kill billions of people and permanently alter the climate of the planet, starving billions more. He is, in his moral calculus, willing to kill potentially billions of people and possibly doom the planet for a generation to prevent The Terminator from happening.


And this is where we should take a large step back. Yudkowsky is proposing devastation on a scale that would make the horrific war in Ukraine look like child’s play in order to prevent something that he fears. There have been many proposals for what an existential risk posed by an AI would look like, a common one being the result of unintended consequences: that an AI might marshal all of humanity’s resources by force to achieve some predefined goal. Lately, Yudkowsky seems fond of portraying advanced machine learning programs as a kind of unknowable—and inherently terrifying—alien mind that will have goals opposed to humanity and will both lie to us and be too smart to stop.

This is in keeping with Yudkowsky’s previous claim to fame around AI: freaking out over a thought experiment, posted on a forum, about a superintelligent future AI that would torture anyone who didn’t work to create it. He banned discussion of the idea—called Roko’s Basilisk—on the forum, and later explained why: “I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock” that someone would post what he called a “pure infohazard” that would harm people’s brains, he wrote in a Reddit post.

The idea of a Cthulhu-esque “shoggoth” AI is a far cry—far, as in, shouting from the bottom of the Grand Canyon to the moon—from even the most advanced AIs that exist today. They are “stochastic parrots,” as researchers Emily Bender, Timnit Gebru, and their colleagues have termed them, and they have no independent “mind” to speak of. Still, they pose sub-existential risks that can and must be addressed rationally, and without considering nuclear war. An AI model, with enough plugins, may already have the ability to be assigned a task by a scammer and to take the necessary steps to carry it out, including by manipulating humans. AI-driven facial recognition systems used by police have already put innocent Black people in jail. This is not what Yudkowsky seems most worried about, though.

Similarly, nuclear weapons exist right now. They are not part of a hypothetical future that’s coming in six months, or six years, and possibly never. They’re real. We’ve seen their devastating effects, have studied the destruction, and understand the consequences.

And the risk of nuclear war has grown in recent years. New START, the last remaining nuclear arms control treaty between the U.S. and Russia, is effectively dead. Moscow has teased its suite of new and advanced nuclear weapons. The Pentagon plans to spend billions of dollars to modernize and upgrade its nuclear arsenal. China, once a steady hand with a comparatively small nuclear arsenal, is rushing to build more weapons.

Given this climate, the idea that a nuclear superpower could convince another to stop working on AI is asinine. Yudkowsky writes that we must frame this as anything but “a conflict between national interests,” and that is absurd. It’s a fantasy problem that ignores hundreds of years of history and realpolitik. It is inventing a monster and demanding that world leaders be as afraid of it as you are.