
Nukes in the Age of AI

A new report from the RAND Corporation explains how artificial intelligence might affect the risk of nuclear war.
Inside a Peacekeeper missile silo at Vandenberg Air Force Base, California. Photo: DoD

In 1983, Soviet Lieutenant Colonel Stanislav Petrov sat in a bunker near Moscow, watching monitors and waiting for an attack from the US. If he saw one, he would report it up the chain and the Soviet Union would retaliate with nuclear hellfire. One September night, the monitors warned him that missiles were headed toward Moscow. But Petrov hesitated. He thought it might be a false alarm.

“I had a funny feeling in my gut,” Petrov later told the Washington Post. “I didn’t want to make a mistake. I made a decision, and that was it.”

Petrov was right. His inaction and second guessing of the warning system saved millions of lives. But what if there had been an intelligent machine in his place?

It’s a question at the heart of a new report from the RAND Corporation, a nonpartisan think tank. RAND wanted to know what would happen if the Petrovs of the world were no longer in the room watching for missiles. In other words, what happens when AI meets nuclear weapons?

The supposed benefits of machine learning are vast. Tech leaders, from Microsoft to Facebook, speak of AI in the same utopian terms once used to describe the internet. RAND’s report, by contrast, explains why we should all be afraid.

“Artificial intelligence may be strategically destabilizing not because it works too well,” the report reads, “but because it works just well enough to feed uncertainty.”

RAND gathered “a variety of expert groups, including both nuclear-security professionals and AI researchers, as well as participants from government and industry,” to sit down, discuss, and theorize. Over the course of three different workshops, the various groups talked about nuclear war, AI, and how machine learning might save—or destroy—the future.

For decades we’ve lived with a slow-moving nuclear detente powered by mutual assured destruction (MAD), the theory, developed in part at RAND, that no nuclear power would ever launch a nuclear first strike. That’s because automated systems like Russia’s Dead Hand, which went online during the Cold War as a result of Soviet paranoia about losing a nuclear war, would retaliate with such overwhelming force that everyone on the planet would be dead long before they realized a war had even started. It’s a horrifying idea, but one that people argue has kept us safe for decades.

AI could change all that, giving one side an advantage over another.

“In an extreme case, AI could undermine the condition of MAD and make nuclear war winnable, but it takes much less to undermine strategic stability,” the report said. The other problem is that machines screw up, and in the past humans like Petrov have had to intervene in automated systems to avert nuclear disaster.

RAND’s group first had to decide on just how far AI technology would go. The tech could plateau in the near future, break out completely and revolutionize everything, or slow to a crawl. However, a few philosophers in the group worried about a distinctly science-fiction-sounding scenario.

“Superintelligence is anticipated by some to be an inevitable state where machines come to hopelessly outmatch humans intellectually,” the report said. “Once a superintelligence exists, two outcomes are possible: The superintelligence is benevolent and solves all humanity’s problems, or the superintelligence destroys humanity […] If benevolent, superintelligence would save humanity from nuclear war; if malevolent, nuclear strikes would be just one of many possible methods for extinction.”

The report noted that though the possibility of a Skynet-like human extinction event is low, “many supporters believe it merits attention because of the extreme nature of its costs and benefits.” The experts who didn’t believe in an all-powerful machine god radically changing life for better or worse broke down into three camps: complacents, alarmists, and subversionists.

The complacents think that things will largely stay the same. For them, AI is just another new technology; new technologies have come and gone since the invention of the atomic bomb, but none has radically changed nuclear tactics or strategy.

Alarmists take the other extreme, believing that AI, regardless of its implementation, will radically change things. “AI needs only to be perceived as highly effective to be destabilizing—for example, in the tracking and targeting of adversary launchers,” the report said. “Threatened with potential loss of its second-strike capability, an adversary would be pressured into a preemptive first strike or into expanding its arsenal, both undesirable outcomes.”

The subversionists, meanwhile, fall between the two groups and believe that AI will change things, but that it won’t be catastrophic. According to this group, AI is too easy to reprogram and subvert to be any real threat. All that will change about nuclear war are its specific tactics.

“Some researchers argue that this is a pervasive trait of machine learning and that they expect that it will persist for years to come,” the report said. Skilled programmers of the future might write AI that can better track incoming missiles or delay detection of its own country’s nuclear first strike. “On the other hand,” the report added, “an actor may believe that it can subvert an AI’s ability to identify a preemptive first strike, making such a strike a viable option and therefore destabilizing.”

The unsatisfying answer to RAND’s initial question of how AI might affect the risk of nuclear war is that we don’t know.

“I actually buy the premise of the report, mainly because of the sheer uncertainty that accompanies any game-changing new technology like AI will be,” Peter W. Singer, a strategist at the New America Foundation and editor at Popular Science, told me over email.

AI will change everything, according to Singer, but it’s still too early to predict how those changes will go. “Just as the steam engine and then the atomic bomb and space rocket threw everything from the pace of competition to what the sides knew about each other,” Singer said, “so too should something like AI.”

One hopes the AI of the future has a little Petrov in it. As he said after the false alarm in 1983, “I was simply doing my job. I was the right person at the right time, that's all.”
