
Experts Want to Give Control of America's Nuclear Missiles to AI

If America were attacked with a nuclear bomb, artificial intelligence would fire back automatically, even if we were all dead.
Image: Celafon/Getty Images

When it comes to nuclear weapons and the Cold War, everything old is new again. Old treaties against long-range nuclear weapons are dead, and Russia is working on new nukes it promises can strike the United States in record time. Two experts have an idea for countering the new Russian threat: turn over control of America’s nuclear weapons to artificial intelligence. It’s a terrible idea.


In an article for the national security blog War on the Rocks, nuclear policy wonks turned college professors Adam Lowther and Curtis McGiffin proposed making it easier for the President to launch nukes and advocated for an American, artificially intelligent “Dead Hand.” “Dead Hand” is a Russian fail-deadly (like a fail-safe, but everyone dies), first deployed during the Cold War, that ensures Russia’s nukes fly if the country is attacked, even if no one is left to launch them. Nuclear deterrence hinges on the theory that no country is willing to launch a nuke because it knows that rival countries will retaliate in kind. That’s the idea behind Mutual Assured Destruction.

"Some ideas cross into bad science fictionland"

Lowther and McGiffin suggest that, thanks to Russia’s new nuclear weapons, the credible fear that America could retaliate with a nuclear strike is disappearing. Their proposed solution is to give control of nuclear weapons to AI. “Time compression has placed America’s senior leadership in a situation where the existing [command and control] system may not act rapidly enough,” they wrote. “Thus, it may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position.”

A 2018 report from the RAND Corporation suggested that AI might, in fact, make the world less safe from nuclear war. The report asked several experts to weigh in on how AI might change nuclear deterrence and the results were inconclusive. Some of RAND’s experts believed AI would make the world safer, and others believed it would radically destabilize the current balance of nuclear power. “AI needs only to be perceived as highly effective to be destabilizing—for example, in the tracking and targeting of adversary launchers. Threatened with potential loss of its second-strike capability, an adversary would be pressured into a preemptive first strike or into expanding its arsenal, both undesirable outcomes,” the report said.


The War on the Rocks article has spread widely among nuclear weapons experts, some of whom think it’s a dangerous idea. The Bulletin of the Atomic Scientists covered the idea late last week.

“It’s, uh, quite the article,” Peter W. Singer, a Senior Fellow at the New America Foundation, said of the War on the Rocks article in an email. Singer admitted Lowther and McGiffin proposed some good ideas, such as increasing investment in reconnaissance. “Then some ideas cross into bad science fictionland.”

Singer said the use of artificial intelligence in America’s nuclear command and control systems set off alarm bells, but it wasn’t the worst thing the pair suggested. “For me the stand out was proposing a change in ‘first-strike policy that allowed the president to launch a nuclear attack based on strategic warning,’” Singer said. “We have a President who just anger-tweeted Grace from Will & Grace and pondered nuking hurricanes and you're proposing that we should LOWER the threshold for the use of nuclear weapons? Read the room.”

The post-Cold War detente and the slow drawdown of the world’s nuclear arsenals are over. Russia is working on new nuclear weapons it claims will give it an edge in a nuclear war. But none of those weapons have been deployed.

The history of nuclear weapons is a history of paranoia, accidents, and human intervention preventing global disaster. Before the development of intercontinental ballistic missiles, the United States kept a fleet of nuclear bombers flying in the skies across the world 24 hours a day. The strategy resulted in several crashes and lost nuclear bombs, including a 1968 crash that contaminated part of Greenland.

In Britain, which fields nuclear-armed submarines, submarine captains rely on a “letter of last resort” to instruct them in the event of a nuclear war that destroys London. Every new Prime Minister must personally write a letter instructing each captain how to proceed should the United Kingdom be destroyed by nuclear fire and the submarine left at sea. Every PM has to decide: if I’m dead, should the nukes fly or not?

Artificial intelligence won’t solve these problems, and it might make them worse. In 1983, Soviet Lieutenant Colonel Stanislav Petrov prevented a nuclear war. He was monitoring the USSR’s early warning system and saw what appeared to be American missiles headed for his country. Instead of readying the USSR for war, he waited, assuming the alert to be a technical glitch. He was right, and he prevented a disaster.

There’s no way to know what would happen if we ceded control of these systems to an artificial intelligence, but we do know that the likelihood of a person like Petrov stepping in to stop the madness would plummet.