Some of the world's top scientists fear that we're on the brink of unlocking the disturbing potential of artificial intelligence, and have called for a ban on autonomous weapons systems and "killer robots" that can select and engage targets without being directed by humans.
Apple co-founder Steve Wozniak, Tesla and SpaceX founder Elon Musk, and renowned physicist Stephen Hawking were among more than 1,000 distinguished scientists, researchers, and engineers who signed a letter warning that in the race to develop such defense systems, "autonomous weapons will become the Kalashnikovs of tomorrow."
The letter was presented at the opening of the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, on Monday.
'The future is scary and very bad for people.'
It acknowledged that robotic weapons could potentially reduce human casualties in warfare, but argued that the costs ultimately outweigh the benefits.
"AI technology has reached a point where the deployment of [autonomous weapons] is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms," the letter says.
But unlike nuclear weapons, whose components are traceable and relatively easy to monitor, "autonomous weapons" could spread quickly and fundamentally alter the character of warfare.
"They require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce," the letter claims.
It also called for a clearer line between academic AI research and military research.
"Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits."
This is not the first time leading scientists and tech figures have sounded the alarm on the implications of artificial intelligence. Last August, Elon Musk warned that artificial intelligence is "potentially more dangerous than nukes." Stephen Hawking cautioned late last year that "the development of full artificial intelligence could spell the end of the human race." He signed on to a similar letter in January, arguing for artificial intelligence regulations that would require robots to follow human commands.
Earlier this year, Microsoft founder Bill Gates echoed these worries in a Reddit "ask me anything" session.
"I am in the camp that is concerned about super intelligence," he replied in answer to a question about the threat posed by sophisticated AI. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."
In March, Steve Wozniak told the Australian Financial Review that "If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently."
"The future is scary and very bad for people," he added.
Watch the VICE News documentary Israel's Killer Robots:
But not everyone shares this doomsday vision. John Markoff, a Pulitzer Prize-winning tech journalist for the New York Times, told the online scientific magazine Edge earlier this month that he thinks it is overblown.
"Gates and Musk and Hawking have all been saying that this is an existential threat to humankind. I simply don't see it," he said. "If you begin to pick it apart, their argument and the fundamental argument of Silicon Valley, it's all about this exponential acceleration that comes out of the semiconductor industry."
This acceleration, Markoff noted, has plateaued or at least "paused." While engineers have made gains in making machines recognize patterns, cognition remains a daunting challenge.
"My sense, after spending two or three years working on this, is that it's a much more nuanced situation than the alarmists seem to believe," Markoff said. "There are two things to consider: One, the pace is not that fast. Deploying these technologies will take more time than people think. Two, the structure of the workforce may change in ways that means we need more robots than we think we do, and that the robots will have a role to play."
"In 2045, it's going to look more like it looks today than you think," he added.
Photo via Pixabay