Existing unmanned robotics still rely on humans to pull the trigger, but in the future fully autonomous weapon systems could select and execute targets on their own.
This Thursday, Human Rights Watch and Harvard Law School jointly issued a 38-page report calling for a global ban on what they call "killer robots," otherwise known as fully autonomous weapons systems. The report comes just days before the United Nations convenes a meeting of the "Inhumane Weapons Convention" in Geneva on April 13 to discuss how to deal with the imminent reality of lethal, human-free robots. If the report's authors get their way, the UN will create treaties blocking nations from developing or using such technologies, and domestic governments will enact similar legislation to head off a terrible future.
The killer robots referenced in the report lie somewhere on the continuum between drones and the Terminator. Whereas existing unmanned robotics still rely on humans to pull the trigger (keeping human agency in the loop), fully autonomous weapons systems would be able to locate, select, and execute their own targets with human oversight and overrides at best, or without any human intervention at worst. Some experts argue that human override controls on otherwise fully autonomous robots would be futile given the speed of robotic decision-making. "A human has veto power, but it's a veto power that you have about a half second to exercise," said Brookings Institution expert Peter Singer, in conversation with The Verge. "You're mid-curse word."
At the core of the report are not concerns over an inevitable robotic uprising and humanity's apocalyptic cleansing from the earth, but deep ethical and legal concerns about how such machines would fit into existing law. Under existing treaties and norms, the authors argue, a fully autonomous robot's programmers, manufacturers, and commanders would be free of responsibility if their machine wound up killing innocents, unless it could be proven that they sent it into an area with the express intent of such an outcome. That impunity for glitches and miscalculations would leave no one to condemn or hold accountable, potentially removing barriers in military circles to anything from questionable tactical calls to outright war crimes.
"[Such robots would] challenge longstanding notions of the role of arms in armed conflict," the report reads. "And for some legal analyses, they would be more akin to a human soldier than an inanimate weapon. On the other hand, fully autonomous weapons would fall far short of being human [and thus accountable beings that can be punished and governed]."
Some argue that this report and related works issued by the Campaign to Stop Killer Robots (a collaboration between HRW and about 50 other NGOs) sensationalize a strictly legal concern by calling up images of Cylons and HAL 9000, prompting knee-jerk hysteria in readers. They believe the technology under discussion is far enough off, even in its most rudimentary (and non-apocalyptic) form, that a ban is uncalled for, although new legislation may be required.
Anti-autonomous weapons advocates do admit that they came up with the killer robot brand to gain attention, but they argue that their work is otherwise focused on real and imminent threats.
"We put killer robots in the title of our report to be provocative and get media attention," Mary Wareham of the CSKR told The Atlantic in 2014 regarding a previous document. "But we're trying to be really focused on what the real life problems are, and killer robots seemed to be a good way to begin the dialogue."
Advocates of controls on autonomous robotics development point out that the technology is a lot closer than many of us may think. China, Germany, India, Israel, South Korea, Russia, the United Kingdom, and the United States have all been implicated in the development of artificially intelligent weapons technologies in recent years. In 2012, the US Department of Defense issued a directive establishing policy for the development of such systems. Militaries and researchers from many nations have since built a host of near-autonomous fighting systems, some of which retain human controllers as little more than a formality, suggesting that fully autonomous devices really could be just around the corner; some estimates put them five to 30 years out. And even gentle Canada has, as of last year, expressed interest in these technologies, sidestepping any concerns about ethics and liability in its internal reports on the subject.
"Fully autonomous weapons do not yet exist," reads the HRW and HLS report, "but technology is moving in that direction and precursors are already in use or development."
Plus, it appears that despite accusations of paranoia-fueled campaign support, many Americans, when surveyed on the subject, attribute their concerns over autonomous weapons systems to ethical and procedural issues rather than to visions of Skynet and Arnold Schwarzenegger.
Yet even if the anti-autonomous weapons campaign's concerns are real and imminent and their support comes from informed and moderate people, not from raving sci-fi conspiracy theorists, the movement has had little luck in achieving bans on such systems in the past. Although the CSKR has only existed since 2013, some activists have been pushing this issue since the 1980s. And while they have raised attention and speculation on the subject in the media, a previous attempt to raise the issue at the UN (in a similar debate last year) resulted in no real action.
Part of this probably owes to the fact that killer robots (as yet unable to defend themselves) have very eloquent advocates highlighting just how useful they could be in reducing war crimes.
"There are a number of ways these things can be deployed where they may even be more accurate than humans," CBC recently quoted Steven Groves of the Heritage Foundation as saying. "They don't get scared. They don't get mad. They don't respond to a situation with rage—and we've seen some tragic consequences of that happening in Iraq and elsewhere."
Advocates also argue that people should stop thinking of killer robots as machines that can go off and do whatever they want, as if they had minds of their own. Instead, they contend, such weapons could be governed by hard, reliable algorithmic restrictions, which would in theory put clear blame back on the human commanders who point procedure-bound tools in inhuman directions.
"At some level, a toaster is autonomous," said Ronald Arkin, perhaps the foremost killer robot proponent, to The Verge. "You can task it to toast your bread and walk away... That's the kind of autonomy we're talking about [with respect to fully autonomous weapons systems]."
However, this year could go a lot better for the report's authors, given the warnings about the dangers of artificial intelligence issued by tech luminaries over the past six months. In November, inventor par excellence Elon Musk expressed his concern that AI, like that undergirding autonomous weapons, could escape our grasp within five to ten years. This January, Bill Gates and Stephen Hawking, among others, echoed his concerns. Their visions range toward the apocalyptic, but at their core they express doubt that the kind of reliability and control Arkin talks about could really be exercised over any AI, much less killer robots.
That said, there are still so many unknowns surrounding killer robots, and so much vested interest among military establishments in their development, that a full and preemptive ban on the technology is unlikely. However, Arkin and others have expressed a willingness to support a moratorium on the research, development, and use of such machines until solid legal norms can be worked out. Perhaps that's the big win apocalypticists and ethicists can expect at the UN this year, or if not this year, then sometime soon. Hopefully before the rise of the robot overlords.
Follow Mark Hay on Twitter.