Image: Stop the Killer Robots
Earlier this month, the US military invited robotics companies out to Fort Benning to demonstrate the latest robotic warfare tech. The star attraction was a wheeled, remote-controlled, machine gun-toting robot that could use thermal imaging to detect foes. At the press of a button from a safely hidden operator, it could fire a devastating volley at will.
Just today, Amnesty International released a report that found that American drone strikes had caused up to 900 civilian deaths. All told, it hasn't been a great month for the anti-killer robot crowd.
Now, neither of the weapons above is a so-called lethal autonomous robot (LAR)—there were still gunners peering through a screen with an Xbox controller in hand—but they're not far off. It's all got to be a little disheartening for the poor activists behind anti-autonomous weapons groups like Human Rights Watch, the International Committee for Robot Arms Control, and the Stop the Killer Robots campaign. The groups are pleading with the UN, the international community, anyone who will listen, really, to take their finger off the trigger before there isn't a trigger at all anymore.
Meanwhile, the line between a remote-controlled killer robot and an automatically killing robot is growing ever thinner.
Yesterday, activists, scholars, and policymakers met at the UN to try to convince the body to limit or ban the use of autonomous weaponized robots in the international arena. It was one of the first times that the groups had been granted such an audience with the UN General Assembly First Committee on Disarmament and International Security.
According to ComputerWorld, "13 countries, including Canada, Egypt, the U.S., the U.K., India, Ireland and S. Korea were represented at the U.N. meeting on Monday." Some of them—including France, Egypt, and Pakistan—have issued statements of support. But statements of support don't stop drones from firing a halo of missiles into a hillside community.
Still, it was perhaps the most significant achievement of a movement that's been trying to get the world's attention all year. First, organizers staged a well-choreographed demonstration—mock killer robot and all—in London back in the winter.
More recently, organizers collected the signatures of 272 preeminent scientists and technologists from 37 countries on a statement calling for an explicit ban on lethal autonomous warfare.
And Human Rights Watch issued an updated report to coincide with the meeting at the UN. It argues (again) that "artificial intelligence (AI) and other technologies will soon make possible the development of fully autonomous weapons, which would revolutionize the way wars are fought." And probably not for the better.
"If one nation acquires these weapons, others may feel they have to follow suit to avoid falling behind in a robotic arms race," the Stop the Killer Robots campaign writes. "Furthermore, the potential deployment and use of such weapons raises serious concerns about protecting civilians during armed conflict."
Think of those 900 innocent civilian casualties again—hundreds dead, written off as unacknowledged collateral damage. Mostly because the US military deemed it less risky to have a robot fire missiles at a distance than to send in real flesh-and-blood troops. Killer robot advocates (aka the defense industry and militaries) argue that this sort of thing saves lives. With robots, soldiers will experience fewer messy ground campaigns and fewer firefights. But tell that to the thousands of Pakistani and Afghan dead, injured, or widowed by steely flying robot Predators.
And the strikes keep coming. Four were killed in another Pakistan drone strike just weeks ago. They are killing grandmothers, children, day laborers in the wrong place at the wrong time. Again, they're not autonomous. But they're getting closer. The Amnesty International and Human Rights Watch reports are another reminder that we've become complacent with the use of unmanned robotic warfare; complacent enough that the release of both probably won't affect the military's drone policy much.
There are already plenty of robotic technologies that could grant militaries the capacity to kill autonomously. In fact, they're out there right now. At this point, it's merely a matter of ethics. As such, autonomous warbots could end up going the way of chemical weapons or of remotely piloted weapons. One has an international ban that is more or less upheld by the United Nations; the other is something we sort of shake our heads and shrug at on TV. But the trajectory seems to be pointed toward more of the same: Robots are doing more of the killing than ever, even if they're not making snap decisions themselves yet.
"Because of these concerns, fully autonomous weapons should be prohibited before it is too late to change course," the Stop the Killer Robots campaign argues. "Nations should agree that any decision to use lethal force against a human being should be made by a human being."
In terms of hard numbers—the body count in the drone wars ticks ever upward—the campaign to stop killer robots is not going well. Yet the UN meetings and attendant side events were packed. These groups are indeed raising awareness and winning news cycles. Only time will tell if it's enough to stop the rise of an era where our war machines decide to kill for us.