Or Maybe We Shouldn't Ban Killer Robots

We should probably be arguing about war itself and not hypothetical future war technologies.
Image: Sgt. Sarah Dietz, US Marine Corps

Earlier this week, a couple thousand or so scientists, technologists, and enthusiasts, led by astrophysicist Stephen Hawking and noted AI doomsayer Elon Musk, signed a letter calling for a ban on evil robots. Or, rather, "offensive autonomous weapons." Because of course they did: It's a ban on killer robots.

Great. Glad we solved that.

Someone had to make the case that this is all pretty stupid, if not in those words, and that person is IEEE Spectrum editor Evan Ackerman. Ackerman argues that, no, we shouldn't ban killer robots. For one thing, there is no such thing as a ban on killer robots, or at least not one that would ever mean anything to anyone who might be interested in building killer robots, whether for good, evil, or money. Unlike with, say, chemical or biological weapons (which are banned), there are virtually no barriers to building killer robots.

That is, the technology is cheap and plentiful and can be advanced, manufactured, and distributed using the most unassuming of materials. What's more, designing killer robots looks a whole lot like, well, designing robots. As Ackerman writes:

There's simply too much commercial value in creating quadcopters (and other robots) that have longer endurance, more autonomy, bigger payloads, and everything else that you'd also want in a military system. And at this point, it's entirely possible that small commercial quadcopters are just as advanced as (and way cheaper than) small military quadcopters, anyway. We're not going to stop that research, though, because everybody wants delivery drones (among other things). Generally speaking, technology itself is not inherently good or bad: it's what we choose to do with it that's good or bad, and you can't just cover your eyes and start screaming "STOP!!!" if you see something sinister on the horizon when there's so much simultaneous potential for positive progress.

Fine, but the futility of a ban isn't really an argument against a ban. The better argument has to do with the potential of armed autonomous robots to make war safer. That is, algorithms don't make the same mistakes as human soldiers, and/or aren't subject to the same "fog of war." We can be assured of a robot's judgment because a robot has no judgment aside from the one we give it. Or, rather, we can at some point in the future be assured of this, because the technology doesn't really exist yet. We're talking about an ideal, exhaustively tested future robot. Which, to be clear, is the robot that we're banning.

Ackerman continues:

I do agree that there is a potential risk with autonomous weapons of making it easier to decide to use force. But, that's been true ever since someone realized that they could throw a rock at someone else instead of walking up and punching them. There's been continual development of technologies that allow us to engage our enemies while minimizing our own risk, and what with the ballistic and cruise missiles that we've had for the last half century, we've got that pretty well figured out. If you want to argue that autonomous drones or armed ground robots will lower the bar even farther, then okay, but it's a pretty low bar as is. And fundamentally, you're then placing the blame on technology, not the people deciding how to use the technology.

And that's the point that I keep coming back to on this: blaming technology for the decisions that we make involving it is at best counterproductive and at worst nonsensical. Any technology can be used for evil, and many technologies that were developed to kill people are now responsible for some of our greatest achievements, from harnessing nuclear power to riding a ballistic missile into space. If you want to make the argument that this is really about the decision to use the technology, not the technology itself, then that's awesome. I'm totally with you. But banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil: we'd need a much bigger petition for that.

Someone had to make the rebuttal, and I'm glad they did. Since we're taking stands on hypothetical future technologies, I'll just assume a hypothetical future where we can have it both ways, or something a lot like both ways. If we're programming AI to differentiate between civilians and combatants (etc.), I think we can employ that same AI to aid human soldiers in making that same differentiation, which is an idea we're already approaching with smart weapons. But any of this only ever works if the programmers of those AIs (or those who direct them) and the soldiers they're assisting (or those who command them) care about not killing civilians in the first place. That's hardly a given.

The argument is ultimately about war and not technology. Believe it or not, the world is becoming a more humane and, on average, less war-friendly place. It's natural that we'd reach this peculiar state in which we try to imagine war as a "humane" or "safe" thing, no matter that just 70 or so years ago the pillars of Western civilization were in a race to firebomb each other's cities. We don't want war anymore, but we don't know what else there is yet.

And so we argue about robots.