The Legal and Ethical Ramifications of Letting Police Kill Suspects With Robots

Dallas police used a bomb robot to kill a suspect in what was "essentially a jury-rigged version of a drone strike"—where do we go from here?
July 9, 2016, 1:30pm

On Thursday night, after a shocking outbreak of violence in which five police officers were killed by snipers, Dallas police took the unprecedented step of using a remote-controlled "bomb robot" to kill one of the suspects. Now that law enforcement in America has killed a suspect remotely, it's important to consider the legality of the decision—and what might happen next time.

State laws generally allow law enforcement to legally use lethal force against a suspect if he or she poses an "imminent threat" to the officer or other innocent parties, subject to the standard that the force be "proportional and necessary." A 1985 Supreme Court case called Tennessee v. Garner allows for deadly force if a fleeing suspect poses "a significant threat of death or serious physical injury to the officer or others."

Does the means of killing matter for that legal standard? In this case, probably not, according to several legal experts I spoke to. The bomb disposal robot that turned into an improvised remotely triggered killing machine wasn't autonomous and can, in this instance, be looked at as a tool that was used to diminish the threat suspect Micah Johnson posed to Dallas police officers.

"It might be justified to use remotely controlled robots to apply lethal force where such force is justified," Jay Stanley, a senior legal analyst at the American Civil Liberties Union, told me. "As a legal matter, the choice of weapon in a decision to use lethal force does not change the constitutional calculus, which hinges on whether an individual poses an imminent threat to others, and whether the use of lethal force is reasonable under the circumstances."

"It is essentially a jury-rigged version of a drone strike"

Dallas police chief David Brown told reporters Friday that the force "saw no other option than to use our bomb robot" to kill Johnson, and said that prior to using the bomb, Johnson and officers on the force exchanged gunfire.

"It is essentially a jury-rigged version of a drone strike," Ryan Calo, a University of Washington School of Law professor specializing in cyber and robotic law, told me. "If they would have been justified in throwing a grenade, then they're likely justified in doing this, which was quite frankly a creative thing."

The fact remains, however, that law enforcement using a robot to remotely kill a suspect on American soil without a trial is widely believed to be without precedent in US history, and it ushers in a new era of policing. I consulted four top technology lawyers: Calo; Stanley; Elizabeth Joh of UC Davis School of Law; and Ian Kerr, Canada Research Chair in Ethics, Law and Technology at the University of Ottawa. Each said that remote killing in the United States raises a host of new questions and scenarios.

Dallas police have "reconfigured the realm of the possible"

There is a long history of new technologies being used for the first time in extreme cases and slowly being normalized over time. Brown said Dallas had "no other option" Thursday night, but a tactic that was once reserved as a last resort can quickly become a first or second option if it's effective and protects officers' lives.

There will be a temptation to weaponize these machines so as to potentially reduce risk to officers

"There will be a temptation by some people to reduce this down to a functional equivalence—'If a police officer could justifiably go in and if it was deemed necessary to kill the assailant, why couldn't a robot controlled by a police officer do it?'" Kerr, who edited the book Robot Law with Calo, told me. "There will be a temptation to weaponize these machines so as to potentially reduce risk to officers, but what it does in the long run is it changes the calculus about decisions such as when to use lethal force in hostage situations."

"There's a road we're starting to go down here … by taking a robot originally designed to disarm bombs and using it to blow people up, the Dallas Police end up reconfiguring the realm of what is possible," he continued. "And, as we have seen by their response, expanding the arsenal of possibility in this way makes it easier to recalibrate the calculus regarding which actions are necessary. Very quickly the argument moves from 'we can use a robot to blow him up' to 'we saw no other option but to use our bomb robot.'"

In the past, we've seen standard mission creep with most technologies that law enforcement use—surveillance tools that were developed to be used against terrorists and violent offenders have been turned on innocent civilians, for example.

"Because ground robots may allow deadly force to be applied more safely and easily, they raise the danger that they will be overused," Stanley said. "When things get easier to do, they tend to be done too much. Remote uses of force raise policy issues that should be carefully considered and addressed by our society as technology advances and should remain confined to extraordinary situations."

"Once you send a robot in, compared to a human, you're no longer thinking about it as a human process of negotiation"

Telepresence and killing

Remotely interacting with a suspect—whether the robot is negotiating with or using force against him or her—fundamentally changes the interaction. This is of course part of the point; by removing human officers from the area under threat, you're removing their risk. But responding to rapidly changing, volatile scenarios is something that human police officers are trained to do in person. A police officer can elect to shoot a suspect, or tase them, or negotiate with them, or otherwise subdue them. The robot used Thursday had essentially two remote-controlled options: detonate or don't detonate.

"Once you send a robot in, compared to a human, you're no longer thinking about it as a human process of negotiation," Kerr said. "A human can think—is this guy going to lay down arms or not? This can only really be done once they've decided this is far past the point of negotiation."

Legally speaking, courts have in the past taken into account whether or not a human is actually present to make decisions about things such as ownership. Case law on the subject is all over the place, but for example, a 1990 court case established "telepossession" standards for the robotic exploration of shipwrecks.

"Aggressive use of robots and telepresence can cause legal changes," Calo said.

Robots can dehumanize killing

Kerr works with the Campaign to Stop Killer Robots, an organization that lobbies groups like the United Nations to prevent a future where autonomous killing machines are used in war. There was nothing autonomous about the robot that killed Johnson, but, then again, there's not much autonomous about a drone strike in the Middle East, either. In both cases, however, the act of killing is spread out among multiple people along a chain of command.

"How do we ensure that such robots aren't taken over by third parties?"

"With a drone strike, you've distributed the decision-making process, so you have intel on the ground, a commander somewhere else, and a drone pilot," Kerr said. "That is in a sense dehumanizing because it makes it easier for each person to play their role without taking into account that we're making decisions about human lives."

What are the guidelines for remotely killing a human?

Chief Brown said Dallas police "placed a device on [the robot's] extension" that later exploded, suggesting that the robot was improvised on the fly. Given the extreme circumstances, it's unlikely that the Dallas police department has published guidelines about when it's OK to remotely kill a suspect (we've asked and will update if we hear back). Law enforcement will inevitably need such guidelines.

"With any new technology that the police use, what precautions should be taken to make sure that things don't go badly wrong? The Dallas robot appears to have been a jury-rigged one," Joh told me. "But if police robots become part of the future, how do we ensure that such robots aren't taken over by third parties? The current landscape of easily hacked devices isn't very assuring in this regard."

What does the future hold?

Many people in the robotics industry say that robots do "dull, dirty, and dangerous work" that humans shouldn't have to do. Hostage standoffs, shootouts, and active shooter situations are without question dangerous. If the technology exists to allow police officers to do their jobs more safely, why wouldn't they use robots?

The most important consideration in a situation in which a robot could be used, Kerr says, is whether a human police officer would kill a suspect if the robot weren't available.

"The question is, what would happen if it was not possible to send a robot? Would we have still sent in a human?" he said. "If the answer is different, then it has reframed everything."