Amazon Puts 1-Year Ban on Police Use of its Biased Facial Recognition Software

"Rekognition" has been shown time and time again to be ineffective and biased. Rather than shelving it altogether, Amazon is putting a one-year moratorium on police use of it.

Amazon announced that it is placing a one-year moratorium on police use of Rekognition, its facial recognition software that has repeatedly been shown to be biased against Black and brown people.

The move comes during widespread Black Lives Matter protests after the police killing of George Floyd, and after IBM announced that it would stop development of facial recognition software and condemned racially biased surveillance software.


“We’re implementing a one-year moratorium on police use of Amazon’s facial recognition technology. We will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families," Amazon wrote. "We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested."

It's notable that rather than fix its notorious technology, or discontinue it, Amazon is instead passing the buck to Congress here. It is also notable that Amazon does not explicitly mention the racial bias of Rekognition, which is well documented, nor does it mention George Floyd, the protests, Black Lives Matter, or why it's putting this moratorium in place. Rekognition was in use by a handful of police departments.

Of course, Amazon still owns Ring, a home surveillance company that has partnered with more than a thousand police departments and sells its product by stoking fear of one's neighbors. Its associated app, called "Neighbors," has a well-documented racism problem. In addition, Amazon has dodged questions regarding whether ICE has licensed Rekognition, and its statement makes no mention of the immigration agency or other federal agencies. Amazon also provides the technical backbone for ICE, hosting data for the agency and its surveillance partners such as Palantir.

Even considering that all facial recognition software has implicit bias, and has been deployed disproportionately against Black and brown people, Rekognition is still an outlier in terms of just how bad it is, and how insidiously Amazon marketed it.

Last year, Amazon claimed that Rekognition could detect “fear” and other emotions, despite widespread skepticism from experts over such claims by technology companies. It has pitched the tech to ICE, and its own employees have publicly protested it. In tests done by the ACLU, it misidentified one in five California lawmakers. In a test on the U.S. Congress, it matched 28 lawmakers with mugshots that weren't actually them; the false matches disproportionately flagged Congress members of color as criminals.

Training slides for Rekognition obtained by the ACLU and first published by Motherboard in 2018 matched a photo of O.J. Simpson with the mugshot of a white man. In emails with police published at the time, Amazon officials joked about making police sign non-disclosure agreements about the use of the technology, badgered police who didn’t immediately sign NDAs, and edited police blog posts about the use of the technology to make them more favorable to Amazon.

So, while it’s good that Amazon will stop letting police use its dystopian, ineffective facial recognition for a year, we probably shouldn’t hold our breath waiting for the company to ultimately do the right thing and discontinue police use of it indefinitely.