Over 1,000 AI Experts Condemn Racist Algorithms That Claim to Predict Crime

Over 1,000 technologists and scholars are speaking out against algorithms that attempt to predict crime based solely on a person’s face, saying that publishing such studies reinforces pre-existing racial bias in the criminal justice system.

The public letter has been signed by academics and AI experts from Harvard, MIT, Google, and Microsoft, and calls on the publishing company Springer to halt the publication of an upcoming paper. The paper describes a system that the authors claim can predict whether someone will commit a crime based solely on a picture of their face, with “80 percent accuracy” and “no racial bias.”

“There is simply no way to develop a system that can predict ‘criminality’ that is not racially biased, because criminal justice data is inherently racist,” wrote Audrey Beard, one of the letter’s organizers, in an emailed statement. The letter calls on Springer to retract the paper from publication in Springer Nature, release a statement condemning the use of these methods, and commit to not publish similar studies in the future.

This is not the first time AI researchers have made these dubious claims. Machine learning researchers roundly condemned a similar paper released in 2017, whose authors claimed the ability to predict future criminal behavior by training an algorithm with the faces of people previously convicted of crimes. As experts noted at the time, this merely creates a feedback loop that justifies further targeting of marginalized groups that are already disproportionately policed.

“As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system,” the letter states. “These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences […] Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”

The letter is being released as protests against systemic racism and police violence continue across the US, following the deaths of Breonna Taylor, George Floyd, Tony McDade, and other Black people killed by police. The technologists describe these biased algorithms as part of a “tech-to-prison pipeline,” which enables law enforcement to justify discrimination and violence against marginalized communities behind the veneer of “objective” algorithmic systems.

The worldwide uprisings have revived scrutiny of algorithmic policing technologies such as facial recognition. Earlier this month, IBM announced it would no longer develop or sell facial recognition systems for use by law enforcement. Amazon followed by placing a one-year moratorium on police use of its own facial recognition system, Rekognition. Motherboard asked an additional 45 companies whether they would stop selling the technology to cops, and received mostly non-responses.

Update: This article was updated to clarify that the paper in question was slated to appear in a book series published by Springer Nature, and not in Nature, the scientific journal also published by Springer Nature.

Update 2: In a message sent to Motherboard, a representative from Springer Nature confirmed that the paper in question has been rejected.