University Deletes Press Release Promising ‘Bias-Free’ Criminal-Detecting Algorithm

Researchers in Pennsylvania—including a former NYPD cop—are the latest to dubiously claim they can predict future crime based on a person's face.
Janus Rose
New York, US

A university in Pennsylvania has deleted a press release claiming its researchers had developed a system to “detect criminality” from human facial features without the racial bias that has plagued other forms of predictive policing and machine learning technology adopted by law enforcement around the world.

According to the now-deleted news release, Harrisburg University researchers, including a former NYPD police officer, claim to have made an algorithm “with 80 percent accuracy and with no racial bias” that is able to “predict if someone is a criminal based solely on a picture of their face.”

“Crime is one of the most prominent issues in modern society,” said Jonathan W. Korn, a Ph.D. student and former NYPD officer, in a quote from the deleted post. “The development of machines that are capable of performing cognitive tasks, such as identifying the criminality of [a] person from their facial image, will enable a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime from occurring in their designated areas.”

After the dubious claims attracted ridicule on Twitter, the university deleted its tweet promoting the post, and the news release itself disappeared shortly afterward.

“The post/tweet was taken down until we have time to draft a release with details about the research which will address the concerns raised,” Nathaniel J.S. Ashby, one of the Harrisburg researchers, told Motherboard in an email. The school later made a follow-up post saying that the news release was taken down “at the request of the faculty involved in the research,” but did not address any of the criticism it had received.

The problem of bias in machine learning has been well-documented, and claims of “bias-free” algorithms have been thoroughly debunked. In 2016, a group of Chinese researchers made similarly outrageous claims, creating a face-scanning system that supposedly detected whether people would commit crimes. But as machine learning experts noted at the time, the facial data used to train the system was taken from incarcerated people—meaning the algorithm was simply regurgitating the biases of the criminal justice system, creating a feedback loop to determine what criminals look like.
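That feedback loop is easy to reproduce. Below is a minimal, hypothetical simulation in Python; it is not the Harrisburg or Chinese researchers' code, and the group sizes, offense rates, and enforcement rates are invented for illustration. Two groups offend at exactly the same rate, but one is policed twice as heavily, so its members end up labeled “criminal” more often. A standard classifier trained on those arrest-based labels then scores the more heavily policed group as roughly twice as “criminal,” even though the underlying behavior is identical by construction.

```python
# Hypothetical illustration of bias inherited from arrest-based labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two groups with identical 10% offense rates, but group 1 is policed twice as heavily.
group = rng.integers(0, 2, n)
offended = rng.random(n) < 0.10
caught_prob = np.where(group == 1, 0.6, 0.3)
arrested = offended & (rng.random(n) < caught_prob)  # this becomes the "criminal" label

# The model only sees a proxy feature correlated with group membership
# (standing in here for facial traits correlated with demographics).
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)

print(model.predict_proba([[0], [1]])[:, 1])
# Group 1 is scored roughly twice as "criminal" as group 0,
# even though both groups offend at the same rate by construction.
```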

The deleted post is part of a disturbing trend of researchers using supposedly “objective” algorithms to promote flawed and discriminatory technology that's been shown time after time to reflect the biases of the people developing it and the data it uses. While it’s not yet clear what clarifications the researchers plan to make, it’s obvious that these kinds of pseudoscientific claims don't stand up to scrutiny.

This article was updated to include a follow-up statement from Harrisburg University.