Detroit police have used highly unreliable facial recognition technology almost exclusively against Black people so far in 2020, according to the Detroit Police Department’s own statistics. The department’s use of the technology gained national attention last week after the American Civil Liberties Union and New York Times brought to light the case of Robert Julian-Borchak Williams, a man who was wrongfully arrested because of the technology.
In a public meeting Monday, Detroit Police Chief James Craig admitted that the technology, developed by a company called DataWorks Plus, almost never brings back a direct match and almost always misidentifies people.
“If we would use the software only [to identify subjects], we would not solve the case 95-97 percent of the time,” Craig said. “That’s if we relied totally on the software, which would be against our current policy … If we were just to use the technology by itself, to identify someone, I would say 96 percent of the time it would misidentify.”
Todd Pastorini, a general manager at DataWorks Plus, told Motherboard that it does not keep statistics on the software's accuracy in real-world use, and it does not specifically instruct law enforcement how to use the software.
"There's no statistics for that," Pastorini said. "The matter is the quality of the probes used. I’m very reluctant based on the last New York Times article I was misquoted or slightly misrepresented based on the context that was used. You might know how a shovel works—you stick it in the ground to pick up dirt and you might use it as a weapon. Facial recognition has been weaponized by the media to some degree. I understand the chief’s comment, but unfortunately many people don’t."
Pastorini likened DataWorks Plus' software to automated fingerprint identification systems, where dozens or hundreds of potential matches are returned. It "does not bring back a single candidate," he said. "It's hundreds. They are weighted just like a fingerprint system based on the probe [and what's in the database]."
The result of this, according to Detroit's own police officers, is that they are ultimately making the decision to question and investigate people based on what the software returns and a detective's judgment. This means that people who may have had nothing to do with a crime are ultimately questioned and investigated by police. In Detroit, this means, almost exclusively, Black people.
So far this year (through June 22), the technology has been used 70 times, according to data publicly released by the Detroit Police Department. In 68 of those cases, the photo fed into the software was of a Black person; in the other two cases, the race was listed as 'U,' which likely means unidentified (in other police reports, U stands for unidentified); the Detroit Police Department did not respond to a request for clarification. These photos were largely pulled from social media (31 of 70 cases) or a security camera (18 of 70 cases).
Several cities have banned police from using facial recognition software, which has well-known racial bias issues (and many false-positive issues as well). Detroit, however, had a very public debate in 2019 about the use of facial recognition, and decided to regulate its use rather than ban it altogether. Late last year, the city adopted a policy that bans the use of facial recognition to “surveil the public through any camera or video device,” bans its use on livestreamed and recorded videos, and restricts (but does not ban) its use at protests. According to the policy, the software may be used only “on a still image of an individual,” and only as part of an ongoing criminal investigation. The software checks images against a state database of photos, which includes mugshots. As part of these regulations, the police department is required to release weekly reports about its use of the technology; those reports show that it has been used almost exclusively on Black people.
Williams was arrested before the policy went into effect. Craig said during the meeting that the media the department ran through DataWorks’ facial recognition system was “a horrible video. It was grainy … it would have never made it under the new policy … if we can’t obtain a good picture, we’re not going to push it through to the detective.”
Craig and his colleague, Captain Aric Tosqui, said that they want to continue using facial recognition because it can be a tool to assist investigators even if it doesn’t often lead to arrest. But even when someone isn’t falsely arrested, their misidentification through facial recognition can lead to an investigator questioning them, which is an inconvenience at best and a potentially deadly situation at worst. According to Tosqui, the technology has been used in a total of 185 cases over the years. “The majority of the cases the detective reported back that [the match] was not useful.”
Despite these problems, DataWorks Plus said that it does not guide law enforcement on how to best use the software. "We don't tell our customers how to use the system," Pastorini said. "There’s already law enforcement policies. It is my experience the clearer the image, clearly is going to affect the likelihood of a more solid result."
The Detroit Police Department did not respond to a request for further comment. In recent months, there has been a new movement by city council members to ban the use of the technology.
Jordan Pearson contributed reporting.