Las Vegas Cops Used ‘Unsuitable’ Facial Recognition Photos To Make Arrests

Records obtained by Motherboard show the police department used sub-par images in almost half of its facial recognition searches, increasing the chance of misidentifying suspects.
Despite its growing reach and sophistication, facial recognition remains a fickle technology. It works best when fed a frontal shot of the subject’s head, in good lighting, taken from a couple of feet away.

When deployed for police surveillance and investigations, as they often are, facial recognition systems rarely have the luxury of such easy starting points. The search material—known as probe images—is more often taken from grainy CCTV footage, and suspects have a tendency to wear hats and not look directly into the camera.

As new records about one popular police facial recognition system show, the quality of the probe image dramatically affects the likelihood that the system will return probable matches. But that doesn’t mean police don’t use bad pictures anyway. According to documents obtained by Motherboard, the Las Vegas Metropolitan Police Department (LVMPD) used “non-suitable” probe images in almost half of all the facial recognition searches it made last year, greatly increasing the chances the system would falsely identify suspects, facial recognition researchers said.

In 2019, the LVMPD conducted 924 facial recognition searches using the system it purchased from Vigilant Solutions, according to data obtained by Motherboard through a public records request. Vigilant Solutions—which also leases its massive license plate reader database to federal agencies—was bought last year by Motorola Solutions for $445 million. 

Of those searches, 471 were done using images the department deemed “suitable,” and they resulted in matches with at least one “likely positive candidate” 67% of the time. But 451 searches, nearly half, were run on “non-suitable” probe images. Those searches returned likely positive matches—which could mean anywhere from one to 20 or more mugshots, all with varying confidence scores assigned by the system—only 18% of the time. 

“Using subpar images raises the risk that people will be falsely identified, people will be falsely arrested, and there will be potentially life changing charges for crimes they didn’t commit,” Matt Cagle, a technology and civil liberties attorney with the ACLU of Northern California, told Motherboard. “This is potentially scandalous. The proportion of non-suitable image searches here is shocking.”

From January through November 2019, LVMPD used non-suitable probe images to identify suspects in at least 73 cases, according to an internal slideshow also obtained by Motherboard.

In a statement, LVMPD emphasized that its investigators use facial recognition results as a lead, not a definitive identification, and that trained examiners also perform their own analyses of potential matches without the assistance of software. Those human analyses took, on average, 50 minutes each in 2019, a department spokesman said. “Some ‘unsuitable’ images will not be recognized by the computer system and will not return any possible matches. However, the system may produce possible matches on an ‘unsuitable’ probe image. The subsequent investigation of a likely candidate may reveal other evidence.”

When LVMPD examiners encounter a non-suitable probe image, department policies say they should first run a regular search with that image. If that returns no likely matches, they are instructed to apply filters to the image and search again. Vigilant Solutions’ facial recognition system comes with a variety of photo editing and graphic design filters that allow investigators to edit poor quality images in order to return more results. Any possible matches are then sent to the detective or officer who requested the search for further investigation.

“If they have a non-suitable probe photo, they should reject the search. They should not even run the search in the first place,” Clare Garvie, a senior associate at the Georgetown Law Center on Privacy and Technology, told Motherboard. Searches conducted using altered images “should be considered incredibly unreliable.”

Machine learning experts have criticized law enforcement’s use of facial recognition even under ideal conditions. Facial recognition systems have consistently been shown to exhibit racial bias, misidentifying people of color and women at far higher rates. Portland, Maine recently became the thirteenth US city to ban facial recognition, and a bill moving through Congress would forbid the technology’s use by federal law enforcement agencies. Privacy advocates have further argued that the technology shouldn’t be allowed to exist at all.

In its statement, the LVMPD acknowledged its investigators have made arrests following investigations that included, at least in part, a facial recognition search based on a non-suitable probe image. As an example of how the process works, an LVMPD spokesman said detectives investigating the sexual assault of a juvenile were able to make an arrest the same night the crime was reported with the help of an emergency facial recognition request. The spokesman did not provide any details about the incident—such as the name of the person arrested, the location, or date—that could help corroborate the account.

The internal LVMPD slideshow cites several other cases in which its facial recognition system assisted in arrests, also without providing details necessary to identify the incidents—with the exception of a September 2019 homicide, for which LVMPD arrested then-27-year-old Alexander Buzz. LVMPD redacted all the probe images, including the one it used to identify Buzz, in the slideshow before sending a copy to Motherboard.

Clark Patrick, the Las Vegas attorney representing Buzz, told Motherboard that neither the LVMPD nor the Clark County District Attorney’s office ever informed him that investigators identified Buzz as a suspect using, at least in part, facial recognition technology. The Clark County District Attorney’s office did not respond to an interview request or written questions.

The surveillance video that the district attorney’s office intends to use as evidence in Buzz’s upcoming trial is fairly good quality, Patrick said. But he would have altered his defense strategy had prosecutors informed him of the role Vigilant Solutions’ technology played in his client’s arrest. For example, he might have chosen not to waive Buzz’s right to a preliminary hearing, during which prosecutors could have been obligated to disclose further details about the probe image.

In June, the New York Times reported on what appears to be the first documented case of police making a false arrest based on the results of a facial recognition search. There are likely more cases like it, Cagle said, but they haven’t come to light because police and prosecutors don’t reveal when or how they used facial recognition technology to defendants and their attorneys. “The public deserves to know, and criminal defendants need to know, whether people are being arrested because of the flawed use of this flawed technology.”