UCLA Abandons Plans to Use Facial Recognition After Backlash

“Let this be a lesson to other school administrators: if you try to experiment on your campus with racist, invasive surveillance technology, we will come for you. And we don't lose."
Ahead of a national day of action led by digital rights group Fight for the Future, UCLA has abandoned its plans to become the first university in the United States to adopt facial recognition technology.

In a statement shared with Fight for the Future's Deputy Director Evan Greer, UCLA’s Administrative Vice Chancellor Michael Beck said the university "determined that the potential benefits are limited and are vastly outweighed by the concerns of the campus community."

Since last year, UCLA has been considering using the university's security cameras to implement a facial recognition surveillance system. These plans have been dogged by student criticism, culminating in an editorial in the Daily Bruin, UCLA’s student newspaper, that argued the system would "present a major breach of students' privacy" while creating "a more hostile campus environment" by "collecting invasive amounts of data on [UCLA's] population of over 45,000 students and 50,000 employees.”

In an attempt to highlight the risks of using facial recognition on UCLA's campus, Fight for the Future used Amazon's facial recognition software, Rekognition, to scan public photos of UCLA's athletes and faculty, then compared the photos to a mugshot database. Of the more than 400 photos scanned, 58 were falsely matched to mugshot images—the software often returned matches with "100% confidence" for individuals "who had almost nothing in common beyond their race."

These results square with what we have witnessed time and time again: predictive algorithms are riddled with racial and gender bias. Whether it's the federal government's own National Institute of Standards and Technology examining facial recognition algorithms, independent researchers looking at healthcare algorithms, or lawmakers raising the alarm about discrimination via housing algorithms, the outcomes are so troubling that even members of Congress are proposing to ban the technology.

"UCLA thought they could get away with this. They even claimed our campaign was misinformation. When we made it clear we weren't going to back down, they folded like a tent,” Greer told Motherboard. “Let this be a lesson to other school administrators: if you try to experiment on your campus with racist, invasive surveillance technology, we will come for you. And we don't lose."