Researchers Want to Protect Your Selfies From Facial Recognition

'Fawkes' may be the most advanced system yet for fooling facial recognition tech like Clearview AI—until the algorithms catch up.

Researchers have created what may be the most advanced system yet for tricking top-of-the-line facial recognition algorithms, subtly modifying images to make faces and other objects unrecognizable to machines.

The program, developed by researchers from the University of Chicago, builds on previous work from a group of Google researchers exploring how deep neural networks learn. In 2014, they released a paper showing that “imperceptible perturbations” in a picture could force state-of-the-art recognition algorithms to misclassify an image. Their paper led to an explosion of research in a new field: the subversion of image recognition systems through adversarial attacks.
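The core trick behind these adversarial attacks is easy to sketch. The snippet below is a minimal, hypothetical illustration using the fast gradient sign method, a simple technique from the same line of research rather than the 2014 paper's exact approach: it nudges every pixel a tiny step in the direction that most increases a classifier's error, which is often enough to flip the prediction while the change stays invisible to a person. The `model`, `image`, and `true_label` inputs are placeholders for any differentiable classifier and its input.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.005):
    """Fast gradient sign method: nudge each pixel a small step in the
    direction that increases the classifier's loss on the true label."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # A per-pixel change of a fraction of a percent is imperceptible to a
    # person but is often enough to change the model's predicted class.
    return (image + epsilon * image.grad.sign()).detach()
```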

The work has taken on new urgency with the widespread adoption of facial recognition technology and revelations that companies like Clearview AI are scraping social media sites to build massive face databases on which they train algorithms that are then sold to police, department stores, and sports leagues.

The new program—named “Fawkes,” after Guy Fawkes, the infamous revolutionary whose face adorns the ubiquitous protest masks of Anonymous—“cloaks” a picture by subtly altering a small number of pixels in the image. While the changes are imperceptible to the human eye, if the cloaked photo is used to train an algorithm—by being scraped from social media, for example—it will cause the facial recognition system to misclassify an image of the person in question. Fawkes-cloaked images successfully fooled Amazon, Microsoft, and Megvii recognition systems 100 percent of the time in tests, the researchers reported in a new paper.
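Conceptually, this kind of cloaking can be thought of as an optimization problem: find a small, bounded perturbation that pushes a photo's embedding inside a face-recognition feature extractor toward a different identity, so that a model trained on the cloaked photo learns the wrong features for that person. The sketch below illustrates only that general idea; `feature_extractor` and `target_embedding` are assumed placeholders, and the authors' actual method likely applies tighter perceptual constraints than the simple pixel clamp used here.

```python
import torch

def cloak(image, feature_extractor, target_embedding,
          steps=100, lr=0.01, budget=0.03):
    """Illustrative cloaking loop: optimize a small perturbation so the
    photo's feature-space embedding drifts toward a decoy identity."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = feature_extractor(image + delta)
        # Pull the cloaked photo's embedding toward the decoy identity.
        loss = torch.norm(emb - target_embedding)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep each pixel change tiny so the edit stays imperceptible.
        delta.data.clamp_(-budget, budget)
    return (image + delta).detach()
```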

Amazon did not respond to a request for comment. Microsoft and Megvii declined interview requests.

“[Fawkes] allows individuals to inoculate themselves against unauthorized facial recognition models at any time without significant[ly] distorting their own photos, or wearing conspicuous patches,” according to the paper. Shawn Shan, one of the lead authors, said the team could not comment further on the research because it is undergoing peer review.

The team acknowledged that Fawkes is far from a perfect solution. It relies on a recognition system being trained on cloaked images, but most people already have dozens, if not hundreds, of photos of themselves posted online that the systems could have already drawn from. Fawkes’ success rate fell to 39 percent when fewer than 85 percent of a person’s photos in an algorithm’s training set were cloaked, according to the paper.

“It’s a very, very nice idea, but in terms of the practical implications or widespread use it’s not clear how widely it will be adopted,” Anil Jain, a biometrics and machine learning professor at Michigan State University, told Motherboard. “And if it is widely adopted, then the face recognition systems will do some modifications to their algorithms so it will be avoided.”

There is a definite, and growing, demand for anti-surveillance tools that can defeat image recognition, as demonstrated by the wide range of glasses, hats, t-shirts, and patterns created for the purpose. And apart from its general creepiness, facial recognition poses a direct physical threat to activists, minority groups, and sex workers, often harming them in unexpected ways.

Liara Roux, a sex worker and activist, told Motherboard that porn performers have faced crackdowns from companies that use facial recognition for identity verification when they try to use those services under their real names for purposes unrelated to their professional work.

“A lot of performers are having this issue where they’re being denied Airbnb rentals,” they said. “It’s such a pervasive thing for people in the industry, to kind of out of the blue have their accounts shut down.”

Fawkes is not the first attempt at an anti-facial recognition tool for the public. In 2018, a group of engineers and technologists working in Harvard’s Berkman Klein Center for Internet and Society published EqualAIs, an online tool that deployed a similar technique. The method of subtly altering inputs, like photos, to subvert a deep neural network’s training is known as a poison attack.
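The effect of a poison attack shows up at training time rather than at query time: the system ingests altered photos labeled with a person's name and learns a template that no longer matches their real face. The toy example below, with made-up two-dimensional “embeddings” and a nearest-template matcher, is purely illustrative and unrelated to how EqualAIs or Fawkes are implemented; it only shows why a clean photo then fails to match.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: face "embeddings" are 2-D points, an identity is a template.
victim_clean   = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(20, 2))   # real photos
decoy_identity = np.array([3.0, 3.0])                                  # another face
victim_cloaked = victim_clean + 0.9 * (decoy_identity - victim_clean)  # poisoned photos

# The scraper trains on the cloaked photos under the victim's name, so the
# learned template lands near the decoy identity instead of the victim.
learned_template = victim_cloaked.mean(axis=0)

# A clean photo of the victim taken later no longer matches that template.
clean_query = np.array([0.05, -0.02])
print(np.linalg.norm(clean_query - learned_template))  # large distance -> no match
```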

“In theory, on a small scale, it should be a functioning shield,” Daniel Pedraza, the project manager for EqualAIs, told Motherboard. The creators, whose primary goal was to start a conversation over the power of biometric data, haven’t been updating the tool, so it may no longer be as effective as it was in 2018. “For us, the win was people talking about it. I think the issue is on the forefront of more people’s minds today than it was back in 2018, and it will continue to be so. The fact that people are learning the machines are fallible is super helpful.”

Fawkes isn’t available for public use yet, and if and when it is, experts say it likely won’t take long before facial recognition vendors develop a response. Most of the research into new poison attacks, or other forms of adversarial attacks against deep neural networks, is as focused on ways to detect and defeat the methods as it is on creating them.

Like many areas of cybersecurity, experts say facial recognition is poised to be a cat-and-mouse game.

“Sooner or later, you can imagine that as the level and sophistication of the defense increases, the level of the attackers also increases,” Battista Biggio, a professor in the Pattern Recognition and Application Lab at the University of Cagliari, in Italy, told Motherboard. “This is not something we are going to solve any time soon.”