AI-Fooling Glasses Could Be Good Enough to Trick Facial Recognition at Airports

Adversarial objects, for your face.
Image: On the left, actor Owen Wilson. On the right, a mystery man. (Carnegie Mellon University/UNC Chapel Hill)

In the not-too-distant future, we’ll have plenty of reasons to want to protect ourselves from facial recognition software. Even now, companies from Facebook to the NFL and Pornhub already use this technology to identify people, sometimes without their consent. Hell, even our lifelines, our precious phones, now use our own faces as passwords.

But as fast as this technology develops, machine learning researchers are working on ways to foil it. As described in a new study, researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill developed a robust, scalable, and inconspicuous way to fool facial recognition algorithms into misidentifying a person, or failing to recognize them at all.

This paper builds on the same group’s work from 2016, but the new attack is more robust and harder to spot: it works across a wide variety of poses and scenarios, and it doesn’t look too much like the person is wearing an AI-tricking device on their face. The glasses are also scalable: the researchers developed five pairs of adversarial glasses usable by roughly 90 percent of the population, as estimated with the Labeled Faces in the Wild dataset and Google’s FaceNet face recognition model used in the study.
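
To make the idea concrete (without claiming to reproduce the authors’ actual pipeline), here is a minimal sketch of how a glasses-shaped adversarial perturbation could be optimized against a face-embedding model. Everything in it, including the TinyFaceNet stand-in network, the random “face” image, and the rectangular glasses mask, is an assumption made for illustration:

```python
import torch
import torch.nn as nn

# Stand-in embedding network (an assumption for this sketch; the paper targets real
# face recognition models, not this toy).
class TinyFaceNet(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=1)

model = TinyFaceNet().eval()
for p in model.parameters():              # the attacker never changes the model itself
    p.requires_grad_(False)

face = torch.rand(1, 3, 112, 112)         # the wearer's face image (random stand-in)
glasses_mask = torch.zeros_like(face)     # 1 where the glasses frames sit on the face
glasses_mask[:, :, 30:45, 10:102] = 1.0   # crude rectangular "frames" for illustration

with torch.no_grad():
    own_embedding = model(face)           # the identity the system would normally match

# The adversarial perturbation lives only on the glasses pixels.
delta = torch.zeros_like(face, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)

for step in range(200):
    adv = torch.clamp(face + delta * glasses_mask, 0.0, 1.0)
    # Dodging objective: push the embedding away from the wearer's true identity.
    loss = -torch.norm(model(adv) - own_embedding, dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

adv = torch.clamp(face + delta * glasses_mask, 0.0, 1.0)
print("distance from own identity:", torch.norm(model(adv) - own_embedding).item())
```

The key design choice is the mask: the perturbation can take any color it likes, but only inside the glasses frames, which is what keeps the attack wearable and inconspicuous.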

The attack has gotten good enough at tricking the system that the researchers made a serious suggestion to the TSA: since facial recognition is already being used in high-security public places like airports, the agency should consider requiring people to remove physical artifacts, such as hats, jewelry, and of course eyeglasses, before facial recognition scans.

It’s a similar concept to how UC Berkeley researchers fooled facial recognition technology into thinking a glasses-wearer was someone else, but in that study, they tampered with the AI algorithm itself to “poison” it. In this new paper, the researchers don’t fiddle with the algorithm they’re trying to fool at all. Instead, they rely on manipulating the glasses to fool the system. It’s more like the 3D-printed adversarial objects developed at MIT, which tricked AI into classifying a turtle as a rifle by tweaking the pattern on the object’s surface. Only this time, the trick makes the algorithm think one person is another, or not a person at all.
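
In the research literature those two outcomes have names: “dodging,” where the system fails to match you to yourself, and “impersonation,” where it matches you to someone you choose. Framed loosely in terms of embedding distances (an illustrative framing, not the paper’s exact loss functions, and with made-up vectors), the two objectives look like this:

```python
import torch

# Hypothetical embeddings, for illustration only (128-dim vectors are a common choice).
emb_adv = torch.randn(1, 128)     # embedding of the wearer's face with the glasses on
emb_self = torch.randn(1, 128)    # the wearer's enrolled identity
emb_target = torch.randn(1, 128)  # another identity the wearer wants to be mistaken for

# Dodging: push the adversarial embedding away from the wearer's own identity,
# so the system sees "not that person" (or no confident match at all).
dodging_loss = -torch.norm(emb_adv - emb_self, dim=1).mean()

# Impersonation: pull the adversarial embedding toward a chosen target identity,
# so the system sees one person as another.
impersonation_loss = torch.norm(emb_adv - emb_target, dim=1).mean()
```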

Making your own pair of these would be tricky: this group used a white-box attack, which means they knew the ins and outs of the algorithm they were trying to fool. But if someone wanted to mass-produce these for the savvy privacy nut or malicious face-hacker, they’d have a nice little business on their hands. In the hypothetical surveillance future, of course.
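
For context, a quick illustration of what white-box access buys an attacker: the ability to backpropagate through the model and read exact gradients, rather than merely observing its outputs. The linear stand-in model below is, again, purely hypothetical:

```python
import torch

# Stand-in model; in the real attack this would be the target face recognition network.
model = torch.nn.Linear(128, 10)
x = torch.randn(1, 128, requires_grad=True)

# White-box: the attacker can backpropagate through the model and read exact gradients,
# which tell them precisely how to change the input (here, the glasses) to shift the output.
loss = model(x).sum()
loss.backward()
print(x.grad.shape)  # torch.Size([1, 128])

# A black-box attacker could only observe outputs, e.g. model(x).detach(),
# and would have to estimate those gradients by repeatedly querying the system.
```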