This Software Will Give You a Fake Face to Protect Your Privacy

DeepPrivacy masks your real face with a flurry of a million other faces.
A screenshot of DeepPrivacy working.

In A Scanner Darkly, Philip K. Dick imagined a "scramble suit" that projected the likenesses of millions of other people onto the wearer. Science fiction writers have long mused over how we'll protect ourselves in a surveillance dystopia, and now that we're here—in a time when anyone's face can be swapped or remodeled to create a new reality—those fantasies sound a lot less far-fetched.

Researchers at the Norwegian University of Science and Technology developed a program they've named DeepPrivacy, which anonymizes your face by masking it with a combination of more than a million other people's faces.

The researchers say it does this in real time, realistically superimposing new faces onto your body as you move and talk, which could allow for anonymized streaming video.

The researchers published their paper to the arXiv preprint server on September 10.

DeepPrivacy uses technology similar to what powers deepfakes: the program uses generative adversarial networks (GANs) to swap out the original face, drawing on the researchers' custom-built “Flickr Diverse Faces” dataset of more than a million faces. That dataset consists of 1.47 million face images extracted from YFCC100M, a collection of 100 million Creative Commons-licensed photos uploaded to Flickr.
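
To make that concrete, here is a minimal, hedged sketch of how a GAN is trained: a generator learns to synthesize face crops while a discriminator learns to tell them apart from real faces in a dataset. This is an illustrative toy loop, not the DeepPrivacy architecture; the random tensors stand in for batches of real face crops from a dataset like Flickr Diverse Faces, and PyTorch is assumed.

```python
# Toy GAN training loop (illustrative only, not the DeepPrivacy architecture).
import torch
import torch.nn as nn

IMG = 64 * 64 * 3   # flattened 64x64 RGB face crop
Z = 128             # latent noise dimension

generator = nn.Sequential(nn.Linear(Z, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(IMG, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss = nn.BCEWithLogitsLoss()

for step in range(100):                       # toy loop; real training runs far longer
    real = torch.rand(32, IMG) * 2 - 1        # placeholder for a batch of real face crops
    noise = torch.randn(32, Z)
    fake = generator(noise)

    # Discriminator: learn to tell real face crops from generated ones.
    d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to fool the discriminator into scoring fakes as real.
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```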

Right now, their results are more surreal than realistic. Faces in the researchers' example results are a shifting, jittering blur of other people's expressions and features, not the stable visage anyone would feel comfortable talking to over Skype or watching on a Twitch stream. A video the researchers posted of a panel of actors with DeepPrivacy applied shows mostly blurred faces for each person. They're anonymized, but they're not realistically someone else, the way a deepfake might be.

"The DeepPrivacy GAN never sees any privacy sensitive information, ensuring a fully anonymized image," the researchers wrote on their Github repository, where the full code for the program is available. "It utilizes bounding box annotation to identify the privacy-sensitive area, and sparse pose information to guide the network in difficult scenarios."

But the anonymized region covers only the inner face, leaving out features like the ears, which forensic analysts can use to identify a person.

“Obscuring the inner facial region can provide protection against many conventional face recognition algorithms, but future-proofing visual privacy is nearly impossible," Adam Harvey, a Berlin-based computer vision and privacy researcher, told Motherboard. "Anonymizing additional features including the ears, hair, clothing, and head shape may also become important to reduce emerging threats from biometric analysis."

Although DeepPrivacy is a work in progress, it raises new questions about how technology like deepfakes and identity-altering algorithms can be used for good, instead of for harassment or media manipulation. People have been devising ways to evade facial recognition and anonymize themselves for years, like low-tech anti-surveillance camouflage.

An algorithm that doesn't gather original face data, and automates the process of avoiding surveillance or identification online, could be a huge privacy improvement for streamers on gaming and sex work platforms.

But it also brings up longstanding concerns about how datasets are used by the AI research community. The YFCC100M dataset uses photos that Flickr users uploaded under a Creative Commons license, which means they're legally free for anyone to use. However, the owners might not have realized they were consenting to having their images scraped into a new dataset and applied to a project like DeepPrivacy. The Diversity in Faces dataset, which IBM built earlier this year with the goal of addressing biased algorithms, was also made up of Creative Commons-licensed Flickr images—and it received criticism from privacy researchers for using images without people's consent.

“People gave their consent to sharing their photos in a different internet ecosystem,” Meredith Whittaker, co-director of the AI Now Institute, told NBC about the IBM Diversity in Faces dataset. “Now they are being unwillingly or unknowingly cast in the training of systems that could potentially be used in oppressive ways against their communities.”

Layering more technology onto an already-flawed system won't solve problems of privacy, consent, or surveillance. But combined with the work anti-surveillance activists are doing to kick facial recognition out of their cities, projects like DeepPrivacy are an interesting start toward privacy that's baked into the tech we use every day.