This Program Makes It Even Easier to Make Deepfakes

Unlike previous deepfake methods, FSGAN can generate face swaps in real time, with zero subject-specific training.
August 19, 2019, 3:50pm
An example of the FSGAN in action.
Nirkin et al.

A new method for making deepfakes creates realistic face-swapped videos in real time, with no lengthy training needed.

Unlike previous approaches to making deepfakes—algorithmically-generated videos that make it seem like someone is doing or saying something they didn’t in real life—this method works on any two people without any specific training on their faces.

Most of the deepfakes that are shared online are created by feeding an algorithm hundreds or thousands of images of a specific face. The algorithm "trains" on that specific face so it can swap it into the target video. This can take hours or days even with access to expensive hardware, and even longer with consumer-grade PC components. A program that doesn’t need to be trained on each new target is another leap forward in making realistic deepfakes quicker and easier to create.

“Our method can work on any pair of people without specific training,” the researchers said in a video presenting their method. “Therefore, we can produce real-time results on unseen subjects.”

Researchers from Bar-Ilan University in Israel and the Open University of Israel posted their paper, "FSGAN: Subject Agnostic Face Swapping and Reenactment," to the arXiv preprint server on Friday. On their project page, the researchers write that the open-source code is forthcoming; in the paper, they argue for publishing the details of such programs because suppressing them "would not stop their development," but would instead leave the public and policymakers in the dark about the potential misuse of these algorithms.

In a video demonstrating the FSGAN program, the researchers show how it can overcome differences in hair and skin tone to swap faces seamlessly:

Much like the single-shot method developed by Samsung AI, which used landmarks on the source and target faces to map the Mona Lisa's face and make her "speak," FSGAN pinpoints facial landmarks, then aligns the source face to the target's face.
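To get a feel for what landmark-based alignment involves, here is a minimal sketch. It is not FSGAN's actual code: it uses toy 2D points standing in for detected landmarks and a standard similarity-transform fit (Umeyama's method, the same idea scikit-image uses for `SimilarityTransform`) to map one face's landmarks onto another's. All point values and names here are illustrative assumptions.

```python
import numpy as np

def similarity_align(src, dst):
    """Fit a similarity transform (scale, rotation, translation) that
    maps 2D landmark set `src` onto `dst`, via Umeyama's method."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance between the two centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Reflection guard: force a proper rotation (det(R) = +1).
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t

# Toy "landmarks": five points standing in for eyes, nose, mouth corners.
src = np.array([[30, 30], [70, 30], [50, 50], [35, 75], [65, 75]], float)

# Target landmarks: the same shape rotated 10 degrees, scaled, shifted.
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 1.2 * src @ R_true.T + np.array([12.0, -4.0])

scale, R, t = similarity_align(src, dst)
aligned = scale * src @ R.T + t
print(np.abs(aligned - dst).max())  # residual alignment error
```

In a real pipeline the landmark coordinates would come from a face-landmark detector rather than hand-written arrays, and FSGAN goes well beyond this rigid warp, using a generative network to reenact and blend the aligned face. The sketch only illustrates the alignment step the researchers describe.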

The FSGAN program itself wasn't cheap or easy to make: The researchers say in their paper that it required eight Nvidia Tesla V100 GPUs, which can cost around $10,000 each for consumers, to train the generative adversarial network that the program then uses to create deepfakes in real time.

On their project website, the researchers say that the project code will eventually be available on GitHub, a platform for open-source code development. Assuming the researchers make a pre-trained AI model available, it’s likely that using it at home won’t be as resource-intensive as it was to train it from scratch in a lab.

"Our method eliminates laborious, subject specific data collection and model training, making face swapping and reenactment accessible to non-experts," the researchers wrote. "We feel strongly that it is of paramount importance to publish such technologies, in order to drive the development of technical counter-measures for detecting such forgeries, as well as compel lawmakers to set clear policies for addressing their implications."