Synthetic and real faces. Credit: Sophie J. Nightingale at Lancaster University in the UK and Hany Farid at University of California, Berkeley

People Trust Deepfake Faces More Than Real Faces

As more companies experiment with AI-generated faces for marketing and other uses, the line between real and fake grows blurrier.

Most people are pretty bad at telling a real face apart from a fake, algorithmically generated one, according to a new study.

In a study published Tuesday in the peer-reviewed journal PNAS, researchers Sophie J. Nightingale at Lancaster University in the UK and Hany Farid at the University of California, Berkeley, conducted three experiments to determine whether, and how, people can distinguish real faces from algorithmically generated ones, also known as deepfakes.


“Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces,” Farid and Nightingale wrote. 

"A representative set of matched real and synthetic faces" from the study

"A representative set of matched real and synthetic faces" from the study

In one experiment, the researchers asked 315 participants to look at faces and determine whether they were algorithmically generated or photos of real people. The group's accuracy rate was 48.2 percent, close to chance for a two-way choice. In a second experiment, 219 new participants were given feedback on their guesses as they went along; this improved accuracy only modestly, to 59 percent.

Most interestingly, a third experiment showed 223 participants more faces, but this time asked them to rate each face on a scale of perceived trustworthiness. The thinking was that since people are good at making snap judgments about whether to trust someone based on a face alone, those judgments might reveal something about how real and fake faces are perceived. Participants rated fake faces as slightly more trustworthy than real ones. The researchers posit in the paper that this is because synthesized faces tend to resemble average faces, which people generally judge to be more trustworthy.

White faces, both male and female, were the most likely to be classified incorrectly, with male faces classified less accurately than female faces. “We hypothesize that White faces are more difficult to classify because they are overrepresented in the StyleGAN2 training dataset and are therefore more realistic,” the researchers wrote.

Since deepfakes first came onto the scene in 2017, there have been many attempts at detecting them, along with startups, private-company efforts, and government budgets created specifically for “fighting deepfakes.” Today, there are even companies creating deepfakes for commercial use. The researchers write that “current techniques are not efficient or accurate enough to contend with the torrent of daily uploads,” so for now it’s largely up to individuals to discern a fake face from a real person on the internet. Unfortunately, we’re still not very good at it.