If you’re interviewing someone for a job online, pay attention to whether the person’s words line up with their lip movements. According to the FBI, people have reported suspicious online interviews that they believe could involve deepfakes. The FBI’s Internet Crime Complaint Center (IC3) released a public service announcement on Tuesday warning of “an increase in complaints reporting the use of deepfakes and stolen Personally Identifiable Information (PII) to apply for a variety of remote work and work-at-home positions.”
Deepfakes are algorithmically generated videos or images that fake a person’s likeness, making it appear as if they are saying or doing something they never did. According to the FBI, it has received reports from people working in information technology and computer programming, database, and software-related roles claiming that job applicants are using deepfakes in video interviews. What these positions have in common is access to customer information, financial data, corporate IT databases, and proprietary information. The FBI has not shared statistics with the public, such as whether deepfakes were actually used in these complaints, how many people allegedly using deepfakes were successfully recruited into the roles, or whether any information was compromised. It did report, however, that people have claimed stolen private information was used to create applicants’ fake identities and to pass pre-employment background checks.

Many open source software frameworks for creating deepfakes have emerged online, including DeepFaceLab, DeepFaceLive, and FaceSwap. Creating deepfakes has become more accessible to the general public since Motherboard uncovered the first deepfakes made by casual consumers in 2017, increasing the potential for misleading information to spread as the truth.
This isn’t the first time deepfakes have been used for malicious or deceptive purposes. There have been multiple incidents in which deepfakes were used to create non-consensual pornography. A 2019 report titled The State of Deepfakes found, “[A] key trend we identified is the prominence of non-consensual deepfake pornography, which accounted for 96% of the total deepfake videos online.”

Deepfakes have also been used to commit fraud and influence political outcomes. The security firm Symantec reported cases in which senior financial controllers sent millions of pounds to cybercriminals who tricked them using deepfaked audio of executives. In 2018, a Belgian political party created a deepfake video of Donald Trump encouraging Belgians to withdraw from the Paris climate agreement. The video quickly went viral, provoking anger from many Belgians who were unaware it was fabricated. These examples underline the dangers of artificial intelligence, which can not only be used to harm people with misleading information but also perpetuate discriminatory and biased systems, in particular ones targeting women.

The FBI shared some of the telltale signs people reported: the actions and lip movements of the person being interviewed did not sync with the audio of their speech, and sounds like coughing and sneezing did not line up with what appeared on screen.

With deepfakes becoming increasingly easy to create, online resources have popped up to help people detect them. The Massachusetts Institute of Technology (MIT) has launched a research project and website called Detect Fakes that helps people spot deepfakes. In the project description, MIT presents eight questions to help people determine whether a video is a deepfake.
The questions are largely based on how the subject in the video looks, including “Does the skin appear too smooth or too wrinkly?” and “Do shadows appear in places that you would expect?” The likelihood that you could spot a deepfake being used in an interview is high: while creating a deepfake is easy, making one that is perfect and believable is difficult. If you do spot one, the FBI asks that you report it to the IC3.
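The audio/video mismatches the FBI describes can, in principle, be checked programmatically. The sketch below is purely illustrative (it is not a tool used by the FBI or MIT, and the mouth-openness signal is assumed to come from some face-tracking step not shown here): it cross-correlates a per-frame mouth-openness signal with the audio loudness envelope and estimates the lag between them. A large lag, or a weak correlation peak, would hint at the kind of lip-sync problem interviewers reported.

```python
import numpy as np

def estimate_av_lag(mouth_openness, audio_envelope):
    """Cross-correlate a per-frame mouth-openness signal with the
    audio loudness envelope and return (lag_in_frames, peak_score).
    A large lag or a low peak score hints at a lip-sync mismatch."""
    a = (mouth_openness - mouth_openness.mean()) / mouth_openness.std()
    b = (audio_envelope - audio_envelope.mean()) / audio_envelope.std()
    corr = np.correlate(a, b, mode="full")       # try every possible offset
    lag = int(corr.argmax()) - (len(b) - 1)      # offset of the best match
    score = float(corr.max()) / len(a)           # ~1.0 when well aligned
    return lag, score

# Synthetic demo: a speech-like envelope, and "video" delayed by 6 frames.
rng = np.random.default_rng(0)
audio = np.abs(np.sin(np.linspace(0, 20, 300))) + 0.05 * rng.standard_normal(300)
video = np.roll(audio, 6)   # mouth motion lags the audio by 6 frames
lag, score = estimate_av_lag(video, audio)
print(lag, round(score, 2))  # lag close to 6 frames (~200 ms at 30 fps)
```

In a real pipeline the two signals would come from a face tracker and the audio track of the interview; the point here is only that a consistent offset between speech and mouth motion is measurable, not just something a careful interviewer can eyeball.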