Lawmakers Demand Intelligence Community Release a Report on Deepfakes

Three representatives asked the Director of National Intelligence to create a report about how deepfakes could be used against the U.S. by hostile nations.
September 13, 2018, 3:21pm

On Thursday morning, three lawmakers sent the Director of National Intelligence a warning about the impending dangers of deepfakes—an algorithmically generated face-swapping technique used for everything from porn to Star Wars movies.

Three representatives—Adam Schiff (D-Calif.), Stephanie Murphy (D-Fla.), and Carlos Curbelo (R-Fla.)—sent the letter to DNI director Dan Coats as both a plea and a warning: that deepfakes could be used against the U.S. by hostile nations.
“We request that the Intelligence Community report to Congress and the public about the implications of new technologies that allow malicious actors to fabricate audio, video and still images,” they wrote.

The lawmakers outline what the report should cover: an assessment of possible countermeasures and recommendations to Congress, a description of any confirmed or suspected uses of deepfakes by foreign governments that have already occurred, and an overview of technological countermeasures available to the U.S. government or the private sector.

The full letter can be found here.

The signees go on to state:

“Forged videos, images or audio could be used to target individuals for blackmail or for other nefarious purposes. Of greater concern for national security, they could also be used by foreign or domestic actors to spread misinformation. As deep fake technology becomes more advanced and more accessible, it could pose a threat to United States public discourse and national security, with broad and concerning implications for offensive active measures campaigns targeting the United States.”

They want the report completed no later than mid-December.

This is far from the first time we’ve heard lawmakers raise the deepfakes alarm. Florida Sen. Marco Rubio brought it up during William Evanina’s confirmation as director of the National Counterintelligence and Security Center in May. Virginia Sen. Mark Warner said during the Senate Intelligence Committee hearing on foreign influence on social media that the potential for disinformation that deepfakes have opened up “should frighten us all.” In that same hearing, Maine Sen. Angus King brought it up again, asking Facebook COO Sheryl Sandberg what the platform planned to do to combat deepfakes.

In April, the US Defense Advanced Research Projects Agency (DARPA)’s Media Forensics program awarded nonprofit research groups contracts to find new ways to automatically detect digital video manipulations.

Read more: There is No Tech Solution to Deepfakes

It’s fascinating to watch more and more lawmakers adopt deepfakes as a talking point. They certainly have a lot to lose in the dystopian scenario of a strategic, AI-powered disinformation campaign. As AI scholars have told me, the potential power of deepfakes does threaten our sense of truth and trustworthiness in media—and it’s well worth Congress looking into, and sharing that information with the public.

But as I’ve written previously, the serious threat might not be to democracy, but to how we as a society respond to questions of consent, online moderation, and ownership of one’s own body.