U.S. Congress targets deepfakes ahead of 2020 election

The House Permanent Select Committee on Intelligence in the U.S. Congress today spoke with a panel of legal and AI experts to discuss the potential of AI-powered manipulated media like deepfakes to degrade trust in democratic institutions and news media, and enable attacks by foreign adversaries on people’s sense of reality.

Participants wondered aloud what would happen if deepfakes defaming figures like Democratic presidential candidate Joe Biden were released in the coming election year, or if media, intelligence, or military organizations could no longer verify facts about events depicted in a video.

One business scenario considered at the hearing was the release of a malicious video imitating a CEO ahead of an IPO, intended to tank the company’s stock on its first day of trading.

“Even if later convinced that what they’ve seen is a forgery, that person may never lose completely that lingering, negative impression,” committee chair Rep. Adam Schiff (D-CA) said.


Schiff and multiple other members of Congress called for federal action on deepfakes last fall.

“If an information consumer does not know what to believe, they can’t tell fact from fiction, then they will either believe everything or they will believe nothing at all. If they believe nothing at all, that leads to long-term apathy, and that is destructive for the United States,” Foreign Policy Research Institute fellow Clint Watts told the committee.

Social media companies and state and national election officials must be ready for deepfake videos to emerge on Election Day 2020 claiming that voting machines malfunctioned, Watts said, warning that China may surpass Russia in deepfake capabilities. He suggested responses including sanctions to disrupt troll farms, quick refutation of misinformation by authorities, and cyber retaliation.

“I do think that the time for offensive cyber is at hand,” he said.

The intelligence committee hearing comes after a number of high-profile incidents of manipulated media, such as a doctored video of House Speaker Nancy Pelosi circulated by President Trump and defended by Facebook, and a deepfake of Facebook CEO Mark Zuckerberg posted on Instagram earlier this week, both of which were discussed at length.

The Associated Press reported that computer-generated images are being used for fake LinkedIn profiles connected to U.S. public policy and diplomacy officials.

A Pew Research Center American Trends Panel survey conducted between February and March found that fake news was seen as a bigger problem than violent crime, racism, illegal immigration, or terrorism.

Experts who spoke with the committee unanimously proposed common standards and information sharing among social media companies like Facebook and Twitter to combat deepfakes. Danielle Citron, a professor at the University of Maryland Francis King Carey School of Law, also wants Section 230 of the Communications Decency Act to be amended so that social media platforms must adopt “reasonable content moderation” in order to retain legal immunity from libel lawsuits.

“We don’t want to get into that place where we have a non-functioning marketplace of ideas,” Citron said. “When Justice Oliver Wendell Holmes came up with the notion of the marketplace of ideas, he was a cynic. He wasn’t suggesting that truth would always out, and he worried about humanity, but the broader endeavor at the foundation of our democracy is that we can have a set of accepted truths so we can have real meaningful policy conversations. We can’t give up on the project.”

Research efforts to combat deepfakes are underway in a variety of settings. As head of DARPA’s Media Forensics program last year, Artificial Intelligence Institute director Dr. David Doermann and fellow researchers began building AI systems that detect deepfakes by tracking cues like irregular eye movement, and the Allen Institute for Artificial Intelligence shared Grover, a fake news detection model its creators say achieves 92% accuracy.
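As a rough illustration of that kind of cue (a hedged sketch, not the DARPA team’s actual code), the snippet below flags videos whose subjects blink far less often than real people tend to. It assumes per-frame eye landmarks have already been extracted with an off-the-shelf face landmark detector, and the thresholds are purely illustrative.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: 6 (x, y) landmark points around one eye, as in the common
    68-point face landmark layout. Low values mean the eye is closed."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_rate(per_frame_eyes, fps, closed_thresh=0.2):
    """Count blinks as open -> closed transitions of the eye aspect ratio,
    then convert to blinks per minute."""
    ears = np.array([eye_aspect_ratio(e) for e in per_frame_eyes])
    closed = ears < closed_thresh
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])
    minutes = len(ears) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

def looks_synthetic(per_frame_eyes, fps, min_blinks_per_minute=5.0):
    """Crude heuristic: people typically blink roughly 15-20 times per
    minute, while early face-swap models often barely blink at all."""
    return blink_rate(per_frame_eyes, fps) < min_blinks_per_minute
```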

Deepfake detection workshops will be held in the coming days at the International Conference on Machine Learning (ICML) and the Conference on Computer Vision and Pattern Recognition (CVPR), two major annual AI research conferences taking place in Long Beach, California.

Still, Doermann says generative models built for manipulation, such as Facebook’s MelNet, which can imitate Bill Gates’ voice, and Nvidia’s StyleGAN, which produces fake human faces, outnumber models that detect manipulation by an order of magnitude.

Doermann agrees that common standards between social media companies are needed, but says individuals are on the front line.

“We need to get the tools and the process in the hands of individuals, rather than relying completely on the government or on social media platforms to police content,” he said. “The truth of the matter is the people that share this stuff are part of the problem, even though they don’t know it.”

To this end, Doermann supports efforts to educate the public, but raised the possibility that ultimately, deepfakes “may be a war that can never be won.”

Experts who spoke with the panel included OpenAI policy director Jack Clark. OpenAI is a nonprofit created in 2015 with more than $1 billion in backing that in February declined to release the full version of its GPT-2 language model for fear it might be misused.

Clark thinks large tech platforms should develop and share tools for detecting malicious synthetic media at both the individual account and platform level, working together as they do today on things like cybersecurity. Standardization could help social media companies, which appear to be “groping around in the dark” for policies to solve a hard problem.

“We will continue to be surprised by technological progress in this domain, because the lore of a lot of this stuff is all of these people [AI practitioners] think they’re the Wright Brothers, and you know they feel that, and they’re all busily creating stuff and figuring out the second order effects of what they build is difficult,” Clark said. “So I think that we do need to build infrastructure so that you have some third party measuring the progression of these technologies so you can anticipate the other things in expectation.”

The panel almost unanimously agreed that generative models have positive applications beyond misinformation. Synthetic data made to mimic real visual training data is being used by companies like Landing.ai and Element AI to train AI systems when only small amounts of real data are available, since synthetic data can be cheaper and easier to collect than real-world data.
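As a minimal sketch of that idea (the companies’ actual pipelines aren’t public, and the synthesize() step here is a placeholder for a real generator such as a GAN or simulator), the snippet below pads a small labeled dataset with synthetic samples before training a simple classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder "real" data: a handful of labeled feature vectors.
X_real = rng.normal(size=(40, 16))
y_real = (X_real[:, 0] > 0).astype(int)

def synthesize(X, y, n_new):
    """Stand-in for a real generator (e.g., a GAN or a simulator):
    here we just jitter existing samples, which preserves their labels."""
    idx = rng.integers(0, len(X), size=n_new)
    return X[idx] + rng.normal(scale=0.1, size=(n_new, X.shape[1])), y[idx]

# Mix real and synthetic samples to enlarge the training set.
X_syn, y_syn = synthesize(X_real, y_real, n_new=400)
X_train = np.vstack([X_real, X_syn])
y_train = np.concatenate([y_real, y_syn])

model = LogisticRegression().fit(X_train, y_train)
print("training accuracy:", model.score(X_train, y_train))
```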

The panel also reminded the committee that essentially any novice with a decent computer can access freely available open source software to generate deepfakes. The term deepfake originates from software made available in 2017 that was used to place the faces of celebrities on the bodies of porn stars.

Citron also urged political candidates to avoid using deepfakes in political combat in the coming election year. In a conversation with George Stephanopoulos aired by ABC News Wednesday, President Trump said he would be willing to use information provided by a foreign government against opponents in an election.

Along with concern about fake news, the U.S. public is also concerned about election interference by foreign adversaries like Russia in the 2020 election.

The Mueller Report, released in April, concluded that operations by the GRU branch of the Russian military and a misinformation campaign run by the Internet Research Agency on social media platforms like Facebook and Twitter were part of a coordinated attempt to favor Donald Trump in the 2016 presidential election.

Russian entities sought to sow discord among U.S. citizens by imitating anti-immigration and Black Lives Matter organizations, a threat Mueller warned about in his only public remarks since his appointment as special counsel in 2017.

Today’s hearing on deepfakes is the latest attempt by the U.S. Congress to better understand how to regulate and defend against the misuse of artificial intelligence. The FBI and TSA recently appeared before the House Oversight and Reform Committee to hear criticism of their respective facial recognition programs, and last month, echoing AI experts who have audited Amazon’s Rekognition and facial recognition used by police, a bipartisan group of members of Congress agreed it may be time for a national moratorium on facial recognition software.

On Wednesday, Democratic presidential candidate Senator Elizabeth Warren (D-MA) sent a letter to several finance-related federal agencies demanding answers to questions about algorithmic bias in lending markets.