Tech companies are researching new techniques to detect deepfake videos and stop their spread on social media, even as the technology to create them quickly evolves. Last year, Facebook participated in a “Deepfake Detection Challenge” and, along with other tech firms like Google and Microsoft, offered a bounty for outside researchers who develop the best tools and techniques to identify A.I.-generated deepfake videos.
Because Facebook is the No. 1 platform for sharing false political stories, according to disinformation researchers, it has an added urgency to spot and halt novel forms of digital manipulation. Renée DiResta, the technical research manager for the Stanford Internet Observatory, which studies disinformation, pointed out that a challenge of the policy was that the deepfake content “is likely to have already gone viral prior to any takedown or fact check.”
On Wednesday, Ms. Bickert, Facebook’s vice president of global policy management, is expected to join other experts to testify on “manipulation and deception in the digital age” before the House Energy and Commerce Committee.
Ms. DiResta urged lawmakers to “delve into the specifics around how quickly the company envisions it could detect or respond to a viral deepfake, or to the ‘shallowfakes’ material which it won’t take down but has committed to fact-checking.”
Subbarao Kambhampati, a professor of computer science at Arizona State University, described Facebook’s effort to detect deepfakes as “a moving target.” He said that Facebook’s automated systems for detecting such videos would have limited reach, and that there would be “significant incentive” for people to develop fakes that fool Facebook’s systems.
There are many ways to manipulate videos with the help of artificial intelligence, added Matthias Niessner, a professor of computer science at the Technical University of Munich, who works with Google on its deepfake research. There are deepfake videos in which faces are swapped, for instance, or in which a person’s expression and lip movement are altered, he said.
“The question is where you draw the line,” Mr. Niessner said. “Eventually, it raises the question of intent and semantics.”
David McCabe reported from Washington, and Davey Alba from New York.