Facebook Inc is teaming up with Microsoft Corp, the Partnership on AI coalition and academics from several universities to launch a contest to better detect deepfakes, the company said in a blog post on Thursday.
The social media giant is putting US$10 million into the “Deepfake Detection Challenge,” which aims to spur detection research. As part of the project, Facebook is commissioning researchers to produce realistic deepfakes to create a data set for testing detection tools.
The company said the videos, which will be released in December, will feature paid actors and that no user data will be used.
In the run-up to the U.S. presidential election in November 2020, social platforms have been under pressure to tackle the threat of deepfakes, which use artificial intelligence to create hyper-realistic videos where a person appears to say or do something they did not.
While there has not been a well-crafted deepfake video with major political consequences in the United States, the potential for manipulated video to cause turmoil was recently demonstrated by a “cheapfake” clip of House Speaker Nancy Pelosi, manually slowed down to make her speech seem slurred.
In August, the Democratic National Committee demonstrated the threat from deepfake videos by creating a video of its own chairman, Tom Perez, to make the audience at the hacker convention Def Con think the real Perez had Skyped into the conference.
“They (deepfakes) lower the bar for an adversary that wants to create manipulated media,” said Matt Turek, who runs DARPA’s Media Forensics program.
Deepfake technology has also been used to make fake celebrity porn videos.
Some researchers are working on systems to authenticate a video or image at the point of capture through digital watermarking. But the rapid evolution of deepfake technology has created an arms race between deepfake creators and those trying to detect them.
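The point-of-capture idea can be illustrated with a simplified sketch: instead of a pixel-level watermark, a capture device tags footage with a cryptographic authentication code at the moment of recording, so any later edit is detectable. The device key and function names below are purely illustrative, not a description of any system mentioned in the article.

```python
import hashlib
import hmac

# Hypothetical device key; in a real system this would live in secure hardware.
DEVICE_KEY = b"secret-key-embedded-at-manufacture"

def sign_at_capture(image_bytes: bytes) -> str:
    """Produce an authentication tag the moment the footage is captured."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, tag: str) -> bool:
    """Check that the footage has not been altered since capture."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"...raw sensor data..."
tag = sign_at_capture(original)

print(verify(original, tag))         # True: footage is untouched
print(verify(original + b"x", tag))  # False: any edit invalidates the tag
```

A real deployment faces harder problems this sketch ignores, such as keeping the key out of an attacker's hands and surviving legitimate re-encoding, which is part of why detection remains an arms race.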
“It’s a cat-and-mouse game. If I design a detection for deepfakes, I’m giving the attacker a new discriminator to test against,” said Siddharth Garg, an assistant professor of computer engineering at New York University’s Tandon School.
The technology is also growing more accessible for less-skilled creators. Last week, a Chinese app called Zao that allows users to convincingly morph their faces onto movie stars rocketed to the top of the country’s app store, though it also faced a backlash over privacy concerns.
Online deepfake creators are also tapping into the market for easy-to-make deepfakes. Machine-learning enthusiasts based in countries from Poland to Japan are making it easier for people to access custom deepfakes. They are uploading step-by-step YouTube tutorials, charging $30 for 50 words of an AI-powered Trump voice impersonation and running self-service websites that churn out deepfakes.
The Deepfake Detection Challenge is not the first time that Facebook, which does not currently have a specific policy regarding deepfake videos, has funded academic research into the threat.
House Intelligence Committee Chairman Adam Schiff demanded in July that Facebook, Twitter and Alphabet Inc’s Google, which owns YouTube, share their plans to tackle deepfakes. Facebook said it is spending $7.5 million on teams at the University of California, Berkeley, the University of Maryland and Cornell University in response to the threat.
One of these teams, run by UC Berkeley Professor Hany Farid, is building “soft biometric models,” which map real politicians’ facial quirks, from Senator Bernie Sanders’ eyebrow jumps to Senator Elizabeth Warren’s head turns, to detect if a new video is fake.
Facebook’s new contest, which builds on its ties with these researchers, will involve academics from Cornell Tech, MIT, the University of Oxford, UC Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY.
In a statement on Thursday, Schiff called the contest a “promising step.”