Facebook researchers would like to be able to detect the deepfakes their own platform helps spread.

On Friday, the company published a blog post summarizing months of collaborative work aimed at developing a framework for automatically detecting the form of machine-learning-generated manipulated media known as deepfakes. Called the Deepfake Detection Challenge, the project brought together thousands of participants in a shared effort that met with moderate, but not overwhelming, success.

Specifically, the blog post notes that of the over 35,000 models submitted, the top performer (against real-world examples) was able to detect deepfakes with an accuracy of 65.18 percent. So, better than a coin toss!

“[The] DFDC results also show that this is still very much an unsolved problem,” reads the post in part. “None of the 2,114 participants, which included leading experts from around the globe, achieved 70 percent average precision on unseen deepfakes in the black box data set.”
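The "average precision" figure Facebook cites is a ranking metric, not raw accuracy: it averages the precision of a detector at every rank where it correctly flags a deepfake. As a rough illustration (this is the standard metric definition, not Facebook's actual evaluation code, and the scores and labels below are made up), it can be computed like so:

```python
def average_precision(scores, labels):
    """Standard average precision: the mean of the precision values
    observed at each rank where a true deepfake is flagged.
    scores: model confidence per example; labels: 1 = deepfake, 0 = real."""
    # Sort examples by model confidence, most-confident first.
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    hits = 0
    precisions = []
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)  # precision at this hit
    return sum(precisions) / max(hits, 1)

# Hypothetical detector output on four clips, two of them deepfakes:
ap = average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])
print(round(ap, 4))  # hits at ranks 1 and 3: (1.0 + 2/3) / 2 ≈ 0.8333
```

A detector scoring below 0.70 on this metric, as every DFDC entrant did on the black-box set, is ranking a substantial share of real clips above genuine deepfakes.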

A screenshot of a “detect the deepfakes” example from Facebook’s blog post.

Facebook’s own researchers struggled with this problem as well.

“Facebook researchers participated in the challenge (though they were not eligible for prizes because of our role in organizing the competition),” notes the blog post. “The team’s final submission did not appear on the final leaderboard due to run time issues when it was evaluated on that private test set.” 

Notably, Facebook officially banned deepfakes in January 2020. Even under that ban, however, politicians are in some circumstances still permitted to post manipulated media to the platform.

Facebook clearly wants to be able to accurately and automatically detect deepfakes at scale, while at the same time washing its hands of the misinformation its platform spreads. Having your cake and eating it too must be nice.
