
Facebook's fact-checkers train AI to detect "deep fake" videos
Posted on Sep 14, 2018


So-called "deep fakes" are now a major concern for US lawmakers worried that AI-manipulated videos depicting people doing or saying things they never did could become a national security threat.

Following last week's hearing, where Facebook COO Sheryl Sandberg was asked how Facebook would warn users about deep fake videos, the company has announced that it is expanding its fact-checking program, previously limited to articles, to photos and videos.

All 27 of Facebook's fact-checking partners in 17 countries will be able to contribute to reviews. US fact-checking partners include the Associated Press, factcheck.org, Politifact, Snopes, and the conservative magazine The Weekly Standard.

Facebook says it has built a machine-learning model to detect potentially bogus photos and videos, which it then sends to its fact-checkers for review. Third-party fact-checking partners can use visual verification techniques, including reverse image searching and image metadata analysis, to review the content.
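
Facebook has not published the tooling its fact-checking partners use, but the techniques it names are well established. As a rough, hypothetical sketch of one of them, image metadata analysis, the following Python snippet reads a photo's EXIF tags with the Pillow library (the file name and the tags inspected are illustrative assumptions, not details from the announcement):

```python
# Illustrative sketch only: Facebook has not published its fact-checkers' tooling.
# It shows one of the verification techniques named above, image metadata analysis,
# by reading a photo's EXIF tags with the Pillow library.
from PIL import Image
from PIL.ExifTags import TAGS


def extract_exif(path):
    """Return a dict of human-readable EXIF tags for an image file."""
    image = Image.open(path)
    exif = image.getexif()  # empty if the file carries no EXIF data
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


if __name__ == "__main__":
    metadata = extract_exif("suspect_photo.jpg")  # hypothetical file name
    # Fields such as DateTime, Software or Model can contradict a photo's claimed origin.
    for tag in ("DateTime", "Software", "Model"):
        print(tag, "->", metadata.get(tag, "not present"))
```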

"Fact-checkers are able to assess the truth or falsity of a photo or video by combining these skills with other journalistic practices, like using research from experts, academics or government agencies," said Facebook product manager Antonia Woodford.

Facebook intends to feed fact-checkers' ratings of photos and videos back into its machine-learning model to improve its accuracy at detecting misinformation in these formats.

It has defined three types of misinformation in photos and videos: manipulated or fabricated content, content presented out of context, and false claims in text or audio.

Facebook offers a high-level overview of the difficulties of identifying false information in images and video compared with text, and of some of the techniques it is using to overcome them. The overall impression, however, is that Facebook is not close to having an automated system that detects misinformation in photos and videos at scale.

Currently, Facebook is using optical character recognition (OCR) to extract text from photos, such as a bogus headline overlaid on an image, so it can compare that text against headlines from fact-checkers' articles. It is also developing ways to detect whether a photo or video has been manipulated, and it is using audio transcription to check whether text extracted from a video's audio matches claims that fact-checkers have previously debunked.
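
Facebook does not describe its OCR pipeline, but the approach outlined above, extracting any overlaid text and comparing it against claims fact-checkers have already rated false, can be sketched roughly as follows. The pytesseract OCR wrapper, the sample headlines, the file name, and the similarity threshold are all illustrative assumptions:

```python
# Illustrative sketch only: Facebook has not released its OCR pipeline.
# It follows the idea described above: extract overlaid text from a photo, then
# compare it against headlines that fact-checkers have already rated false.
import difflib

import pytesseract
from PIL import Image

# Hypothetical store of headlines previously debunked by fact-checking partners.
DEBUNKED_HEADLINES = [
    "Example fabricated headline one",
    "Example fabricated headline two",
]


def extract_text(path):
    """Run OCR over an image and return the recognised text."""
    return pytesseract.image_to_string(Image.open(path))


def closest_debunked_claim(text, threshold=0.8):
    """Return the most similar debunked headline if it clears the threshold."""
    best, best_score = None, 0.0
    for headline in DEBUNKED_HEADLINES:
        score = difflib.SequenceMatcher(None, text.lower(), headline.lower()).ratio()
        if score > best_score:
            best, best_score = headline, score
    return (best if best_score >= threshold else None), best_score


if __name__ == "__main__":
    ocr_text = extract_text("photo_with_headline.jpg")  # hypothetical file name
    match, score = closest_debunked_claim(ocr_text)
    print("Possible match:", match, "similarity:", round(score, 2))
```

In the workflow Facebook describes, a match like this would only route the photo to human fact-checkers, not rate it automatically.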

"At the moment, we're more advanced with using OCR on photos than we are with using audio transcription on videos," said Facebook product manager Tessa Lyons.

As with articles, Facebook will focus on identifying duplicates of photos and videos once a fact-checker has rated the originals as false.
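
Facebook does not say how it finds duplicates of debunked photos, but perceptual hashing is one common way to match near-identical images that have been resized or recompressed. A minimal sketch using the imagehash library follows; the file names and the distance threshold are illustrative assumptions:

```python
# Illustrative sketch only: Facebook does not detail how it finds near-duplicates.
# Perceptual hashing is one common approach: visually similar images yield hashes
# separated by a small Hamming distance, even after resizing or recompression.
import imagehash
from PIL import Image

# Hypothetical hash of a photo that a fact-checker has already rated false.
KNOWN_FALSE_HASH = imagehash.phash(Image.open("debunked_photo.jpg"))


def is_probable_duplicate(path, max_distance=8):
    """Flag an image whose perceptual hash is close to the known false photo."""
    candidate_hash = imagehash.phash(Image.open(path))
    return (candidate_hash - KNOWN_FALSE_HASH) <= max_distance


if __name__ == "__main__":
    print(is_probable_duplicate("newly_uploaded_photo.jpg"))  # hypothetical file name
```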
