Facebook Bans Deepfake Videos Ahead of 2020 Election

By Scott Ikeda

So-called “deepfake” videos, which use machine learning tools to realistically swap one person’s face for another’s, are now banned from Facebook.

There has been tremendous theoretical concern about the disinformation potential of deepfakes, particularly as it pertains to the 2020 US presidential election. Thus far, however, the technology has not proven advanced or user-friendly enough to create any serious incidents of damaging “fake news.”

Facebook appears to be getting out ahead of something that will eventually develop into a serious concern, but critics believe the company’s focus is on the wrong threat.

The looming threat of deepfake videos

Ever since the term was coined in late 2017, deepfakes have been a matter of public concern. The nightmare scenario is easy-to-use software that lets anyone effortlessly replace one face with another in video, but at this point the technology is still not that advanced. Producing a video of any substantial length takes considerable (and expensive) compute power, it requires some advanced technical knowledge, and even a well-made result is usually still somewhat “off”-looking and identifiable as a fake.
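
For readers wondering why that compute and expertise barrier exists, the classic face-swap technique trains a single shared encoder together with one decoder per identity; a face is “swapped” by encoding person A and decoding with person B’s decoder. The following is a minimal PyTorch sketch of that idea, with all layer sizes and training details as illustrative assumptions rather than any real tool’s implementation:

```python
import torch
import torch.nn as nn

# Minimal sketch of the classic deepfake face-swap architecture: a single
# shared encoder learns identity-agnostic face features, and each identity
# gets its own decoder. All sizes here are illustrative assumptions.

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Conv2d(128, 256, 5, stride=2, padding=2), nn.LeakyReLU(0.1),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512),  # assumes 64x64 input face crops
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=5e-5)
loss_fn = nn.L1Loss()

# Stand-ins for aligned face crops of two people; real training uses
# thousands of frames per identity.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(2):  # real training runs for many thousands of steps
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The actual "swap": encode person A's face, decode with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Even this toy version carries millions of parameters, and convincing output requires far larger models, thousands of aligned face crops per identity, and days of GPU time, which is precisely the cost and skill barrier described above.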

The technology is expected to advance rapidly, however, and in a year or two deepfakes might finally be as threatening as they have been billed to be. Facebook’s early prohibition makes sense from that perspective, especially given that today’s “cheapfakes” (videos manipulated with conventional editing rather than AI) are still managing to fool some people.

Facebook’s new deepfake ban would still allow videos such as the infamous “drunk” footage of House Speaker Nancy Pelosi to slip through the cracks, however. The official wording in a company blog post requires banned videos to have been made using “artificial intelligence or machine learning that merges, replaces or superimposes content onto a video.” Additionally, the video must “mislead someone into thinking that a subject of the video said words that they did not actually say.”

Exceptions are made for parody or satire, since those forms of expression enjoy unique legal protections in the United States, and for videos that have been edited solely to omit or change the order of words. The latter exception would cover, for example, the recent videos of presidential candidate Joe Biden that were selectively edited to imply white nationalist sentiments.
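
Read together, the policy amounts to a two-condition test with two carve-outs. The toy restatement below (in Python, with field names that are my own shorthand rather than Facebook’s) makes the critics’ point concrete: neither the Pelosi nor the Biden video involves AI synthesis, so neither trips the ban:

```python
from dataclasses import dataclass

# Toy restatement of the stated policy as boolean logic. The field names are
# my own shorthand; actual enforcement relies on human and ML review, not a
# simple predicate like this.

@dataclass
class Video:
    ai_merged_or_superimposed: bool       # AI/ML merged, replaced or superimposed content
    misleads_about_words_said: bool       # viewer would think the subject said words they did not
    is_parody_or_satire: bool
    edited_only_to_omit_or_reorder_words: bool

def banned_as_deepfake(v: Video) -> bool:
    if v.is_parody_or_satire or v.edited_only_to_omit_or_reorder_words:
        return False  # explicit carve-outs
    return v.ai_merged_or_superimposed and v.misleads_about_words_said

# The Pelosi "drunk" video: conventional slowed-down editing, no AI involved.
pelosi = Video(False, False, False, False)
# The selectively edited Biden clips: words omitted/reordered, no AI synthesis.
biden = Video(False, True, False, True)

assert not banned_as_deepfake(pelosi)
assert not banned_as_deepfake(biden)
```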

Facebook’s security measures

If a piece of content like the Biden and Pelosi videos does not meet Facebook’s standard for banning as a deepfake, it is still subject to the company’s fact-checking system. This program, which has been active since the election disinformation controversies of 2016, calls upon 50 third-party partner organizations to flag and rate the truthfulness and accuracy of such videos. Facebook then incorporates this feedback when determining how much exposure each piece of content gets in the news feed.
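
Facebook does not publish the formula by which partner ratings translate into reduced distribution, but the pipeline as described can be caricatured as follows; the rating labels and multipliers here are invented purely for illustration:

```python
# Toy caricature of a fact-check demotion pipeline. The rating labels and
# multipliers are invented for illustration; Facebook does not publish its
# actual formula.

DEMOTION = {
    "false": 0.1,            # heavily reduce news feed distribution
    "partly_false": 0.5,
    "missing_context": 0.8,
    "true": 1.0,             # no demotion
}

def feed_exposure(base_reach: float, partner_ratings: list) -> float:
    """Scale a post's baseline reach by the worst rating any partner gave it."""
    if not partner_ratings:
        return base_reach
    return base_reach * min(DEMOTION[r] for r in partner_ratings)

print(feed_exposure(100_000, ["partly_false", "false"]))  # -> 10000.0
```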

However, this fact-checking program has been criticized by some of Facebook’s own partners, who say there is very little transparency and that they are unclear about how much impact their work is actually having.

As Quartz points out, there is another big hole in Facebook’s policy. Since it focuses almost entirely on misrepresentation of the words and actions of individuals, it overlooks “fake news” videos that do not involve people. The Quartz article cites a number of fake videos circulating on Facebook in the wake of the Iranian missile attack on US military bases, which used old footage of unrelated conflicts. Recycled footage may well not be apparent to an average person, particularly if it has been edited.

Cheapfakes that slip past Facebook’s policies can be countered by simply linking to the original videos from which they were edited, but response time has consistently been an issue. The average person does not take it upon themselves to verify the source or accuracy of a video, and cheapfakes can take on a life of their own and spread widely before a publisher or platform such as Facebook steps in with mitigating measures.

How effective can Facebook’s deepfake detection be?

Facebook laid the groundwork for banning deepfakes last year by running a detection contest, the Deepfake Detection Challenge, in partnership with Microsoft. The company appears to be at least technically competent in this area, and could reap some very advanced detection methods from this research.
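
The contest’s core task is distinguishing manipulated video from authentic footage, which at its simplest reduces to binary classification of frames. A minimal PyTorch baseline along those lines might look like the sketch below; the choice of a ResNet-18 backbone and every other detail here is an illustrative assumption, not a description of any contest entry or of Facebook’s production systems:

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal sketch of a frame-level deepfake detector: a standard image
# backbone fine-tuned for binary real-vs-fake classification. This is an
# illustrative baseline, not any contest entry's actual approach.

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # pretrained weights in practice
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, frames):           # frames: (batch, 3, H, W)
        return self.backbone(frames)     # logit > 0 means "predicted fake"

model = DeepfakeDetector()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch; real training samples frames from labeled videos.
frames = torch.rand(4, 3, 224, 224)
labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])  # 1 = fake, 0 = real

logits = model(frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# A simple video-level decision: average the per-frame fake probabilities.
print(f"P(fake) for this clip: {torch.sigmoid(logits).mean().item():.2f}")
```

As the Prigge quote below suggests, every newly flagged video can be added to the labeled training set, so a detector of this general shape could be retrained periodically as examples accumulate.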

But the company will also have to negotiate public pushback when it begins removing deepfake videos. It is still unclear how it will handle claims of satire or parody, for example. Removal of videos will also no doubt feed into narratives that the company has a political agenda and is suppressing certain types of speech.

Deepfakes are unarguably a serious threat on the horizon, and it’s probably best that companies take a proactive approach to combating them. As Robert Prigge, CEO of Jumio, observed about this new form of manipulated media:

“There’s no doubt that deepfakes need to be regulated – it’s far too easy to use AI to create these deceptive videos in order to perpetuate fraud and damage reputations. Facebook’s decision to ban deepfakes is a step in the right direction, but there’s a lot of work to be done. With each deepfake flagged and reviewed, Facebook’s data set will grow, making it easier to train its machine learning models to determine if media has been manipulated and needs to be removed.”

Valid questions remain about Facebook’s ability to combat the spread of misinformation during the 2020 election, however. The company has made other recent moves, such as a ban on ads discouraging participation in the US census and new security protections for the accounts of political campaigns and campaign staff, but the very recent spread of cheapfakes across the platform suggests that 2020 might play out much as 2016 did.