Facebook’s Ban on Deepfakes Is a Half-Step at Best

It’s better than nothing — but just barely


On Monday, Facebook announced a new policy against “deepfake” videos. The policy is imperfect in multiple ways, which savvy outlets were quick to highlight. In fact, the ban is even more limited than some critics may realize.

At a time when other social platforms are embracing deepfake tools for fun and profit, Facebook’s ban counts as a step in the right direction. That said, it’s far from sufficient.

Key to understanding Facebook’s policy, and its shortcomings, is how the company defines deepfakes. As the Washington Post first reported, and Facebook confirmed in its blog post, the company will now take down what it calls “manipulated misleading media” — but only if it meets specific criteria. Here are those criteria, in Facebook’s words:

“It has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:

It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

Note that a video must meet both of these stipulations in order for Facebook to take it down. In addition, Facebook noted, “This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.”

News reports were quick to point out, rightly, that Facebook’s second criterion significantly limits the policy’s scope. The requirement that a faked video use artificial intelligence in order to qualify as a “deepfake” means that Facebook will still allow so-called “shallow fakes”: videos edited with conventional techniques to mislead people. That includes most of the misleading political videos that have made headlines in the past year, including one that slowed down a speech by House Speaker Nancy Pelosi in a way that made her appear drunk or impaired.

Such videos might still be covered under Facebook’s fact-checking policies, which limit the spread of content rated by the social network’s fact-checking partners as false or misleading, and warn users when they share it. (Those systems have their own serious limitations; the Pelosi video had already spread widely by the time Facebook’s fact-checking mechanisms kicked in.) But they won’t be taken down.

But Facebook’s criteria for a takedown are even narrower than that. Read the first part of its definition closely and you’ll see it only applies to videos that make the subject appear to have “said words they did not actually say.” So it would not cover a deepfake video that makes a person appear to have done something they did not actually do. That’s a real loophole because it would allow, for instance, a deepfake video that makes it look like a politician burned the American flag, participated in a white nationalist rally, or shook hands with a terrorist. A Facebook spokesperson confirmed to OneZero that those hypothetical examples would not be prohibited under its deepfakes policy.

Perhaps obvious, but still worth noting, is that the policy only applies to videos. Earlier this week, Rep. Paul Gosar, an Arizona Republican, tweeted a faked photo that appeared to show Barack Obama shaking hands with the Iranian president. That wouldn’t be covered by Facebook’s policy either, the company confirmed.

Finally, the exception for videos that are “parody or satire” could turn out to be the widest loophole of all. As I reported last month, fake news merchants have been using tiny “satire” tags to slide blatantly misleading viral stories past Facebook’s fact-checkers. Unless Facebook gets into the business of deciding what counts as satire — which it seems unlikely to do unless compelled to — even the narrow swath of faked videos that would otherwise be covered by Facebook’s deepfakes ban may evade enforcement by claiming to be satire.

It all adds up to a policy so specific that it would cover approximately zero of the actual faked videos that have gone viral on the platform in the past year. That makes it seem more symbolic than practical.

And yet, it’s better than nothing — which is exactly what most other social platforms have done to guard against deepfakes so far. At least the policy covers the most canonical example of a hypothetical political deepfake — one in which a public figure’s face is superimposed via A.I. on someone else to make it look like they said something they didn’t. And while it’s true that most actual deepfakes today are pornographic rather than political in nature, those are already prohibited for the most part under Facebook’s rules against nudity and pornography. The deepfake prohibition applies to politicians as well as ordinary users.

It’s no coincidence that Facebook’s announcement comes ahead of a House Energy and Commerce subcommittee hearing this week on “manipulation and deception in the digital age,” at which Facebook is among those scheduled to testify. Facebook and other tech companies like to talk about their ability to “self-regulate,” but their self-regulation often seems to come only when they face an imminent threat of actual regulation.
