Facebook’s Ban on Deepfakes Is a Half-Step at Best
It’s better than nothing — but just barely

On Monday, Facebook announced a new policy against “deepfake” videos. The policy is imperfect in multiple ways, which savvy outlets were quick to highlight. In fact, the ban is even more limited than some critics may realize.
At a time when other social platforms are embracing deepfake tools for fun and profit, Facebook’s ban counts as a step in the right direction. That said, it’s far from sufficient.
Key to understanding Facebook’s policy, and its shortcomings, is how the company defines deepfakes. As the Washington Post first reported, and Facebook confirmed in its blog post, the company will now take down what it calls “manipulated misleading media” — but only if it meets specific criteria. Here are those criteria, in Facebook’s words:
“It has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
Note that a video must meet both of these stipulations in order for Facebook to take it down. In addition, Facebook noted, “This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.”
News reports were quick to point out, rightly, that Facebook’s second criterion significantly limits the policy’s scope. The requirement that a faked video use artificial intelligence in order to qualify as a “deepfake” means that Facebook will still allow so-called “shallow fakes”: videos edited with conventional techniques to mislead people. That includes most of the misleading political videos that have made headlines in the past year, including one that slowed down a speech by House Speaker Nancy Pelosi in a way that made her appear drunk or impaired.
Such videos might still be covered under Facebook’s fact-checking policies, which limit the spread of content rated by the social…