Facebook Can’t Decide What Terrorism Looks Like

The social network is under pressure from all sides to create an internationally consistent and fair policy on policing possible terrorist content

MORGAN MEAKER
OneZero


Photo: Anadolu Agency/Getty Images

It was in an audio recording released in June 2014 that Abu Bakr al-Baghdadi, the leader of the Islamic State (IS) group who was killed in a raid by American forces on October 26, publicly declared his caliphate in Iraq and Syria. That summer, the group’s territorial gains were reinforced by a torrent of slick and at times gruesome online propaganda. In July 2014, the group’s magazine, Dabiq, was launched. And in August, a video showed the murder of American journalist James Foley, marking the start of a series of beheading videos that would multiply across the internet.

Back then, in the early days of IS, Facebook’s attitude toward terrorist content was described as “a model” for other social networks. Even today, the company’s spokespeople are eager to parrot the statistic that 99% of Islamic State- and al-Qaida-related content is detected by the company’s A.I., particularly in response to any criticism about how the platform polices other forms of terrorism. But in a 2018 report published by research network Vox Pol, terrorism expert J.M. Berger wrote that efforts to moderate violent white…
