Illustrations: Priya Mistry

Your Speech, Their Rules: Meet the People Who Guard the Internet

Tech platform trust and safety employees are charged with policing the impossible. They open up to Medium’s head of trust and safety.

Alex Feerst
Published in OneZero · Feb 27, 2019


When Facebook started 15 years ago, it didn’t set out to adjudicate the speech rights of 2.2 billion people. Twitter never asked to decide which of the 500 million tweets posted each day are jokes and which are hate speech. YouTube’s early mission wasn’t to determine if a video shot on someone’s phone is harmless speculation, dangerous conspiracy theory, or information warfare by a foreign government. Content platforms set out to get rid of expression’s gatekeepers, not become them.

Yet here we are. Controversial content takedowns are regular news. In August 2017, Cloudflare withdrew its DDoS protection service from the Daily Stormer, an American neo-Nazi online publication. A year later, Apple, Facebook, YouTube, and Twitter removed content by conspiracy theorist Alex Jones. Late in 2018, Apple pulled Tumblr from the iOS App Store, reportedly because of child pornography. Tumblr in turn banned all adult content and is now back in the App Store. Like tariffs that companies pass on to consumers, restrictions on platforms flow downstream to silence users — writers, trolls, bigots…



Alex Feerst is a technology lawyer and expert on technology ethics in areas including artificial intelligence and neurotechnology - https://feerst.com/about