Facebook’s Technocratic Reports Hide Its Failures on Abuse

These reports obscure a torrent of hate speech and other toxic content

Facebook’s moderation reports mask urgent problems with language written by and for the tech elite. Photo illustration; image sources: Chip Somodevilla, William Whitehurst/Getty Images

If you’ve never read a Facebook “Community Standards Enforcement Report,” you probably haven’t witnessed the technocratic language used to obscure the harm done by the platform. Bluntly, it’s appalling.

The report offers statistics about how much more effective the company is getting at removing material that violates its community standards. The most recent edition, covering April through June, was published earlier this month. These numbers are deceptive because Facebook’s team often speaks of solving a problem it directly created. It would be as if a meat-packer were intentionally poisoning its meat while also releasing stats about making the meat safer.

Facebook hosts, recommends, and amplifies hate as part of its business model while also touting its success at taking that hate down. One only needs to look at the “Kenosha Guard” militia group that may have spurred the double-murder of protesters earlier this week to see the impacts of this business model in real life. The technocratic language Facebook uses to describe its takedown numbers serves to hide the fact that behind such metrics are very real human beings who are targeted and traumatized by material that only exists because Facebook made it possible in the first place. Car companies use crash test dummies; Facebook exposes all of its billions of live users to “crashes” all the time.

One of Facebook’s key metrics in these enforcement reports is its “proactive detection rate,” which refers to the share of violating content the company intercepted before users reported it. Take the “hate speech” category, for example: In its most recent report, Facebook said it proactively detected 94.5% of the 22.5 million pieces of hate speech content identified in this time period. That leaves roughly 1.2 million pieces of that content to be seen and reported by users. It’s essential to keep in mind that Facebook is speaking of a “who” when it provides these numbers, but it wants you to concentrate on the “what.”
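To make that arithmetic concrete, here is a minimal sketch in Python using only the two figures Facebook reported (22.5 million pieces actioned, 94.5% caught proactively); the variable names are mine:

```python
# Figures from Facebook's April–June enforcement report, as cited above.
total_actioned = 22_500_000   # pieces of hate speech Facebook took action on
proactive_rate = 0.945        # share Facebook says it caught before any user report

caught_proactively = total_actioned * proactive_rate
reported_by_users = total_actioned - caught_proactively

print(f"Caught before a user report: {caught_proactively:,.0f}")
print(f"Seen and reported by users:  {reported_by_users:,.0f}")
# ~1,237,500 pieces were seen and flagged by people — and that counts only
# the content Facebook eventually found, not what went undetected.
```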

We’ve long been told by Facebook’s top executives that the company’s A.I. is getting better at catching content that violates its “community standards,” but let’s take a look under the hood of the report.

According to a recent story in Fast Company, Facebook “reports the percentage of the content its A.I. systems detect versus the percentage reported by users. But those two numbers don’t add up to the whole universe of harmful content on the network. It represents only the toxic content Facebook sees.”

The “known unknowns,” if you will. This next part is crucial:

For the rest, Facebook intends to estimate the ‘prevalence’ of undetected toxic content, meaning the number of times its users are likely seeing it in their feeds. The estimate is derived by sampling content views on Facebook and Instagram, measuring the incidences of toxic content within those views, then extrapolating that number to the entire Facebook community. But Facebook has yet to produce prevalence numbers for hate posts and several other categories of harmful content.

Nor does Facebook actively report how many hours toxic posts missed by the A.I. stayed visible to users, or how many times they were shared, before their eventual removal. In addition, the company does not offer similar estimates for misinformation posts.

Facebook did not immediately respond to a request for comment about toxic content on its platform and whether it will provide more detailed information in its reports moving forward. For the time being, Facebook has a number that only accounts for what it sees, but there’s a whole ‘nother number that it has to estimate based on a sample. It calls this number “prevalence.”

In other words, there’s too much toxic content for Facebook to ever really see it all (hello, scale), so Facebook has a formula for determining the amount based on random sampling. Abstractly, perhaps everyone understands this, but to think about it another way: There’s so much sewage on the platform that the company must continually guess how much of it people are actually seeing.
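Facebook doesn’t publish the formula itself, so the following is only a rough sketch of the sampling-and-extrapolation approach described above; the function name, sample size, and rates are all hypothetical:

```python
import random

def estimate_prevalence(sampled_views, total_views):
    """Estimate how many content views platform-wide were of toxic posts,
    by extrapolating from a random sample of views.

    sampled_views: list of booleans, True if the sampled view was of a
                   post later labeled as violating (hypothetical data).
    total_views:   total number of content views on the platform.
    """
    toxic_in_sample = sum(sampled_views)
    prevalence_rate = toxic_in_sample / len(sampled_views)
    return prevalence_rate, prevalence_rate * total_views

# Hypothetical example: 100,000 sampled views, about 0.1% of them toxic,
# extrapolated to one trillion total views.
sample = [random.random() < 0.001 for _ in range(100_000)]
rate, estimated_toxic_views = estimate_prevalence(sample, total_views=1_000_000_000_000)
print(f"Estimated prevalence: {rate:.4%} of views, "
      f"or roughly {estimated_toxic_views:,.0f} views of toxic content")
```

The point of the sketch is simply that “prevalence” is an estimate of how many times people likely saw toxic posts, not a count of the posts themselves.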

Behind those numbers are people being exposed to virulent racism, misogyny, transphobia, Holocaust denial, and dangerous misinformation and conspiracy theories. This is where Facebook’s language plays an important role. It’s not just “data” or numbers that these reports are referring to: it’s increments of hate spewing out, harming individuals, their communities, and increasingly, democracies and societies.

So, every time Facebook releases these numbers, it is asking us to think about all the people the company didn’t harm. All the misinformation it didn’t spread. All the conspiracies it didn’t promote. Most businesses don’t release reports saying “we didn’t poison” this many people, or “we traumatized 5% fewer people” during Q3 of this year. It would rightly be seen as unacceptable.

And this type of language also obscures the effects on commercial content moderators that Sarah Roberts writes about in Behind the Screen. Commercial content moderators are traumatized by the job of sifting through content on Facebook so that everyone else is traumatized less.

From Facebook’s report: “Lastly, because we’ve prioritized removing harmful content over measuring certain efforts during this time, we were unable to calculate the prevalence of violent and graphic content, and adult nudity and sexual activity.”

This is pretty astounding when you dig in. A multibillion-dollar company is telling us that because it prioritized removing one kind of harmful content, it couldn’t even measure how often users were exposed to other kinds. Imagine something analogous from another business: “We spent so much time making sure there were no rat feces in your food that we weren’t able to screen for metal shavings.” “We dedicated most of our resources to seat belts, so this quarter, the brakes won’t work as well…”

In the report, Guy Rosen, Facebook’s VP of integrity, said: “We’ve made progress combating hate on our apps, but we know we have more to do to ensure everyone feels comfortable using our services.” This reveals the paradox (some might say lie) at the center of Facebook and its mission. Mark Zuckerberg, Sheryl Sandberg, and anyone else who represents Facebook in public life consistently say that the root of the company’s mission is connection. But the obvious and typically unstated corollary is that if your mission is to connect everyone, your company is necessarily going to connect some of the most odious and hateful individuals with like-minded people. And Facebook does these hateful people the extra favor of recommending them to even more people. Many journalists and researchers have argued that the move toward groups would make hate on the platform even more seamless and more difficult to detect, and it has.

Hate on the platform isn’t so much a sign of things going wrong on Facebook as it is the platform doing exactly what it’s designed to do. Thinking otherwise, and letting numbers obscure the effect of that toxicity, is giving Facebook what it wants — to be thought of as a force for good that has some unforeseen side effects. It’s not, and we shouldn’t treat it as such.

Dr. Chris Gilliard is a writer, professor and speaker. His scholarship concentrates on digital privacy, and the intersections of race, class, and technology.
