If you’ve never read a Facebook “Community Standards Enforcement Report,” you probably haven’t witnessed the technocratic language used to obscure the harm done by the platform. Bluntly, it’s appalling.
The report offers statistics about how much more effective the company is getting at removing material that violates its community standards. The most recent edition, covering April through June, was published earlier this month. These numbers are deceptive because Facebook's team often speaks of solving a problem it directly created. It would be as if a meat-packer were intentionally poisoning its meat while also releasing statistics about making the meat safer.
Facebook hosts, recommends, and amplifies hate as part of its business model while also touting its success at taking that hate down. One need only look at the "Kenosha Guard" militia group that may have spurred the double-murder of protesters earlier this week to see the impacts of this business model in real life. The technocratic language Facebook uses to describe its takedown numbers serves to hide the fact that behind such metrics are very real human beings who are targeted and traumatized by material that exists only because Facebook made it possible in the first place. Car companies use crash test dummies; Facebook exposes all of its billions of live users to "crashes" all the time.
One of Facebook's key metrics in these enforcement reports is its "proactive detection rate," the proportion of violating content the company intercepted before users reported it. Take the "hate speech" category, for example: In its most recent report, Facebook said it proactively detected 94.5% of the 22.5 million pieces of hate speech content identified in this time period. That leaves well over 1 million pieces of that content to be seen and reported by users. It's essential to keep in mind that Facebook is speaking of a "who" when it provides these numbers, but it wants you to concentrate on the "what."
We’ve long been told by Facebook’s top executives that the company’s A.I. is getting better at catching content that violates its “community standards,” but…