Facebook Is Too Big to Moderate

The numbers make it painfully obvious


More than a third of the world's 7.8 billion people use Facebook. They post constantly, and no one outside the company seems to know exactly how many posts Facebook sees per second (it has to be in the millions).

Now imagine human moderators standing before that tsunami of content, all 15,000 of them, spread across the globe, interpreting languages, cultural norms, political imperatives, and ideological nuances to decide what content crosses the line. It’s like a feather trying to hold back a hurricane.

I’ve known these numbers for a long time and have always understood, in the abstract, Facebook’s scale problem. However, something Gretchen Peters, the founder and CEO of the Alliance to Combat Online Crime, told The Today Show this week crystallized the scale of the problem for me in a way nothing had before.

Today correspondent Kate Snow was asking about the strategies Facebook and others already have in place for moderating content. Peters shook her head and said:

“The number of moderators that these firms employ is just tiny.”

I thought, Tiny? They literally have thousands.

Peters continued:

“We ran the numbers with Facebook. The number of moderators per Facebook user would be the equivalent of the state of Ohio having one policeman.”

I thought, That has to be an exaggeration.

It is and it isn’t. Ohio currently has a population of 11.7 million. If you divide Facebook’s 2.8 billion users by its 15,000 moderators, you get roughly one moderator for every 186,000 users. At that ratio, it’s more like 63 police officers for all of Ohio.
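
For anyone who wants to check the back-of-the-envelope math, here’s a quick sketch using the figures cited above (2.8 billion users, 15,000 moderators, 11.7 million Ohioans); the results are rough approximations, nothing more:

```python
# Back-of-the-envelope math behind the Ohio comparison.
facebook_users = 2_800_000_000   # monthly active users
moderators = 15_000              # human moderators, per the figure cited above
ohio_population = 11_700_000     # current Ohio population

users_per_moderator = facebook_users / moderators
print(f"Users per moderator: {users_per_moderator:,.0f}")   # ~186,667

# If Ohio were "policed" at the same ratio:
ohio_equivalent_officers = ohio_population / users_per_moderator
print(f"Ohio-equivalent officers: {ohio_equivalent_officers:.0f}")  # ~63
```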

Peters’ point, though, was made.

There’ve been calls for Facebook and other social media platforms to fundamentally change their approach. A 2020 NYU Stern report recommended that, in addition to ending all content moderation outsourcing, social media companies should double the number of content moderators.

Honestly, I don’t think 30,000 moderators would help the math all that much. Plus, there’s the reality of localization. You can’t just add more experienced moderators; they need to be local-language speakers with a deep understanding of local social mores, practices, and politics (it took Facebook years to figure this out).

That same study, though, appears to argue that the very act of content moderation is harmful to humans. It calls on companies to provide moderators with “top quality, on-site medical care” and to “sponsor research into the health risks of content moderation.”

On the surface, it’s not always obvious that Facebook needs moderation. Most of what I see, for example, is based on my relatively limited social sphere and middle-of-the-road interests. I rarely use Facebook search and rely on the news feed to deliver updates on family and friends. The Facebook experience is, in a way, custom-built for each user. Its algorithm delivers content based on previous interests and connections and can lead down some pretty dark roads. During The Today Show segment, Peters and Snow were talking about illegal drug sales on Facebook and Instagram.

Unlike Twitter’s public feed, Facebook’s millions of private feeds allow people to post hateful, illegal, and harsh content to narrowcast audiences, often making it hard for anyone else, including Facebook and the authorities, to detect.

The cost of speech

Facebook didn’t start out with moderation, because Mark Zuckerberg believed deeply in freedom of speech and saw Facebook as merely a platform for connectivity and content, not a publisher. By 2009, however, it became clear that some form of content policy-setting and moderation was necessary.

In Steven Levy’s exhaustive Facebook biography, Dave Willner, Facebook’s former head of content policy and the person who drafted its first set of content moderation rules, told Levy, “Like, I cannot express to you how absolutely bananas Facebook would be if it were not for content moderation.”

The fact that Facebook isn’t a complete garbage can of illicit information is not a miracle; it’s the result of a decade-plus-long effort to rein in an almost unstoppable force Mark Zuckerberg unleashed on an unsuspecting (and receptive) world.

The moderation system still appears hopelessly overwhelmed. Levy’s book reported that moderators have roughly 40 seconds to decide if questionable content stays or goes, and somewhere in there they might have to decide if it gets kicked up the chain of command. Some of it does end up on Zuckerberg’s (or COO Sheryl Sandberg’s) desk. Both understand the company and its policies better than a poorly paid contractor moderator, but they're also only human and can make mistakes.

I’ve always thought artificial intelligence is the obvious answer to content moderation’s scale problem. Facebook already uses it extensively to proactively block hate content. When Levy wrote about these systems, he reported that Facebook’s A.I. still missed about half of it. However, Facebook reported late last year that its A.I. has improved substantially since then.

Developers can train A.I. to more effectively catch all kinds of questionable, illegal, hateful, violent, and fake content on the fly. However, poorly trained A.I. produces false positives. If human moderators are already dealing with millions of people challenging human moderation decisions, Facebook’s expanded A.I. moderation must be generating an unmanageable number of additional appeals.
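
To make that trade-off concrete, here is a hypothetical sketch of threshold-based automated moderation. None of this reflects Facebook’s actual system; the posts, scores, and threshold values are invented purely for illustration:

```python
# Hypothetical illustration of the trade-off in automated moderation:
# a lower threshold removes more violating content, but also more benign posts,
# and every wrongly removed post is a potential appeal a human must review.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    score: float        # model's estimated probability the post violates policy
    is_violation: bool  # ground truth (unknown to the model)

def moderate(posts, threshold):
    removed = [p for p in posts if p.score >= threshold]
    false_positives = [p for p in removed if not p.is_violation]
    return removed, false_positives

posts = [
    Post("selling rare sneakers", 0.82, False),    # benign post the model misreads
    Post("obvious policy violation", 0.97, True),
    Post("family vacation photos", 0.05, False),
]

for threshold in (0.9, 0.7):
    removed, fps = moderate(posts, threshold)
    print(f"threshold={threshold}: removed={len(removed)}, false positives={len(fps)}")
```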

Peters told The Today Show’s Snow that her group believes Facebook would be removing a lot more of this content if it could be sued, a reference to the potential amendment of the Communications Decency Act’s Section 230 protections, which shield Facebook and other social media platforms from direct liability for the content on their platforms.

This aligns with one of the NYU Stern report’s other recommendations: explore narrowly tailored government regulations.

The potential for billions in damages awarded to those harmed by Facebook content would surely spur massive change in Facebook’s moderation policies and practices. And yet I doubt it will solve the problem.

Facebook is being used by a third of all humanity and there is no way to moderate humanity. This is the scale of the problem Facebook faces and I don’t know if it’s possible for an A.I., Facebook, or anyone to fix it.

