Facebook Cannot Separate Itself From the Hate It Spreads

The social network doesn’t just ‘hold a mirror up to society’ — it does something much more powerful and concerning


Imagine a factory that allowed anyone to bring toxic waste there, any time of day or night, and promised to store it. Imagine that in addition to storing the waste, the factory would exponentially increase the amount of toxic waste and enlist wide swaths of the population into adding their own pollution to the mix. Imagine that as part of its service, the factory would continually spew those toxins into our air, water, and soil, poisoning millions of people. Imagine then that the factory devoted some small degree of its services to cleaning up some of those toxins, well after much of the toxic waste had been distributed, and then asked to be congratulated for cleaning up 90% of the spills (according to its own unverifiable metrics). Lastly, imagine that at every opportunity, the factory proudly proclaimed that it doesn’t profit from distributing toxic waste.

If you can imagine this factory, you already have a wonderful grasp of the status of Facebook in 2020. When Sheryl Sandberg, Facebook’s chief operating officer, released a statement announcing that the company was going to make changes to address the issues of hate on its platform, she said the company had chosen to act “because it’s the right thing to do.” We’d be wise to take this statement with a grain of salt.

This latest promise to do better comes after massive and sustained pressure from activists and advertisers from the Stop Hate for Profit campaign, as well as a letter from three senators asking Facebook about its failure to eliminate white supremacy on the platform, and a pending independent audit from a civil rights law firm. Sandberg goes on to tout Facebook’s success in using artificial intelligence to detect hate speech, calling the platform a “pioneer” in the field. Yet, what she is really touting is the company’s efforts to remove content that it is responsible for amplifying in the first place. When Facebook met with civil rights leaders who had 10 specific recommendations to address hate on the platform, Facebook said it was already doing enough and refused to commit to any of them.

Nick Clegg, Facebook’s vice president of global affairs and communications, released a statement that was equally nonsensical, claiming that Facebook “does not profit from hate” and that the “vast majority” of conversations on Facebook are positive. Clegg said that his belief is that Facebook “holds a mirror up to society,” but this ignores the fact that mirrors reflect while Facebook amplifies and recruits.

Even a cursory look at Facebook’s “mistakes,” as the company refers to them (or “Facebook’s business model,” as they are known to most everyone outside the company), includes redlining users, enabling age discrimination in hiring, offering “Jew haters” as an advertising category, promoting the “boogaloo” movement, fueling genocide in Myanmar, and aiding Duterte’s rise in the Philippines. It’s not that the problem of hate on Facebook is new so much as that each new revelation is met mostly with an apology and a “promise” to do better moving forward. Facebook has been apologizing and promising this way since at least 2007. Yet the “mistakes” continue.

Facebook CEO Mark Zuckerberg has often pitched A.I. as the solution to the company’s content moderation issues, notably during his congressional testimony in 2018. But even Facebook’s own employees have said that A.I. is not likely to solve the problem of hate on the platform anytime soon. As Alex Schultz, Facebook’s vice president of data analytics, has noted, “artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.”

A company whose business model necessitates that it consistently discharge poison into the environment should be dismantled.

In May, the Wall Street Journal released a devastating report on Facebook, which said that the platform, based on its own internal study, was promoting divisiveness. According to the company’s own research, “64% of all extremist group joins are due to our recommendation tools.” This report was largely ignored, however, in part because taking action would have “disproportionately affected conservative users and publishers,” as a key finding of the study was that Facebook “saw a larger infrastructure of accounts and publishers on the far right than on the far left.”

Even as Sandberg and Clegg do damage control, Zuckerberg guards Facebook’s rear, defiantly promising that nothing will change. “Mark Zuckerberg has dismissed the threat of a punishing boycott from major advertisers pressing Facebook to take a stronger stand on hate speech and said they will be back ‘soon enough,’” the Guardian noted last week.

“You know, we don’t technically set our policies because of any pressure that people apply to us,” he said in a private town hall with employees, according to the Information. “And, in fact, usually I tend to think that if someone goes out there and threatens you to do something, that actually kind of puts you in a box where in some ways it’s even harder to do what they want because now it looks like you’re capitulating, and that sets up bad long-term incentives for others to do that [to you] as well.”

The weird thing here is that what civil rights organizations are primarily asking Zuckerberg to do is get the white supremacy off of his platform, but he’s treating it like a hostage negotiation. Tuesday’s meeting between Facebook and several civil rights groups reflected as much. The civil rights leaders referred to the meeting as “very disappointing.”

Derrick Johnson, chief executive of the NAACP, said, “They [Facebook] lack this cultural sensitivity to understand that their platform is actually being used to cause harm, or they understand the harm that the platform is causing and they have chosen to take the profit as opposed to protecting the people.” The independent audit of Facebook, released Wednesday morning, remarks that Facebook’s decisions have resulted in “significant setbacks for civil rights.”

Facebook’s standard response to critics has often been “we don’t profit from hate.” For a moment — but just for a moment — let’s take it at its word. If “hate” on Facebook isn’t profitable, it follows either that Facebook consistently amplifies hate despite it not being profitable, or that the nature of the platform guarantees there will always be hate on it. This is what Clegg and Sandberg seem to be telling people when they say “zero tolerance doesn’t mean zero instances” and “we aren’t perfect.” Simply put, if there is no incentive to keep hate on the platform, and yet it persists, then Facebook cannot exist without a significant amount of white supremacy on its platform. This is the position of Facebook’s executives at the highest level.

In her article in the Guardian, Julia Carrie Wong discusses this grim calculus: what amount of “good” could Facebook possibly do that would even out the evil it promotes and amplifies?

As we consider Facebook’s place in our lives and in our society, particularly during a revolutionary moment, when the abolition of technologies and institutions is now a serious discussion after being dismissed as impossible for so long, we should ask ourselves: How much white supremacy and hate are we willing to tolerate in exchange for whatever “good” one thinks Facebook does? It’s similar to asking “How much lead do you want in your water?” “How much E. coli do you want in your food?” or “How many heavy metals would you like in your farmland?”

For what it’s worth, my answer is “none.” A company whose business model necessitates that it consistently discharge poison into the environment should be dismantled.

How much toxic waste is Facebook willing to spill into the environment? Its answer seems to have been — and to remain — “as much as we can get away with.”

Dr. Chris Gilliard is a writer, professor and speaker. His scholarship concentrates on digital privacy, and the intersections of race, class, and technology.
