The One Rule of Content Moderation That Every Platform Follows
For YouTube, Facebook, and the rest, if a decision becomes too controversial, change it
Facing pressure to ban a hatemonger, a tech company like Facebook, Google, or Twitter initially demurs, saying that while some may find his speech (or her speech, but usually his) objectionable, it doesn’t violate the platform’s rules. After a torrent of outrage, the tech company changes its mind and takes some form of action. Activists claim victory, conservatives cry censorship, and eventually, the controversy dies down — until the next time.
It’s a cycle that we witnessed last year with Facebook and the conspiracy theorist Alex Jones, the host of Infowars. The social network first declined to take action against Jones’ pages, saying it would be “contrary to the basic principles of free speech.” After three weeks of public pressure, Facebook changed its stance, saying, “We believe in giving people a voice, but we also want everyone using Facebook to feel safe.”
The cycle played out again last week with Google and Steven Crowder, a right-wing comedian who had used his YouTube channel to lob homophobic taunts at the Vox writer Carlos Maza. On June 4, Google-owned YouTube indicated that it had carefully reviewed Maza’s complaints and found that Crowder hadn’t violated its rules. By the next day, with a rather predictable backlash in full force, YouTube shifted into reverse, saying that a “deeper look” at Crowder’s videos revealed a “pattern of egregious behavior.” It opted to “demonetize” his channel, meaning that Crowder could no longer make money from his YouTube videos, though he would not be banned from the site.
To one side, the reversal looks like a hard-won victory: A major tech platform has been shamed into doing the right thing. To another contingent, it smacks of mob rule: A major tech platform has been goaded into moving the goalposts to censor a politically incorrect voice.
What should be clear to both sides, by now, is the extent to which these massive corporations are making up the rules of online speech as they go along. In the absence of any independent standards or accountability, public opinion has become an essential part of the process by which their moderation policies evolve.
Sure, online platforms have policies and terms of service that run thousands of words, which they enforce on a mass scale via software and a bureaucratic review process. But those rules have been stitched together piecemeal and ad hoc over the years to serve the companies’ own needs — which is why they tend to collapse as soon as a high-profile controversy subjects them to public scrutiny. Caving to pressure is a bad look, but it’s an inevitable feature of a system with policies that weren’t designed to withstand pressure in the first place.
The underlying problem of our platforms is not that they’re too conservative or too liberal, too dogmatic or too malleable. It’s that giant, for-profit tech companies, as currently constructed, are simply not suited to the task of deciding unilaterally what speech is acceptable and what isn’t.
That’s a case made in different ways this month by two experts in online speech. One sees platforms’ very public flip-flops as part of the problem; the other sees them as a key to the solution.
David Kaye, a University of California-Irvine law professor and UN special rapporteur on free expression, argues in his new book Speech Police that leaving content enforcement in the hands of private companies is a recipe for injustice. An absence of regulation in their home country, coupled with an inconsistent patchwork of laws abroad, has left platforms such as Facebook, YouTube, and Twitter as de facto global speech regulators, even though neither users nor even the platforms themselves really want it that way.
These companies’ incentives — which include user growth, limiting labor costs, and staying on governments’ good side — often work against values such as consistency, fairness, and transparency in content moderation. Even though each of these companies now employs (or subcontracts) scores of human content reviewers, along with some well-intentioned managers and executives, Kaye told me, “At some level, maybe at the most senior level, they preference the ad model, they preference the business model, they preference protection against liability.”
Kaye argued that the cycle of public outrage and corporate waffling will continue as long as the platforms make their rules and decisions behind closed doors. Even when they actually do have policies that they’re trying to apply consistently, the lack of a transparent process undermines the public’s confidence in their decisions. From outside, it all looks like a black box.
In a good legal system, decisions may be controversial, but at least the rationale is clearly laid out, and there’s a body of case law to serve as context. But when Facebook decides to de-amplify a doctored video of House Speaker Nancy Pelosi, or YouTube opts to demonetize Steven Crowder’s channel, there’s no way to check whether those decisions are consistent with the way they’ve interpreted their rules in the past, and there’s no clear, codified way to appeal those decisions.
Jennifer Grygiel, a communications professor at Syracuse University’s Newhouse School, agrees social platforms are letting down their users and the public when it comes to safety, misinformation, hate speech, and harassment. But whereas Kaye sees these problems through the lens of international governance and human rights law, Grygiel views them as fundamentally a failure of corporate social responsibility.
In a paper published in the June issue of the journal Telecommunications Policy, Grygiel and co-author Nina Brown argue that it’s not crazy for social media platforms to change their policies and practices in response to public pressure. On the contrary, public pressure, along with the growing threat of regulation, has been crucial in pushing tech companies to take on responsibilities that they were otherwise inclined to shun, such as proactively policing revenge porn, hate speech, and terrorist activity. “The fact is, the platforms are wildly out of control,” Grygiel said in a phone interview. “The use case has to be so horrendous and awful to really push these companies to take bigger steps.”
For instance, it took the Christchurch mosque shootings for Facebook to tighten control of its live streaming features. Grygiel sees YouTube’s response to Steven Crowder’s homophobia as another example of a company re-evaluating its policies only when confronted with an extraordinary outcry. Because the tech firms haven’t invested the thought or resources necessary to create and enforce robust moderation policies, Grygiel said, “this is really being negotiated in the public sphere right now through feedback loops with the public, through social media activism, and also through the press.”
So if tech companies make poor speech police, what’s the solution? I’ve argued before that putting national governments in charge isn’t the answer, and both Grygiel and Kaye seem to agree. Repressive regimes would overstep, of course, but even liberal democracies can vary widely in how they would regulate platforms. European countries like Germany have historically been quicker to regulate what they see as problematic speech, while in the United States the First Amendment would likely prohibit the government from doing much at all. Grygiel is also wary of reforming Section 230 of the Communications Decency Act, the bedrock law governing online platforms’ responsibilities, because they see its liability protections as necessary to empower companies to make decisions that protect vulnerable users. Prior to its passage, websites faced incentives to avoid moderating content at all, lest they expose themselves to lawsuits.
Kaye thinks the answer lies in international human rights law. If tech companies agreed to adopt its language as a common standard, their policies would begin to look less arbitrary and more defensible, with a basis for consistency across national boundaries. On top of that, they’d need to become radically more transparent about their specific rules and decisions, giving governments, academics, and the public the ability to check their work.
That all sounds good, and Grygiel agrees that the big tech companies need to settle on a common framework. Grygiel envisions a standards body analogous to the advertising industry’s Advertising Self-Regulatory Council, but for content moderation and social media safety. Even so, Grygiel believes that public pressure and even controversy will remain an important mechanism for accountability — which isn’t necessarily a bad thing. “The idea that social media gives everyone a voice is often viewed at the individual level, but the real power is in collective action and the public’s ability to help steer corporate social responsibility and accountability through feedback loops.”
Obviously, there are drawbacks to a system that depends partly on public outrage. It encourages activists on all sides to try to “work the refs” to interpret the rules in their favor, as we saw Congressional Republicans attempt when they held bad-faith hearings on Facebook’s alleged censorship of Trump supporters Diamond and Silk. It risks rewarding those who shout loudest, whether at the expense of a quiet majority or a marginalized minority. It may work against nuance, as we saw when Facebook struggled to defend the careful application of its misinformation policies in the case of the Pelosi video against many who just wanted it taken down.
All of which is why the current system, in which each controversy becomes a fresh referendum on the tech companies’ ill-considered rulebooks, is such a mess. But a mess may be the best we have, at least for now. Though Kaye’s proposal to ground global moderation guidelines in human rights law makes sense, I tend to think Grygiel is right that we’ll never achieve some ideal, unchanging set of rules that the platforms can apply confidently in all cases without misstepping. Consistency is a virtue, but so are responsiveness and adaptability. We need standards, and we need transparency — but ultimately, we’ll always need the backlashes, too.