
Four Questions About Regulating Online Hate Speech

Determining the standards for online hate speech and disinformation

Photo: Artur Debat / Moment Mobile / Getty Images

On August 6, 2019, the New York Times devoted the front page of its business section to this screaming headline and three articles that followed it:

The three articles involve some careful and important reporting. Two of them capture an underreported feature of online content issues, namely that internet companies, other than social media firms, have a role in regulating what we might call “dangerous speech” found on platforms like 8chan. A company like Cloudflare, which provides network security for millions of sites, can operate as a gateway for internet access, just as internet service providers, telecommunications companies, website hosting services, mobile device manufacturers, app stores, and many others do (see my report to the UN in 2017). Matthew Prince, the CEO of Cloudflare, has suggested that content moderation is a responsibility he does not want to have — he does not want Cloudflare to become an internet censor. (See this great piece from Evelyn Douek discussing Prince’s position and the issues at stake.)


But that headline.

Does hate speech persist online because Section 230 of the Communications Decency Act “shields it”? I’m sorry, Times readers — if you were looking for an easy fix to online hate, this isn’t it.

Look, there is a real debate to have over Section 230’s role in what so many see as the cesspool of hatred online. In one of the three articles, the Times quotes Professor Danielle Citron, who says we may be in for a “moment of reexamination” of Section 230 and other online norms (and I commend to you her scholarship addressing online hate and harassment). But “hate speech” does not persist online because of some magical Section 230 shield. For the most concise overview, I strongly encourage checking out Daphne Keller’s piece in the Washington Post at the end of July. Kurt Opsahl of the Electronic Frontier Foundation (EFF) captures one problem with the headline succinctly in this tweet:

Online incitement is a very serious problem, with real offline consequences. The tragedies of El Paso, Christchurch, and too many other places highlight the connections between offline harm and online hate. But figuring out solutions to online hate is not as easy as demanding a change in U.S. law. That may be part of the solution. Regulating online hate speech, however, requires us to think through at least four questions. I’ll note them here and hold off on the answers for now, simply to help orient the conversation toward what I hope is a more realistic and constructive agenda than the one that headline suggests.

First, what is “hate speech”?

The public discussion about hate speech rarely commits to a definition of the term. This is true not only of conversation in the media but also of debate in legislatures and courts, in the United States and abroad. Hate speech is exceedingly difficult to regulate in the United States because of Supreme Court interpretations of the First Amendment; indeed, as Kurt suggested in his tweet, most of what we think of as “hate speech” cannot be regulated by public authorities in the United States. (Of course, it can be moderated by the companies.) This is different from the situation in Europe, where certain forms of “hate speech” are legally subject to regulation. And under international human rights law, specifically Article 20 of the International Covenant on Civil and Political Rights, governments (other than the United States, which disclaimed the power to do this under its constitutional law) are bound to prohibit advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence.

So how should we define the subject? Are we concerned with speech that is degrading, offensive, “harmful” to individuals who are members of certain protected categories (like race, religion, national origin, immigrant status, sexual orientation, etc.)? Speech that advocates hatred of, or discrimination against, such individuals and groups? Speech that constitutes incitement to violence? Speech that involves recruitment to join a hateful organization, such as a white nationalist group or ISIS? Speech that constitutes “extremism”? Okay, if so, then what is extremism?


Given the speed, reach, and capacity for disinformation online, should our definition of “online hate speech” be different from offline hate speech? Should it depend on whether the hate speech is in the context of a group committed to offline violence? As of now, the discussion is open-ended and undefined. I could go on, but you get the picture. For the debate to be well structured, we need to know what it is we want controlled.

Second, who should regulate it? Who decides?

This may be an even more important question than the definitional one. The Cloudflare-8chan episode highlights the fact that we increasingly rely on private actors to make decisions about publicly available content. Are we okay with that? Do we want companies that enjoy ever-expanding control of our access to information and public debate to also have the power and responsibility to make decisions about where to draw the lines on content? Or do we expect public authorities — that is, governments — to channel their democratic processes into deciding what rules the companies should apply? I do not pose these questions to gum up the machinery for dealing with online hate. And I believe that companies and governments both have responsibilities in this space. I also believe that these are core questions about how we, as a public, decide the rules of the road for speech online. I wrote a book about it, ffs.

Third, should there be a global standard and global enforcement?


The headline in the Times makes it sound as if online hate is a question for American lawmakers to figure out. Well, yes and no. Yes, in the sense that we need an approach that works for the American public, worked out (in my view) through democratic process. But no, in the sense that online hate is a global phenomenon. If you think online hate is bad in America, try checking out any number of sites elsewhere in the world, where online hate leads to all sorts of offline violence and is almost entirely unregulated. And what’s worse: authoritarian governments regularly use the phrase “hate speech” in their legislation to criminalize online content that is merely critical of public figures and institutions, meaning governments, religious entities, and other actors that should not be shielded from criticism and debate. Social media companies have global standards against online hate, but they apply them inconsistently, or fail to apply them at all, in places far from Northern California. And those standards regularly fail to respond to the realities of online and offline debates. (Think Myanmar, as the most problematic example.) Are these companies even capable of making fine distinctions about what is and is not hate speech in this global context? Do global rules and global platforms adequately respond to local — or hyper-local, as Chinmayi Arun calls it — problems of hate and disinformation?

Fourth, and finally, what to do about Trump and other public officials who trade in hateful content?

I pose this question not merely because Trump has been such a malign figure in the rise of white nationalist violence in recent years. I raise it because Trump uses his platform — whether it’s Twitter or his often incoherent ramblings at rallies or in the Rose Garden — to amplify the messages of exclusion that white supremacists and other racists love so much. Addressing hate speech certainly involves addressing how it spreads and where it is hosted online. But it also involves public leadership. An excellent but often overlooked framework for dealing with hate speech is something called the Rabat Plan of Action, adopted by experts convened by the UN High Commissioner for Human Rights in 2013. If you can get beyond the UN language (please do!), I urge you to take a look at its careful recognition of both the harms caused by hateful speech and the ongoing importance of protecting freedom of expression. In the context of Trump and other leaders who irresponsibly promote hateful rhetoric, this passage from the Rabat Plan of Action is a valuable reminder:

Excerpt: Rabat Plan of Action

My next report as UN Special Rapporteur for the UN General Assembly will focus on online hate speech and disinformation: what should be the standards? How should we move toward their control? I need to turn to writing that… right now, actually. Please stay tuned for more.

Teach law at UC Irvine, former UN Special Rapporteur on freedom of expression, author of Speech Police: The Global Struggle to Govern the Internet. @davidakaye
