It’s Time for Platforms to Go Beyond Section 230

How ‘democratized speech’ went off the rails and how to get back on track

Tyler Elliot Bettilyon
OneZero
12 min read · Sep 23, 2019


Credit: Leon Bublitz/Unsplash

The 2020 election is approaching, and our collective anxiety about misinformation and disinformation is heating up. Trolls of all stripes — from basement-dwelling to state-sponsored — are gearing up for another campaign season, and they’re not alone. Newspapers, TV networks, internet platforms, fact-checkers, super PACs, and corporate lobbyists will all shift into high gear for the quadrennial shitshow that is a U.S. presidential election. Election season is a catalyst for the worst online actors, but the weaponization of social media platforms and content aggregators has become a fact of life.

Russia has attempted to influence several western elections, notably relying on the Internet Research Agency to create and spread its propaganda online. The social manipulation mercenary firm Cambridge Analytica made a splash after a 2018 exposé revealed how it harvested data from millions of Facebook accounts. The company ultimately pleaded guilty to breaking European data protection laws. Facebook was also used to incite genocide against the Rohingya in Myanmar.

China appears to be using Twitter to smear the organizers of the ongoing protests in Hong Kong. The encrypted messaging service WhatsApp has been weaponized in India to spread xenophobic misinformation. YouTube has faced criticism over its recommendation engine, including allegations that it drives people to extremist content, state-sponsored propaganda, and a wide array of misinformation.

Things get even worse if you venture off the beaten path. On Gab and 8chan — websites that both position themselves as havens of free speech — even mass murderers and terrorists are protected from content moderation. The shooter who killed 11 people in a Pittsburgh synagogue last year used Gab to air his anti-Semitic grievances, and the New York Times identified three mass shootings that were announced beforehand on 8chan, sometimes with accompanying “manifestos” written by the shooters. Then again, the shooter at Christchurch livestreamed his hideous rampage to Facebook, so Gab and 8chan are not the only platforms with a problem.

The political weaponization of social media gets a lot of attention, but the abuse of these new media platforms isn’t strictly political. YouTube’s recommendation engine has recently been implicated in helping pedophiles string together playlists of videos of children wearing little to no clothing. The content is often generated innocently enough: Parents make home videos of their children at the beach, or kids film themselves dancing. YouTube’s recommendation algorithm identified engagement patterns in these otherwise unconnected videos and created a de facto playlist through the recommended videos section.

There’s also the phenomenon of “revenge porn,” where spiteful ex-lovers share erotic photos and videos of their former partners online without consent. It’s become common enough that nearly every state has passed laws attempting to curb the practice.

I could go on, but you get the idea: Online media is being used far outside its intended use cases, and the results are bad.

The “democratization of free speech” sounded great once upon a time, but it has not proceeded exactly as advertised. On one hand, new media platforms have indeed given a voice to the voiceless. They helped create media ecosystems that are more resilient to censorship than the centralized outlets of the industrial age. They have empowered people who would otherwise be silenced to connect with audiences worldwide, including protestors, journalists, political dissidents, and whistleblowers.

Unfortunately, the list of empowered parties also includes a diverse group of bad-faith actors. State-sponsored propagandists, corporate propagandists, pedophiles, jihadists, and white supremacists (to name a few) have also become savvy users of online platforms. As a result, there is growing tension between the conflicting goals of defending free expression, protecting vulnerable groups, and assigning some degree of editorial and legal responsibility to the operators of online platforms.

How we got here: Publishers, platforms, and Section 230

Publishers have long been held to certain editorial standards, not just culturally but legally. If a newspaper, magazine, or cable news show publishes defamatory content, the organization can be sued. Same thing for unprotected hate speech and incitement of violence. There are a host of laws, and even more case law, exploring and defining the legal responsibility of publishers with regard to the content they publish. Publishers generally take this responsibility seriously. Even if they sometimes get the story wrong, most news organizations strive to maintain high editorial standards.

Social media platforms, online forums, and content aggregators do not currently bear legal liability for the information published on their websites, and there is good reason for that. On websites like Facebook, Twitter, and Reddit, any user can post anything at any time — illegal content included. Holding these organizations legally responsible for every piece of content that lands on their websites would be a death sentence for the entire industry. The major platforms have billions of users posting multiple times per day. The prospect of individually moderating every piece of content that ends up on their systems is untenable.

This was the rationale behind one of the crucial laws governing social media companies: Section 230 of the Communications Decency Act. This statute gives certain “providers of an interactive computer service” broad immunity from the legal responsibilities assigned to publishers for content that third parties (that is, users) post to their websites. The law came about after two lawsuits in the 1990s. In the first, Cubby v. CompuServe, the court found that CompuServe was not liable for defamatory content posted to its website because the company did not perform any content moderation: CompuServe could not be held liable for defamatory content that it had no clue was being published to its website. In the second lawsuit, Stratton Oakmont v. Prodigy Services, the court found that Prodigy — which did perform some limited content moderation — was a publisher under the law and could be held liable for all of the nearly 60,000 daily posts on its service.

The result of these cases was that any online platform that allowed third-party content had to act as a blind host or take on significant legal risk. If a website that allowed self-publication of user-generated content took any steps to moderate posts on its service, it became liable for all of them. This created a perverse incentive for companies to abandon moderation altogether. Section 230 was created to reverse this incentive. Adopted in 1996, the law provides companies with immunity from the publisher’s liability for third-party content. This freed companies to actively moderate the content posted to their websites without taking on liability for illegal content that might slip through the cracks.

In some ways Section 230 was a wild success: Every major online platform has community standards and moderates user-submitted content in one way or another. These companies’ efforts at self-policing are inadequate — see the list of scandals above — but the situation would surely be much worse if platform operators did nothing at all about content moderation. But like most laws, Section 230 has had unintended consequences as well.

Section 230 was written with ’90s-era forums like CompuServe in mind. Those services collected little to no data, and even if they had, many of the A.I. techniques we now use to process massive datasets had not yet been invented. In the ’90s, these companies did not provide any significant curation or recommendation systems. Users navigated a tree-like structure of topics, and website interfaces sometimes offered a few sorting and filtering options. Flash forward to 2019: The major tech companies now all collect massive amounts of consumer data and use it to build deeply personalized algorithmic curation systems.

The authors of Section 230 couldn’t have imagined the world of microtargeted advertisements and surveillance capitalism. The law was passed before the term “social media” had even been coined (in 1997), and long before social media became a major source of news for most Americans. Laws governing data privacy, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), have emerged to help address concerns about these massive systems of data collection, but laws governing the editorial responsibility of content aggregators and media platforms have remained largely unchanged.

The lack of legal responsibility led platform operators to deprioritize content moderation, and the result has been poorly funded moderation schemes. To the extent these schemes exist, they have been built out in an ad hoc manner, driven largely by public relations concerns. The consequences are almost as horrifying as the list of abuses above. Content moderators have become the metaphorical caged child at the center of Omelas. Facebook’s content moderators are shown an endless stream of humanity at its ugliest, including videos of suicide, murder, and child pornography. These people endure shocking working conditions to protect the public at large from the internet’s hideous underbelly.

A way forward: Responsibilities for everyone

One response to the above critique is to point fingers at big tech. Section 230 rightly gave the platforms permission to police themselves, but it turns out the big tech companies suck at self-policing. Platform owners and operators have been, at best, critically negligent in responding to abuse of their systems. At worst, they have actively obscured abusive behavior. For example, Facebook executives knew about suspicious Russian activity on their platform long before the rest of the world and actively tried to hide it, according to a New York Times investigation.

These corporations suck at content moderation partly because, at large scales, it is a genuine challenge. Billions of people are posting a wide variety of content 24 hours a day, seven days a week. Those posts originate in hundreds of countries, each with its own cultural norms, and are written in hundreds of languages. Consistent community guidelines are hard to draft, and even when platforms successfully create a consistent content policy, there will be widespread disagreement about whether that policy correctly identifies which content to censor, especially when the business operates globally.


The problem is hard, but the platforms also suck at it because they haven’t been trying. The juxtaposition of content moderators — some of whom have developed symptoms of post-traumatic stress disorder from the job — with the six-figure-earning, free-lunch-getting, ping-pong-playing software engineers in Silicon Valley speaks volumes about how much these companies value content moderation: They don’t.

The worst aspects of these platforms are the unintended consequences of a myopic focus on growth and revenue. But unintended doesn’t mean unforeseeable. Facebook could have anticipated that its microtargeted advertisements could be used to violate the Fair Housing Act. YouTube could have anticipated how its recommendation system drives users to misleading and sensational videos. Many platform operators apparently knew their products were incentivizing addictive behavior. It’s not realistic to think internal teams can catch every possible form of abuse before it happens, but it’s clear that these companies are capable of catching quite a bit more.

Platform operators need to invest in trust and safety. That may require significant internal restructuring: From top to bottom, product decisions need to be informed by questions about how bad-faith actors can abuse the system. Thinking this way might require training, new hires, and the general empowerment of people on the trust, safety, and security sides of the business. Content moderators need to be compensated better, including real efforts toward addressing the troubling mental health impact on these workers. Success at content moderation might even require platforms to (gasp!) shrink their margins by occasionally prioritizing trust and safety over advertising revenue.

Prioritizing healthy interactions will likely result in changes to the interface, new product offerings, and other improvements. For example, Instagram is experimenting with removing like counts from the interface. Jack Dorsey has said if he could go back in time he’d never have added the like button to Twitter. From internal algorithmic tweaks to user-facing changes like demetrification, platforms can do quite a lot on their own. But history suggests that most won’t take action without external pressure.

Regulators have largely been asleep at the wheel on this issue throughout the 2010s. Whenever they do emerge from hibernation, they fight with kid gloves on. The recent Federal Trade Commission and Federal Communications Commission settlements against major tech companies amount to a tiny fraction of those companies’ revenue. Google and Facebook can easily afford to keep breaking the law and paying fines if the current climate continues.

Section 230 got some things right: Facebook, Twitter, Reddit, and others should not be liable for the content posted to their services the same way news organizations are responsible for what they print. But the immunity the law grants is too broad. Algorithmic curation, massive data collection, and microtargeting are all editorial decisions made by platform operators. Those decisions are mediated by computer programs, but platforms like YouTube are still responsible for creating, testing, and auditing those programs. Modern platforms make billions of editorial decisions every day, and the results have included driving vulnerable people toward extreme content. Platform operators should be forced to take responsibility for those active editorial decisions.

New data privacy regulations, such as GDPR and CCPA, are a step in the right direction. Still, they don’t directly address the problem of promoting and amplifying hateful or defamatory content. We need a new legal classification that assigns partial liability to platforms when they spread, promote, and amplify illegal speech. And yes, even the First Amendment has exceptions.

Some in the tech world, including Cloudflare CEO Matthew Prince, want more legal guidance on what kind of speech should be protected or censored online. In a blog post written after Cloudflare kicked 8chan off its network, he argued:

Cloudflare is not a government. While we’ve been successful as a company, that does not give us the political legitimacy to make determinations on what content is good and bad. Nor should it. Questions around content are real societal issues that need politically legitimate solutions.

This kind of law is unfamiliar to folks in the United States, but Germany adopted a law, in effect since 2018, that requires social network providers to remove any content that violates Germany’s fairly strict hate speech laws within 24 hours. Following the Islamophobic terrorist attack in Christchurch, New Zealand, Australia passed a law creating criminal penalties — including possible jail time — for companies that do not “expeditiously” remove “abhorrent violent material.” The legislation’s wording is vague, so the law will probably have to be interpreted by a court before it can be meaningfully enforced. Nevertheless, there is an emerging global movement toward assigning some degree of legal responsibility to technology companies for the interactions their platforms facilitate.

It’s happening in the U.S. as well. In June, the Senate Commerce Committee’s Subcommittee on Communications, Technology, Innovation, and the Internet interviewed a panel of experts about how media platforms utilize persuasive technologies and how those technologies can be abused. California passed the CCPA in 2018, and more recently passed a law requiring bot-driven accounts on social media to identify themselves as such. The bot law has its problems — in particular, the fact that the responsibility for identification is on the bot owner, with no responsibility given to platform operators — but it’s an interesting avenue for regulators to explore as botnets expand their influence online.

The law is always going to be a compromise, and every law we pass will be flawed. Even if we get a law exactly right in the moment, circumstances change. That is what happened with Section 230: the law effectively solved a problem from the ’90s, and now we have new problems that require us to update it.

Finally — and this is the part no one wants to say — we have to take individual responsibility for the state of things. I believe the old adage that “first we shape our tools, and thereafter they shape us.” But I also believe that we have the power to hold ourselves accountable. We can choose to use only the tools that improve our lives, and we can choose how to apply those tools.

Be honest: how often do you share an article after reading just the headline? How often do you see inflammatory, offensive, or disturbing content online and choose to ignore it rather than report it? How often have you joined an outrage-driven pile-on? How often do you feed the trolls?

We’ve all made mistakes, and we can all hold ourselves to a higher standard online. We still need improvements to the law. We still need platform owners to take more responsibility for their software. And we each have the most control over ourselves. Deciding how you want to engage with these new media platforms, setting personal limits, and breaking bad online habits can go a long way toward improving the overall state of our media ecosystem.
