How Content Moderation Turned Into a Game of Hot Potato

App stores and cloud hosting platforms want the right to ban content without the responsibility of moderation


When app stores and cloud hosting platforms banned Parler earlier this month after the self-described “free speech” social network failed to moderate calls for violence, they set a new precedent. Previously, the conventional wisdom was that developers bore the responsibility of policing an app’s community. After all, the developer is in the best position to know what its users need, what they’re up to, and how to build the specific moderation tools that work best for its community.

But with the Parler bans, companies that hosted the app — in an app store, on web hosting, or in a domain registrar — asserted a say over whether the community within the app was being moderated effectively.

While major social networks like YouTube and Facebook have at least attempted to spell out exactly what content is allowed, moderation by these host companies is largely nonexistent, even as they reserve the right to ban their users.

Apple gave Parler an ultimatum: Come back with a plan to moderate your community or be removed from the App Store. This policy might explain why Facebook, which has also been used to facilitate real-world violence, gets to stay, while Parler, which actively pitches itself as a platform for “free speech” where almost anything goes, doesn’t. While examples of illegal or violent activities can be found on both platforms, Parler’s posture of pushing back against moderation put it at odds with Apple’s policies. This put Apple (as well as Google, Amazon, and others) in the awkward position of deciding whether an app is moderating its community properly on a somewhat arbitrary basis.

Platforms could alleviate that confusion by both clarifying their platform guidelines and providing tools to help developers conform to them. Apple has requirements for apps. In many cases, the company also provides tools and best practices to make it easier for developers to comply with those rules. Apps are required to follow certain privacy guidelines, for example, so Apple includes a robust permissions system. However, Apple doesn’t offer similar tools for moderation.

Nevertheless, more detailed guidelines on how to build an app can only go so far in moderating an app’s community. Sarah T. Roberts, PhD, an associate professor of information studies at the University of California, Los Angeles, and a content moderation expert, explains that internet infrastructure companies didn’t necessarily ban Parler because it failed to include the right kind of buttons or tools in its app, but because of the culture it was fostering.

“The reason was that Parler was becoming not only an embarrassment to be associated with, but it was becoming a potential site of the fomenting of violent insurrection,” Roberts says.

Parler was also a unique case where the app in question openly flouted the very idea of moderation. An app like Among Us, a popular group game that has been scrambling to prevent abusive chat, could be served by a different approach.

Among Us, which is run by a four-person team, was suddenly faced with the challenge of moderating hundreds of millions of players when it spiked in popularity after being featured by streamers. Tech platforms could, theoretically, help provide tools for moderating such apps. But doing so would not only be costly for the tech platforms but also increase their liability.
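To make that concrete, the kind of off-the-shelf tooling a platform could hand a four-person team might start as small as the Python sketch below. This is an illustration only: the blocklist terms and function name are hypothetical, and real moderation systems layer machine-learning classifiers, rate limits, and human review on top of anything this simple.

```python
# Illustrative sketch only: a naive keyword filter for in-game chat.
# The blocklist entries are placeholders; production systems combine
# ML classifiers, rate limiting, reporting, and human review.
BLOCKLIST = {"slur1", "slur2", "threat-phrase"}  # hypothetical terms

def filter_chat_message(message: str) -> str:
    """Replace blocklisted words with asterisks before broadcasting."""
    cleaned_words = []
    for word in message.split():
        if word.lower().strip(".,!?") in BLOCKLIST:
            cleaned_words.append("*" * len(word))
        else:
            cleaned_words.append(word)
    return " ".join(cleaned_words)

print(filter_chat_message("hello threat-phrase world"))  # -> "hello ************* world"
```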

Amazon is one of the few platforms that provides its AWS customers with any moderation tools, through its Rekognition image- and video-analysis service, but there is always a need for humans in the loop, and a lot of them. Facebook, the largest social network in the world, outsources most of its human moderation to approximately 15,000 third-party moderators (a number that New York University suggests should at least double for the site's content to be moderated effectively). For Apple, Google, Amazon, or any other tech platform, providing moderation services to developers would mean hiring more people and taking on responsibility for any failures.
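For a sense of what Rekognition's moderation tooling actually does, here is a minimal sketch using the boto3 Python SDK's detect_moderation_labels call. The bucket and file names are placeholders, AWS credentials are assumed to be configured, and in practice the returned labels are a signal for human reviewers rather than a final verdict.

```python
# Minimal sketch: flagging an uploaded image with Amazon Rekognition.
# Bucket and object names are placeholders; real pipelines route flagged
# content to human reviewers rather than acting on scores alone.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "example-user-uploads", "Name": "post-1234.jpg"}},
    MinConfidence=80,  # only return labels Rekognition is at least 80% confident about
)

for label in response["ModerationLabels"]:
    print(f'{label["Name"]} ({label["ParentName"]}): {label["Confidence"]:.1f}%')
```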

“I think it cuts both ways,” Roberts explains. “Because if their hands are on it and the moderation isn’t done to the level to which the public is satisfied, or if the public is dissatisfied by what feels like a heavy hand, I think there would be concern on the part of that firm to get involved in that way. I also think there could be a sensibility of overreach.”

Even if tech giants don’t directly provide moderation services to developers on their platforms, Roberts suggests they could do more to proactively guide developers on how to do so properly.

“What if being able to demonstrate that you have features to counteract child exploitation… would be a thing that you have to do to be hosted?” Roberts says. “That would change the playing field, and a lot of people would get really savvy really quickly about that part of their development process.”

Such standards could, for example, help ensure an app like Among Us has the reporting system it needs before it goes live. Notably, both Apple’s and Google’s developer guidelines require apps with user-generated content to have mechanisms to report offensive content, and Apple reviews apps for compliance with its guidelines before publishing them to its App Store. Among Us, which features a chat system but not hosted user content per se, lacks these mechanisms but is still allowed on both app stores.
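As a rough illustration, the core of such a reporting mechanism can be quite small. The sketch below, written in Python with Flask and using hypothetical endpoint and field names, shows a bare-bones report intake; a real app would also need authentication, a review queue, user blocking, and timely responses to reports.

```python
# Minimal sketch of an in-app "report content" endpoint (hypothetical names).
# A real system would add authentication, rate limiting, a moderator review
# queue, and user blocking.
from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)
reports = []  # stand-in for a database table of pending reports

@app.post("/reports")  # Flask 2.0+ shorthand for route(..., methods=["POST"])
def file_report():
    data = request.get_json(force=True)
    report = {
        "reporter_id": data["reporter_id"],
        "content_id": data["content_id"],
        "reason": data.get("reason", "unspecified"),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "status": "pending_review",  # a human moderator picks it up from here
    }
    reports.append(report)
    return jsonify({"ok": True}), 201
```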


Parler’s ban wasn’t prompted by a change in the number or type of its moderation tools. Every version of an app has to be approved before it’s published on the App Store, so if Apple felt Parler was missing key features, it could have removed the app long before the January 6 Capitol riots made Parler too toxic for the tech giants to tolerate any longer.

A similar situation occurred when Discord briefly banned the WallStreetBets server. According to a statement from Discord, the community received multiple warnings for “hate speech, glorifying violence, and spreading misinformation.” Yet the server remained active until the day it ended up in the news for its role in driving up GameStop’s stock. The company then banned the server, claiming the decision had nothing to do with the stock market controversy. Unlike the larger platforms, though, Discord walked back the ban and started working directly with server admins to rebuild the server in a way that would comply with the company’s policies.

These cases give the impression that companies are willing to tolerate misbehavior on their platforms until a rule-breaking community causes a public controversy, at which point the ongoing violations become a convenient excuse to remove it.

But with little more to go on in developer guidelines than “keep your house clean or else,” developers are left in a precarious position. Every developer has to decide for themselves what counts as effective moderation — in some cases even settling on no moderation at all — and the platforms that host them will allow it. Until they don’t.


“Ultimately, when we’re talking about moderation of commercial, for-profit properties, this is a hot potato,” Roberts says. “You can see everyone’s trying to pass it, nobody wants to hold it.” The result is that at every step of the production chain — from individual developers to the app stores that host their apps to the cloud providers their apps run on — moderation becomes someone else’s responsibility until it makes someone higher up the chain look bad.

And yet, any platform or company involved in developing an app or service has one moderation tool at its disposal: the choice to stop doing business with a partner. Most platforms have been hesitant to exercise that power, preferring to remain neutral, or at least feign neutrality, as much as possible. But sitting out the hard work of moderation is an increasingly risky business move.

“Every time we get online, there’s a risk of encountering these things, and that’s just an unfortunate fact,” Roberts says. However, there is still room for improvement. “We could do better along the way. We can do better at the development stage, we can do better at the testing stage, we can do better at the distribution stage. And maybe people could just be a little bit better, too.”

Eric Ravenscraft is a freelance writer from Atlanta covering tech, media, and geek culture for Medium, The New York Times, and more.
