
Pattern Matching

Twitter and Facebook Experiment With Offloading Content Moderation

Birdwatch and the Oversight Board are new approaches to the same idea: shifting responsibility away from the platforms themselves

Twitter headquarters in San Francisco. Photo: Scott Strazzante/San Francisco Chronicle via Getty Images

In the never-ending scramble to solve the insoluble problem of content moderation, social media companies are willing to try just about anything — as long as it doesn’t involve making it a core part of their business.

Contrasting approaches were on display this week from Facebook and Twitter. Facebook’s Oversight Board, a semi-independent body that it created as a sort of appeals court for content moderation decisions, ruled on its first slate of five cases. While Facebook was turning to its elite panel of well-known figures from around the world, Twitter announced a project called Birdwatch that enlists ordinary users to flag, label, and annotate misleading tweets.

One is punting moderation to its own private international court of justice; the other is hoping to crowdsource it to a cadre of volunteers. Does either model hold promise for making online content moderation more consistent, credible, and effective?

The Pattern

New frontiers in the outsourcing of content moderation

Birdwatch is intriguing in concept. Here’s how Twitter’s vice president of product, Keith Coleman, explained it in an official blog post Monday:

Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable. Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.

Initially, Birdwatch will be open to a small test group of more or less random Twitter users, who must provide a real phone number and email address and be free of recent rules violations. (You can apply here.) At first, their discussions and notes will be siloed on a separate Birdwatch site. Eventually, the plan is to open Birdwatch to a wider group and incorporate its work into Twitter itself, in some form, with the goal of giving “context” to potentially misleading tweets. But when I asked the company what that might look like, a spokeswoman said that’s yet to be determined. To guard against trolls gaming the system, Twitter will let participants rate one another’s notes for “helpfulness” and will assign contributors reputational scores.
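For a sense of how a reputation system like that might work, here’s a minimal sketch in Python. Twitter hasn’t published Birdwatch’s actual scoring formula, so the weighting scheme, function names, and numbers below are illustrative assumptions on my part, not the company’s algorithm.

```python
# Illustrative only: Twitter has not disclosed Birdwatch's scoring math.
# The idea sketched here is that each rating of a note is weighted by the
# rater's reputation, so a swarm of brand-new accounts can't easily bury
# or promote a note by sheer numbers.

def note_helpfulness(ratings: list[tuple[float, bool]]) -> float:
    """Compute a reputation-weighted helpfulness score in [0, 1].

    ratings: (rater_reputation, rated_helpful) pairs.
    """
    total_weight = sum(rep for rep, _ in ratings)
    if total_weight == 0:
        return 0.0
    helpful_weight = sum(rep for rep, was_helpful in ratings if was_helpful)
    return helpful_weight / total_weight

def update_reputation(current: float, note_score: float, rate: float = 0.1) -> float:
    """Nudge a note author's reputation toward the consensus on their note."""
    return (1 - rate) * current + rate * note_score
```

Under a scheme along these lines, a note endorsed by a few long-trusted contributors would outweigh one mass-approved by throwaway accounts, which is roughly the troll resistance Twitter says it’s aiming for.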

You can go ahead and dive in to see what it looks like so far. When I checked on Friday, users were flagging tweets from both the political left and right — including this one from Rep. Cori Bush, D-Missouri, and this one from Rudy Giuliani. They were also taking aim at nonpolitical tweets, including a satirical tweet from a stand-up comic joking that the CEO of Robinhood is Ghislaine Maxwell’s son.

As on a Wikipedia talk page, flagging a tweet starts a discussion in which other users can weigh in. The Robinhood joke tweet, for instance, drew responses from Birdwatch users who felt it was not misleading because it was clearly satirical. One early takeaway for me is that Twitter is going to have to lay down some more specific guidelines if it wants these arguments to have any grounding in a shared understanding of what counts as misleading.

Wikipedia, for all its warts, has been held up recently as an example of how a user-generated content site can self-moderate effectively. The CEO of the Wikimedia Foundation, Katherine Maher, made that case herself recently in a Fast Company op-ed headlined, “In a fragmented reality, Wikipedia isn’t just a refuge. It’s a roadmap.” Maher makes a compelling case for the factors that have made Wikipedia a relatively trusted information source. She points to decentralized problem-solving, institutionalized discourse, inclusion, and the fostering of healthy dissent as keys to its success.

What remains to be seen is whether that model is transferable to a social media site such as Twitter that didn’t build it in from the outset. When Wikipedia’s volunteer editors dive into detailed arguments about questions such as exactly what to call the attack on the U.S. Capitol, they’re literally creating the content of one of the world’s most influential websites, and thus a rough draft of history. The motivation is obvious, even if the rewards aren’t tangible.

When Birdwatch users debate whether a politician’s tweet is misleading, it’s less clear exactly what they’ll be accomplishing. The evanescent nature of tweets means that by the time Birdwatch wraps up a three-hour debate, the tweet in question might already be old news — or even deleted. Even if it isn’t, there are longstanding questions about the effectiveness of appending labels to inaccurate or misleading tweets (though recent studies have found it can help, at least in some contexts). In contrast to Wikipedia editors, the work Birdwatchers will be doing feels in some way ancillary to the primary work of Twitter, which is, well, tweeting.

Mike Caulfield, who researches digital literacy at Washington State University, raised the question of motivation in this perceptive thread. Boston University law professor Tiffany C. Li had a good thread on the difference between Wikipedia as a “communal” project and Twitter as a “connective” one, and how Birdwatch could devolve into a vehicle for harassment and misinformation.

Whether Birdwatch has a chance of success will also depend at least in part on whether its judgments visibly matter. When Birdwatch decides a tweet is misleading, will that just lead to a simple flag on the tweet that links to a talk page? A consensus note, written and edited by Birdwatchers, that appears directly below the tweet in users’ feeds? Some feedback into the Twitter algorithm that limits misleading tweets’ virality, or downranks accounts that post multiple misleading tweets? Incorporating Birdwatch into Twitter’s recommendation algorithm strikes me as one way the project could actually make a difference, though it would also likely be controversial.
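To make that last option concrete, here’s a hypothetical sketch of how a Birdwatch consensus score could feed into ranking. Nothing here reflects Twitter’s actual recommendation system; the threshold and penalty values are made-up parameters for illustration.

```python
# Hypothetical: Twitter has said nothing about wiring Birdwatch into
# ranking. This toy rule simply dampens a tweet's ranking score once a
# weighted consensus of contributors deems it misleading.

MISLEADING_THRESHOLD = 0.8  # consensus level that triggers the penalty (assumed)
VIRALITY_PENALTY = 0.25     # multiplier applied to the ranking score (assumed)

def adjusted_rank(base_score: float, misleading_consensus: float) -> float:
    """Downrank a tweet when Birdwatch consensus crosses the threshold."""
    if misleading_consensus >= MISLEADING_THRESHOLD:
        return base_score * VIRALITY_PENALTY
    return base_score
```

A variant could accumulate penalties per account rather than per tweet, which is what downranking repeat offenders would amount to.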

There’s also the question of whether Twitter is shirking its own responsibility by delegating part of the task of moderation to uncompensated volunteers. Unlike the Wikimedia Foundation, Twitter is a wealthy, for-profit corporation, which could reasonably be expected to hire people for tasks central to its mission. But while Twitter, like Facebook, has increasingly accepted responsibility for discrete classes of potentially harmful misinformation, it remains wary of being an all-purpose arbiter of truth or credibility.

The original idea behind social media feeds was that users could implicitly decide for each other what was worth reading simply through their own behavior: shares, likes, retweets, clicks. Now that the flaws in that approach have become glaring, the platforms are looking for more thoughtful feedback mechanisms, while still trying to avoid taking on an editorial role themselves.

Which brings us to Facebook’s Oversight Board. Rather than put the onus on users, Facebook has enlisted a roster of paid experts and charged them with the thorny task of reviewing selected appeals of Facebook’s own content moderation decisions. This week, the board ruled on its first slate of five cases — though not yet on the ban of President Trump, which it will take up in its next set of deliberations. (It’s soliciting public comment on that question until February 5.) It also issued a series of policy recommendations stemming from those cases. Under the terms of its agreement with the Oversight Board, Facebook is required to implement the board’s rulings in specific cases, which it promptly did. It is not required to follow the board’s broader policy recommendations.

In four of the five initial cases, the board overruled Facebook’s decision to take action against the post in question, raising the possibility that the board will turn out to be more permissive than Facebook itself. “For all board members, you start with the supremacy of free speech,” said board member and former Guardian editor Alan Rusbridger, in an interview with NBC News’ Dylan Byers. “Then you look at each case and say, what’s the cause in this particular case why free speech should be curtailed?” For a deep dive into the decisions, read Harvard researcher Evelyn Douek’s Lawfare post and this thread from Emma Llansó of the Center for Democracy and Technology’s free expression project.

The jury is still out on both Facebook’s and Twitter’s latest experiments in content moderation. It’s worth noting for now both the contrast between the two and the big thing they have in common. The contrast is that Facebook’s board is exclusive and expertise-driven, while Twitter’s project tries to tap the wisdom of crowds. What they have in common is that both are once again seeking legitimacy by offloading responsibility for their platforms’ content, rather than by internalizing and owning it. The impulse is understandable: Platforms are loath to take sides, and making editorial judgments about users’ content is a threat to the premise that media can essentially be crowdsourced and automated. Expect more ambitious experiments in the future as the companies exhaust every avenue short of embracing an editorial function themselves.

Survey of the Week

Pew finds a dramatic partisan split on whether social media companies should remove potentially violence-inciting content from elected officials.

Gamestonk Takes of the Week

How Will the GameStop Game Stop?

  • Matt Levine, Bloomberg Opinion

“But the web has muddied what separates David from Goliath, and unless we face the challenge of what digital technology has done to democracy, it may be increasingly hard to distinguish who one is even supposed to root for — that instead, public discourse will simply be a yelling match between who can most cleverly manipulate the market of ideas.”

  • Navneet Alang, Toronto Star

GameStop isn’t David vs. Goliath. It’s Goliath vs. Goliath.

  • Alexis Goldstein, Markets Weekly

Everything’s a joke until it’s not

  • John Herrman, New York Times

Shameless Self-Promotion of the Week

This week OneZero published a feature I’ve been working on for months. It’s about how Nextdoor is quietly replacing the small-town paper.

Thanks for reading Pattern Matching. Reach me with tips and feedback by responding to this post on the web, via Twitter direct message at @WillOremus, or by email at oremus@medium.com.

Will Oremus
Senior Writer, OneZero, at Medium
