Pattern Matching
How to Start Fixing Social Media
The first step is agreeing on exactly what it is — and what we want it to be
This was a week of pointing fingers. As the dust cleared from the January 6 riot at the U.S. Capitol and Donald Trump was re-impeached (early candidate for word of the year), many blamed the social media platforms on which his most rabid supporters organized. Critics on the left said they failed to take timely action against Trump, QAnon, and Stop the Steal groups; the right blamed them for taking action at all.
Finger-pointing in itself might not seem all that productive. But the debate over exactly what role the platforms played in fomenting political violence, and what they could have done differently, has the potential to be clarifying. Academics and technologists are now weighing, with fresh urgency, social media reforms that could redefine how we interact in online spaces, if we can ever reach consensus on what those reforms should be.
The Pattern
From assigning blame to brainstorming solutions.
Early in the week, Facebook sought to deflect blame for the insurrection at the Capitol onto its rivals. “I think these events were largely organized on platforms that don’t have our abilities to stop hate, don’t have our standards, and don’t have our transparency,” Sheryl Sandberg said in an interview with Breakingviews’ Gina Chon on Monday. That’s not exactly true, as the Washington Post’s Elizabeth Dwoskin and CNN’s Brian Fung and Donie O’Sullivan pointed out — research and watchdog groups found extensive organizing activity on Facebook leading up to Trump’s rally — but it is in keeping with Facebook executives’ penchant for defensiveness.
For its part, Twitter took its lumps and introspected. “While there are clear and obvious exceptions, I feel a ban is a failure of ours ultimately to promote healthy conversation,” CEO Jack Dorsey said in a somber thread about the suspension of President Trump’s Twitter account. “And a time for us to reflect on our operations and the environment around us.”
One smaller network, Parler, became the consensus scapegoat, booted not only from Apple's and Google's app stores but also dropped by its cloud hosting service, Amazon Web Services. Parler is not a sympathetic character here. The app explicitly marketed itself as an anything-goes alternative to the moderation policies of the mainstream social networks, and there is plenty of evidence that it played a starring role in the Capitol riot. It also exposed pretty much all of its data to hackers via what Wired's Andy Greenberg called an "absurdly basic bug."
Still, there are reasons to be wary of AWS taking on the role of content moderation enforcer. The Electronic Frontier Foundation’s Jillian C. York, Corynne McSherry, and Danny O’Brien wrote an extremely thoughtful post arguing that moderation should be left to user-facing platforms, not infrastructure providers. Owen Williams also wrote for OneZero about just how much power the cloud giants wield.
Deplatforming Parler might do some good in the short term, and serve as a warning to other app makers that marketing yourself to racists and conspiracy theorists is a risky strategy. But it’s analogous to the deplatforming of Trump, which I wrote about last week, in that it won’t do much to solve social media’s underlying issues. The extremists who were organizing there are now flocking to other platforms, including privacy-focused apps such as Telegram that offer encrypted messaging. OneZero’s Sarah Emerson found that MeWe, envisioned as a privacy-focused alternative to Facebook, was struggling with an influx of right-wing militia groups.
The tradeoff between privacy and security is one of the oldest problems in online communication. My instinct is that private (and thus unmoderated) online communication is worth preserving, but that it should be limited to interpersonal and small-group messaging; any platform that amplifies speech to sizable audiences should be held accountable for moderating it.
To return to those larger platforms, I think Twitter’s Dorsey got one big thing right: Banning users is a way of treating a symptom of social media dysfunction, not the dysfunction itself. That’s why Sandberg is wrong to pat Facebook on the back for playing Whac-a-Mole with Stop the Steal groups in the weeks leading up to January 6. For a striking look at some of the dynamics that fueled our current mess, read this New York Times piece by Stuart A. Thompson and Charlie Warzel on exactly how Facebook’s reward mechanisms helped to radicalize users.
So if banning Trump and blackballing Parler aren't the answer, what is? It's a question that has been central to my reporting and thinking for years, and one I'll continue to explore in future editions of the newsletter. For now, I'll point to two broad approaches that have gotten some attention this week.
One school of thought holds that the big social media companies are capable of doing better, even if they must be pushed, regulated, and cajoled into it. It’s the premise that underlies calls for stronger and more consistent moderation (including beyond the United States), independent oversight, greater transparency and accountability, firewalls between platforms’ policy and lobbying arms, crackdowns on unverified content, restrictions on political ads, fact-checks of politicians, circuit-breakers to slow the spread of mega-viral posts, and new features such as an “I made a mistake” button on Twitter. In OneZero this week, Aviv Ovadya made the case that new and better metrics could help to hold social platforms accountable for progress on problems such as misinformation.
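To make one of those proposals concrete, consider the circuit-breaker. Borrowed from financial markets, the idea is to automatically pause algorithmic amplification of a post whose share rate spikes past a threshold, until a human can look at it. Here is a minimal sketch of the mechanism in Python; the class name, thresholds, and review flow are hypothetical illustrations of the concept, not any platform's actual code.

```python
import time
from collections import deque

class ViralityCircuitBreaker:
    """Hypothetical sketch of a circuit breaker for mega-viral posts:
    if a post is shared faster than a set threshold, stop algorithmically
    amplifying it until a human reviewer clears it."""

    def __init__(self, max_shares_per_window=5000, window_seconds=3600):
        self.max_shares_per_window = max_shares_per_window  # illustrative threshold
        self.window_seconds = window_seconds                # sliding window length
        self.share_times = deque()  # timestamps of recent shares
        self.tripped = False        # True once amplification is paused

    def record_share(self, now=None):
        """Log one share; trip the breaker if the sliding window overflows."""
        now = time.time() if now is None else now
        self.share_times.append(now)
        # Discard shares that have aged out of the sliding window.
        while self.share_times and now - self.share_times[0] > self.window_seconds:
            self.share_times.popleft()
        if len(self.share_times) > self.max_shares_per_window:
            self.tripped = True  # pause recommendations, queue for review

    def may_amplify(self):
        """Recommendation systems would check this before boosting the post."""
        return not self.tripped

    def clear(self):
        """A human reviewer can reset the breaker after checking the post."""
        self.tripped = False
```

The specific numbers don't matter; the design does. Like its stock-market namesake, the breaker trades a little speed for a chance at human judgment before a post reaches millions of feeds.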
Another group of tech critics regards today’s social media environment as fundamentally broken, and focuses accordingly on rethinking the entire project. As it happened, one ambitious effort to reimagine social networks was unfolding this week, out of the spotlight. A nonprofit called Civic Signals held its first New Public Festival, a series of panels and discussion groups (conducted remotely) aimed at defining and building healthier, more public-spirited online spaces.
Led by University of Texas communication professor Talia Stroud and The Filter Bubble author Eli Pariser, the festival kicked off by unveiling the results of a two-year study. It asked which characteristics define a healthy social network, such as inclusion, safety, thoughtful discussion, and bridging people's differences, and measured existing networks along those dimensions. The findings are interesting in themselves, even if you don't particularly buy the optimism at the project's core. For instance, Reddit ranked highest in "promoting thoughtful conversation" but poorly on "encouraging the humanization of others." Casey Newton offered some thoughts on the project in Platformer, and you can find out more about it here.
If the idea of platforms as quasi-civic spaces intrigues you, and you don’t mind an academic approach to the topic, this paper by researcher Carolina Are of City, University of London applies “third space theory” to the big social networks, particularly Facebook and Instagram. Are compares social platforms to shopping malls and gated communities as spaces that are hybrids of private and public, commercial and civic.
If that seems obvious, consider how it complicates the past week's often-oversimplified debate over whether banning someone from social media constitutes censorship. Yes, social networks such as Facebook are private actors, and the First Amendment applies only to censorship by the government. And yet it is not crazy for people to have some expectation of due process when it comes to their right to post on the world's dominant social media platform. Here's how the conservative columnist Matthew Walther put it in The Week:
If progressives really believed in the basic tenets of 20th century antitrust law, and rejected the premise that only states can exercise quasi-coercive authority over individuals and communities, they would not resort to ridiculous talking points of the “Well, a private company can decide to publish whatever it wants” variety, as if Facebook were your local newspaper rather than the reason your local newspaper no longer exists.
Academic as it might seem, understanding social networks' hybrid nature is crucial to how we reckon with them. If we view them as genuinely public-spirited endeavors, then reforming them is a matter of figuring out how they ought to work, and then asking them to find a way to make that happen. (Or perhaps, as Pariser and others have suggested, building publicly owned social networks.) Dorsey seems to endorse that view when he invites critics, researchers, and technologists to collaborate with Twitter on goals such as stimulating healthy conversations. Other nods in this direction include Facebook's Oversight Board, Apple's and Google's screen-time monitoring features, and Instagram's experiments with hiding public metrics. If, on the contrary, we view them as purely commercial enterprises that can't be expected to pursue anything other than profit, then the only viable approach is to constrain them through some combination of shaming, public pressure, and regulation.
Ultimately, we’ll need both imagination and regulation. Thought experiments about the perfect social network are moot without powerful levers to push companies in that direction. But laws, penalties, and shaming are liable to backfire unless they’re backed by a coherent vision of what we actually want social networks to look like.