This is an email from Pattern Matching, a newsletter by OneZero.

Pattern Matching

What Social Media Companies Have Fixed Since the 2016 Election

And a few things they haven’t


In 2016, everyone from Russian agents to British political consultants to Macedonian teenagers to randos in the American suburbs used social platforms with impunity to spread misinformation ahead of the U.S. presidential election. Much of it favored Trump, who pulled off a stunning upset victory.

Just how much influence those campaigns had on the outcome has never been established, and probably never will be. There is at least some research to suggest that Russian trolls, for instance, are not particularly effective at changing Americans’ political opinions. Still, it was widely agreed in the aftermath (though not by Trump) that the platforms had been far too easily exploited for the purpose of election interference, and that their dynamics had favored hoaxes and hyperpartisanship over reliable information sources.

On the eve of the next U.S. presidential election, it is fair to say that the platforms have come a long way in acknowledging and taking steps to address the most blatant of those abuses. What isn’t yet clear is how much of a difference those steps will make.

The Pattern

Social media companies are fully prepared for this moment… right?

  • It’s almost hard to believe now, but the last time the United States elected a president, Facebook, Twitter, and YouTube had essentially no policies on misinformation, conspiracy theories, or foreign election interference. Now, all the major platforms have such policies — but they are constantly evolving, inconsistently applied, and often hotly contested, both on the national stage and within the companies themselves. Together they amount to a convoluted patchwork in which the same post, ad, or user might be banned on one platform, slapped with a warning label on another, and fully permitted on a third.
  • When it comes to political speech, social platforms have gone from being essentially anarchies to something more like a Wild West. There are laws now, and there are sheriffs, but outlaws and banditry still abound. Perhaps the most effective changes, relatively speaking, have come in the companies’ approaches to foreign interference campaigns. Facebook and Twitter in particular now proactively investigate and take down networks of accounts linked to state-sponsored influence operations, often in cooperation with federal authorities.
  • Domestic misinformation continues to flourish on social media, though the platforms have managed to limit specific categories of lies and hoaxes. Facebook is the only one to implement broad policies against false and misleading news stories, and it partners with fact-checkers to label them. But there remain gaping holes in that policy and its enforcement. Twitter still has no blanket policy against misinformation — perhaps understandably, given the nature of its platform — but has enacted rules against specific categories of false claims that it considers particularly harmful, such as those about Covid-19, voting, and the integrity of elections. (Facebook has also created more stringent rules against these varieties of misinformation.) Both companies have begun to apply some of their policies even to public figures, though they differ in the specifics of what they take down, what they label, and what they let stand. And they’re taking heavy flak, understandably, for what looks to critics like selective enforcement.
  • YouTube, meanwhile, has taken the most laissez-faire approach, seemingly content to let its rivals take the heat for failing to enforce their policies while implementing few of its own (until recently, at least). This despite YouTube being a major hub for conspiracy theories in particular. Even TikTok, allegedly such a threat to American democracy that Trump is still trying to ban it, has rolled out election misinformation policies that go further than YouTube’s in some respects.
  • Speaking of conspiracy theories, the leading platforms have now cracked down on QAnon groups and accounts — but only after the movement gained a massive nationwide following and sowed widespread mistrust of democratic institutions, not to mention inspiring incidents of real-world violence. While Twitter took aggressive action starting in July, Facebook and YouTube waited until October, ensuring that QAnon would remain an influential force in the November election. Still, early research suggests the crackdowns have had a significant effect, belated though they may be.
  • Conspiracy theories pose a legitimately thorny problem for social platforms because enabling nontraditional information sources to challenge official narratives is one of their core value propositions, especially in authoritarian contexts. But the approach of isolating and banning particularly influential conspiracies that have been widely debunked by a free press and civil society, such as QAnon, seems to hold at least some promise, in the short term.
  • Political ads are one realm in which the platforms have made significant progress, both because they’re easier to control and because Congress forced their hand on disclosures and transparency measures. Twitter last year announced a full ban on political ads, including issue ads — a policy I critiqued at the time — while Facebook continues to allow political ads and plays a major role in campaigns’ digital strategies. Facebook did ban new political ads starting this week — a policy whose implementation came with glitches that mistakenly disallowed numerous preapproved ads. It has also announced a temporary ban on all political advertising beginning the day after the election, as a bulwark against efforts to fuel civil unrest over the election results. Here too, YouTube has been the most hands-off, which is why the Trump campaign is making it a centerpiece of its final ad push.
  • At least one expert with firsthand experience believes all the changes add up to a substantially healthier online information environment in 2020. Alex Stamos was Facebook’s chief security officer for the 2016 election and its aftermath, and left the company in 2018 amid reports that he was frustrated with its handling of the concerns he and his team had raised about foreign interference. He is now director of the Stanford Internet Observatory. “I think Facebook and Twitter have reacted well to 2016, especially to potential foreign influence,” Stamos told me. “They are trying to react to how to best respond to the fact that the majority of disinformation is domestic, and are bumping up against the limits of consensus of what their roles should be there.” He did not have the same praise for YouTube, which he said “still has the least comprehensive policies of the big three and is, by far, the least transparent.”
  • Still, the platforms’ misinformation problems are far from solved, he said. Some of the same issues that plagued Facebook in the 2016 U.S. election have migrated to smaller social networks, and many persist in a virulent form in countries around the world, including on Facebook. “Facebook in particular needs to figure out a more sustainable model that can be applied globally,” he said. Finally, tech companies will need to build on the targeted misinformation policies they developed for the 2020 U.S. election to tackle future risks, such as disinformation about Covid vaccines.
  • But even Stamos’ careful optimism about the platforms’ election preparedness isn’t shared by some of the other experts I’ve spoken with. Dipayan Ghosh, who served as a technology policy adviser in the Obama administration and worked on security and privacy at Facebook, is now director of the Platform Accountability Project at Harvard’s Kennedy School. “Four years on, it’s hard to substantiate a claim that the big tech companies are doing any better than they did in 2016,” Ghosh told me.
  • “On one hand, they are catching more policy-offending content including disinformation in absolute terms,” Ghosh went on. “On the other, the number of bad actors online has exploded — and we all knew that it would. As such, we’re still seeing high rates of disinformation, which may in the end have an electoral impact. Meanwhile, companies like Facebook have left the door open for politicians to openly lie and spread misinformation because of a false commitment to free speech — and some politicians are taking advantage. Overall, as such, we continue to see a noisy digital media environment that could well mislead many voters.”
  • Ghosh’s comments hint at what I view as the hard nut at the core of the problem: the fundamental structure of social media. Over the past four years, the major social platforms have reluctantly acknowledged that they have a role to play in preventing blatant abuse and exploitation of their platforms by obviously bad-faith actors, and they’ve taken real steps toward addressing that. Halting, often confusing, and in many ways unsatisfying steps, but real steps nonetheless. (Again, this is less true of YouTube, which has consistently done the minimum it can get away with and has been years behind Facebook and Twitter in taking responsibility for its platform.) But reining in the most obvious and clear-cut abuses does very little to change the overall impact of social media on political discourse.
  • Social media, in the broadest sense, has both democratized public speech (in the sense of giving a megaphone with potentially global reach to people who didn’t previously have one) and systematically amplified the speech that best plays to each individual user’s biases and emotions. There are tremendous upsides to this — the old media regime unjustly marginalized many voices, including those of minorities and social movements — and there are vast, scary downsides that we’re still just starting to reckon with. (Stamos, for his part, argues that these sorts of vague concerns about the societal downsides of algorithmic feeds are hard for platforms to address in the absence of clear, empirical evidence to back them.) Without changing the dominant platforms’ ungovernable scale, the incentives created by their engagement-based algorithms, or the format of feeds that give equal weight to reliable and unreliable sources, the battle against online misinformation, conspiracies, and extremism will be forever uphill.
  • To put it in tech company terms, political misinformation on social media isn’t just a policy and enforcement problem. It’s a product problem. Twitter has at least begun to acknowledge that, with its project to promote “healthy conversations” via product tweaks to disincentivize dunking, for instance. But any given five minutes using the app should be more than enough to see how far that effort has gotten it. And Facebook may be just beginning to look at product changes, as evidenced by its suspension of political group recommendations this week and Instagram’s temporary removal of its “recent” tab. By and large, however, Facebook and YouTube have resisted any suggestion that their misinformation problems may be endemic to their core products. And no wonder: For all the controversies, backlashes, fines, and regulatory posturing, they’re more profitable than ever.

Undercurrents

Under-the-radar trends, stories, and random anecdotes worth your time.

  • Elizabeth Warren headlined a virtual fundraiser for Joe Biden, focused on regulating Big Tech, Recode’s Theodore Schleifer reported. The October 27 event also featured Rep. David Cicilline, D-Rhode Island, who chairs the House antitrust subcommittee that produced a report last month calling for sweeping antitrust action against tech platforms, and prominent tech critics such as Shoshana Zuboff, Tim Wu, and Safiya Noble. With the Trump administration now routinely taking shots at the industry, the event offers a reminder that a Biden administration could include prominent advocates of tech regulation from the left, and a preview of some of the figures who could shape its tech agenda. (Warren, who published a plan to break up Big Tech last year, is reportedly angling for Treasury Secretary.) We’re left to read such tea leaves because Biden himself has not advanced a substantive tech policy platform. It seems likely he’d be less cozy with Silicon Valley than Obama was, at least — though perhaps more so than Trump, who now tweets “Repeal Section 230!” on what seems like a daily basis. Notably, Silicon Valley leaders have been backing Biden at rates much higher than they did Clinton in 2016, according to a new analysis by Recode’s Schleifer and Rani Molla.
  • Who’s really running Twitter? As Twitter’s increasingly aggressive moderation has made the company a villain in some conservative circles, a pair of new profiles shed some light on the leadership driving those changes. The first is of CEO Jack Dorsey, who calmly weathered a browbeating from Sen. Ted Cruz, R-Texas, at this week’s Senate Commerce Committee show trial — I mean hearing. He’s portrayed by the Wall Street Journal this week as something of a gentle soul, cerebral and decidedly hands-off in his leadership of the company even in times of intense turmoil. For all the anecdotes and reporting, the profile struggles to penetrate the enigmatic Dorsey’s character, though it does find him maintaining cordial correspondence with a strikingly broad range of public figures, including a right-wing troll who was permanently banned from the platform in 2015. Perhaps more relevant to the moment, given Dorsey’s detachment from Twitter’s day-to-day decisions, is Politico Magazine’s profile of Vijaya Gadde, the company’s top legal and policy executive. Gadde comes across as the firm but steady hand behind the company’s gradual embrace of a more progressive approach to policy and content moderation, rooted in a human rights perspective, as well as its willingness to confront Trump. It’s a worthwhile read on a figure who hasn’t previously gotten attention commensurate with her role at the company.
  • Apple is developing its own alternative to Google Search, the Financial Times reported, as its contract with Google has come under scrutiny in the DOJ’s antitrust suit against the latter. Interesting as that is, it’s worth remembering that Apple develops lots of product ideas that never see public release — and has released at least one Google alternative, Apple Maps, that probably shouldn’t have.

Headlines of the Week

How a fake persona laid the groundwork for a Hunter Biden conspiracy deluge

— Ben Collins and Brandy Zadrozny, NBC News

And now, a sinkhole full of rats

— Claire Lampen, The Cut
