This is an email from Pattern Matching, a newsletter by OneZero.
Facebook Is Already Preparing for a Biden Presidency
How the prospect of a “blue wave” is changing content moderation.
Up until two weeks ago, Facebook was a place where QAnon conspiracy groups flourished, aided by its recommendation algorithms; where you could pay to promote anti-vaccine misinformation; where Holocaust denial was treated as a legitimate political opinion; and where conservative news outlets were known to get a pass on rule enforcement to avoid upsetting the right.
As of today, none of those things are true. A series of dramatic policy reversals by the dominant social network, followed by an aggressive crackdown on a dubious New York Post story alleging corruption by the Biden family, amount to a transformation in the company’s official posture toward online speech (if not the underlying dynamics that make it such a potent vector for misinformation). One year after Mark Zuckerberg delivered a full-throated defense of free speech at Georgetown University, emphatically rejecting calls to broaden restrictions on what views Facebook users can express, his company has done just that. The moves to ban QAnon accounts, ban anti-vaxx ads, and ban content that “denies or distorts the Holocaust” come on top of recently announced bans of post-election political ads and ads that attempt to delegitimize U.S. election results.
What, some observers wondered, could account for the sudden shift? No doubt some of these changes had been in the works for a while and were moving along separate tracks within the organization. But the timing cannot be pure coincidence. The moves come just as Biden has firmly reestablished himself as the heavy favorite to win the presidency in three weeks, and amid growing odds of a Democratic majority in both chambers of Congress. The changes position Facebook to better defend itself against a potential president and party that has raised the specter of regulations that could seriously damage its business.
Social platforms are hedging against a potential Biden presidency.
- After four years of trying to appease Trump and Congressional Republicans who cry “censorship” over fact-checks, the major internet platforms now face the prospect of a government run by a party that wants them to moderate content more aggressively. Biden, like Trump, has called for the repeal of Section 230 of the Communications Decency Act, the crucial statute that shields platforms from liability for their moderation decisions — but for the opposite reason. Whereas Trump wants to see social media companies pummeled with lawsuits for blocking speech they deem offensive or dangerous, Biden wants them to face consequences when they fail to do so.
- In the background is the growing threat of antitrust action, highlighted by a report from the House antitrust subcommittee’s Democratic staff that called for sweeping changes that could seriously constrain the largest tech companies’ growth prospects. I wrote in last week’s Pattern Matching about the potential for more modest, yet still significant, bipartisan antitrust action. But a Democratic sweep would put the entire suite of aggressive antitrust recommendations on the table. While the antitrust probe is not primarily about online speech, I’ve written before about how the politics of antitrust action have historically been driven in part by public and official anger at the companies in question. Biden himself told the New York Times’ Charlie Warzel in May, “I’ve never been a fan of Facebook” or Zuckerberg, whom he called “a real problem.”
- While Facebook isn’t going to come out and say it, especially with the election’s outcome still uncertain, sources with policy and communications experience at Facebook and other social platforms told me it’s apparent that the risk calculus has changed: The regulatory threat from the left is now at least as credible as that from the right, if not more so.
- “From 2015 until recently, the social content moderator umps gave Trump an expanded strike zone for disinformation and abuse,” tweeted Nu Wexler, a communications consultant who has worked for Facebook, Twitter, and Google, as well as the Democratic Party. “Now, at the end of his term and trailing by 10, he’s just not getting as many generous calls as he used to get a few years ago.”
- Alex Stamos, Facebook’s former chief security officer and now a researcher at the Stanford Internet Observatory, agreed political considerations may be entering the equation. “It’s possible that the political winds are changing Facebook’s approach,” he told me. “This is a structural problem with having content policy and government relations in the same organization.” As extensively reported in the Wall Street Journal, New York Times, and Judd Legum’s Popular Info newsletter, among others, Facebook routes key enforcement decisions through its representatives in Washington, D.C., including Joel Kaplan. In contrast, I’ve reported that Twitter keeps those channels separate to avoid a conflict of interest. That could make Facebook more likely to bend its rules in the direction of whoever is in power — a problem that is not limited to the United States.
- This is not to say that Facebook’s policy changes are misguided, per se. Questions of how enormous global platforms should draw the boundaries around acceptable speech rarely have easy answers. Some experts, such as University of Virginia professor Siva Vaidhyanathan, go so far as to say there is often no right answer to difficult questions of content moderation; the very notion of crafting hard-and-fast, consistent rules for what 2.7 billion people are and aren’t allowed to say is absurd.
- And yet that is the situation in which these platforms have put themselves, and us. Facebook banning QAnon groups, anti-vaccine ads, or Holocaust denial might seem like a no-brainer to many, given the real-world problems that can emerge from those online communities. I tend to agree, with some reservations, that those particular steps are overdue. But even then, defining what constitutes an anti-vaccine stance or Holocaust denial isn’t always straightforward. For instance, some scientists feared earlier this year that Trump would rush an unsafe vaccine through regulatory approval in order to make it available before the election. Any policy on anti-vax speech has to be nuanced enough to tolerate good-faith debates about side effects and even questioning of public health agencies, while effectively targeting misinformation.
- Given the inevitable complexity of content moderation, it’s not inherently unreasonable or craven that platforms would look to elected officials for cues on where to draw the lines, just as they have shown some responsiveness to backlashes from the media, the public, and their own workers. If the elected officials change, and the cues change, we might expect the lines to shift as well. On the other hand, it does undermine Zuckerberg’s attempts to justify his policies by appeal to grand philosophical principles when they seem to be so amenable to shifts in the political winds.
- Of course, suggesting that every tweak in Facebook’s moderation policies is a response to Biden’s poll numbers would be an oversimplification. Evelyn Douek, a lecturer at Harvard Law School who studies online speech, told me that while political calculations can influence moderation decisions, she views Facebook’s changed stance on Holocaust denial or anti-vaccine ads as in keeping with a general industry trend over the past few years, spurred by shifting public opinion on freedom of speech.
- More telling in some ways than Facebook’s formal policy shifts were the ad hoc responses of the major platforms to the explosive yet factually sketchy New York Post story about emails found on a laptop that allegedly belonged to Hunter Biden. Here, both Facebook and Twitter acted about as quickly and aggressively as they ever have, albeit in different ways. Facebook allowed people to post the story, but preemptively suppressed the post’s algorithmic reach while the company awaited feedback from its fact-checking partners. While it claimed this was standard procedure, skeptics pointed out that in the past, Facebook has often suppressed distribution only after the results of the fact-check were in. Twitter, for its part, blocked sharing of the story altogether, going so far as to temporarily suspend users who posted screenshots of the text. Whereas Facebook pointed to its policies on political misinformation, Twitter opted to apply its policy against “hacked materials,” whose zero-tolerance approach was designed more for doxxing or revenge porn than controversial New York Post articles.
- Both decisions sparked (even louder than usual) howls of censorship from the right, while Twitter’s move was so heavy-handed that it even drew scorn from many on the left. Senate Republicans said they would subpoena Twitter CEO Jack Dorsey, with Sen. Ted Cruz calling the company’s actions “election interference.” Alex Kantrowitz has more on the “nightmare” of ill-prepared platforms blundering their way through the vortex of a monumentally polarizing election. Dorsey ended up apologizing, and by Thursday night the company had hastily drawn up new policies.
- Again, it’s not obvious exactly what the right way to handle this story would have been — and by Thursday, Facebook’s approach, in particular, was looking prescient as reporting emerged that suggested the laptop might have been planted as part of a Russian disinformation campaign. There were murmurs that the platforms may have been tipped by intelligence officials to be extra-wary of the Post story, though I got no confirmation of that. Even if they weren’t, their actions can be partly explained by the fact that they were already on the lookout for a “hack-and-leak” operation, and highly motivated not to facilitate another foreign interference operation after 2016. Facebook’s head of security policy, Nathaniel Gleicher, indicated as much in a tweet back on September 24, which is instructive to revisit. Casey Newton’s Platformer newsletter has some useful analysis of how platforms think about hack-and-leaks.
- Still, Facebook and Twitter both erring on the side of extreme caution with an inflammatory story about Biden, in particular, is consistent with the idea that they are particularly loath to make an enemy of the man who could be the next president. (YouTube’s conspicuous inaction, meanwhile, is consistent with its longstanding strategy of ducking and covering while its rivals take the flak.)
- Ultimately, Facebook’s newly invigorated approach to content moderation as it faces the possibility of a “blue wave” election is a reminder that the dominant internet platforms are neither Republican nor Democratic — they’re capitalist. As long as their business depends on a favorable regulatory environment, when it comes to tricky questions of policy, ceteris paribus, they will tend to align with power.
Under-the-radar trends, stories, and random anecdotes worth your time.
- While Democrats may be up in the polls, Republicans aren’t about to give up on settling their own scores with Big Tech. On Thursday, Trump’s FCC Chair Ajit Pai said the agency will issue new rules to “clarify” how Section 230 is applied. That came after the Supreme Court declined to take up a review of the statute — for now. Notably, Justice Clarence Thomas published a statement signaling interest in reviewing Section 230 in the future. In particular, as Lawfare’s Anna Salvatore explained, he argued that it has been interpreted to confer more immunity on platforms than the text of the law requires. As a side note, Trump — who tweeted yet again this week that Section 230 should be repealed — has invoked the law on his own behalf in the past, including to defend himself against a defamation charge stemming from something he retweeted. (Who knew “retweets are not endorsements” was an actual legal argument?) Newt Gingrich, meanwhile, suggested that platforms be regulated as “common carriers” and blocked from “censoring messages.” No word on whether he’s prepared for the avalanche of scams, spam, and pornography that would quickly inundate a truly unmoderated social network.
- At Google, employees are encouraged to speak their mind — unless it’s about antitrust, the New York Times’ Daisuke Wakabayashi reports. The story is a revealing look at an internal taboo at the search giant, which is the subject of an impending antitrust lawsuit by the Department of Justice, along with inquiries by state attorneys general and the aforementioned House subcommittee. “They don’t address it in emails,” Wakabayashi writes. “They don’t bring it up in big company meetings. […] And if you hope to land an executive job at the internet company, do not bring up the A-word in the interview process.” It makes for an eyebrow-raising read, especially when juxtaposed with Google’s own official mission statement: to “organize the world’s information and make it universally accessible and useful.” On Thursday, meanwhile, Google held an event to highlight new “helpful” search products, playing up the competition that the company faces in the realm of information discovery.
Idea of the Week
— Eli Pariser, Wired
Chart of the Week
— Felix Richter, Statista
Tweet of the Week
— Brian Fung, @b_fung