This is an email from Pattern Matching, a newsletter by OneZero.

Pattern Matching

In Defense of Politics on Facebook

Social networks would love to show users less political content. Here’s why that’s a problem.


Political posts on Facebook and other social networks are often divisive, misleading, or just plain false. Social platforms including Facebook and YouTube have played a role in radicalizing people and facilitated the organization of radical groups, including hate groups, some of which have committed real-world violence.

There is reason to believe that social networks have not merely played passive host to these developments, which have been implicated in the decline of democratic institutions in the U.S. and abroad, but have actively fueled them with feed-ranking and recommendation algorithms that systemically amplify sensational claims and outrage-bait over nuance and balanced reporting.

So if Facebook and other social networks could find a way to show people less political content, that would be a good thing, right?

Unfortunately, it isn’t that simple.

The Pattern

Why social networks can’t put the politics genie back in the bottle.

Facebook announced this week that it will begin running tests in which it reduces the amount of political content in some users’ feeds. The tests will affect a small percentage of users in Brazil, Canada, Indonesia, and the United States. But don’t be surprised if they presage broader changes. Facebook recently tested removing politically themed Groups from its recommendations, and it must have been satisfied with the results because it went on to make that filter permanent last month. Mark Zuckerberg also told investors on a January 27 earnings call that “one of the top pieces of feedback we’re hearing from our community right now is that people don’t want politics and fighting to take over their experience on our services.” Facebook has previously said that only 6% of what people see in their feeds is political, but apparently, the company is confident it can whittle that down.

That raises the question: What is political content, exactly? How does Facebook define “political”? A group called “Biden for President” pretty clearly qualifies. But what about a Black Lives Matter group? Or a post about the MeToo movement? If I write a post criticizing mask mandates as an infringement on my liberty, is that political? What if I write a post urging others to abide by mask mandates? Will that be shown to fewer people now? Perhaps more to the point, how will Facebook’s algorithm know whether my post is political or not? Defining “political” is hard enough in theory; applying such a definition consistently at the scale of billions of users seems impossible.

I put these questions to a Facebook representative, communications manager Lauren Svensson, via email. She didn’t get into specifics, but acknowledged the thorniness of the problem. “On the definition of political content — this is something we’re still digging in on and we’ll be assessing during these initial tests,” Svensson said. To identify political posts, she added, “we’ll be using a machine-learning model that is trained to look for signals of political content and predict whether a post is related to politics. But we’ll be refining this model during the test period to better identify political content, and we may or may not end up using this method longer term.”
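
Facebook hasn’t said what that model looks like. But to make the idea concrete, here is a minimal, purely hypothetical sketch of the general kind of classifier Svensson describes: train a model on hand-labeled posts, then have it score new posts as political or not. The example posts, the labels, and the library (scikit-learn) are my own illustration, not anything Facebook has disclosed.

```python
# Hypothetical sketch only: a toy "political content" classifier.
# Nothing here reflects Facebook's actual model, features, or training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled example posts: 1 = political, 0 = not political.
# Even building this tiny list forces the contested judgment calls
# discussed above (is a mask-mandate post political? a BLM post?).
train_posts = [
    "Vote for Biden in November",
    "Congress is debating the new stimulus bill",
    "Check out my sourdough starter",
    "Here are pictures from our beach trip",
    "Everyone should follow the local mask mandate",
    "Black lives matter",
]
train_labels = [1, 1, 0, 0, 1, 1]

# TF-IDF word features plus logistic regression: a bare-bones text classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

# The model returns a confident-looking score for any post, whether or not
# the question "is this political?" has a settled answer.
new_posts = ["Wear a mask, please", "New baby photos!"]
for post, p in zip(new_posts, model.predict_proba(new_posts)[:, 1]):
    print(f"{post!r}: P(political) = {p:.2f}")
```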

Machine-learning models are useful for many things, but giving satisfying or even remotely coherent answers to messy philosophical questions such as “Is this statement political?” tends not to be their forte. Beyond that, a growing body of research suggests algorithms have a tendency to embody societal biases, including racism and sexism. Facebook has given no indication that it will disclose its model for researchers to study or audit. There is thus good reason to worry that Facebook’s algorithms will end up discriminating against posts by marginalized groups, among other inevitable flaws.

Facebook is hardly the first social network to respond to controversies over problematic political content by trying to simply have less political content. I expressed similar concerns when Twitter announced in 2019 that it would no longer accept political ads. At least some of my worries turned out to be justified: Emily Atkin reported for MSNBC last week that Twitter has been accepting ads from big oil companies that claim their fossil fuels are making the world cleaner — while rejecting ads from environmental groups fact-checking those claims. Corporate greenwashing, it seems, is apolitical by Twitter’s definition, but pushing back on it is political.

YouTube, for its part, has long had “advertiser-friendly content guidelines” under which its ad systems can demonetize creators’ videos about “sensitive events” and “controversial issues,” among other categories. This has led to a lawsuit by a group of LGBTQ creators alleging that YouTube’s algorithms discriminate against their videos. I wrote in 2019 about how this could well be true even without any discriminatory intent on YouTube’s part: Without careful oversight, downgrading “sensitive” and “controversial” material inherently punishes people whose civil rights and very existence are considered sensitive or controversial by others. YouTube responded to the suit by claiming immunity under Section 230.

It’s hard to think of a major social platform that hasn’t tried in some way to distance itself from politics. Nextdoor explicitly prohibits posts or discussions about national politics in its main feed — a stance that led moderators to take down posts related to Black Lives Matter before the platform intervened last year to categorize the movement as an issue of local interest. I reported last week that Nextdoor, like Facebook, has stopped recommending political groups. Snapchat and TikTok have tried to build platforms that are fundamentally apolitical by design, though both have struggled to keep them that way. The Intercept reported last year that TikTok had been directing moderators to censor political speech, in addition to suppressing posts that featured people deemed “ugly” or “poor.” (The latter, TikTok claimed, was meant to discourage bullying.)

It’s appealing to imagine a world in which our social networks have less influence over political discourse. A world where our feeds are restored to some halcyon state of baby pictures, selfies, viral jokes, and funny dances. A world where more people turn to professional media organizations with strong editorial standards for their news, and/or follow trusted independent sources in an intentional way via podcasts, newsletters, or Medium posts. But this vision is also naive. There’s no such thing as an apolitical social network because the personal is political. Limits on political speech are also political; they maintain the political status quo against those who would challenge it.

Even if you agree that we’d be better off with political content making up, say, 3% of people’s social feeds instead of 6%, there’s reason to be wary of how platforms define, measure, and enact that. No one should accept a company’s claim to be able to do so without discriminatory effects, in the absence of transparency as to exactly how that will happen. What most of us really want from Facebook and other platforms, I suspect, is not “less politics” but less hate speech, less misinformation, less algorithmic bias toward shock and outrage and tribalism — in short, less of a distortionary effect on politics. But that’s hard to scale, requires value judgments, and could hurt engagement. Much easier, for a software company, to whip up a machine-learning model to identify political-sounding keywords, call it neutral, and make sure no one gets a look under the hood to see who it’s actually affecting.

Undercurrents

Under-the-radar trends, stories, and random anecdotes worth your time.

  • Facebook’s Oversight Board is dividing the academic community. On Friday, The New Yorker published a fascinating and revealing inside look at the board’s formation, written by Kate Klonick of St. John’s Law, the one academic researcher whom the company invited to study it. While some hailed the story as the “definitive” account of an influential experiment in social media governance, others, including many academics, criticized it for legitimizing a body that they view as a glorified public-relations ploy. (One particularly combative nonacademic pundit smeared Klonick as a “lobbyist.”) The dustup surfaced questions about access, framing, and independence that have long been familiar to tech journalists covering an industry that routinely gives preferential treatment to favored outlets. I offered some thoughts in this Twitter thread about how access journalism and outsider criticism can be complementary, at their best — and how that relationship can go wrong. Mike Ananny, an assistant professor at USC, wrote in more depth for Nieman Lab earlier this month about the new questions raised by “access scholarship” vis-à-vis access journalism. Meanwhile, a request from the board for public comment on Facebook’s deplatforming of Donald Trump drew a slew of thoughtful responses. My favorite was this one from Columbia University’s Knight First Amendment Institute, which called for the board to withhold a ruling on the Trump ban until Facebook “commissions an independent investigation into the ways in which its design decisions may have contributed to the events of January 6th.”
  • Amazon is reportedly building a wall-mounted, Echo-powered screen that works as a home command center. The device will have “a large touchscreen that attaches to the wall and serves as a smart home control panel, video chat device and media player, according to people familiar with the plans,” Bloomberg’s Mark Gurman reported. It’s the latest in Amazon’s push to beat Apple and Google to the future of home computing, with voice-based A.I. as the underlying platform, and sensors and cameras everywhere. It also sounds, perhaps more than any device before it, like Orwell’s telescreen from 1984.
  • Much like defining political content, defining vaccine misinformation turns out to be trickier than it sounds. Facebook this week announced a tough new line on vaccine misinformation — which sounds good, but comes with all kinds of complications. Zeynep Tufekci raised some flags about the policy’s potential unintended consequences in this Twitter thread. Meanwhile, my OneZero colleague Sarah Emerson took a fascinating deep dive this week into Facebook Groups for the “vaccine-hesitant.” She spoke with volunteer moderators who try to listen with empathy to people’s vaccine fears — and provide evidence-based responses — without crossing the invisible line that would get their groups taken down as hubs for misinformation.
  • Clubhouse has privacy and security problems. I wrote this week about the hot social app’s creative use of people’s iPhone contact lists — and why it’s suggesting that users invite their therapists, drug dealers, and local takeout joints to join them on Clubhouse. Friday evening, the Stanford Internet Observatory reported that there may be deeper issues with the app’s data handling: Its researchers found “easy to uncover” flaws that “pose immediate security risks to Clubhouse’s millions of users, particularly those in China.”

Headlines of the Week

Black doctors work overtime to combat Covid myths on Clubhouse

— William Turton, Bloomberg News

Is This Beverly Hills Cop Playing Sublime’s ‘Santeria’ to Avoid Being Live-streamed?

— Dexter Thomas, Vice

Weird Stock Anecdote of the Week

The one company whose stock outperformed Amazon’s in the Jeff Bezos era was… Monster Beverage, the energy-drink maker, Quartz’s David Yanofsky pointed out.

Thanks for reading Pattern Matching. Reach me with tips and feedback by responding to this post on the web, via Twitter direct message at @WillOremus, or by email at oremus@medium.com.

Senior Writer, OneZero, at Medium
