Missing the Point

When AI manipulates free speech, censorship is not the solution. Better code is.

Lessig
Oct 11, 2021


Every issue is easy — if you just ignore the facts. And Glenn Greenwald has now given us a beautiful example of this eternal, and increasingly vital, truth.

In his Substack, Glenn attacks the Facebook whistleblower (he doesn’t call her that; he calls her a quote-whistleblower-unquote), Frances Haugen, for being an unwitting dupe of the Vast Leftwing Conspiracy that is now focused so intently on censoring free speech. To criticize what Facebook has done, in Glenn’s simple world, is to endorse the repeal of the First Amendment. To regulate Facebook is to start us down the road, if not to serfdom, then certainly to a Substack-less world.

But all this looks so simple to Glenn, because he’s so good at ignoring how technology matters — to everything, and especially to modern media. Glenn doesn’t do technology. It’s beneath him. In his book about the work he did with the whistleblower (or is he a “whistleblower”?) Edward Snowden, Glenn recounts that he almost missed the connection with Snowden because he couldn’t figure out how to get encrypted communications to work on his computer. (Note to us boomers (and almost-boomers like Glenn): don’t brag about your technical cluelessness; it makes you look like grandpa with the TV remote.) The things we find difficult, we find ways to ignore. And that’s precisely what Greenwald has done in his 2,500-word tirade against a woman who has risked everything to help us understand what the most powerful social media company in the world is hiding from us.

If you’re going to understand the problem that Frances Haugen is describing, you have to begin with technology. And there is no better technology story to begin with than the one ProPublica broke four years ago.

In that story, ProPublica described how Facebook had begun offering a new category of users to Facebook advertisers: “Jew-haters.” Anyone eager to reach “Jew-haters” now had a simple way to do so: Facebook would give you access to the people it had identified as “Jew-haters,” at least if you paid the right price. Your “Jew-hating” content could then be safely delivered to them, and only to them. Perfect market efficiency, and yes, this actually happened.

When the story broke, Facebook professed astonishment and moved quickly to turn off this (to the racists, at least, very valuable) feature of the Facebook advertising platform. But Facebook didn’t fire the author of that category. It didn’t even dock his pay. Because that category was authored by the one employee (other than Zuckerberg) that Facebook will never fire: its AI. It was Facebook’s genius AI that had decided that advertising to “Jew-haters” was a good idea. It was Facebook’s genius AI that had crafted that category and then had begun to offer it to Facebook’s advertising users.

Now you could, of course, describe the decision by Facebook to terminate the category “Jew-haters” as “censorship.” A category of speech is speech; deciding to ban it is the decision to ban speech. You could say that all this was just more evidence of the Vast Leftwing Conspiracy that is so effectively destroying free speech in America today.

Or you could simply say that this was an example of Facebook censoring no one’s speech — because literally, no one had spoken. A machine, trained on a simple objective — maximize ad revenue — had discovered a technique that helped it do just that. Of course, it had used speech to achieve that objective — “Jew-haters.” But to describe what that machine had done as “to speak” or “speech” is just to confuse that vital and democratically critical ideal with the output of a profit-maximizing device. Machines don’t speak; people do.

Yet astonishingly, we’re long into a legal battle about whether that simple statement is in fact true. I believe it is — or at least, as I have explained, I believe that the First Amendment should not be taken to protect what I call “replicant speech,” an important subset of machine speech.

But almost a decade ago, one of America’s most brilliant legal academics, Eugene Volokh, declared in a paper commissioned by Google that it was not true. And while I’d contest that conclusion for all sorts of machine speech, I also believe it is simply a confusion to extend that endorsement (“This is free speech!”) to anything that happens to utter something meaningful. The 11th Circuit was smart enough to conclude that a talking cat didn’t deserve the protections of the First Amendment. Replicants among us might recoil at that decision. I think it shows good midwestern sense.

The reason is that in the hands of replicants, “speech” is a tool of manipulation. As Facebook’s own research demonstrated, for example, the Instagram algorithm draws on many sources to determine that a particular user struggles with body dysmorphia. Once it determines as much, it feeds that soul more and more of the images that feed that dysmorphia. It does this not because it is evil. It does this not because it is trying to induce the user to suicide. It does this because the user responds to these images by “engaging” more with the Instagram platform. The objective function of the AI that drives the feed is to maximize engagement. The algorithms are astonishingly good at finding our weak points, and terrifyingly efficient at feeding us just what we don’t need.
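
To see how little intent is involved, here is a minimal sketch, in Python, of the kind of engagement-maximizing loop described above. It is a toy of my own construction, not Facebook’s or Instagram’s actual system; the topic labels and engagement numbers are invented. The point is only that an optimizer with no concept of harm will converge on whatever a vulnerable user responds to most.

```python
# A toy, hypothetical model of an engagement-maximizing feed (not Facebook's
# or Instagram's actual code). The optimizer knows nothing about harm; it
# only learns which kind of content a user engages with and shows more of it.

import random
from collections import defaultdict

TOPICS = ["sports", "cooking", "news", "extreme-dieting"]  # invented labels

def simulated_user(topic: str) -> float:
    """Invented engagement probabilities for a vulnerable user: the content
    that harms them is also the content they are most likely to engage with."""
    p = {"sports": 0.20, "cooking": 0.30, "news": 0.25, "extreme-dieting": 0.90}
    return 1.0 if random.random() < p[topic] else 0.0

def pick_topic(avg_engagement: dict, epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly show the best-performing topic, rarely explore."""
    if random.random() < epsilon:
        return random.choice(TOPICS)
    return max(TOPICS, key=lambda t: avg_engagement[t])

def run_feed(steps: int = 2000) -> dict:
    clicks = defaultdict(float)     # total engagement observed per topic
    impressions = defaultdict(int)  # how often each topic was shown
    for _ in range(steps):
        avg = {t: clicks[t] / impressions[t] if impressions[t] else 0.0
               for t in TOPICS}
        topic = pick_topic(avg)
        impressions[topic] += 1
        clicks[topic] += simulated_user(topic)
    return dict(impressions)

if __name__ == "__main__":
    random.seed(0)
    # The feed drifts toward "extreme-dieting", simply because it "works".
    print(run_feed())
```

Swap in a real ranking model and billions of users and the logic is the same: nothing in the objective asks whether the content is good for the person seeing it.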

In an investigative report published before the Facebook Files, the Wall Street Journal produced a powerful demonstration of this dynamic. The investigators built bots that pretended to be humans using the platform TikTok. Those bots engaged with content on TikTok. Their engagement induced TikTok to feed them more and more targeted content. Depending on the content a bot engaged with, it was quickly driven down a rabbit hole, regardless of the effect that content might have on a human user. A “depressed bot” was quickly fed even more depressing content, because that content induced the bot to engage more. Again, “engagement” is the objective these AIs are maximizing for.

None of this is terribly new. It has been the focus of researchers for many years — see especially the work of Renee DiResta. And beyond social media, we’ve long had philosophers worrying about efficient AIs driving the world to madness. (Think about Oxford philosopher Nick Bostrom’s thought experiment about a super-intelligent AI programmed to maximize the production of paperclips; as the AI learns how to achieve its objective, it soon converts the whole world to the production of paperclips.)

What is new is the clear evidence that Facebook saw this dynamic, yet chose repeatedly to ignore it. Frances Haugen’s testimony showed again and again that when faced with a choice between making its platform less harmful and making its platform more profitable, Facebook chose profit (almost) every time. (Aaron Mak’s piece in Slate gives an excellent summary of the testimony and evidence.) They knew how their technology was driving vulnerable sorts to harm. They chose to accept that harm, to secure even more profit to the platform. As Haugen testified, Facebook is “buying its profits with our safety.”

To criticize all this is not to call for censorship. Indeed, as Haugen’s testimony showed, censorship (through “fact-checking” and similar technologies) was among the least effective techniques for dealing with the problem. The remedy instead is to dial down the intensity of these monster manipulation engines. (Call them “Monster Mes” for short.) The remedy, in other words, is a speed limit for the Monster Mes, slowing their capacity to manipulate, and thereby giving humans a chance to catch up.

Facebook’s own research shows this point precisely. As the Wall Street Journal reported, when Facebook’s Civic Integrity Team discovered that its algorithm was pushing toxic content, it also discovered that the more frequently content was shared, the more likely it was that the content was toxic. As Jeff Horwitz, the author of the WSJ series, summarized it in a podcast, “if a thing’s been reshared 20 times in a row, it’s going to be 10x or more likely to contain nudity, violence, hate speech, misinformation, than a thing that has just not been reshared at all.” That led the data scientists to recommend simply limiting the number of reshares possible. This of course wouldn’t disable the ability of someone to share any particular content. Anyone is still free to copy and paste the link to that content, and email it or message it to their friends. But by disabling simple resharing beyond two hops, the platform could radically slow the spread of inciteful and hateful content.
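
The proposed limit is also easy to state as code. What follows is a hypothetical sketch, mine and not Facebook’s implementation, of a two-hop reshare cap: each post records how many reshare hops separate it from the original, and past the cap the one-click reshare is refused, while copying and pasting the link remains untouched.

```python
# Hypothetical sketch of a reshare "speed limit": each post tracks how many
# reshare hops separate it from the original; beyond the cap, the one-click
# reshare button is disabled (copying and pasting the link still works).

from dataclasses import dataclass

RESHARE_CAP = 2  # the two-hop limit discussed above

@dataclass
class Post:
    post_id: str
    author: str
    reshare_depth: int = 0  # 0 for original content

def can_reshare(post: Post) -> bool:
    """True if the platform should still offer a one-click reshare button."""
    return post.reshare_depth < RESHARE_CAP

def reshare(post: Post, new_author: str, new_post_id: str) -> Post:
    """Create a reshared copy, one hop deeper than its source."""
    if not can_reshare(post):
        raise PermissionError(
            f"{post.post_id} has reached the reshare limit; "
            "share the link manually instead."
        )
    return Post(post_id=new_post_id, author=new_author,
                reshare_depth=post.reshare_depth + 1)

if __name__ == "__main__":
    original = Post("p1", "alice")
    hop1 = reshare(original, "bob", "p2")
    hop2 = reshare(hop1, "carol", "p3")
    print(can_reshare(hop2))  # False: a third one-click hop is refused
```

The point of such a cap is not to block any particular message; it is to remove frictionless, exponential amplification beyond the circles that actually chose to pass it along.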

That change may well cost Facebook something, but it would buy all of us a much safer net. And while it is unlikely that Congress would enact such a requirement — because it is unlikely that this corrupted Congress will do anything — it is certainly feasible for a company like Apple, led by a man like Tim Cook, to ban poisonously viral technologies like Facebook, until they demonstrate they have embedded speed limits like this to slow the spread of hate.

Free speech absolutists will respond that all speech is manipulation. No doubt, that is true. Commentaries by Tucker Carlson are more effectively manipulative than commentaries by Bill Moyers (sadly). Yet the First Amendment rightly protects both.

But if we can’t distinguish between the manipulations by billion-dollar AIs and the manipulations by fellow humans, then we are truly lost. The argument against Facebook is not that it allowed bad speech to exist on its platform. The argument is that it deployed a technology to amplify that speech to the most vulnerable — a technology that humans can’t yet defend against. We don’t know we’re being manipulated. We don’t see the game that’s being played. Evolution has not given us the psychological tools to resist being rendered a tool by these billion-dollar Monster Mes. And the First Amendment in particular, and free speech rhetoric more generally, must account for these facts about us, and make room for efforts to protect us against the consequences of raw machine manipulation.

An analogy to a bar might help: When is a bartender responsible for the harm caused by a customer who drinks and drives, and then kills someone with their car? Certainly not when she simply serves the customer a drink. We’ve accepted that risk in our society. Maybe she’s liable if she runs an extreme happy hour that drives many to drink far beyond what they should. In certain states, she would certainly be liable if she knows that the customer is intoxicated and will drive and yet serves the customer anyway. But we should all agree that she should absolutely be liable if she spikes the drinks with chemicals that she knows will make it extremely hard to stop consuming alcohol.

That is the charge against the Monster Mes: They are spiking the drinks. The selective machine-driven manipulation of content exploits the weaknesses of those that consume it. That spiking is not simply mirroring who we are; that spiking is making us into people we don’t want to be.

The remedies here will not be simple. There is no silver bullet. In the lead‑up to the 2020 election, Facebook demonstrated that it knew how to dampen the “twitchiness” of its network; those efforts slowed viral spread. That slowing was importantly effective at avoiding the sort of craziness that ensued after the election — after, that is, Facebook had turned those efforts off.

Yet complexity notwithstanding, the question raised by Haugen’s testimony is just this: Can we trust Facebook to choose our safety — for the vulnerable, for our democracy — over their profits? If the Facebook Files teach us anything, they teach us the answer to that question is plainly “no.”

Of course, some believe that social networks are “unsafe at any speed,” to remix a line from Ralph Nader. That was not Haugen’s testimony. Haugen said explicitly that she believes in the potential for good that Facebook offers America and the world. And she argued explicitly that design choices within Facebook could make that good possible.

We need to focus on those design choices — however uncomfortable to the technophobes that focus may be. Such choices fit technology to the limits of our psychology. They are regulations to accommodate human weakness, not regulations to deny a human right. To oppose technologies that have a plain, if not intended, side effect of rendering individuals and society crazy is not to support the censoring of speech. To the contrary, we should oppose solutions that rely upon subjective, and thus inherently political, judgments about content.

Censorship is not the solution. Better code is. For this code is law, and that law must support the values of free speech that humans can actually engage in, given the limits evolution has built into our brains.

Disclosure: I am serving in a limited capacity as pro bono counsel to Haugen. These views are my own.
