To Make Big Tech Better, Make It Smaller

Decreasing the salience of tech’s errors is as important as reducing its prevalence

Cory Doctorow
4 min read · Mar 21, 2022


Back in 2018, Frank Pasquale published “Tech Platforms and the Knowledge Problem,” in which he proposed a taxonomy of tech reformers: Some of us are “Jeffersonians” and others are “Hamiltonians.” (In 2018, this was a *very* zeitgeisty taxonomy!)

Here are their positions:

  • Hamiltonian: “improving the regulation of leading firms rather than breaking them up”
  • Jeffersonian: “The very concentration (of power, patents, and profits) in megafirms” is itself a problem, making them both unaccountable and dangerous.

In a new article for EFF, I make the case for Jeffersonian theories of content moderation, or, as the title has it: “To Make Social Media Work Better, Make It Fail Better.”

Let me start by saying that Big Tech platforms suck at moderation and do a lot of things wrong. We helped develop the Santa Clara Principles, which lay out concrete steps that platforms could and should take to improve their moderation.

But even if they do all that, they’ll still suck, because they’ve set themselves an impossible task. Facebook says it can moderate conversations in 1,000 languages and 100+ countries. That’s an offensively stupid claim to make.

Communities are partly defined by their speech norms. Some words are considered slurs by some communities and not by others — and some communities may only consider a word a slur if it’s used by outsiders, but not members of the group.

That means that moderators — possibly relying on machine translations from a language they don’t speak — have to figure out not just whether a word is acceptable, but also whether the speaker is a bona fide member of the community, in that community’s own eyes.

This is how you get the familiar parade of moderation horrors, which Mike Masnick documents thoroughly on Techdirt.