What Twitter and Facebook Can Learn From Alexander Hamilton

You can’t pop a filter bubble with more information


It’s been said that the solution to misinformation is simply more information. That idea is the cornerstone of the argument for the kind of virtually unrestricted free speech we enjoy in the U.S.

The argument is that you counter misinformation (or, as so many like to call it today, “fake news”) by providing facts that speak directly to it. And sometimes this approach works.

The Federalist Papers were written in 1787 and 1788 not only to help sell the American people on the benefits of the newly proposed U.S. Constitution, but also to correct the voluminous misinformation campaign waged against the document by its opponents. History shows us that this effort was successful, at least in its primary goal: the Constitution was ultimately ratified on June 21, 1788. It’s less clear whether people who were already radicalized against the Constitution had their minds changed by the Federalist Papers.

In contemporary times, the internet has done wonders to help people understand the reality of mental illness. It has corrected misinformation and helped stop the spread of discrimination and prejudice associated with these conditions. Because of the internet, people now know more facts about mental illness and hold fewer false beliefs about it.

But sometimes the approach doesn’t work. And we have Facebook, Twitter, Google, and other tech companies to thank for that.

Prior to around 2000, the internet was not a self-reinforcing bubble. Search engines didn’t personalize their results to show you only what they thought you wanted to see based on your past searches. In the years that followed, however, things began to change.


Software developers at Google, Facebook, and other tech companies, working in their own insular bubbles of computing, decided to create algorithms that they thought would benefit society. They didn’t consult any social scientists when developing these algorithms; they worked in solitary groups, answering only to their middle-manager supervisors.

We all know the result. We now live in an age where each of us inhabits our own insular filter bubble (or information bubble). That bubble is popped only when a person makes a conscious effort to throw off the invisible shackles of the algorithm and turn to alternative, private browsing technologies. But in doing so, people generally have to forgo virtually every social networking app or site available today.
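To make the mechanism concrete, here is a minimal, illustrative sketch in Python. The names and data are invented for the example, and this is not any company’s actual ranking code; it simply shows how an engagement-based ranker can narrow what a user sees, because the feed favors topics the user has already clicked and the user clicks what the feed shows.

```python
from collections import Counter

def rank_feed(candidate_posts, click_history, top_n=10):
    """Toy engagement-based ranker: score each post by how often the user
    has already clicked its topic, then surface only the highest scorers.

    candidate_posts: list of (post_id, topic) tuples
    click_history:   list of topics the user clicked in the past
    """
    topic_affinity = Counter(click_history)
    scored = sorted(
        candidate_posts,
        key=lambda post: topic_affinity[post[1]],  # more past clicks -> higher rank
        reverse=True,
    )
    return scored[:top_n]

# Each session the user clicks what the ranker surfaced, so the history
# (and therefore the next ranking) tilts further toward the same topics.
history = ["politics-left"] * 5 + ["sports"]
posts = [("a", "politics-left"), ("b", "politics-right"),
         ("c", "sports"), ("d", "science")]
print(rank_feed(posts, history, top_n=2))  # dissenting and novel topics never surface
```

Nothing in this toy explicitly excludes opposing views; the narrowing emerges from the feedback loop itself, which is how well-meaning developers could build such systems without intending the result.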

More information can’t solve this problem, because people won’t see the information that disagrees with their point of view. If the Federalist Papers were published today, chances are the only people who would be exposed to them would be the Constitutionalists — people who already believed in and supported the document. They would likely fail in their effort to change anyone’s mind.

We live in a strange time, where the norm is to embrace whatever supports your point of view, facts be damned. And the tech companies — the ones that created this problem in the first place — seem helpless or unwilling to fix it.

Instead, we’re treating ignorance as the new smart: disclaim responsibility and keep the profits rolling in, because fake news makes even more money than boring old facts. As a sign of the times, right after the 2016 election, Mark Zuckerberg, the head of Facebook, was downright incredulous at any suggestion that Facebook had anything to do with how the election turned out. It took months before he began to acknowledge the company’s role and started working on solutions. How myopic does a person have to be not to understand how their own platform works?

There are no easy answers to this complex problem. Once created, platforms like Facebook don’t topple easily, despite the attempted boycotts over the years. (And while it’s easy to pick on Facebook, virtually all social networks and Google suffer from the same core issues of faulty algorithms created by well-meaning but short-sighted developers.)

What I do know for certain is that more information won’t solve this problem. People already complain about information overload, so giving them even more data won’t really help — especially if it simply reinforces their existing beliefs (what psychologists call “confirmation bias”). Nor will additional, marginal social networking platforms help. Instead, they’ll simply balkanize people even further.

Welcome to 2020. Technology has not only succeeded in bringing us a little closer together; it’s simultaneously breaking us apart.

Founder, Editor-in-Chief & CEO, PsychCentral.com; CoFounder, Society for Participatory Medicine; Editorial Board, Computers in Human Behavior journal
