Google and Facebook Are Utilities. It’s Time They Act Like It.

We need to protect the truth in a technological atmosphere where it’s so easily manipulated

One of the great achievements of civilization has been the gradual improvement in physical security for humans. Most of us can expect to conduct our daily lives without constant fear of injury and death. Article 3 of the 1948 Universal Declaration of Human Rights states, “Everyone has the right to life, liberty and security of person.”

I would like to suggest that everyone should also have the right to mental security — the right to live in a largely true information environment. Humans tend to believe the evidence of our eyes and ears. We trust our family, friends, teachers, and (some) media sources to tell us what they believe to be the truth. Even though we do not expect used-car salespersons and politicians to tell us the truth, we have trouble believing they are lying as brazenly as they sometimes do. We are, therefore, extremely vulnerable to the technology of misinformation.

The right to mental security does not appear to be enshrined in the Universal Declaration. Articles 18 and 19 establish the rights of “freedom of thought” and “freedom of opinion and expression.” One’s thoughts and opinions are, of course, partly formed by one’s information environment, which, in turn, is subject to Article 19’s “right to… impart information and ideas through any media and regardless of frontiers.” That is, anyone, anywhere in the world, has the right to impart false information to you.

And therein lies the difficulty: Democratic nations, particularly the United States, have for the most part been reluctant — or constitutionally unable — to prevent the imparting of false information on matters of public concern because of justifiable fears regarding government control of speech. Rather than pursuing the idea that there is no freedom of thought without access to true information, democracies seem to have placed a naive trust in the idea that the truth will win out in the end, and this trust has left us unprotected. Germany is an exception; it recently passed the Network Enforcement Act, which requires content platforms to remove proscribed hate speech and fake news, but this has come under considerable criticism as being unworkable and undemocratic.

For the time being, then, we can expect our mental security to remain under attack, protected mainly by commercial and volunteer efforts. These efforts include fact-checking sites — but of course other “fact-checking” sites are springing up to declare truth as lies and lies as truth.

The major information utilities such as Google and Facebook have come under extreme pressure in Europe and the United States to “do something about it.” They are experimenting with ways to flag or relegate false content — using both A.I. and human screeners — and to direct users to verified sources that counteract the effects of misinformation. Ultimately, all such efforts rely on circular reputation systems, in the sense that sources are trusted because trusted sources report them to be trustworthy. If enough false information is propagated, these reputation systems can fail: Sources that are actually trustworthy can become untrusted and vice versa, as appears to be occurring today with major media sources such as CNN and Fox News in the United States. Aviv Ovadya, a technologist working against misinformation, has called this the “infopocalypse — a catastrophic failure of the marketplace of ideas.”

One way to protect the functioning of reputation systems is to inject sources that are as close as possible to ground truth. A single fact that is certainly true can invalidate any number of sources that are only somewhat trustworthy, if those sources disseminate information contrary to the known fact. In many countries, notaries function as sources of ground truth to maintain the integrity of legal and real estate information; they are usually disinterested third parties in any transaction and are licensed by governments or professional societies. (In the city of London, the Worshipful Company of Scriveners has been doing this since 1373, suggesting that a certain stability inheres in the role of truth-telling.) If formal standards, professional qualifications, and licensing procedures emerge for fact-checkers, that would tend to preserve the validity of the information flows on which we depend. Organizations such as the W3C Credible Web group and the Credibility Coalition aim to develop technological and crowdsourcing methods for evaluating information providers — which would then allow users to filter out unreliable sources.

A second way to protect reputation systems is to impose a cost for purveying false information. Thus, some hotel rating sites accept reviews concerning a particular hotel only from those who have booked and paid for a room at that hotel through the site, while other rating sites accept reviews from anyone. It will come as no surprise that ratings at the former sites are far less biased, because they impose a cost (paying for an unnecessary hotel room) for fraudulent reviews.

Regulatory penalties are more controversial: No one wants a Ministry of Truth, and Germany’s Network Enforcement Act penalizes only the content platform, not the person posting the fake news. On the other hand, just as many nations and many U.S. states make it illegal to record telephone calls without permission, it ought, at least, to be possible to impose penalties for creating fictitious audio and video recordings of real people.

Finally, there are two other facts that work in our favor. First, almost no one actively wants, knowingly, to be lied to. (This is not to say that parents always inquire vigorously into the truthfulness of those who praise their children’s intelligence and charm; it’s just that they are less likely to seek such approval from someone who is known to lie at every opportunity.) This means that people of all political persuasions have an incentive to adopt tools that help them distinguish truth from lies. Second, no one wants to be known as a liar, least of all news outlets. This means that information providers — at least those for whom reputation matters — have an incentive to join industry associations and subscribe to codes of conduct that favor truth-telling. In turn, social media platforms can offer users the option of seeing content only from reputable sources that subscribe to these codes and subject themselves to third-party fact-checking.

From HUMAN COMPATIBLE: Artificial Intelligence and the Problem of Control by Stuart Russell, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2019 by Stuart Russell.
