‘Lie Machines’: How Governments Are Using Tech to Spread Misinformation About Covid-19
Philip N. Howard, director of the Oxford Internet Institute, explains how bots, trolls, and junk news are harnessed by political actors to sow deception
Lie machines are on the rise — they’ve been built to undermine our faith in society’s key institutions and to encourage citizens to question authority. Lie machines have helped swing elections and sow discontent. And now they’ve been tuned to abet authoritarianism during the coronavirus crisis.
“It’s about doubting institutions that have performed pretty well for a long time, like national health care systems and professional news outlets,” says Philip N. Howard, the director of the Oxford Internet Institute. Howard is the author of the new book, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives, which has taken on an urgent new relevance as states and political actors try to influence the perception of their response to the pandemic.
Howard defines the lie machine as “the social and technical mechanisms of putting an untrue claim into service of ideology,” composed of three parts: producers of lies, such as political candidates; distributors, such as social media platforms like Instagram; and marketers, such as political consultants. In the age of Covid-19, the lie machine is working to undermine trust in institutions like the World Health Organization, pushing a narrative that scientists and experts should not be trusted. And this, Howard argues, has worrying implications for global health.
OneZero spoke with Howard, who has written several books examining the role of technology in politics, such as Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up, to discuss how the far-right uses the lie machine and political lies in the age of the coronavirus, among other topics.
This interview has been edited and condensed for clarity.
OneZero: In your new book, you argue that tech platforms are not passive, but actors in the political process. Can you explain this role of technology in politics?
We’re used to thinking that politics is mostly about people. And ideas. And rhetorical flair. And personality. But there’s so much structurally that goes on now, especially through our devices, which give us little messages, little prompts, that generate big data for firms to play with. Then those firms resell the data to consultants, and social media content gets married with credit card records and voter registration files to produce these enormous databases that help consultants figure out who to push [to vote]. And who should stay at home. And who not to spend any time with.
Technology comes with features. It lets you stay in touch with your friends and family. It lets businesses extract data about you. It leaves a pretty detailed record of your behavior — not just your attitudes or your aspirations, but your behavior. If your behavior doesn’t map onto your attitudes or your aspirations, these other actors can tell. The technology plays such an active role in translating our political values into action that it’s got agency.
You write that “the far-right used the lie machine earlier.” How and why did this happen?
I believe that political consultants for ultraconservatives are much less afraid of upsetting our privacy values or, perhaps, even of being caught by the law. So they do much more to experiment and push the envelope and try out new things than regular political consultants, or progressives. It was ultraconservative activists who figured out direct mail in the ’70s — this idea of mailing everybody, even if they didn’t want the mail. Buying people’s addresses. Research shows that for most of the 2000s to the early 2010s, conservative websites were much more likely to keep users on the property and not link off the website — to not allow a lot of interaction. Democratic websites were much more likely to include multimedia citations. There’s a strange ideological bent for the far-right toward doing anything to get their person elected, even if it means breaking privacy norms. Or breaking the law.
Recently, some extremely wealthy ultraconservatives have been putting down big money to build up the dataset. To take advantage of the affordances in social media. We saw Brexit in 2016 and the U.S. election in 2016. And there have been four years of bad press and congressional investigations. Judiciary investigations. Investigations by state attorneys general. Scholarly investigations. Independent journalists doing their own stuff. And the most proactive the [social media] firms have ever been is now, around the global pandemic. It’s taken Covid to get the social media firms to respond responsibly to the junk that’s circulating on their platforms.
How is the lie machine working in the coronavirus era?
What’s really changed since coronavirus is that China came out as a global superpower in misinformation. We had studied it over the last four years and thought they had some capacity to do misinformation. But they really only started to care what the English-speaking social media world thought when the Hong Kong protests started. That’s when China started working across multiple platforms, in English, pushing content out. They were ready for Covid. And when Trump and some of his leadership team started to get people to call it the “Wuhan virus,” China didn’t want that. They were ready to go with an operation that included state-backed media, paid trolls, automated accounts, and multiple social media platforms.
The state-backed media, like Russia Today, won’t really publish disinformation — something patently wrong — they’ll report on other rumors. CGTN reported on an Italian doctor who thought that maybe the coronavirus originated in a lab. Their reporting comes with a question: Could this be true? That story gets picked up by other people and shared — they share it not knowing quite the truth, but they share it also wanting to ask the question. One of the first pieces of misinformation around the coronavirus was “10 things you can do to protect yourself” — put garlic in the corner of the room. Gargle salt water! It wasn’t medical information. The first wave of shares might have come from trolls or people who wanted to be disruptive. But eventually it hit people who weren’t sure, and if there’s an easy fix, they want to pass it along to friends and family. That’s the stuff that’s really hard to process. And hard to fault social media users for passing along.
During a global pandemic, what are the political motivations for having the public question facts about health?
Let’s talk about the range of actors. For China, it’s about making sure that the world doesn’t blame China. Or doesn’t think that a Chinese authoritarian government has made the problem worse. That’s China’s interest in politicizing misinformation. For other authoritarian regimes — Iran and Russia — it’s about promoting distrust in democracies. And in international organizations. For them, the target is the WHO — painting it as meddling in international affairs and sowing confusion among WHO doctors. Their messages are about how elected leaders in the West are too slow and Russia is leading on science. And Russia is sending aid. To help these weak democracies. For the ultraconservatives and white nationalists in the U.K. and the U.S., it’s about undermining the government. Especially anything that looks like the government using its authority to have people stay at home. Or follow social distancing rules.
Some tech platforms are dealing responsibly with misinformation on Covid, right?
We just did a study of YouTube to see what people are finding when they search for coronavirus information, and found that the majority of search results point to professional news sites. It’s telling because YouTube likes to insist that it’s user-generated content — but it’s not. It’s quite dependent on professional news outlets and their video content. That’s what people find. They don’t find junk health news. At least in the top few pages. The only sad part is that they very rarely find professional health content on YouTube. Nobody’s watching WHO videos or CDC videos. The other thing we found is that Twitter released a special API for Covid and made it free. That’s great, because Twitter doesn’t have a great reputation for sharing and explaining what data it has.
Recently, Facebook was doing something about the “LIBERATE” pages [the social media company shut down pages created to organize ‘Reopen’ protests in different states — the editor]. Not everybody is going to like that. It does smack of political meddling. But I’d say that if a significant amount of a Facebook community page’s content is based on patently false information, it’s a service to the community for Facebook to take it down.