The modern understanding of personal privacy was first defined in the landmark 1890 essay “The Right to Privacy.” Its authors, Samuel Warren and Louis Brandeis (the latter of whom eventually joined the Supreme Court), were wealthy American lawyers concerned about how the paparazzi of the time were deploying a powerful new technology called a camera. “Recent inventions and business methods call attention to the next step which must be taken for the protection of the person,” they wrote. “Instantaneous photographs” and “numerous mechanical devices” threatened to gather and disseminate individuals’ personal information. In response, they argued that everyone had a “right to be let alone.”
The Warren-Brandeis essay is notable for broadening the legal concept of privacy to include the more nebulous sense of what our personal privacy really feels like. Their argument has only become more relevant over the nearly 130 years since. The power and ubiquity of the internet has birthed privacy infringement on a scale that would have been unimaginable to the Victorians. The 2018 Cambridge Analytica scandal revealed that the personal data of 87 million Facebook users was used to design psychologically targeted news and propaganda campaigns to sway voters in key elections in the United States, the U.K., and Mexico.
But that scandal can be seen as just one brick in a wall of mistrust that has built up between citizens and the technology they use, especially when the service, like Facebook, makes money by selling advertising targeted to its users.
In response, a new wave of companies is monetizing heightened privacy through paid-for services. The “right to be let alone” is now a right that can be paid for. Gartner forecasts that worldwide spending on consumer security software will reach $6.6 billion in 2019. These services include Safe Shepherd, a privacy protection service that scrubs user information from marketing databases; TrustedID, which helps to detect risk of identity theft and control the use of personal information; and myriad startups offering encrypted email, messaging, and browsers. In 2014, one New York Times columnist spent more than $2,200 in a single year trying to protect her privacy through a fleet of such services.
But there’s a problem. These services reflect the increasing value of privacy, not as a right, but as a luxury good. Privacy is an extra, something to be opted into and paid for. And those who keep using “free” advertising-supported platforms — because they don’t understand the risks of giving away personal data or can’t afford to pay extra for privacy — are finding themselves on the wrong side of the new digital divide.
This kind of effective discrimination against low-income communities is nothing new, and neither is invasion of their privacy. Welfare claimants in the United States can be subjected to unannounced searches and interrogations, including invasive surveillance, urine-sample drug tests, and DNA testing of their children. The implication is that the poor are expected to give up their privacy to receive basic services. And the internet — despite promises of neutrality and equality — proves to be no refuge from this rule.
“No laws prohibit poverty discrimination,” says Michele Gilman, a law professor at the University of Baltimore School of Law and one of the authors of the 2017 study “Privacy, Poverty, and Big Data: A Matrix of Vulnerabilities for Poor Americans.” “The potential harms of this privacy bargain are more perilous for low-income people,” she says. “People who live in poverty are subject to targeting due to their economic status, such as for predatory financial products or predatory educational programs. And due to their digital dossiers, they can lose opportunities for housing, employment, and education when landlords, employers, and schools are conducting screenings and associating poverty with undesirable traits.”
The study concludes that “anti-poverty advocates are rightfully concerned that the digital world will replicate, if not reinforce, both covert and overt patterns of surveillance.” “A right to be let alone is important in the sense that we all need some autonomy in determining how much of ourselves to share with the public,” Gilman says. Her work is reinforced by Tim Berners-Lee, the inventor of the World Wide Web who, last November, proposed a “contract for the web” that would establish clear norms, laws, and standards to underpin the internet. That would include a fundamental right to privacy.
Gilman thinks social media companies could soon offer options for users to pay for increased privacy, while still providing free services for those who surrender their information. This option, she says, “will further the divide between the privacy haves and have-nots and exacerbate economic inequality.”
It isn’t just poor Americans; entire populations have been preyed on. In 2015, Facebook launched Free Basics (formerly Internet.org), a mobile app that offered free internet access to users in developing countries. Facebook promised to help people get online by offering stripped-down versions of Facebook, Wikipedia, ESPN, and other services.
But a 2017 Global Voices report found different motives for Free Basics. The app actively urged users to sign up and log into Facebook, and it divided third-party services into two tiers, giving greater visibility to one set of information over another. User activities were channeled through Facebook servers, generating yet more banks of consumer data in the process. Facebook wasn’t introducing people to an open internet — it was plundering the next data frontier in emerging markets outside the United States.
Facebook responded by claiming the Global Voices study did not reflect the experiences of millions of people who had benefited from the service, but by that point, Free Basics had already been labeled as “digital colonialism.”
The vulnerabilities low-income communities face online are changing. Last month, Mark Zuckerberg posted a 3,000-word manifesto outlining his plan to make Facebook more “privacy-focused.” Between the usual personal homilies and meditations, he described the company’s plan to push users toward encrypted private messaging and away from the news feed.
“As I think about the future of the internet,” Zuckerberg wrote, “I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.”
Critics say the move amounts to little more than rear-guard corporate defense. Facebook faces high costs and regulatory hurdles in its attempts to remedy its recent debacles. Some think that by merging data between its different messaging apps, as planned, Facebook is effectively shielding itself from looming antitrust action, and that pulling users into a closed network with end-to-end encryption would relieve it of responsibility for moderating the fake news, propaganda, hate speech, and vaccine misinformation its platform disseminates.
“I would be very pleased if it was real and true, but there’s a lot of skepticism associated with [this move toward privacy],” says Ann Cavoukian, who leads the Privacy by Design Centre of Excellence at Ryerson University in Toronto. The growing digital privacy divide is disturbing, but as Cavoukian points out, what’s most alarming isn’t that privacy is becoming a luxury commodity, but that it’s considered a commodity in the first place. “The norm shouldn’t be that you have to protect it just like you have to put a wall around your house,” she says. “The house belongs to you. The information belongs to you, and they shouldn’t just take it at a whim.”
Back in 2007, Cavoukian was invited to Facebook’s headquarters to speak to Mark Zuckerberg and Chris Kelly, the company’s first chief privacy officer. She says the company then appeared genuinely committed to privacy. Cavoukian believes the business model changed when Sheryl Sandberg joined Facebook from Google in 2008 as chief operating officer.
“They went in the direction of ‘we’re going to make a lot of money,’” she says. “And then privacy tanked.”