Tech Isn’t Vulnerable — You Are
Any conversation about privacy has to acknowledge that we are alone and powerless
Launching the New York Times’ new privacy newsletter this spring, the tech journalist Charlie Warzel described the word “privacy” as “an impoverished term.” The problem with privacy, he explained, is that it is a concept “so all-encompassing that it is impossible to adequately describe” — a so-called hyperobject like climate change or social class. It’s only momentarily, when specific aspects of privacy come to light, that we get a glimpse of the whole that the word could describe.
Nevertheless, Warzel made an attempt to reframe privacy or define it in a more accessible way. Since we are ultimately unable to control how all the data we create are used, distributed, tabulated, or just plain stolen, we are in the process of losing control of our lives. “When technology governs so many aspects of our lives — and when that technology is powered by the exploitation of our data — privacy isn’t just about knowing your secrets, it’s about autonomy,” Warzel wrote.
The “control” frame is useful, to an extent, in helping people understand why privacy matters. We all like to believe we are in control of our own destinies and possessions. Threats to that belief, legitimate or not, typically provoke a passionate response. We will fight for our freedoms, normally. But our approach is different with technology, almost entirely because of good marketing. Over the last 20 years, technology companies have co-opted the idea of personal autonomy, using it as a sales pitch for their platforms and even building personalization in as a key product feature. From the outside, it does look like we’re in control.
But, as Warzel noted, we know that behind the scenes, something else is going on. We are aware, however vaguely, that the data trail we create online is used in ways we can’t fully imagine, and in ways we haven’t explicitly agreed to. Still, we have to live in the modern world. The more indispensable an internet connection becomes, the less choice we have, which means we have less and less autonomy, a key element of being capable of exercising control over our lives. It’s difficult to expect someone to gain control over something they’ve never had the option to control in the first place.
This doesn’t mean the idea of control, or its loss, is the wrong way to conceptualize privacy (or its loss). Privacy is absolutely about control. The relationship between privacy and control is clarifying if we think about it correctly, which is to say the wrong way around. If what control gives us is a sense of decision-making power and responsibility for outcomes (the qualities tech adoption promises), then perhaps we ought to focus on its opposite: the loss of control.
What happens when we lose control? We become angry or confused. We tend to get desperate. We scramble. When we’re in control, we are safe and protected; when we lose control, we are at risk. Specifically, we are vulnerable.
As it happens, vulnerability is also a word you hear frequently when people are talking about technology, though it rarely references humans. Instead, vulnerability has been co-opted by the tech industry. Vulnerabilities are now usually software problems. They’re the holes through which our information might be accessed without our consent, and through which our privacy is ultimately put at risk.
These so-called vulnerabilities are exposed all the time. On August 20, 2019, for example, security researchers disclosed eight separate vulnerabilities in a version of Google’s Nest Cam IQ camera, leaving the device open to a range of potential attacks. The same day, we learned that Apple’s latest software update had accidentally reopened a previously patched security vulnerability that, if exploited, could give an attacker complete control of the phone.
In July, the problem was at the bank Capital One. The company revealed that a hacker — a former Amazon Web Services employee named Paige Thompson — had allegedly gained access to over 100 million credit card applications by exploiting a misconfigured web application firewall. Thompson reportedly “created a program in late March to scan cloud customers for a specific web application firewall misconfiguration,” then “exploited it to extract privileged account credentials for victim databases and other web applications.” The list of these incidents will keep growing.
It might be time we performed our own linguistic co-option. We need to reclaim vulnerability.
When technology is at risk of being hacked, attacked, or exploited by malicious actors, there is always someone looking out for it. A team of engineers will immediately create a patch or update to fix a tech problem. Software is protected. We are not. When our phone is remotely disabled, or when our banking information is stolen, our total lack of autonomy is exposed. The sense of control — the positive feedback loop we experience that feeds us content we enjoy, recommendations we like — is equally exposed as a lie. We are left without the tools to fix our problem. Customer service avenues are jammed. Legal coverage is spotty at best.
We fly blind. We desperately reach for any solution that feels solid. We are at the mercy of any strategy that feels right, which is usually wrong — like, say, sweeping attempts at legislative reform.
Before we get to decide how we want to implement control mechanisms, we need to be honest about our situation, about the reality of our position in the system. This is where we borrow a familiar term, where we humanize a common problem: vulnerability. We need to realize it’s not technology that is vulnerable; it’s us. Every discussion of privacy has to begin there: We have no control. We are powerless and alone.
The fight for privacy begins there.