Why You Can’t Really Consent to Facebook’s Facial Recognition

While the social media platform’s latest approach to facial recognition appears to respect users’ choices, the offer is so tainted we can’t truly agree to it


Co-authored by Woodrow Hartzog

Not long ago, Consumer Reports launched an investigation into Facebook and discovered something surprising. Nearly a year and a half after the company introduced a new, designated setting called “Face Recognition” that allowed users to opt out of facial recognition, not everyone had access to it. This wasn’t just a problem for the people directly affected. It symbolized the larger issue of tech companies leaving us feeling disempowered, even when they say they’re doing more to protect our privacy.

By July it was clear that change was coming. That’s when the Federal Trade Commission hit Facebook with a historic $5 billion fine and ordered the company to “clearly and conspicuously” disclose how it uses facial recognition software — in a separate form from the privacy and data policies — while also obtaining the user’s “affirmative express consent.” In the aftermath, Facebook announced that after consulting with “privacy experts, academics, regulators,” and other stakeholders, it is finally giving everyone the same simple controls for facial recognition and only subjecting new users (as well as the old ones who are only now gaining access to the Face Recognition setting) to the technology if they choose to opt in.

What’s not to like about Facebook’s latest approach? After all, keeping facial recognition off by default is a good privacy-by-design approach. It puts privacy first and bakes in the values of choice and consent — ideals that, in the United States, go together like stars and stripes. The appeal is intuitive. People typically don’t want to be told what to do. We want to be free to make up our own minds — about what to eat, what to watch, and, yes, what technologies to use. From this perspective, Facebook deserves praise for not shoving facial recognition technology in our faces.


Unfortunately, Facebook’s efforts reflect superficial views of autonomy and consent. Law professor Nancy Kim provides a deeper account in her new book, Consentability: Consent and Its Limits, by critically interrogating the conditions where people are asked to give their consent. Kim maintains that what lawmakers and companies often call “consent” is an abuse of the term that sanctions unfair arrangements. Building on her insights, we argue that the consent offered by Facebook and every other company using facial recognition is tainted.

Autonomy, Consent, and Consentability

You can’t give valid consent to every offer that comes your way. Sometimes, the circumstances aren’t good enough and, as a result, your autonomy is deeply compromised. That’s because you don’t really know what you’re getting into or you’re not really in a position to voluntarily choose to say yes or no. Other times, however, the conditions are right. When this happens, Kim argues that the conditions of “consentability” are met.

Consentability makes all the difference in the world. That’s because we can agree to risky proposals that give others power over us and leave us vulnerable. Indeed, consentability is so transformative it creates what professor Heidi Hurd calls “moral magic.” With valid consent, a surgeon’s incision is treatment; without it, a sexual touch can be assault.

Autonomy is largely about freedom. In some circumstances, it’s about self-determination: the ability to make up our own minds about what’s in our best interests. In these situations, when others subvert our decision-making power through carelessness, condescension, manipulation, and abuse, they disrespect our personal autonomy. That’s why an offer can fall short of the standard of consentability when a person, institution, or corporation seeking our consent keeps us in the dark by failing to provide the appropriate quality and quantity of information about the risks involved in their proposal. Both our cognitive limitations and the way information is presented can impede informed consent.

Fair societies don’t privilege people’s individual choices at the expense of everything else. That’s because in some circumstances decisions that are good for you can limit other people’s autonomy. That’s why Kim insists a democracy can only truly be committed to equality if the government aspires to protect “collective autonomy” by trying to safeguard every citizen’s right to make fundamental life choices.

Policies like smoking bans that protect social welfare by restricting individual choice embody the ideal of prioritizing collective autonomy. State and local governments decided that giving people the right to smoke themselves into early graves doesn’t mean they should be able to choose to light up wherever they want. These laws are based on the idea that people should be able to exercise basic freedoms, like moving through public spaces and entering places necessary for survival, such as workplaces, without undue concern for their well-being. Freedom of movement is a higher-level autonomy interest than the freedom to enjoy the sensations that come from inhaling tobacco.

As Kim notes, many autonomy interests can be ranked and compared against one another. That’s why another way that an offer can miss the mark of consentability is if it comes at a terrible social price — providing only modest autonomy gains for some while depriving others of more basic freedoms. In situations where your freedom comes at other people’s expense, the consentability standard can require that your privilege be put in check.

The Inconsentability of Facial Surveillance

Facebook’s facial recognition policy may be legal, but it fails the consentability standard by obscuring risk and corroding collective autonomy. The company doesn’t explain the big risks of turning on facial recognition; users are led to believe that the worst that can happen if they enable it is that they’ll make it easier for Facebook to send targeted ads and identify them in random photos. Since these risks seem low-stakes, it’s easy for folks to embrace the service and expect that their friends will, too. Facebook’s leadership can predict this. They realize that making it easy for users to consent protects against a mass opt-out.


But over time, facial tagging by users at scale can penetrate deep into our psyches and help normalize facial surveillance itself. Those of us who engage in it frequently and thoughtlessly become increasingly comfortable with being subjected to facial surveillance, and less concerned about others being subjected to it, too. Facebook’s offer to let users feel free to use facial surveillance is thus an invitation to be disciplined. As the philosopher Michel Foucault argued, discipline can be a form of power that erodes our autonomy by engineering mentalities and transforming sensitivities. In this case, by shaping our personal and collective preferences, discipline leads facial surveillance to be perceived as an essential component of modern life, something that should be accepted and embraced, rather than critically interrogated, much less rejected.

And so even though tagging on Facebook, just like so many other consumer applications of facial surveillance, satisfies largely trivial autonomy interests, the collective impact of these uses paves the way, as time passes, for increasing amounts of facial surveillance throughout the public and private sectors. Barring massive shifts in regulation, with enough time and enough surveillance the very fabric of society will change. Disproportionate harm will fall on vulnerable and marginalized populations, like people of color, whose opportunities to act autonomously will shrink as they become ever more worried about what networks of intelligent surveillance machines are communicating to authorities.

The Ada Lovelace Institute recently published the first national U.K. public opinion survey on facial recognition technology. The results are striking. Most people don’t know much about how the technology is actually being used. Many people seem uncomfortable with companies using the technology for commercial benefit, are anxious about the normalization of surveillance, and, for the time being, would like the private sector to voluntarily stop selling the technology to the police.

However, 71% agree that “The police should be able to use facial recognition technology on public spaces, provided it helps reduce crime.” The poll results suggest that U.K. citizens aren’t truly apprehensive about the normalization of facial recognition. If they were, they’d be concerned that normalization will get in the way of creating effective safeguards to ensure that law enforcement uses the technology responsibly.

In the United States, it isn’t realistic to expect that normalization will be compatible with good governance. A recent Pew Research Center poll of American views reveals that, despite the high-profile controversies, the majority of people (especially older, white, and conservative-leaning adults) still “trust law enforcement to use facial recognition responsibly” and that sizable numbers may hold beliefs about the accuracy of the technology that conflict with reliable studies.

As in the U.K., Americans worry more about tech companies using facial recognition technology than about the government doing so, which suggests an overly idealistic vision of how distinct the two sectors are. It’s not just that law enforcement can get information from tech companies. It’s that the more the private sector engages in facial surveillance, the harder it becomes to tell law enforcement that its access should be meaningfully restricted. Once normalization changes hearts and minds, it’s hard to dial things back when emotionally charged stakes, like public safety, are on the line.

Tyranny of the Majority

In a democracy, it is reasonable to expect that people will mainly consider themselves and people like them when weighing the costs and benefits of a particular decision. Such is the pull of tribalism and privilege. In practice, this means that people who aren’t part of minority communities might not be sufficiently concerned with how their gain from consenting to facial recognition comes at other people’s expense. Over time, when majority groups consent to offers that are cost-benefit justified for themselves, large-scale social transformation can result that compromises the autonomy interests of marginalized groups.

When the dangers of secondhand smoke became clear, many cities and states enacted smoking bans. While cigarettes and facial recognition are not comparable in many ways, they share similarly fatal consent problems. If facial recognition becomes normalized in industry through consent regimes, the government will have a backdoor to retroactive surveillance through the personal data industrial complex. Through public/private cooperation, surveillance infrastructure will continue to be built, chilling effects will still occur, harms will still happen, norms will still change, collective autonomy will still suffer, and people’s individual and collective obscurity will continue to diminish bit by bit.

Even if every facial recognition system asked for consent before use, society would still suffer. A barrage of “I agree” buttons and switches would fuel government and industry’s unquenchable thirst for more access to our lives. A complete ban is the only way to ensure that facial recognition does not become entangled and abused in both the sacred and blessedly mundane aspects of our lives.

Prof. Philosophy at RIT. Latest book: “Re-Engineering Humanity.” Bylines everywhere. http://eselinger.org/
