Our Government Should Not Be Conducting Facial Surveillance

New proposals for regulating the use of face recognition technology are major victories for the legislative imagination, even if they don’t become law

Photo by Bernard Hermant on Unsplash

Co-authored by Woodrow Hartzog

The debate over facial recognition technology has advanced to the point where one thing is clear: It must be regulated. Not only have civil rights groups like the ACLU made this case, but even companies like Microsoft and Amazon acknowledge that change is necessary.

The question, then, is what’s the best way to respond to the dangers that facial recognition poses? Problems concerning bias and disparate impacts on minority communities are far from resolved. Corporate proposals aren’t credible solutions given the risk involved. And the dangers to our basic constitutional liberties are so profound that it might not be possible to effectively protect them.

Thankfully, major change is happening in how lawmakers are willing to think about regulation. Until recently, Illinois, Texas, and Washington were the only states to impose rules on biometric surveillance, and specific restrictions were otherwise nearly absent. In the past month, however, three bills have been introduced at the state and city level that go beyond singling out facial recognition technology as exceptionally dangerous for governments to use. Instead, they aim to ban it outright, whether temporarily or permanently.

This is a big deal, even if the bills never become law. They are evidence that widespread, unrelenting deployment of facial recognition technology is not an inevitability. They also demonstrate how collectively we can break the cultural and political trajectory of ever-increasing, ever-intrusive surveillance.

Proposed legislation

The first proposed ban on facial recognition technology was the Stop Secret Surveillance Ordinance, put forth by Aaron Peskin, a member of San Francisco’s Board of Supervisors, in January 2019. It was part of a more comprehensive set of rules to enhance government surveillance oversight. The bill’s ban on facial recognition technology is complete and succinct, simply making it unlawful for the government to use facial recognition technology and the data obtained from it in any way.

This is the most robust and direct attempt to protect against government abuses of facial recognition technology that we have seen.

In early February, Massachusetts Senator Cynthia Creem and Representative David Rogers introduced bills that would impose a moratorium on all biometric surveillance until it could be proved safe, nondiscriminatory, and sustainable with respect to our privacy and civil liberties.

Two further bills introduced around the same time by Washington state senators Bob Hasegawa, Rebecca Saldaña, and Joe Nguyen would similarly impose a moratorium on facial recognition pending a report on its accuracy, but they would also prohibit its use in public places or in analyzing body camera footage without a warrant. Unlike the San Francisco bill, these bills pave a path for biometric surveillance through the use of warrants.

Each of these legislative moves demonstrates the appetite for regulating facial recognition. They have been introduced at the same time as more comprehensive and, in some ways, less robust bills that regulate facial recognition specifically and biometrics more generally, including industry uses. The proposals suggested by Amazon and Microsoft are less prohibitive, focusing on process, transparency, and consent, but they would also all but guarantee a rollout of facial recognition infrastructure. Lawmakers and the rest of society have tough decisions to make.

Bans can stop normalization and disprove technological determinism

It has been argued that if the San Francisco bill passes, it will create an odd and somewhat ironic distinction between government, which will be hamstrung, and industry, which can do whatever it wants. In The Atlantic, Sidney Fussell notes that the ban would prevent the San Francisco Police Department “from using Amazon’s Rekognition software to scan video footage for suspects after a shooting — but a grocery store will be permitted to do the same thing to analyze shopper behavior.”

“Curtailing this kind of retail surveillance,” Fussell continues, “will require an entirely different approach.”

Fussell is right. And yet there remains a more complicated connection between the public and private sectors. By understanding why the private sector wields more power than is typically acknowledged, we can appreciate why the argument underlying the San Francisco proposal is more persuasive than critics admit.

A key reason a ban on government use of facial recognition technology is worth considering is that the links between public and private sector uses are growing stronger by the day, and not simply because data can flow from one sector to the other. Consumers are seduced into using a range of products and services that aren’t necessarily problematic in themselves but that collectively combine to produce a harmful result. Tech companies aren’t even acknowledging this problem exists, let alone taking responsibility for it. Policymakers aren’t doing any better, having turned a blind eye to the issue. Since “facial recognition is expected to become a $9.6 billion market,” we no longer have the luxury of keeping our heads in the sand.

Facial recognition takes many forms: authentication through Apple’s Face ID, automated tagging on social media, face filters on Instagram, pay-with-your-face terminals, Amazon’s Ring doorbell surveillance system, and all the rest. But these tools are all reengineering our personal and collective preferences and making facial recognition technology in all of its diverse facets and functions feel like a familiar, unthreatening, and even necessary component of life in the 21st century.

Even the education technology industry is promoting this fatalistic worldview. Marcel Saucet, CEO of a group that supports Nestor, a company that provides artificial intelligence for the classroom, notes that while students have expressed discomfort with instructors requiring them to use Nestor’s “engagement-detecting” system, their concerns won’t have any impact. “Everybody is doing this,” he said, bluntly concluding, as if speaking for all of society, “we cannot go against natural laws of evolution.”

Ultimately, then, seemingly innocuous experiences of convenience and entertainment might be dampening the public’s capacity to appreciate the threats that face surveillance poses to privacy and civil liberties, and sapping its ability to conceive of an alternative future worth fighting for. If so, we should expect this shift in public perception, driven by incremental normalization and function creep, to directly affect how the courts view what counts as a reasonable expectation of privacy.

Simply put, if citizens expect to be immersed in facial recognition technology wherever they go, they might become open to allowing law enforcement to behave like everyone else. It would become harder for the courts to defend a normative standard of privacy that seems out of sync with what is happening in the real world. Indeed, technology companies recognize this potential and, unsurprisingly, are already exploiting it as a selling point. They’re making the case that where public opinion will end up is a foregone conclusion: it will pressure police departments to use the same tools found in casinos, stadiums, and even fictionalized TV accounts of policing.

Because consumer technology wins hearts and minds by covertly instilling beliefs that affect regulation, proposals like the San Francisco bill are wins for the legislative imagination regardless of whether they become law. They give us hope that the problem can be stopped before it’s too late. Too often, even many of those who advocate for privacy assume that pumping the brakes on digital surveillance technology is an idealistic, unattainable, and overly reactionary fantasy. The best we can hope for, they say, is to create some guardrails that, in the best-case scenario, will allow the benefits of a technology to flow while impeding the potential for abuse. As Microsoft president Brad Smith bluntly put it at the World Economic Forum in Davos, checks and balances are the only sensible approach to governance, whereas “a sweeping ban on all government use clearly goes too far.”

We’re sympathetic to the concern about going too far. So sympathetic, in fact, that we applaud the San Francisco bill. It’s clear that the ideology of technological determinism intrudes too far into the democratic process when it seems impossible for citizens to reject a technology that fundamentally threatens their well-being. Just as we should stop saying that privacy is dead, we also should insist that the deployment of facial recognition technology isn’t inevitable.

Prof. Philosophy at RIT. Latest book: “Re-Engineering Humanity.” Bylines everywhere. http://eselinger.org/
