A New Tool Jams Facial Recognition Technology With Digital Doppelgängers

Brighter AI promises to protect protesters. But is it enough?

Photo illustration; Image source: Carlina Teteris/Getty Images

There are many reasons why the movement to ban the police from using facial recognition technology is growing. This summer, reporters at the New York Times and Detroit Free Press revealed that Detroit police officers used faulty facial recognition to misidentify and wrongfully arrest two Black men, one for supposedly stealing watches and the other for allegedly grabbing someone else’s mobile phone. Recent reporting at Gothamist revealed that the New York Police Department deployed facial recognition technology to investigate “a prominent Black Lives Matter Activist.”

Technology companies have been harshly criticized for providing law enforcement with facial recognition technology. While IBM got out of the business and Microsoft and Amazon emphasize that they’re not currently providing facial recognition technology to police, companies like Clearview AI, Ayonix, Cognitec, and iOmniscient continue to work with law enforcement. Not every technology company, however, marches to the same drum. Some startups are geared toward limiting the dangers posed by facial recognition technology.

Berlin-based startup brighter AI recently launched a public interest campaign to help solve the problem of authorities using facial recognition technology to identify protesters. The campaign website, Protect Photo, provides a free privacy engineering service that quickly removes “facial fingerprints” from user-uploaded images.

Deploying proprietary “deep natural anonymization” software, the service scans the original photos, pinpoints a large number of facial features and the mathematical relations between them (how far apart the nose and mouth are, for example), infers demographic information (including age, ethnicity, and gender), and combines this data to create new images that look strikingly similar to the originals but contain an essential difference: the new photos have new, synthetic faces. CEO Marian Gläser claims these are facial-recognition-proof: automated systems can’t identify the digital doppelgängers.
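Brighter AI hasn’t published its implementation, but the measurable front end of such a pipeline is easy to picture. The sketch below uses the open-source face_recognition library, not brighter AI’s code, to detect facial landmarks and compute the kind of geometric relations described above; the demographic inference and generative synthesis steps that produce the doppelgänger are proprietary and aren’t reproduced here, and the file name is a placeholder.

```python
# Illustrative sketch only: landmark detection plus simple geometric
# measurements of the sort an anonymization pipeline might compute
# before handing off to a (proprietary) generative synthesis step.
# Requires: pip install face_recognition numpy

import numpy as np
import face_recognition


def facial_relations(image_path):
    """Return rough geometric relations (e.g., nose-to-mouth distance)
    for each face found in the image."""
    image = face_recognition.load_image_file(image_path)
    results = []
    for landmarks in face_recognition.face_landmarks(image):
        nose = np.mean(landmarks["nose_tip"], axis=0)
        mouth = np.mean(landmarks["top_lip"] + landmarks["bottom_lip"], axis=0)
        left_eye = np.mean(landmarks["left_eye"], axis=0)
        right_eye = np.mean(landmarks["right_eye"], axis=0)
        results.append({
            "nose_to_mouth": float(np.linalg.norm(nose - mouth)),
            "eye_to_eye": float(np.linalg.norm(left_eye - right_eye)),
        })
    return results


if __name__ == "__main__":
    # "protest_photo.jpg" is a hypothetical example file.
    for i, face in enumerate(facial_relations("protest_photo.jpg")):
        print(f"Face {i}: {face}")
```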

Before and after using brighter AI’s Protect Photo tool, which creates a synthetic face to guard against facial recognition. The company’s technology only works on group photos, which this image is cropped from. (OneZero editor in chief Damon Beres pictured here, with permission.) Credit: brighter AI

Protecting faces with digital doppelgängers

I can’t emphasize strongly enough that privacy engineering can provide only limited relief from facial recognition technology, especially when it depends on end users manually uploading photos. The widespread dangers posed by this technology fundamentally require a legal remedy. Facial recognition technology should be banned.

But since the ban movement is taking time to grow, privacy-enhancing technologies can play an important role just like they do in other contexts. Nobody expects surveillance capitalism to be destroyed by using DuckDuckGo as a search engine, Signal as a messaging service, or ProtonMail for email. Still, these aren’t pointless services.

Since I wasn’t given special access to brighter AI’s software, I couldn’t ask a technical expert to directly assess whether the altered facial template data will consistently jam facial recognition systems, whether the synthetic faces could somehow be reverse-engineered by a committed adversary, whether the demographic inferences are accurate across large samples (or whether there’s a risk of changing the meaning of altered photos, for example, by unintentionally whitewashing them), or whether the photos uploaded to Protect Photo are, as promised, readily deleted. This is important information for justifiably trusting the software, and how diverse the software team is matters, too. Members of communities who experience discrimination can have a heightened sensitivity to problems their communities face, including prevalent misunderstandings about their individual and group identities.

But Gläser says the software is being externally audited, and if his confidence is justified, protesters and journalists might want to consider the benefits of using it. While face-swapped pictures can still contain personal identifiers, it should take the police more time and effort to match people by tattoos or signature clothing and accessories than by using automated facial recognition systems. Because the path of least resistance is often the one that gets taken, imposing transaction costs can deter unfair police scrutiny.

For the Protect Photo campaign to make as big an impact as this type of intervention can, two things need to happen. For starters, a well-designed, easy-to-use mobile app version needs to be developed and released. Perhaps most importantly, people will need to see a compelling reason for choosing the service. They already have easy options for obscuring faces in photos, such as blurring them or replacing them with boxes or emoji.

Gläser sees the secret sauce in the emotional quality of brighter AI’s face-swapped photos: the determination, grit, and even anger in the protesters’ faces can remain intact through the synthetic creations. Whether others will agree and find the emotional continuity sufficiently compelling remains to be seen. The big institutional question is whether journalists who feel torn between the obligation to not manipulate photos and the responsibility to “minimize harm” and “give special consideration to vulnerable subjects” will see this type of synthetic face as close enough to an unaltered one to justify updating the norms of protest coverage. Even if they do, they might worry about the slippery slope potential of sliding from face-swapping for protective purposes to sanctioning alternative facts and propaganda. Perhaps using captions or watermarks could mitigate anxiety.

When I experimented with the Protect Photo software, I was surprised to learn that the system rejected portraits of individuals; the only photos I could get it to alter contained multiple protesters. This puzzled me, since the goal of the Protect Photo campaign is to safeguard probe photos against facial recognition technology. When I asked Gläser why this was happening, I learned it’s not a glitch. Instead, he explained, it’s an ethical limitation baked into the code: an attempt to prevent bad actors from using the software to create socially detrimental deepfakes. That’s an interesting restriction, one that highlights a key difference from Fawkes, an image-cloaking service recently covered by Kashmir Hill at the New York Times. Fawkes allegedly jams facial recognition systems by making “tiny, pixel-level changes” to pictures of faces so that they corrupt the training models that power facial recognition systems. Because Fawkes is based on a Trojan horse, data-pollution model, I couldn’t get it to detect faces in photos containing multiple protesters.
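Protect Photo’s code isn’t public, but the restriction Gläser describes is simple to express. Here is a minimal sketch of that kind of gate, again using the open-source face_recognition library rather than brighter AI’s software; the two-face threshold and the file name are assumptions of mine, not the company’s published rule.

```python
# Minimal sketch of an ethics gate like the one Gläser describes:
# refuse to anonymize portraits of individuals and only process
# photos that contain multiple faces.
# Requires: pip install face_recognition

import face_recognition

MIN_FACES = 2  # assumed threshold; brighter AI's actual rule isn't published


def allowed_to_anonymize(image_path):
    """Return True only if the photo contains multiple detected faces."""
    image = face_recognition.load_image_file(image_path)
    faces = face_recognition.face_locations(image)
    return len(faces) >= MIN_FACES


if __name__ == "__main__":
    # "uploaded_photo.jpg" is a hypothetical example file.
    if allowed_to_anonymize("uploaded_photo.jpg"):
        print("Group photo detected: proceeding with anonymization.")
    else:
        print("Single portrait rejected: anonymization refused.")
```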

The business background

Brighter AI was ready to spring into action because it had already developed the deep natural anonymization software for its business clients. Working with European clients in the self-driving car industry (like Volvo and Valeo) and the public transportation industry (like Deutsche Bahn), brighter AI has been carving out a niche as a data-minimization service that can enhance compliance with the General Data Protection Regulation (GDPR). When clients use brighter AI’s software to swap the faces found in the photos and videos they’ve taken for research and product development purposes, Gläser says they process less personal information than they would if they analyzed unaltered facial images. That’s less personal information they need consent for. It’s less personal information that can be abused during a data breach. And it’s less personal information that someone might want to delete under the right to be forgotten.

Since the synthetic faces contain demographic data and are programmed to maintain, as well as the software allows, the original eye positioning and facial expressions, Gläser contends the information is far more useful as training data for neural networks than are blurred, blacked out, or pixelated faces. Specifically, brighter AI’s clients can use the demographic information to balance their datasets and combat the biases that arise when inputs are too homogeneous. The practical payoffs of doing so range from increasing the chances of clients developing user-detecting software that recognizes diverse types of people to creating analytics that help clients better understand how different types of people are inclined to respond to a situation. For example, self-driving car companies need to ensure their vehicles can recognize all kinds of pedestrians who are crossing in front of them (not just white males) and that their driver monitoring systems can detect whether all kinds of drivers (not just white males) are paying attention to the road. And public transportation companies want to ensure that people of all ages can successfully navigate their environments — that older people, for instance, won’t be confused when looking for their trains.
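Brighter AI hasn’t described how clients do this balancing, but the basic idea is standard: check how demographic groups are represented in a training set and resample the underrepresented ones. The sketch below shows one common approach, oversampling smaller groups with pandas; the column names and the tiny example dataset are invented for illustration, not brighter AI’s actual schema.

```python
# Illustrative sketch of demographic balancing with pandas: oversample
# underrepresented groups so the training set isn't dominated by one
# demographic. Column names ("age_group", "gender") are assumptions.
# Requires: pip install pandas

import pandas as pd


def balance_by_group(df, group_cols, random_state=0):
    """Oversample each demographic group up to the size of the largest one."""
    groups = df.groupby(group_cols, dropna=False)
    target = groups.size().max()
    balanced = [
        g.sample(n=target, replace=True, random_state=random_state)
        for _, g in groups
    ]
    return pd.concat(balanced).reset_index(drop=True)


if __name__ == "__main__":
    # Toy dataset: four younger subjects, two older ones.
    df = pd.DataFrame({
        "image_id": range(6),
        "age_group": ["18-30"] * 4 + ["65+"] * 2,
        "gender": ["m", "m", "f", "m", "f", "f"],
    })
    print(balance_by_group(df, ["age_group"]).groupby("age_group").size())
```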

Why protecting personal obscurity is crucial but not enough

For many years, Woodrow Hartzog and I have championed conceptualizing a range of so-called privacy issues as obscurity dilemmas and defended obscurity as essential to personal development, intimacy, and political participation. From our perspective, brighter AI’s face-swapping service can be classified as an obscurity filter.

“Obscurity” is the degree to which unwanted parties like government, corporate, or social snoops are prevented from doing bad things with our personal information because they find it hard to get or hard to properly interpret. Before facial recognition technology became ubiquitous, a default layer of obscurity protected our faces when we went out in public. That’s because only a limited number of people could recognize us and knew our names.

Given the threats a host of emerging surveillance technologies pose, it’s worth considering developing and using obscurity-enhancing tools while robust legal solutions are advocated for and legislated. We’re going to need protection from far more than facial recognition technology. Among other obscurity-eviscerating dangers, there’s gait and heartbeat recognition, as well as automated lip-reading.

The thing is, while obscurity filters can be useful tools for fighting against all kinds of robot surveillance, they have limitations just like all privacy engineering products and services do. Deploying technological solutions to protect people from surveillance threats sets off a cat-and-mouse arms race. And depending on how obscurity filters are configured and the protections existing laws offer, the technological safeguards can exacerbate some problems while minimizing others. For example, when protecting obscurity also facilitates the automated collection of demographic data about race, gender, and ethnicity, there’s a risk of intensifying social problems even if that information isn’t linked to any personal identifiers.

The risk exists because even though there are responsible ways to collect and analyze demographic information, automating information about race incentivizes automated racism. Automating information about gender incentivizes threats to queer people as well as sexism. And automating information about age incentivizes ageism. These incentives exist because the various forms of essentialism are linked to unfounded prejudices and unfair displays of power that automation, even when well-intentioned, risks perpetuating and amplifying. Even trying to automate the interpretation of facially expressed emotions inherently poses discrimination risks. No matter how diligent companies like brighter AI are or how selective they are when choosing clients, these incentives will remain so long as biases remain socially embedded.

Prof. Philosophy at RIT. Latest book: “Re-Engineering Humanity.” Bylines everywhere. http://eselinger.org/
