This Filter Makes Your Photos Invisible to Facial Recognition

Digital cloaking, and how you can reclaim a modicum of digital privacy

Photo: arvin keynes/Unsplash

In 2020, it’s safe to assume that any photo uploaded to the internet and made public will be analyzed by facial recognition. Not only do companies like Google and Facebook apply facial recognition as a product feature, but companies like Clearview AI have spent years discreetly scraping images from the public internet in order to sell facial recognition technology to police.

Now, A.I. researchers are starting to think about how technology can solve the problem it created. Algorithms with names like “PrivacyNet,” “AnonymousNet,” and “Fawkes” offer a glimmer of refuge from the facial recognition algorithms trawling the public web.

These algorithms aren’t the solution to privacy on the web — and they don’t claim to be. But they’re tools that, if adopted by online platforms, could claw back a little of the privacy typically lost by posting images online.

Fawkes is an anti-facial recognition system based on research from the University of Chicago. The program, whose name is a nod to the Guy Fawkes masks made popular by the hacking group Anonymous, tries to limit how images of faces posted online can be used.

When a facial recognition algorithm is trained to recognize a person’s appearance, it does so by finding relationships between pixels in the different training images it’s shown. Those relationships could be as simple as the geometry of the face, but the University of Chicago researchers point out that these algorithms also pick up invisible “features” of a person’s appearance.

Fawkes identifies these invisible features and then tweaks them, eliminating commonalities between images. Since these tiny features weren’t visible to the human eye before, neither are the alterations. These slight changes are called a “cloak.”
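To make the intuition concrete, here is a minimal sketch of what a cloak amounts to: a perturbation kept within a tiny per-pixel budget so humans can’t see it, chosen so that the image’s feature-space representation drifts away from the original. This is an illustration only, not the Fawkes implementation; the feature extractor below is a hypothetical stand-in for whatever embedding model a scraper might train, and the random hill climb is a crude substitute for a proper gradient-based search.

```python
import numpy as np

def extract_features(image):
    # Hypothetical stand-in for a face-embedding model (e.g. a CNN).
    # A real attacker would use a learned feature extractor.
    rng = np.random.default_rng(0)
    projection = rng.standard_normal((image.size, 128))
    return image.flatten() @ projection

def cloak(image, epsilon=0.03, steps=100, step_size=0.005):
    """Nudge `image` so its features drift from the original,
    keeping every pixel change within +/- epsilon (imperceptible)."""
    original_features = extract_features(image)
    cloaked = image.copy()
    rng = np.random.default_rng(1)
    for _ in range(steps):
        # Propose a tiny random tweak (a crude substitute for a gradient step).
        candidate = cloaked + rng.uniform(-step_size, step_size, size=image.shape)
        # Keep the total change inside the imperceptibility budget.
        candidate = np.clip(candidate, image - epsilon, image + epsilon)
        candidate = np.clip(candidate, 0.0, 1.0)
        # Accept the tweak only if it pushes the features further away.
        if (np.linalg.norm(extract_features(candidate) - original_features) >
                np.linalg.norm(extract_features(cloaked) - original_features)):
            cloaked = candidate
    return cloaked

# Example: cloak a dummy 32x32 grayscale "face" with pixel values in [0, 1].
face = np.random.default_rng(2).random((32, 32))
protected = cloak(face)
print(np.abs(protected - face).max())  # stays within the epsilon budget
```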

To test how effectively cloaked images fooled real-world algorithms, the researchers trained the facial recognition services sold by Microsoft, Amazon, and Google on cloaked images. They found their cloaks were 100% effective when every image in the training data was cloaked. When about 15% of the training images were left uncloaked, the protection rate dropped below 40%.

“While we are under no illusion that this proposed system is itself future-proof, we believe it is an important and necessary first step in the development of user-centric privacy tools to resist unauthorized machine learning models,” the researchers wrote.

Other approaches to de-identification make visible changes to a person’s appearance in an attempt to limit whether they can be recognized by both humans and machines. While this use case may seem more obscure, it’s an approach that has garnered attention from Facebook and several universities.

Facebook researchers in Tel Aviv proposed a method to change a person’s appearance in live video, making them generally unrecognizable to machines.

Facebook’s approach to de-identifying a person uses a video and a previously taken image of that person. The algorithm finds the similarities between the image and the video, then tries to “distance” those similar features of the face when generating a new video. The idea is almost like a personally tailored deepfake, except it’s designed specifically not to look like the person it’s based on.
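As a loose sketch of that objective (not Facebook’s actual architecture, which trains a generator network end to end), the loss below rewards a generated frame that stays visually close to the original video frame while its face embedding moves away from the embedding of the reference photo. The embed_face function, the dummy frames, and the weights are all hypothetical placeholders.

```python
import numpy as np

def embed_face(frame):
    # Hypothetical face-embedding model; a real system would use a trained network.
    rng = np.random.default_rng(0)
    projection = rng.standard_normal((frame.size, 64))
    return frame.flatten() @ projection

def deidentification_loss(generated_frame, original_frame, reference_image,
                          distance_weight=1.0, similarity_weight=10.0):
    """Lower is better: stay visually close to the source frame while pushing
    the generated face's embedding away from the reference photo of the person."""
    # Perceptual term: penalize visible deviation from the original video frame
    # (plain L2 here; a real system would use a richer perceptual measure).
    perceptual = np.mean((generated_frame - original_frame) ** 2)
    # Identity term: distance between embeddings, subtracted so that
    # minimizing the loss maximizes the distance from the person's identity.
    identity_distance = np.linalg.norm(
        embed_face(generated_frame) - embed_face(reference_image))
    return similarity_weight * perceptual - distance_weight * identity_distance

# Example with dummy 64x64 grayscale frames in [0, 1] (hypothetical data).
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
reference = rng.random((64, 64))
print(deidentification_loss(frame + 0.01, frame, reference))
```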

“This allows, for example, the user to leave a natural-looking video message in a public forum in an anonymous way, that would presumably prevent face recognition technology from recognizing them,” the researchers wrote.

Two papers, one from Temple University and another a joint effort from Rochester and Purdue Universities, take a different tack. They generate randomized fake faces and stitch them on top of the faces being de-identified.

“The target of this technique is to fool face detection algorithms, which has been shown to be more capable than human beings, and, in the meantime, preserve the perceptual quality for human eyes,” Tao Li, a graduate student at Purdue University and co-author of the AnonymousNet paper, tells OneZero in an email.

This approach could be used as a redaction tactic to protect the privacy of people who appear in the background of images. Li says the problem of de-identification is typically associated with data publishing: how to share datasets that contain images of people without also sharing those people’s identities. In that case, changing a person’s face so it can’t be identified is enough.
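As a toy illustration of that kind of face-replacement redaction (not the AnonymousNet pipeline itself), the sketch below uses OpenCV’s bundled Haar cascade to find face regions and pastes a synthetic stand-in over each one; a real system would generate a realistic replacement face rather than blurred noise.

```python
import cv2
import numpy as np

# OpenCV's bundled Haar cascade; a real redaction pipeline would likely use a
# stronger detector and a generative model instead of the placeholder below.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def synthetic_face(width, height):
    # Placeholder "generated" face: random noise blurred into a soft patch.
    # An AnonymousNet-style system would generate a realistic face here.
    noise = np.random.randint(0, 255, (height, width, 3), dtype=np.uint8)
    return cv2.GaussianBlur(noise, (31, 31), 0)

def redact_faces(image_path, output_path):
    """Replace every detected face with a synthetic stand-in."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
        image[y:y + h, x:x + w] = synthetic_face(w, h)
    cv2.imwrite(output_path, image)

# Example usage (hypothetical file names):
# redact_faces("group_photo.jpg", "group_photo_redacted.jpg")
```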

Li also makes a case for personal use. If one person in a group photo doesn’t want their face on the web, the face-replacement technique could change that person’s face without blurring it and potentially ruining the photo.

But no matter how images are redacted on a case-by-case basis, redaction doesn’t change the underlying value of data, or the fact that bad actors will abuse the openness of the internet to obtain and resell it.

“Without minimizing the value of that research, the most important privacy changes we need are structural reforms to remold corporate incentives, not individual preventative measures,” Lindsey Barrett, staff attorney at Georgetown Law’s Communications & Technology Law Clinic, tells OneZero. Those big structural reforms look like a ban on facial recognition and a robust privacy law that prevents privacy-infringing data collection.

“Being careful about privacy settings is better than not being careful about them,” she says. “But we know after years of research and common sense that the problem of consumer data exploitation is too big and unwieldy for each of us to mitigate or prevent on our own.”

Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.
