The Tools to Defeat Facial Recognition Are Free Online
Facial recognition software is far from perfect — research has shown that it’s plagued with racial bias, for example — and now researchers have identified a flaw with the robotic gaze.
Research from Huawei’s Moscow Research Center details one way to thwart a popular open-source algorithm used to detect whether there’s a face in an image or not, a crucial first step before the system matches that face against a database of known faces.
The Huawei paper shows how two stickers, each with a specific pattern that looks like a deformed QR code, can evade a face detection algorithm 95% of the time once they’re placed on a subject’s cheeks. If you’re savvy, you can fork the code and play with it yourself on GitHub.
The attack on the face detector turns the algorithm against itself — it only works because the researchers had access to the program they were trying to fool. But that doesn’t mean it’s useless. Many commonly used facial recognition tools are built on open-source software that is available to all.
To develop the sticker hack, the team first took photos of themselves with checkered patches on their cheeks. Then they built an algorithm to alter the checkered cheek patterns in the images and test whether each change lowered the detector’s confidence that a face was present. The algorithm was set to tinker with the checkered boxes hundreds of times, checking confidence again and again, until further changes didn’t lower the probability of detecting a face.
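That kind of loop can be sketched in a few lines. The code below is a minimal, hypothetical illustration of the general idea — greedily perturbing a patch until a detector’s confidence stops dropping — not the researchers’ actual method. The `detector_confidence` function is a toy stand-in for a real face detector’s score, and all names and parameters are assumptions for illustration.

```python
import numpy as np

def detector_confidence(patch: np.ndarray) -> float:
    # Toy stand-in for a real face detector's confidence score.
    # A smooth function of the patch pixels, so the loop is runnable.
    return float(1.0 / (1.0 + np.abs(patch - 0.5).sum()))

def optimize_patch(patch: np.ndarray, steps: int = 300,
                   step_size: float = 0.05, seed: int = 0):
    """Greedy search: randomly tweak the patch, keep any change that
    lowers the detector's confidence, and return the best patch found."""
    rng = np.random.default_rng(seed)
    best = patch.copy()
    best_conf = detector_confidence(best)
    for _ in range(steps):
        noise = rng.uniform(-step_size, step_size, size=best.shape)
        candidate = np.clip(best + noise, 0.0, 1.0)
        conf = detector_confidence(candidate)
        if conf < best_conf:          # keep only changes that hurt detection
            best, best_conf = candidate, conf
    return best, best_conf

start = np.full((8, 8), 0.5)          # flat gray 8x8 patch as a starting point
patch, conf = optimize_patch(start)
```

The real attack operated on photographs and a specific open-source detector, and used many more iterations, but the structure — perturb, score, keep what works, stop when nothing improves — is the same.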
When the altered checkered patches were printed and applied to a face in real life, they still evaded the face detector.
Researchers noted that the patterns the algorithm generated were specific to the person in the original image — meaning each evasion has to be tailored to its user.
This research certainly isn’t a death blow to facial recognition — there are only two examples of it working, and the paper explicitly notes that a vulnerability like this could be patched. Huawei is likely studying the attack in order to secure its own A.I.
It’s also not the first attack of its kind. Research from 2016 used a similar technique to create glasses that would obscure parts of a subject’s face and fool a machine.
Still, this work illustrates that even after years of research on adversarial examples, the people creating A.I. and the people trying to trick it remain locked in a cat-and-mouse game that seems unlikely to end any time soon.
“There is no existing solution to mitigate this issue according to recent publications,” the researchers write.
And for those worried about the nearing dystopian future of always-on surveillance, knowing there’s a way around the machines tracking our whereabouts is a small comfort that the panopticon is not infallible — even if it means wearing QR codes on our faces.