A New Tool Jams Facial Recognition Technology With Digital Doppelgängers
Brighter AI promises to protect protesters. But is it enough?
There are many reasons why the movement to ban the police from using facial recognition technology is growing. This summer, reporters at the New York Times and Detroit Free Press revealed that Detroit police officers used faulty facial recognition to misidentify and wrongfully arrest two Black men, one for supposedly stealing watches and the other for allegedly grabbing someone else’s mobile phone. Recent reporting at Gothamist revealed the New York Police Department deployed facial recognition technology to investigate “a prominent Black Lives Matter Activist.”
Technology companies have been harshly criticized for providing law enforcement with facial recognition technology. While IBM got out of the business and Microsoft and Amazon emphasize that they’re not currently providing facial recognition technology to police, companies like Clearview AI, Ayonix, Cognitec, and iOmniscient continue to work with law enforcement. Not every technology company, however, marches to the same beat. Some startups aim instead to limit the dangers posed by facial recognition technology.
Berlin-based startup brighter AI recently launched a public interest campaign to help solve the problem of authorities using facial recognition technology to identify protesters. The campaign website, Protect Photo, provides a free privacy engineering service that quickly removes “facial fingerprints” from user-uploaded images.
Deploying proprietary “deep natural anonymization” software, the service scans the original photos, pinpoints a large number of facial features and the mathematical relations between them (how far apart the nose and mouth are, for example), infers demographic information (including age, ethnicity, and gender), and combines this data to create new images that look strikingly similar to the originals but contain an essential difference: new synthetic faces. CEO Marian Gläser claims these doppelgängers are facial-recognition-proof: automated systems can’t identify them.
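To make the idea concrete, here is a minimal illustrative sketch in Python of the geometric “facial fingerprint” concept described above. The landmark coordinates, attribute values, and function names are all invented for illustration; brighter AI’s actual deep natural anonymization pipeline is proprietary and far more sophisticated.

```python
# Illustrative sketch only: how a geometric "facial fingerprint" (pairwise
# distances between facial landmarks) changes when a synthetic face with
# different geometry replaces the original. Not brighter AI's actual code.

from math import dist  # Euclidean distance, Python 3.8+


def facial_fingerprint(landmarks):
    """Return pairwise distances between named landmarks -- the kind of
    geometric signature a facial recognition system might match against."""
    names = sorted(landmarks)
    return {
        (a, b): round(dist(landmarks[a], landmarks[b]), 1)
        for i, a in enumerate(names)
        for b in names[i + 1:]
    }


# Hypothetical landmark positions for an original face (pixel coordinates).
original = {"left_eye": (30, 40), "right_eye": (70, 40),
            "nose": (50, 60), "mouth": (50, 80)}

# A synthetic replacement face: it could preserve inferred attributes such as
# age and expression while deliberately shifting the underlying geometry.
synthetic = {"left_eye": (28, 42), "right_eye": (73, 41),
             "nose": (49, 63), "mouth": (51, 84)}

# The two faces produce different fingerprints, so a matcher keyed to the
# original geometry would not recognize the synthetic face.
assert facial_fingerprint(original) != facial_fingerprint(synthetic)
```

The sketch only captures the geometric-signature idea; the actual system also has to render a photorealistic face and blend it seamlessly into the scene.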