General Intelligence

Glasses Equipped With Facial Recognition Are Coming

New York-based Vuzix is selling augmented reality headsets to identify suspects


Welcome to General Intelligence, OneZero’s weekly dive into the A.I. news and research that matters.

If police around the world start wearing AR glasses equipped with facial recognition, there’s a good chance they’ll be made by Vuzix.

The Rochester, New York-based company has been by far the most bullish on the technology, partnering with companies around the world, including the infamous Clearview AI, to integrate facial recognition algorithms into its headset computer.

The push started in 2019, when Vuzix announced that it was partnering with another tech company, NNTC, to bring facial recognition to its devices. The technology was pitched as a solution for police and security professionals, who could now identify blacklisted individuals in real time. Suddenly, facial recognition without infrastructure like CCTV cameras became possible.

Now, Vuzix seems dead set on bringing facial recognition to its AR glasses. In February, Gizmodo reported that Vuzix was working with Clearview AI to bring its billion-person facial recognition to Vuzix’s AR glasses. (Clearview said at the time that the app was just a prototype.)

Vuzix also recently announced that it was working with a company called TensorMark to bring facial recognition to Vuzix’s headsets. Vuzix is pitching its product as a solution not just for security but also for border patrol, first responders, retail, hospitality, and banking.

Vuzix isn’t the only company in this space, either. Chinese tech company Rokid, which makes smart AR glasses, has trialed facial recognition algorithms, according to tests performed by the U.S. National Institute of Standards and Technology. Another Chinese company, LLVision, is also manufacturing glasses outfitted with facial recognition that look similar to the now-defunct Google Glass.

Facial recognition in an AR headset raises all the same issues as the technology does when deployed in CCTV cameras, including privacy and accuracy. But the small form factor also raises new questions: What shortcuts might have been taken to run facial recognition algorithms on smaller, weaker computing chips? Does anyone double-check the matches?

The predominant concerns about facial recognition aren’t the scenarios in which the technology works, but what happens when it doesn’t. Earlier this month, after OneZero wrote about Rank One’s new facial recognition system that relies only on eyes and eyebrows, the company’s co-founder and CEO Brendan Klare took umbrage with the phrase “Soon enough, you won’t be able to hide behind a mask.” Klare wrote that many applications of periocular recognition are opt-in experiences, like customer checkout or online banking, where a person wouldn’t want to remove a mask.

“The only applications where someone would consider ‘hiding’ is in the case of forensic face recognition, where the task is to assist in the manual identification of rapists, murderers, armed robberies, etc. based on digital media evidence,” he wrote.

But there is a good reason why innocent people might not want to be scanned by facial recognition. Take the case of Guillermo Federico Ibarrola, who was arrested after being wrongfully identified by Buenos Aires’ live facial recognition system. After six days in police custody, he was released, offered coffee and dinner, and given a bus ticket home.

As live facial recognition becomes more prevalent around the world, and is adapted to form factors convenient for law enforcement, accountability and transparency measures become exponentially more important. But barring any kind of facial recognition regulation not written by tech company employees, it doesn’t look like that will happen.

On a brighter note, here’s some interesting A.I. research from the week:

Unlocking New York City Crime Insights using Relational Database Embeddings

IBM researchers set out to create an algorithm for predictive policing, and claim it cannot be biased because the system doesn’t explicitly take race or gender into account. However, that’s not a silver bullet: data like location can act as a proxy for attributes like race.
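To see why excluding a sensitive attribute isn’t enough, here is a minimal toy sketch (the numbers and variable names are hypothetical, not from the IBM paper): when a feature the model *does* see, like a neighborhood code, is strongly correlated with a group attribute the model *doesn’t* see, predictions keyed off that feature reproduce the group split anyway.

```python
import random

random.seed(0)

# Hypothetical toy population: 'group' is a sensitive attribute that
# is deliberately EXCLUDED from the model's features. 'zip_code' is a
# visible feature correlated with it (e.g. via residential segregation):
# members of the group land in zip 1 with probability 0.9, others 0.1.
rows = []
for _ in range(1000):
    group = random.random() < 0.5
    p_zip1 = 0.9 if group else 0.1
    zip_code = 1 if random.random() < p_zip1 else 0
    rows.append((zip_code, group))

# A "group-blind" rule that only looks at zip_code still recovers
# the hidden attribute most of the time.
correct = sum((zip_code == 1) == group for zip_code, group in rows)
accuracy = correct / len(rows)
print(f"proxy accuracy: {accuracy:.0%}")  # close to 90% by construction
```

Any model trained on such a feature can therefore encode the excluded attribute implicitly, which is the core of the “proxy variable” concern.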

S2IGAN: Speech-to-Image Generation via Adversarial Learning

This could be the seed of a much-needed technology for anyone who has ever forgotten an actor’s name: an algorithm that creates an image from nothing but the spoken word. Soon, “You know the guy, the one with the mustache and the glasses” won’t be so vague.

RoadText-1K: Text Detection & Recognition Dataset for Driving Videos

Companies like Waymo and Lyft have released enormous datasets of the information arguably needed by self-driving cars to understand the rules of the road. This dataset, however, focuses specifically on reading road signs, an important task that machines would struggle to learn without dedicated training data.

Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.
