General Intelligence

‘Aggression Detection’ Is Coming to Facial Recognition Cameras Around the World

Russian firm NtechLab plans to roll out emotion detection features worldwide as soon as 2021

Dave Gershgorn
Published in OneZero
3 min read · Sep 25, 2020


An employee at NtechLab, the company that won Moscow's tender to supply facial recognition technology, demonstrates the technology during an interview with AFP on February 5, 2020. Photo: Kirill Kudryavtsev/AFP/Getty Images

OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.

NtechLab, maker of Russia's expansive real-time facial recognition surveillance system, is set to roll out "aggression detection" and "violence detection" features starting in 2021, which will alert law enforcement when the algorithm thinks someone is committing, or is about to commit, violence.

The firm recently got an injection of cash from the Russian government and an unnamed Middle Eastern partner. Flush with $15 million in new funds, it’s now eyeing expansion into Latin America, the Middle East, and potentially Asia, according to a report from Forbes.

But of course, when it comes to recognizing aggression, the algorithm’s interpretation of a situation could have a large margin of error.

Previous studies have found that facial recognition is racially biased, performing worse on people with darker skin, and tests of emotion detection across skin tones are even more damning.

A study that used commercial emotion-recognition tools to gauge the emotions in 400 photos of NBA players found that Black players were consistently rated as more "angry" than white players.

“Until facial recognition assesses black and white faces similarly, black people may need to exaggerate their positive facial expressions — essentially smile more — to reduce ambiguity and potentially negative interpretations by the technology,” study author Lauren Rhue wrote about the research.
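As a rough illustration of the kind of audit Rhue's study describes, here is a minimal sketch: score each photo with an off-the-shelf emotion classifier, then compare average "anger" scores across demographic groups. The CSV layout, column names, and file name below are hypothetical stand-ins, not the study's actual data or pipeline.

```python
# Minimal sketch of a bias audit over emotion-classifier output.
# Assumes a hypothetical CSV of per-photo scores with columns
# 'group' (e.g. "Black" / "white") and 'anger' (a classifier
# confidence in [0, 1]); this is not Rhue's actual dataset.
import csv
from collections import defaultdict

def mean_anger_by_group(csv_path: str) -> dict[str, float]:
    """Average the classifier's 'anger' score for each group."""
    scores: dict[str, list[float]] = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            scores[row["group"]].append(float(row["anger"]))
    return {group: sum(vals) / len(vals) for group, vals in scores.items()}

if __name__ == "__main__":
    means = mean_anger_by_group("emotion_scores.csv")  # hypothetical file
    for group, score in sorted(means.items()):
        print(f"{group}: mean anger score = {score:.3f}")
```

If the classifier treated faces similarly across skin tones, the group means would be roughly equal; a persistent gap of the kind the study reports is the bias signal.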

Let's keep it short and sweet: NtechLab is spreading an untested, morally corrupt algorithm to the governments least likely to quibble over the ethics of how it's used. And it's going to make a lot of money doing it.

Now, a quick pivot to some of the most interesting A.I. research of the week:

Generating fake disasters
