Welcome to General Intelligence, OneZero’s weekly dive into the A.I. news and research that matters. It is written by OneZero senior writer Dave Gershgorn.
Apple has spent the past three years convincing its customers that facial recognition is the future: It’s faster and more secure than a fingerprint reader, and the smaller sensors free up real estate for a bigger screen.
That may all still be true in the long term, but now the company’s two largest markets, the United States and China, have been advised to wear face masks in public to curb the spread of the coronavirus, rendering Apple’s millimeter-precise facial scan useless.
As its customers try to trick their $1,000-plus devices into recognizing their faces with a mask, Apple released the iPhone SE this week, formally extending the life of the company’s humble fingerprint sensor.
Apple plans years in advance for product releases, meaning this launch was set in motion months before the coronavirus pandemic gripped the world. But as it happens, the new iPhone SE is now the only device in the company’s current lineup that will actually work as intended while following health guidelines in public. (It’s also far less expensive than the company’s flagship phone, making it a more frugal choice for those who can’t repair their current iPhone or need a new device amid mass unemployment and pay cuts.)
The extension of Apple’s Touch ID does raise the question: What happens to facial recognition technology at a time when so many of our faces are obscured?
A.I. research already exists for identifying people wearing masks, and technology that could identify partially covered faces was a trend before the coronavirus. But the accuracy of those systems in practice is unclear, especially when even unobscured facial recognition has been shown to be wildly inaccurate. If police departments adopt these technologies under the guise of policing during a public health crisis, it could compound the already treacherous problem of inaccurate facial recognition.
We’re keeping track of enhanced surveillance technology deployed by governments in the time of coronavirus, which we’re updating weekly, so keep an eye out for further developments on this front.
Below are a few A.I. research papers from this week that may be of interest.
Dozens of experts have published a set of guidelines for how organizations using A.I. can make verifiable claims about their algorithms’ accuracy and consequences.
Defense contractor Lockheed Martin and Purdue University researchers are trying to automatically detect the impact of natural disasters on buildings from satellite data. The algorithm detects individual buildings and classifies them as no damage, minor damage, major damage, or destroyed.
Every smartphone ever made might have a fingerprint sensor: the camera. A new study looks at a way to identify a person’s finger when they press it over a smartphone camera by detecting changes in light as blood flows through the finger.
Research from Guru Gobind Singh Indraprastha University uses a 47-question test to tell whether roommates are compatible. Those with a high “party hard” score might not match with roommates with a high “don’t talk” score.
If you thought the ability to generate realistic facial hair on a photo was only good for Instagram filters, never fear: Adobe, Tencent, and ByteDance say it’s good for enhancing deepfakes as well.
A new competition at an international speech A.I. conference hopes to push forward the ability to automatically detect Alzheimer’s dementia from a clip of a person talking. Startups like Winterlight Labs are also trying to build out this capability.