A Facial Recognition Giant Refuses to Share Details About Its Algorithm Dataset
NEC claims its systems aren’t biased — but rejects calls for transparency
The South Wales Police have been using live facial recognition since 2017 and claim to be the first force in the United Kingdom to make an arrest with the technology. Officers have surveillance cameras mounted on top of a large white van; the cameras record pedestrians and try to determine their identities.
While many facial recognition programs are hidden from the public eye or revealed only through public records requests, the U.K. has been touting its programs, publicizing the use of NEC’s technology by the South Wales Police and London’s Metropolitan Police.
How the U.K. determines whether facial recognition can be used within its borders could set a precedent for other European countries, just as the first cities in the United States to ban the technology have done.
Privacy advocates in the United States have been pushing a more radical solution: You don’t have to worry about biased data being used to train a facial recognition algorithm if you ban facial recognition entirely. As companies like Amazon, Microsoft, and IBM either pause or step away from police facial recognition contracts and municipalities like San Francisco and Boston ban the technology, the movement against facial recognition is undoubtedly growing in the United States.
A rebuke in the U.K. could turn U.S. activists’ movement into a worldwide one — a significant threat to NEC, which has more than 1,000 contracts around the globe.
NEC’s response to the lawsuit has lacked detail, to say the least, according to The Register.
The company has allegedly refused to provide any details of what data is used to train its algorithms to distinguish one face from another, and the police using the technology don’t know how the algorithm was trained. There’s reason for concern: A trial of NEC’s technology in 2018 had a 98% failure rate, and a 2019 audit found an 81% false positive rate.
Understanding the data used by the system is a crucial component of determining potential racial bias in the algorithm—information NEC seems to have no interest in divulging. In effect, the company is asking the public to take it at its word, and it’s unclear if that trust has been earned.
Now, for a change of pace.
In this week’s A.I. research, we’re going to focus on three papers that each deal with a different problem for self-driving cars. The entire autonomous vehicle industry is built on a single assumption: Driving on a public road is a task that a robot can accomplish.
Driving itself is actually fairly simple, and cars have been navigating their own journeys since 2004, when DARPA held its first Grand Challenge. But driving alongside other vehicles in every weather condition on every kind of road, from highways to back roads, is another feat altogether.
Suddenly, driving isn’t just controlling a steering wheel; it’s a mountain of subtasks, like identifying surrounding cars, seeing through rain, and dealing with hackers.
Here’s a look at some ideas for these problems:
Self-driving cars typically have more than one camera to give them a 360-degree view. This research describes a way for a car to track surrounding vehicles as they move between the fields of view of its different cameras.
Humans typically have no problem driving in the rain. But for algorithms, rain can completely alter a dataset, whether through raindrops on a camera lens or shifting contrast between objects under cloud cover. The algorithm in this paper seeks to “de-rain” image data, making it usable for detecting cars and signs around the vehicle.
Towards Robust LiDAR-Based Perception in Autonomous Driving: General Black-Box Adversarial Sensor Attack and Countermeasures
Anything as important and dangerous as a car is going to attract the attention of hackers — and researchers have discovered vulnerabilities as simple as a piece of tape that can trick autonomous cars. This paper from the University of Michigan and UC Irvine describes a sophisticated attack on an autonomous car’s lidar sensor. The attack creates a fake vehicle in front of the car, potentially making it come to a stop.