21 States Are Now Vetting Unemployment Claims With a ‘Risky’ Facial Recognition System

ID.me has rejected some legitimate claimants in addition to fraudsters

Illustration: Taylor Le

On January 25, California officials told the public that while the state had paid out $114 billion in unemployment benefits, auditors had found a problem. More than $11 billion of those payouts were fraudulent.

To address this, California officials hired ID.me, a 10-year-old startup, to ensure that every person who receives benefits is actually eligible. ID.me provides an app where users can upload pictures of their government documents, like a driver’s license and passport, as well as a selfie. The company says it will then use A.I. algorithms and facial recognition to authenticate the documents and compare ID pictures to the new selfie.
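ID.me does not disclose how its Face Match feature works (it declined to answer OneZero's questions about its algorithms), so the following is only an illustrative sketch of how face verification systems commonly operate: a neural network maps each photo to an embedding vector, and two photos are judged a match when their embeddings are sufficiently similar. The threshold and the toy vectors below are invented stand-ins, not anything from ID.me.

```python
import math

# Hypothetical face-verification sketch. The "embeddings" here are
# hard-coded toy vectors standing in for the output of a face-embedding
# model; the threshold is an invented round number.

MATCH_THRESHOLD = 0.8

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def faces_match(id_embedding, selfie_embedding):
    """Accept automatically if similarity clears the threshold;
    otherwise the claim falls back to manual review (the video
    call described later in this article)."""
    return cosine_similarity(id_embedding, selfie_embedding) >= MATCH_THRESHOLD

# Toy vectors: a selfie that closely resembles the ID photo...
id_photo = [0.9, 0.1, 0.4]
selfie_same = [0.85, 0.15, 0.38]
# ...and a different person's selfie that points elsewhere.
selfie_other = [0.1, 0.9, 0.2]

print(faces_match(id_photo, selfie_same))   # similar vectors match
print(faces_match(id_photo, selfie_other))  # dissimilar vectors do not
```

The key point for the rest of the article: wherever that threshold is set, some legitimate selfies will fall below it, and those claimants get routed to a slower manual process.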

ID.me claims this system prevents $1 billion in fraud per week across the 21 states that already use the service for claims related to Covid-19. California has suspended 1.4 million unemployment claims under suspicion of fraud since it adopted the system. While 300,000 of those suspended Californians have been able to verify themselves, 1.1 million now have less than 30 days to do so through ID.me’s service.

But along with any alleged fraudsters, legitimate claimants have also been rejected by the company’s machine learning and facial recognition systems — leading to massive delays in life-sustaining funds.

For instance, California resident Tarri O’Donnell told ABC7 News that after she had uploaded her passport and scanned her face for ID.me’s facial recognition, the software couldn’t confirm her identity. She then needed to wait hours for a video call so an ID.me representative could do it manually.

Another Californian waited more than 48 hours to get on a video call with the company after their identity couldn’t be verified automatically through the app. Hundreds of Twitter users have tweeted at the company this month, unable to access unemployment benefits and begging it to intervene on their accounts. Unemployment in California currently stands at 9%, after spiking above 15% in 2020. For many residents, obtaining these funds is crucial, especially with avenues of work still closed due to the coronavirus pandemic.

ID.me started in 2010 as TroopSwap, which provided retail deals for military service members and their families. CEO Blake Hall and company president Matt Thompson are both former Army Rangers. In 2012, TroopSwap launched TroopID, which verified that people claiming military benefits had actually served. The company formally changed its name to ID.me in 2014 to pursue the wider field of identity verification. In 2016, the Department of Veterans Affairs hired ID.me to verify veterans’ identities on the department’s “Vets.gov” website.

ID.me doesn’t disclose the details of how its system works, which the company claims is an effort to prevent bad actors from gaming it. In a written statement to OneZero attributed to CEO Blake Hall, the company says that it licenses its facial recognition algorithms and that its Face Match feature has a 95% or higher “pass rate.” However, the company declined to answer OneZero’s questions about who created or supplies the algorithms, what their accuracy and training data are, or why facial recognition is the best approach for this task.

The company said it had not seen a statistical difference in facial verification rates broken down by gender or race but didn’t offer any additional evidence.

The National Institute of Standards and Technology has found that even the best-performing facial recognition systems it has tested do not perform as well on women and people with darker skin tones. In some cases, people with darker skin tones were misidentified 10 to 100 times more often than people with lighter skin tones. And even if that amounts to just a few misidentifications out of 1,000 matching attempts, it could mean thousands of people denied or delayed services when facial recognition is deployed at a state or federal scale.
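The scale argument above is simple arithmetic, and it can be made concrete. The 1.1 million figure comes from earlier in this article; the failure rates below are hypothetical round numbers for illustration, not measurements of ID.me's system.

```python
# Back-of-the-envelope arithmetic: how many people a small per-attempt
# failure rate affects at the scale of California's suspended claims.
# The claimant count is from the article; the rates are hypothetical.

claimants = 1_100_000  # Californians still needing to verify via ID.me

for errors_per_thousand in (1, 5, 10):
    error_rate = errors_per_thousand / 1000
    wrongly_flagged = int(claimants * error_rate)
    print(f"{errors_per_thousand}/1,000 failure rate -> "
          f"{wrongly_flagged:,} people pushed to manual review")
```

Even at one failure per thousand attempts, over a thousand legitimate claimants would be bounced to the slower manual process; at ten per thousand, it is over ten thousand.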

Inioluwa Deborah Raji, a fellow at the Mozilla Foundation and co-author of the seminal Gender Shades facial recognition audit, says ID.me’s system recalls the U.K.’s passport photo checker, which attempted to use facial recognition to verify that certain portraits were suitable for government ID. However, the tool told women with darker skin that their portraits were unsuitable 22% of the time, compared to 9% for lighter-skinned men, according to a BBC investigation.

“It’s the same reason why it’s frustrating when I arrive at an airport and the machine that’s meant to match my face with my passport doesn’t work for me under the wrong lighting because of my darker skin tone,” she told OneZero. “You have to make a case for yourself to the company, to customer service, to whoever you have access to, that you still deserve whatever’s on the other side of that facial verification match.”

Raji says it’s a red flag that the company hasn’t disclosed any statistics for how its algorithm performs or given any indication that it has tested the technology on different skin types. She also warns that facial recognition data can sometimes be collected for one purpose and used for another.

For instance, in January, the FTC sanctioned A.I. firm Paravision for collecting personal photos through a consumer photo-sharing app and then using that data to train facial recognition algorithms. Those algorithms were sold to enterprise customers, including a contract with the U.S. military.

So far, there have been no news reports of bias in ID.me’s facial recognition systems, the company isn’t known to resell facial recognition systems or data, and it’s still unclear why the system is currently locking out some users. But its use of facial recognition for verification means the company is relying on a tool that has been proven to be flawed in the past and must collect sensitive face data that others have shown can be misused.

“It’s just incredibly frustrating to continue to see that people are still defaulting to the most risky version of that verification process,” Raji says. “There are so many more creative avenues that could be explored that keep getting ignored in favor of trying to make facial recognition work.”

Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.
