General Intelligence

Border Patrol Has Used Facial Recognition to Scan More Than 16 Million Fliers — and Caught Just 7 Imposters

A new report lays out CBP’s shoddy implementation of facial recognition technology

Photo illustration, source: Yasin Akgul/Getty Images

The agency that runs the United States’ airport and border facial recognition program has failed to properly tell the public how it works, a new report has found. Taken as a whole, the report reads like a major red flag: The U.S. government is charging ahead with the adoption of this questionable technology, and it’s neither informing the public nor keeping proper tabs on accuracy.

By law, U.S. Customs and Border Protection (CBP) is supposed to inform the public when facial recognition is in use by putting up clear, legible signs telling people that their faces are being scanned and how they can opt out. The agency is also supposed to post accurate, up-to-date information about its facial recognition programs online and provide information through its call center.

But a new report from the federal Government Accountability Office (GAO) found CBP lacking on all of these counts. Signs disclosing the use of facial recognition were hidden behind larger signs at airports, and some contained outdated information. Some signs didn’t tell people how to opt out, or what would happen if they asked to.

Without these signs, people were less likely to opt out or to question whether they needed to submit to a facial scan. Privacy organizations told GAO that CBP discourages opting out, with the justification that doing so “would lead to additional security scrutiny, increased wait times, and could be grounds to deny boarding.” GAO found that CBP officers weren’t present to address opt-out requests at airports, meaning that travelers who opted out had to wait while additional staff were called in to address their concerns.

Investigators also couldn’t always reach the CBP call center to ask for opt-out information. When they did reach staff at the call center, the operators couldn’t answer questions about the facial recognition programs. (Welcome to the club, GAO.)

The CBP, in turn, blamed airports, airlines, and cruise operators for failing to communicate with passengers. The CBP relies on these private companies to post its privacy signs, but it has done little to make sure the signs are actually present: The agency checked only one of the 20 locations where facial recognition was being tested, and it has no plans to verify signage at the others.

The report also contained some interesting statistics about the effectiveness of the facial recognition programs in catching individuals traveling under false identities. At airports, CBP had scanned more than 16 million passengers arriving in the United States through May 2020 and stopped a total of seven imposters. At the southern border, facial recognition was used to scan 4.4 million pedestrians crossing into the United States between September 2018 and December 2019, stopping 215 imposters. That works out to roughly one imposter per 2.3 million air travelers, versus about one per 20,000 pedestrians.

The CBP is also using facial recognition at commercial vehicle crossings in Brownsville, Texas, and at the Peace Bridge on the northern border in Buffalo, New York. No imposters were reported caught at either location.

The CBP highlighted one success story from the southern border to GAO: The agency used facial recognition to catch a pedestrian trying to enter the United States wearing Halloween makeup.

Perhaps the most glaring oversight GAO noted, however, was that CBP doesn’t regularly monitor the accuracy of its facial recognition systems. The border agency samples data from just two flights per airport per week to check that images are of good quality and that accuracy stays above the acceptable threshold of 90%. According to the report, this process doesn’t alert CBP to ongoing daily issues until days or weeks after they start to occur.
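
To see why that sampling cadence matters, here’s a minimal sketch, with entirely hypothetical numbers and data structures (nothing here reflects CBP’s actual systems), of how a twice-weekly audit can miss a camera fault for days:

```python
import random

random.seed(1)

THRESHOLD = 0.90      # minimum acceptable match rate cited in the GAO report
FAULT_DAY = 4         # hypothetical: a camera fault begins on day 4
AUDIT_DAYS = {0, 3}   # two audited flights per week (hypothetical days-of-week)

def daily_match_rate(day):
    """Hypothetical match rate: ~97% when healthy, ~80% after the fault."""
    base = 0.97 if day < FAULT_DAY else 0.80
    return base + random.uniform(-0.02, 0.02)

detected_on = None
for day in range(21):
    rate = daily_match_rate(day)
    # Only flights on audited days are ever checked against the threshold.
    if day % 7 in AUDIT_DAYS and rate < THRESHOLD:
        detected_on = day
        break

print(f"fault began on day {FAULT_DAY}, detected on day {detected_on}")
```

In this toy run the fault starts on day 4 but isn’t seen until the next audited flight on day 7; a daily check would have flagged it immediately.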

The GAO report reflects the confusing, uneven use of facial recognition technologies at border crossings today. While the number of cameras pointing at our faces continues to grow, we know less and less about why they’re scanning us or what databases they’re matching us against.

And now, here’s some of the most interesting A.I. research of the week:

Facebook explores 3D photos

New research from Facebook looks at how to convert a 2D photo into 3D, using an algorithm that guesses what the scene would look like from other angles and then fills in the gaps. One example shows a person standing in the snow: the algorithm uses the surrounding snow as context to fill in the occluded areas, making it possible to view the photo from an entirely new perspective.
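
Pipelines like this typically estimate per-pixel depth, warp the image to a new viewpoint, and inpaint the holes that appear. Here’s a minimal numpy sketch of just the warping step, with hypothetical inputs and a simplified parallax model (not Facebook’s actual method):

```python
import numpy as np

def reproject(image, depth, baseline=0.05, focal=1.0):
    """Warp pixels to a horizontally shifted virtual camera. Parallax is
    proportional to baseline / depth (pinhole model), so near pixels move
    farther than distant ones. Returns the new view plus a mask of
    disoccluded holes for an inpainting model to fill. (A full
    implementation would z-order writes so nearer pixels win ties.)"""
    h, w, _ = image.shape
    view = np.zeros_like(image)
    holes = np.ones((h, w), dtype=bool)
    shift = (focal * baseline * w / np.maximum(depth, 1e-3)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + shift[y, x]
            if 0 <= nx < w:
                view[y, nx] = image[y, x]
                holes[y, nx] = False
    return view, holes

# Hypothetical inputs; in practice depth comes from a monocular depth network.
rgb = np.random.rand(64, 64, 3)
depth = np.linspace(1.0, 5.0, 64 * 64).reshape(64, 64)
view, holes = reproject(rgb, depth)
print(f"{holes.mean():.1%} of the shifted view needs inpainting")
```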

Introducing your worst Flightmare

A team from the University of Zurich has released a new quadcopter simulator called Flightmare, which lets researchers insert hundreds of digital quadcopters and have them learn to fly. The simulator’s main strength is its flexibility: researchers can have their algorithms make decisions up to 20,000 times per second, meaning the quadcopters can learn to fly by making incredibly granular adjustments.
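
Flightmare itself is a C++ simulator with its own bindings and API; purely to illustrate what a 20,000-step-per-second control loop looks like, here’s a hypothetical toy quadrotor stepped at that rate (none of these names or dynamics come from Flightmare):

```python
DT = 1.0 / 20_000  # one control decision every 50 microseconds

class ToyQuadrotor:
    """Hypothetical vertical point-mass stand-in, not Flightmare's dynamics."""
    def __init__(self):
        self.alt = 0.0
        self.vel = -1.0   # start descending at 1 m/s

    def step(self, thrust):
        accel = thrust - 9.81          # net vertical acceleration, m/s^2
        self.vel += accel * DT
        self.alt += self.vel * DT

def hover_policy(vel):
    """Hypothetical controller: hover thrust plus damping on vertical speed."""
    return 9.81 - 5.0 * vel

quad = ToyQuadrotor()
for _ in range(20_000):                # one simulated second at 20 kHz
    quad.step(hover_policy(quad.vel))

print(f"velocity damped to {quad.vel:+.4f} m/s, altitude {quad.alt:+.3f} m")
```

At this step size each individual correction is tiny, which is exactly the granularity the simulator’s speed makes possible.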

A proposal for a new kind of A.I. assistant

Google Assistant and Siri are cool, but a paper from the University of Alberta suggests that a better way to build intelligent assistants would be to train them to create and edit text. The authors argue that text editing is a small enough jump beyond mere transcription of dictation that algorithms can still master the domain, and that commands like “remove that apostrophe” are clear to a person but require the algorithm to understand simple context it can see.
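
The paper proposes learning this behavior; just to make the task concrete, here’s a hypothetical rule-based toy that handles a few such commands (everything here is invented for illustration):

```python
import re

def apply_edit(text: str, command: str) -> str:
    """Toy interpreter for a few dictation-editing commands (hypothetical;
    a learned assistant would generalize rather than rely on hand-coded rules)."""
    if command == "remove that apostrophe":
        # "that" = the most recent apostrophe the speaker is looking at.
        i = text.rfind("'")
        return text[:i] + text[i + 1:] if i != -1 else text
    if command.startswith("capitalize "):
        word = command[len("capitalize "):]
        return re.sub(rf"\b{re.escape(word)}\b", word.capitalize(), text, count=1)
    if command == "delete the last word":
        return text.rsplit(None, 1)[0] if " " in text else ""
    return text

draft = "meet me at joes' place tomorrow"
draft = apply_edit(draft, "remove that apostrophe")  # joes' -> joes
draft = apply_edit(draft, "capitalize joes")         # joes  -> Joes
print(draft)  # meet me at Joes place tomorrow
```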

More A.I. for anime nerds

There is so much A.I. research centered on anime and manga that I can only conclude every A.I. researcher is a massive nerd. This paper is for the cosplayers, though: Research from Doshisha University and Zozo Technologies in Japan attempts to generate images of real clothing from screenshots of anime. The authors say this would allow realistic costumes to be generated automatically, making them easier to recreate in real life.

Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.
