The FTC Forced a Misbehaving A.I. Company to Delete Its Algorithm

Could Google and Facebook’s algorithms be next?

Dave Gershgorn
3 min read · Jan 19, 2021

OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.

In 2019, an investigation by NBC News revealed that photo storage app Ever had quietly siphoned billions of its users’ photos to train facial recognition algorithms.

Pictures of people’s friends and families, which they had thought were private, were in fact being used to train algorithms that Ever then sold to law enforcement and the U.S. military.

Two years later, the Federal Trade Commission has made an example of Ever’s parent company, Everalbum, which has since rebranded as Paravision. In a decision posted January 11, the FTC required Paravision to delete all the photos it had secretly taken from users, as well as any algorithms it built using that data.

Making a company delete ill-gotten data isn’t new, according to experts who spoke to OneZero. But making them delete an algorithm is.

This decision from the FTC, alongside a statement from commissioner Rohit Chopra, draws a line in the sand, warning companies that the penalty for abusing consumer data to train A.I. algorithms won’t just be a slap on the wrist.

“The FTC’s proposed order requires Everalbum to forfeit the fruits of its deception,” Chopra wrote, noting that past data protection cases had allowed companies to keep technology built on ill-gotten data. “This is an important course correction.”

Data is a critically important part of building functional A.I. algorithms. In 2012, an industry competition — the ImageNet challenge — led to the discovery that a specific A.I. technique called deep learning, crucially coupled with an enormous dataset, was miles more accurate than anything before it.

This idea led to the tech industry adopting mantras like “data is the new oil,” and kicked off the compilation of gigantic datasets to train more A.I. models. Large and specialized datasets, like billions of images of faces, could be the differentiating factor between a failed algorithm and a successful one. But…
