General Intelligence

Facebook Tested a New A.I.-Powered Misinformation Detector Months Before the Election

It’s unclear whether the platform adopted the new approach

Dave Gershgorn
Published in OneZero · 3 min read · Nov 6, 2020

Photo illustration source: Sean Gallup/Getty Images

OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.

Months ahead of the 2020 election, Facebook revealed it had created a better way to identify bad actors on the platform. Published in August 2020, the research details a system Facebook was testing that might seem incredibly simple: Instead of looking only at an account’s posting history or only at its friend list to determine whether the account is fake or inauthentic, look at both the friend list and the posting history together.

On Facebook’s platform of 2.7 billion users, improving methods for finding misinformation and fake accounts by even a percentage point or two could mean thousands or millions more items flagged. Facebook didn’t immediately respond to a request for comment on whether it has put this system into use.

The suggested algorithm predicts how likely an account is to be fake, or how likely a post is to be hateful, by creating a kind of timeline of interactions from the account or with the post, according to the paper. The…
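The paper itself is behind the cut above, so as a rough illustration only, here is a minimal sketch of the general idea: score an account by combining friend-list features with a chronological timeline of its interactions, rather than relying on either signal alone. Every name, feature, and weight below is hypothetical and chosen for readability; none of it comes from Facebook’s system.

```python
# Illustrative sketch only: combine friend-graph features with
# interaction-timeline features into one fake-account score.
# All feature names and weights are hypothetical, not Facebook's.

import math
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Interaction:
    timestamp: datetime
    kind: str  # e.g. "post", "comment", "friend_request"


@dataclass
class Account:
    friend_count: int
    avg_mutual_friends: float
    interactions: List[Interaction] = field(default_factory=list)


def timeline_features(interactions: List[Interaction]) -> List[float]:
    """Summarize the account's interaction timeline (toy features)."""
    ordered = sorted(interactions, key=lambda i: i.timestamp)
    n = len(ordered)
    if n < 2:
        return [float(n), 0.0]
    span = (ordered[-1].timestamp - ordered[0].timestamp).total_seconds()
    burstiness = n / max(span, 1.0)  # interactions per second over the span
    return [float(n), burstiness]


def fake_account_score(acct: Account) -> float:
    """Toy logistic score over graph features plus timeline features."""
    x = [float(acct.friend_count), acct.avg_mutual_friends]
    x += timeline_features(acct.interactions)
    # Hand-set weights for illustration; a real system would learn these.
    w = [-0.001, -0.5, 0.01, 2.0]
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))
```

The point of the sketch is the combination step: the same score function sees both who the account is connected to and how it behaves over time, which is the shift the research describes relative to judging either signal in isolation.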


Published in OneZero

OneZero is a former publication from Medium about the impact of technology on people and the future. Currently inactive and not taking submissions.

Written by Dave Gershgorn

Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.