OneZero

OneZero is a former publication from Medium about the impact of technology on people and the future. Currently inactive and not taking submissions.


General Intelligence

Facebook Tested a New A.I.-Powered Misinformation Detector Months Before the Election

It’s unclear whether the platform adopted the new approach

Dave Gershgorn
Published in OneZero
Nov 6, 2020 · 3 min read

Photo illustration source: Sean Gallup/Getty Images

OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.

Months ahead of the 2020 election, Facebook revealed it had created a better way to identify bad actors on the platform. Published in August 2020, the research details a system Facebook was testing that might seem incredibly simple: Instead of looking at only an account's posting history or only its friend list to determine whether the account is fake or inauthentic, look at both the friend list and the posting history together.

On Facebook’s platform of 2.7 billion users, improving methods for finding misinformation and fake accounts by even a percentage point or two could mean thousands or millions more items flagged. Facebook didn’t immediately respond to a request for comment on whether it has put this system into use.

According to the paper, the proposed algorithm predicts how likely an account is to be fake, or a post to be hateful, by constructing a kind of timeline of interactions from the account or with the post. Each interaction is broken into parts: who was involved, what kind of interaction took place, and when it took place. The algorithm then tries to match the timeline to past examples of bad behavior.
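The who/what/when breakdown described in the paper can be sketched as a simple data structure. This is an illustrative reconstruction, not Facebook's published schema; the `Interaction` record and `build_timeline` helper are hypothetical names.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Interaction:
    """One event on a post or from an account (hypothetical schema)."""
    actor_id: str     # who was involved
    kind: str         # what kind of interaction: "like", "comment", "share", ...
    timestamp: float  # when it took place (seconds since epoch)


def build_timeline(events: List[Interaction]) -> List[Tuple[str, str, float]]:
    """Order events chronologically and re-express each time as an offset
    from the first event, so timelines from different posts are comparable.
    A sequence like this is what a learned model could then compare against
    past examples of bad behavior."""
    ordered = sorted(events, key=lambda e: e.timestamp)
    t0 = ordered[0].timestamp if ordered else 0.0
    return [(e.actor_id, e.kind, e.timestamp - t0) for e in ordered]
```

The normalization to time offsets is one plausible design choice: it lets the model focus on the *pattern* of activity (say, a burst of likes in seconds) rather than on absolute dates.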

By merging this dynamic data (the timeline of post reactions or comments) with static data Facebook already holds, such as the friends an account has or the groups it belongs to, researchers get a more complete picture of whether an account is real or fake, or whether a post is targeted misinformation.
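That merge of dynamic and static signals can be sketched as concatenating two feature sets into one vector for a downstream classifier. Everything here is an assumption for illustration: the paper does not publish the model's actual feature set, and the summary statistics below merely stand in for a learned sequence embedding.

```python
from typing import List, Tuple

# (who, interaction kind, seconds since the first event)
Timeline = List[Tuple[str, str, float]]


def merge_features(timeline: Timeline, friend_count: int, group_count: int) -> List[float]:
    """Concatenate a crude summary of the dynamic timeline with static
    account features into one vector a classifier could score."""
    n = len(timeline)
    duration = timeline[-1][2] if timeline else 0.0
    rate = n / duration if duration > 0 else float(n)  # events per second
    unique_actors = len({who for who, _, _ in timeline})
    # dynamic summary first, then static data already known about the account
    return [float(n), duration, rate, float(unique_actors),
            float(friend_count), float(group_count)]
```

A timeline of three events over four seconds from two distinct actors, on an account with 10 friends and 3 groups, yields a six-number vector mixing both kinds of evidence.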

The research paper also revealed that some of the company's existing algorithms for detecting fake behavior were laboriously built by hand. The researchers call out one algorithm for detecting fake engagement that factors in more than 1,000 data points, each hand-chosen by the engineers who built it. A deep learning approach would largely allow the algorithm to learn those signals itself.




Written by Dave Gershgorn

Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.
