Facebook Tested a New A.I.-Powered Misinformation Detector Months Before the Election
OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.
Months ahead of the 2020 election, Facebook revealed it had created a better way to identify bad actors on the platform. Published in August 2020, the research details a system Facebook was testing that might seem incredibly simple: Instead of looking only at an account's posting history or only at its friend list to determine whether the account is fake or inauthentic, look at both the friend list and the posting history together.
On Facebook’s platform of 2.7 billion users, improving methods for finding misinformation and fake accounts by even a percentage point or two could mean thousands or millions more items flagged. Facebook didn’t immediately respond to a request for comment on whether it has put this system into use.
According to the paper, the proposed algorithm predicts how likely an account is to be fake, or a post to be hateful, by building a kind of timeline of interactions from the account or with the post. It breaks each interaction into parts: who was involved, what kind of interaction took place, and when it happened. The algorithm then tries to match that timeline against past examples of bad behavior.
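The paper doesn't publish its code, but the who/what/when decomposition described above can be sketched in miniature. In this hypothetical illustration (all names, the `Interaction` structure, and the pattern-matching logic are assumptions, not Facebook's actual system), each interaction is a small record, and a timeline is matched against a known bad-behavior sequence:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interaction:
    actor: str        # who was involved (hypothetical account ID)
    action: str       # what kind of interaction: "like", "comment", "share"
    timestamp: float  # when it took place (seconds since some epoch)

def action_sequence(timeline):
    """Reduce a timeline to its time-ordered sequence of action types."""
    return [i.action for i in sorted(timeline, key=lambda i: i.timestamp)]

def matches_pattern(timeline, bad_pattern):
    """Crude stand-in for matching against past examples of bad behavior:
    does the timeline contain the pattern as a contiguous run of actions?"""
    seq = action_sequence(timeline)
    n = len(bad_pattern)
    return any(seq[i:i + n] == bad_pattern for i in range(len(seq) - n + 1))

# Hypothetical example: a rapid burst of shares with no comments,
# the kind of sequence one might associate with fake engagement.
timeline = [
    Interaction("acct_1", "share", 3.0),
    Interaction("acct_2", "share", 1.0),
    Interaction("acct_3", "share", 2.0),
]
print(matches_pattern(timeline, ["share", "share", "share"]))  # True
```

A real system would learn these patterns from labeled data rather than check hand-written sequences, but the data shape is the same: actor, action type, time.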
By merging this dynamic data (the timeline of post reactions or comments) with static data Facebook already holds, such as the friends an account has or the groups it belongs to, researchers get a more complete picture of whether an account is real or fake, or whether a post is targeted misinformation.
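One simple way to picture the merge is as feature concatenation: summarize the dynamic timeline, append the static account facts, and score the combined vector. The sketch below is a hypothetical toy (the feature names, weights, and linear scorer are all assumptions); a production model would learn the weights rather than set them by hand:

```python
def dynamic_features(action_counts):
    """Summarize the interaction timeline as simple counts (toy version)."""
    return [action_counts.get("share", 0), action_counts.get("comment", 0)]

def static_features(account):
    """Static facts already known about the account (toy version)."""
    return [account["friend_count"], account["group_count"]]

def fake_score(account, action_counts, weights):
    """Hand-set linear score over the concatenated feature vector."""
    feats = dynamic_features(action_counts) + static_features(account)
    return sum(w * f for w, f in zip(weights, feats))

# Hypothetical suspicious profile: few friends, many groups,
# lots of shares, never comments.
account = {"friend_count": 2, "group_count": 40}
counts = {"share": 120, "comment": 0}
# Hypothetical weights: heavy sharing and many groups raise suspicion;
# comments and friends lower it.
weights = [0.5, -1.0, -0.2, 0.3]
print(round(fake_score(account, counts, weights), 1))  # 71.6
```

The point of combining the two feature families is exactly the paper's: neither the timeline nor the static profile alone separates real from fake accounts as well as both together.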
The research paper also revealed that some of the company's existing algorithms for detecting fake behavior were laboriously built by hand. The researchers call out one fake-engagement algorithm in particular that factors in more than 1,000 different data points, each hand-chosen by the engineers who built it. A deep learning approach would instead allow the algorithm to learn those signals largely on its own.
This new tech can only surface posts or accounts that potentially break Facebook's rules; deciding how those rules are enforced and when action is taken is mostly left up to Facebook's human moderators.
The rules can be confusing, especially during an election. On November 2, Vice reported that moderators were told to remove content only if it was phrased a specific way, rather than based on its intended meaning. It's unclear whether the algorithm is sensitive to the specific phrasings Facebook will or won't allow under its rules, such as the difference between prematurely calling a state for a candidate, which is allowed, and calling the entire election, which isn't. Facebook is also known for frequently updating or changing its moderation rules, and it's not clear whether a new algorithm would need to be retrained each time that happens.