The Case To Reform the Share Button, According to Facebook’s Own Research
New leaked document shows Facebook’s Share button spreads misinformation pervasively after two hops down the chain
The following is a selection from Big Technology, a newsletter by Alex Kantrowitz. To get it in your inbox each week, you can sign up here.
In spring 2019, Facebook researchers looked into whether the Share button helped amplify misinformation. In a report called “Deep Reshares and Misinformation,” they confirmed their suspicions.
The report noted that people are four times more likely to see misinformation when they encounter a post via a share of a share — kind of like a retweet of a retweet — compared to a typical photo or link on Facebook. Add a few more shares to the chain, and people are five to ten times more likely to see misinformation. It gets worse in certain countries. In India, people who encounter “deep reshares,” as the researchers call them, are twenty times more likely to see misinformation.
“Our data,” the researchers concluded, “reveals that misinformation relies much more on deep reshares for distribution than major publishers do.”
A simple product tweak, the research indicated, would likely do more to constrain Facebook’s misinformation problem than an army of content moderators, all without removing a single post: add friction after the first share, or block resharing entirely beyond that point.
The study found that 38% of all “viewpoint views” (Facebook-speak for views) of link posts with misinformation take place after two reshares. For photos, the figure is higher: 65% of views of photo misinformation take place after two reshares. Facebook pages, meanwhile, don’t rely on deep reshares for distribution. About 20% of page content is viewed at a reshare depth of two or higher.
“I’m an advocate of significant friction around sharing,” Aviv Ovadya, a misinformation researcher and…