Microprocessing
Why It’s So Hard to Automate ‘Clickbait’ Away
Researchers have tried — and failed — to understand the trickiest part of online media
In 2017, Facebook announced a new initiative to reduce what it termed “clickbait” on its platform. As a social media editor at the time, I remember being baffled by the technology giant’s attempt to penalize its community for playing by the rules it had created. For years, Facebook had rewarded accounts with high engagement and click-through rates — the proportion of people who click on posts in their news feed — by giving successful publishers a boost in the algorithm. It was confusing for social editors to suddenly be told to stop writing “clicky” headlines and to instead encourage their fans to engage by “liking” and commenting within the app.
It felt like Facebook was more concerned with improving the aesthetics of the content on its platform than with actually protecting users, and was ignoring a more pressing issue: partisan content farms deliberately crafting wildly misleading headlines designed to inflame and upset their readers. Clickbaity headlines about celebrities, animals, and other nonsense topics may annoy audiences, but spammy posts pushing Hillary Clinton or Putin conspiracy theories can potentially influence…