Twitter is giving the devil his due.
After a OneZero article about a Satan parody account whose owner felt his tweets were being unfairly hidden from public view, the company acknowledged late last week that a feature intended to limit trolls had been mistakenly affecting some legitimate accounts, including his. Twitter would not say how many other users were inadvertently caught up by its algorithm, which limits the public visibility of certain accounts based on their behavior and other signals, even if they haven’t broken the platform’s rules.
“Upon further investigation, we realize some of these systems were impacting people using Twitter in a healthy way and so we adjusted them,” a Twitter spokesperson said in a statement. “Thank you for surfacing this to us, we’re always working to improve.” The owner of the Satan account, a 26-year-old British man named Michael, confirmed to me that his account had been restored to good standing and has been gaining followers rapidly ever since.
The snafu highlights how Twitter’s efforts to automate troll-fighting can quietly go awry, with little recourse for those affected. And it isn’t the first time.
Twitter has been besieged for years by spam bots, porn bots, and human users who use the site to spew hate or harass people. Its human content moderators have proven unable to stem this tide of ugliness, so last year it tried something new: an automated filtering system. The goal of the system — which Twitter has never given a formal name — is to make the site feel friendlier and healthier by downgrading accounts that show signs of obnoxious or spammy behavior. It can downgrade them in different ways, such as burying their replies behind a warning at the bottom of threads, or preventing them from showing up in search results by default. Their followers can still see their tweets, but they become largely invisible to the average user. (When it launched, I dubbed it “Twitter purgatory,” a term that Twitter PR didn’t appreciate.)
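The mechanics described above can be loosely sketched as a score-and-threshold step: combine behavioral signals into a score, then limit visibility for accounts that cross a line. To be clear, this is a purely hypothetical illustration; the signal names, weights, and threshold below are invented for the sketch, and Twitter has never disclosed how its system actually works.

```python
# Hypothetical sketch of behavior-based visibility filtering.
# None of these signals, weights, or thresholds reflect Twitter's actual system.
from dataclasses import dataclass


@dataclass
class Account:
    handle: str
    spam_reports: int   # invented signal: times the account was reported as spam
    blocked_by: int     # invented signal: users who blocked the account
    limited: bool = False


def behavior_score(acct: Account) -> float:
    """Combine the invented behavior signals into a single score."""
    return acct.spam_reports * 2.0 + acct.blocked_by * 1.5


def apply_visibility_filter(accounts: list[Account], threshold: float = 10.0) -> list[Account]:
    """Mark accounts whose score crosses the threshold as limited.

    A limited account stays visible to its followers, but (per the article)
    its replies get buried behind a warning and it is excluded from
    default search results.
    """
    for acct in accounts:
        acct.limited = behavior_score(acct) >= threshold
    return accounts


accounts = apply_visibility_filter([
    Account("friendly_user", spam_reports=0, blocked_by=1),
    Account("reply_troll", spam_reports=4, blocked_by=6),
])
# reply_troll scores 4*2.0 + 6*1.5 = 17.0, crossing the threshold of 10.0
```

The key point the article makes is visible even in a toy like this: the score is computed from behavior signals alone, with no check that the account ever broke a rule, so a legitimate account with the wrong signal profile can be swept up just as easily as a troll.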