How Satan Was Disappeared From Twitter
The deamplification of a popular parody account shows the platform’s inconsistent — and often unexplained — rules
The notification on my phone was waiting for me when I woke up: “Satan has sent you a direct message on Twitter.”
This being 2019 and Twitter being Twitter, I wasn’t particularly surprised or alarmed. I figured it was probably spam — and not an actual missive from the Dark Lord — but I tapped to view the message just in case.
It seemed the Father of Lies was having some trouble with his Twitter account. He’d stumbled across a relevant story I’d written a while back, and wondered if I could help him out somehow. He complained that his account was being affected by a change Twitter made last year to automatically hide certain accounts in replies and search results.
I typically respond to this kind of inquiry by politely clarifying that my job is to report on people’s tech problems, not solve them, and that’s just what I told the Prince of Darkness. But Satan — Twitter handle @s8n, tagline “not evil just misunderstood” — responded so politely and dejectedly that I began to feel a bit of sympathy for the devil. After a bit of idle chitchat, I asked him what exactly made him think he was being unfairly targeted by Twitter.
As we talked, I came to know Satan as a seemingly well-meaning 26-year-old British lad whose fun ride to Twitter fame had been derailed by algorithmic forces beyond his ken, just as he was hoping to turn it into something more than a side gig. His story highlights the challenges of Twitter’s high-minded project to reinvent itself as a safer and healthier platform. And it helps to illustrate how a feature designed to fix one of its most persistent problems has become a lightning rod for some powerful critics — including President Trump, who has referred to it (perhaps misleadingly) as “shadow banning.”
In short, it seems that Satan himself has been shadow banned, for lack of a better term. And no one will tell him why.
The owner of the Satan parody account, which commands an impressive 887,000 followers, asked that his full name and location not be used in this story. That’s understandable: Last year he became the target of a harassment and doxxing campaign by QAnon conspiracists, for reasons too dispiritingly absurd to catalogue here. He told me only that his first name is Michael, and that he works in a shop in England.
Michael said he was bequeathed the @s8n handle in 2015 by a friend who had presciently grabbed it in 2008 but eventually tired of running it. Michael imbued the account with a mix of dry humor, cheesy one-liners, and topical references, all in the voice of a powerful but harried archdemon who can’t help but marvel at humanity’s boundless capacity for idiocy. His following ballooned after a 2017 BoredPanda post extolling his best tweets. Trump’s political rise brought him fresh relevance, as he winkingly embraced the president as a prodigal son who would be missed in hell while leading the free world. That might suggest to some that Satan leans left, but Michael told me via DM that he’s not actually political: “I just roast on Trump because it’s what gets me the most interactions ahahha.” He added a moment later: “And he is an idiot.”
Satan is, in some ways, a throwback to a time when parody accounts were a significant part of Twitter’s identity and popular appeal. There was a stretch of years in the early 2010s when every viral phenomenon in pop culture spawned a bevy of Twitter accounts racing to seize the moment, from Hologram Tupac to Sarcastic Mars Rover to Left Shark. Most were inherently short-lived, though some have endured. In most cases, that’s because they were genuinely clever and caricatured either cultural stereotypes (like @guyinyourmfa) or real figures (like @boredelonmusk) whose relevance hasn’t waned.
Satan may not have been the wittiest of the lot, but his subject is timeless, and his tweets are occasionally inspired enough to make him worth the follow. His voice is certainly more consistent than that of his half-baked counterpart @god, an account that Michael says is unrelated to his, though @s8n occasionally needles @god as part of his schtick. (Satan has more than twice as many followers, which may say something about the nature of Twitter.) That helps explain why his account continued to grow as others fell by the wayside.
The apparent decline of the Twitter parody account may be partly due to the novelty wearing off, which happens on social platforms. (Does anyone remember Chad Vader: Day Shift Manager?) But it has also been hastened by Twitter’s changing self-conception and shifting policies. A company that once embraced unfettered speech has gradually tightened the reins in recent years to combat an epidemic of harassment, trolling, hate speech, and bots. In the past year, CEO Jack Dorsey has embarked on an ambitious rethink of the service, with the aim of promoting what he calls “healthy conversations,” among other goals.
Twitter has so far remained committed to allowing anonymity — in contrast to Facebook’s long-standing preference for real names — and generally permits parody accounts, though it now requires those impersonating real people to disclose in their bio that they’re fake. There are good reasons for its heightened vigilance: Impersonation accounts are a popular tool for harassers and scammers, and have been used to spread fake news and propaganda. Separately, Twitter last year cracked down on a network of popular parody accounts, known as “tweetdeckers,” that had been secretly working together to make certain tweets go viral in exchange for money.
Satan survived all that, and was adding several thousand followers a day as recently as two months ago. He had pinned a tweet advertising a page where he sells an array of Satan-themed merch, such as T-shirts and iPhone cases, and Michael told me the page was bringing him a modest income. His account’s growth trajectory, though, had him dreaming of building a business around the brand and maybe even quitting his day job to focus on it.
Things began to go south in the second week of February 2019. That’s when Michael noticed a sudden drop-off in his account’s growth, from almost 6,000 new followers on Sunday, Feb. 10, to 1,237 on Monday, to 535 on Tuesday, to just 217 by that Thursday. (He showed me screenshots from Social Blade, a social media analytics tool.)
At first, he couldn’t figure out what was going on. Then, he got word from some friends that his reply tweets to Donald Trump — formerly a source of some of his greatest engagement — were no longer showing up below Trump’s tweets, and that his account wasn’t appearing at the top of search results for “Satan.” He did some Googling and landed on my May 2018 story about a new feature Twitter had introduced with the goal of making the service feel friendlier.
Part of Dorsey’s “healthy conversations” push, the feature’s intent was to limit “troll-like behavior” that doesn’t otherwise violate Twitter’s policies. It used an algorithm to identify accounts whose behavior suggested they might be deceptive, automated, or just plain rude.
It didn’t stop them from tweeting, or prevent their followers from seeing their tweets, since they hadn’t technically been caught breaking any rules. But Twitter sought to avoid amplifying them by hiding their replies to others’ tweets, and removing both the accounts and their tweets from public search results. Twitter noted that these accounts’ replies could still be found by tapping “more replies” at the bottom of a reply thread, and you could still see them in search results if you adjusted your filter settings.
Crucially, Twitter decided not to tell people whose accounts had been affected by the change, let alone establish a process for appeal or human review. From Twitter’s perspective, it was more akin to tweaking its personalization algorithms — something social networks do all the time, without warning or recompense to those affected — than to taking an enforcement action against an individual account. (When Twitter does take such an action, it informs the account’s owner of exactly what policies they violated.) When I pressed Del Harvey, Twitter’s vice president of trust and safety, on why the company wouldn’t tell the people affected, she said it was because the company was still in the early stages of developing the software and didn’t want to alarm people who might be affected only temporarily.
These concerns were thrust to the forefront weeks later, when some Republican members of Congress noticed that their accounts were no longer being auto-suggested when people searched for them. The conservative media brewed up a backlash to what it called “shadow banning,” borrowing the term for a form of content moderation, practiced by social platforms like Reddit, in which users are effectively banished from a service by having their posts made invisible to everyone but themselves. In a typical shadow ban, the user keeps posting, unaware of the ban, but no one ever replies.
President Trump jumped on the bandwagon, calling for Congress to take action against Twitter for what he claimed was anti-conservative bias. Twitter responded by saying the accounts had been affected by mistake, and quickly reinstated them in search results. But the seeds of mistrust had been sown, and there were plenty of right-wing pundits and conspiracists only too happy to fertilize them. Twitter’s alleged bias was reportedly a topic of conversation when Dorsey met privately with Trump last week, prompting Twitter to recirculate its 2018 post clarifying that it doesn’t shadow ban. (Twitter’s distinction is that the user’s existing followers can still see all their posts as normal, and others can still find them if they look hard enough.)
A year after it launched the feature, however, Twitter still isn’t informing people whose accounts are being automatically hidden in search results and replies. (It also hasn’t come up with a better name than “shadow banning” for its delisting of some accounts’ tweets, which helps to explain why the moniker has stuck.) A spokesperson told me that the company is working on ways to make the system more transparent, but declined to offer specifics.
But Satan’s case demonstrates just how inscrutable Twitter’s judgments remain. Michael told me he’s been unable to reach anyone at Twitter. He showed me tweets he has sent to @twitter, to @jack, to @support, and to Harvey (@delbius), all to no avail. By the time he DM’d me, he was growing desperate. His merch sales have dried up, he said, and his dreams of building his parody brand are on hold.
Twitter said it couldn’t comment on @s8n’s situation or confirm that the account had been affected, citing its policy of not speaking publicly about individual accounts. The company did note, generally speaking, that seemingly legitimate accounts can sometimes be caught up by the feature if Twitter’s software sees signals that they’re linked to other, less-legitimate accounts. Michael has a second account, called Satan Spam, that he uses for more personal tweets that aren’t in Satan’s voice. But he said he never uses it for actual spam, and a glance through its tweets turns up nothing glaringly untoward. Michael did mention that he had traded accounts with a friend at least once in the past, which raises the possibility that his former account has since run afoul of Twitter’s policies somehow.
Michael has another theory for why his account might have landed on Twitter’s algorithmic bad side. He routinely tweets replies at accounts that don’t follow him — not just Trump, but random people who mention the word “Satan” in their tweets. He noticed that Twitter has cited these sorts of replies as one example of the signals it uses to evaluate whether accounts are contributing to healthy conversations. But the Twitter spokesperson I talked to emphasized that dozens of different signals go into that algorithm, suggesting that occasional jokey replies to strangers are unlikely to be the deciding factor.
Has @s8n been buried unjustly, or did he do something to deserve the treatment that either he’s unaware of or isn’t owning up to? It’s impossible to know without more information from Twitter, and more information is something Twitter seems unwilling to give, whether to Michael, to me, or to anyone else whose account its software has deemed unworthy of amplification.
It’s understandable, on one level, that Twitter would want a way to algorithmically limit the reach of suspected trolls without having to explain or defend every single decision its software makes, which would be incredibly labor-intensive. And informing users that they’ve been algorithmically delisted for behavior that doesn’t actually violate Twitter’s rules would surely not go over well. By presenting this feature as a matter of personalization and ranking, rather than a “shadow ban,” Twitter is implying that it’s not a big deal — that is, not something the affected users necessarily have a right to know about. After all, they’re still free to tweet whatever’s on their mind. It’s a way of implementing the concept that “free speech” need not equal “free reach,” which has become a cri de coeur of some thoughtful tech critics.
Yet the dominance of algorithmically ranked social feeds, first on Facebook and more recently on Twitter, Instagram, and other social networks, makes clear that such rankings matter deeply in determining who actually gets heard. Entire industries have reshaped themselves around these rankings, and fortunes are made and lost by trying to game them. So it’s disingenuous to pretend that users shouldn’t pay attention to, or care, whether they’ve been affected by a policy like Twitter’s. It’s one thing for an algorithm not to surface a given post or tweet; it’s another for it to downgrade entire accounts.
Dorsey seems earnest in his desire to make Twitter a safer and more salutary place, and the company’s anti-troll features seem to be making a difference. Last month, Twitter announced that reports of abuse by accounts that a user doesn’t follow are down 16% in the past year.
Yet Dorsey has also said on multiple occasions that he’s committed to “transparency” as he tries to reform the site. That claim is undermined by the company’s refusal to talk about one of the more noteworthy changes it has made in the past year, even to the people affected. Dorsey has stood before Congress and taken flak for alleged liberal bias in the feature, and sat back while the conservative press labeled it shadow banning. As a result, a system intended to help shore up Twitter’s reputation has become yet another PR black eye, and alienated some avid users who don’t understand why they’ve been targeted.
It’s great that Twitter is becoming less of a hellsite. But even @s8n knows that if you’re going to banish people eternally to an underworld, they deserve to be told exactly how they sinned.
After publication, a Twitter spokesperson sent the following statement: “We look at linked accounts and their activity — this factors into how we organize search and replies. Actions like transferring or selling, which can affect the security of accounts on Twitter, are prohibited under our rules, and can impact how these systems work. We’re always working to improve our spam and abuse-fighting technology to ensure they don’t negatively impact people using Twitter in a healthy way.”