Late last month, as protests against police brutality kicked off nationwide after the killing of George Floyd by Minneapolis police officers, a photo of a burning McDonald’s spread across Twitter. Posted on May 28 by the bogus account “@Breaking9ll,” the photo carried the caption “McDonald’s Has Fallen,” implying that the fire was the result of recent demonstrations.
Tens of thousands engaged with the tweet, but as Snopes eventually confirmed, the picture had been mislabeled: The fire in the photo was actually a grease fire that happened back in November 2016 and had nothing to do with the protests.
While the burning McDonald’s was just one post’s worth of fake news amid a surge of misinformation surrounding the protests, the tweet simultaneously illustrates two disparate tactics that help spread misinformation online: clever imposter accounts and news aggregators that cut corners. It was sent from a now-suspended “parody” account designed to resemble the handle of the popular Twitter account @Breaking911.
The first and most obvious is a phony Twitter account that looks like a legit handle.
While Twitter display names and avatar images can easily be changed to copycat someone, Twitter @ handles are harder to believably mimic because they’re unique to each account. However, if the targeted account has certain characters in its handle, impersonators can make their copy look legit by swapping them out for similar-looking letters or symbols. In the Breaking911 example, the imitation account’s handle was spelled with two lowercase L’s instead of the real handle’s two 1’s. In Twitter’s Helvetica Neue, a sans serif font, a lowercase letter L (l) looks pretty close to a number 1 at a glance.
This quirk has been gamed before: In February, an account that was eventually suspended swapped Breaking911’s 1’s with capital letter I’s. It spread misinformation about Jungkook, a member of K-pop group BTS.
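This character-swap trick is sometimes called a homoglyph attack, and it can be caught mechanically. As a minimal sketch (the mapping below is a tiny illustrative subset I’ve chosen for this example, not a complete confusables table), a checker can collapse visually similar characters to a canonical form before comparing handles:

```python
# Tiny, illustrative subset of visually confusable characters.
# Real defenses use full confusables data, e.g. Unicode's tables.
CONFUSABLES = {
    "l": "1",  # lowercase L resembles the digit 1 in sans serif fonts
    "I": "1",  # capital i, as in the Jungkook hoax account
    "O": "0",  # capital o resembles zero
}

def canonicalize(handle: str) -> str:
    """Replace each confusable character with its canonical stand-in."""
    return "".join(CONFUSABLES.get(ch, ch) for ch in handle)

def looks_like(handle_a: str, handle_b: str) -> bool:
    """True if two handles collapse to the same canonical string."""
    return canonicalize(handle_a) == canonicalize(handle_b)

print(looks_like("Breaking9ll", "Breaking911"))  # True: only l/1 differ
print(looks_like("Breaking9II", "Breaking911"))  # True: only I/1 differ
print(looks_like("Breaking411", "Breaking911"))  # False
```

Both imposter handles described above collapse to the real one under this normalization, which is exactly what makes them so convincing at a glance.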
People can report instances of impersonation to Twitter by using this form. A spokesperson from Twitter didn’t respond to a question on this particular strategy.
The sans serif trick wasn’t the only misleading tactic at play with the fake McDonald’s inferno tweet. The real Breaking911 account — the one that the spoof account attempted to imitate in order to seem legitimate — is also a sort of imitation, if a less blatant one.
The original Breaking911 account appears to carry the authority (and big-time social following) of a news organization: Its Twitter bio says it provides “breaking news in real time,” and its logo and handle resemble major news organizations’ breaking news Twitter accounts, such as NBC News’s “@BreakingNews” handle. But Breaking911 just aggregates news from other sources, and experts say it has frequently spread misinformation. Though the site is influential enough to merit copycats, it’s been unclear until now who runs and contributes to Breaking911 and what editorial standards its staff might adhere to.
In an interview with OneZero, Breaking911’s Grant Benson, who identifies himself as the organization’s editor-in-chief, explained the basics of the site’s infrastructure and said its staff relies on “common sense” to fact check their tweets. He said Breaking911 is primarily focused on simply getting news out faster than the mainstream media, as those outlets “have to jump through a lot more hoops as far as editors.”
Benson says he’s 33 and based in Nashville, Tennessee. Though there’s no masthead on Breaking911’s website, he lists himself as running the site’s “newsdesk” on his Twitter profile, and PR platform Muck Rack pulls bylines listing his name from the site (though bylines no longer appear on Breaking911 pages).
Breaking911 doesn’t appear to do original reporting. Its Twitter account, which has more than 700,000 followers, reposts news and video clips pulled from social media users as well as local, national, and international outlets. Breaking911 also runs a website, Instagram account, and Facebook page with more than 470,000 followers. Posts on the site don’t have public attribution.
Breaking911 has seen a spike in followers since protests began in late May, according to social media analytics site Social Blade. The account is followed by major journalists, and Breaking911 posts have recently been embedded in articles in publications such as the Daily Beast, Salon, National Review, and Heavy. Its tweets regularly reach thousands of engagements on Twitter — meaning people shared, replied, or liked them — and see similar numbers when reposted to Facebook. Its posts and the media in them are often reshared on video platforms like YouTube and forums like 4chan.
Benson said the site was founded by “just a few guys for fun” with the aim “to spread news quicker than the mainstream media could” during breaking news situations. According to Benson, the Twitter account, which was created in September 2011, started gaining traction during the 2013 Boston Marathon bombing. He said the site is currently run by a team of fewer than 20 people spread around the U.S., including in Brooklyn, Houston, Omaha, and Philadelphia.
When asked to detail the team’s fact-checking process, Benson said he “wouldn’t get too much into it,” but added, “we just try to go to the local reporters and reach out, that kind of thing. Common sense.”
Breaking911 doesn’t often share blatantly fake news the way its imposters do, but the account consistently posts misleading information, stripping context and sourcing from its tweets or adding its own descriptions without attribution. For instance, the account’s text tweets and video captions often don’t directly link to or even list their sources, and neither do many posts on its website.
Mike Caulfield, who specializes in the study and application of web literacy as Washington State University’s director of blended and networked learning, told OneZero that separating scoops from their full context in this way can be deceptive.
“Broadly, aggregators like this get their power because people tend to not separate the ideas of story and sources,” he said. “So aggregators like this … find ways to take a story someone else reported and frame it in ways, often inflammatory, that get their version to travel further. Even where the framing is reasonable, [a] lot of times they won’t provide the original link.”
Kate Starbird, an associate professor at the University of Washington who studies how information spreads during crisis events, told OneZero that Breaking911 is a particularly persistent example among a “class” of “breaking news” accounts that use similar strategies.
Starbird says Breaking911 shared three of the six false rumors that she and a team of academics examined in a 2014 paper, presented at the information studies conference iConference, on the spread of misinformation after the 2013 Boston Marathon bombing, including “one that was clearly a hoax and could’ve been refuted with a simple Google search.”
“Studying online crisis events, we’ve run into that account a lot,” she said. “They have been in our datasets since our first study on online misinformation (the 2013 Boston Marathon attacks) … and they’ve showed up in many events after — sharing real-time updates, many of which turn out to be false.”
The account has recently gotten simple facts wrong, too. For instance, on May 30, the Breaking911 handle tweeted a protest video with the caption “The InfoWars truck is welcomed in LA.” Reporting from the scene, as well as InfoWars’ own website, shows the encounter actually happened in Austin, Texas. Another tweet, on June 2, misidentified a protest as occurring in “Asheville, SC”; Asheville is a city in North Carolina.
On May 31, Breaking911 reposted a video showing people in masks grabbing bricks from pallets in a fenced-in area on a New York City street, with the unsourced caption, “Videos continue to surface showing protesters stumbling upon pallets of bricks or pavers in areas with no construction taking place.” Snopes rated the claim that pallets of bricks were being strategically placed at protest sites as “mostly false,” writing that “no evidence proved that was indeed occurring.”
Though the InfoWars tweet was deleted within days and a correction was later tweeted in reply to the Asheville post, the Breaking911 account appears to have deleted all of its tweets earlier this month. Benson said Breaking911 does occasional “wipeouts” of the feed to “prevent hateful comments.”
A Twitter spokesperson confirmed that the Breaking911 account was temporarily suspended in 2018 due to a DMCA notice and that the account was also temporarily suspended in March 2020 for violating Twitter policies (Benson told OneZero the account was hacked).
Caulfield recommends that people use the SIFT method when they encounter unsourced information online. The acronym stands for a four-step process: stop; investigate the source; find better coverage; and trace claims, quotes, and media to their original context. Starbird says journalists should also avoid embedding tweets from accounts like Breaking911 in articles, which can help validate them as a source of good information.
“What we encourage people to do if they see something from an aggregator is not to ignore it but to ‘trade up,’” Caulfield says, referring to the SIFT process for seeking better sources. “The aggregator may have called your attention to it, but that doesn’t mean they deserve your attention.”