How to Keep Child Predators Out of Virtual Playgrounds Like ‘Fortnite’ and ‘Minecraft’

There’s no perfect fix, but experts on moderation, sexual exploitation, and internet law tell OneZero there is hope

Online games that are wildly popular with kids, like Fortnite, Minecraft, Clash of Clans, and Roblox, have become hunting grounds for pedophiles. Recent reporting suggests that kids — at least thousands of them, and perhaps far more — are being groomed, cajoled, tricked, intimidated, or blackmailed into sending sexual images of themselves or others to predators who trawl gaming platforms and chat apps such as Discord for victims.

While there is often an element of moral panic at play when a new trend or technology poorly understood by adults is identified as a threat to children, an investigation by the New York Times made clear that the problem is real, widespread, and growing. If you’re still skeptical, a firsthand account of what it’s like to hunt predators, written by an employee at Bark, a startup that develops parental controls for online apps, illustrates just how pervasive the problem is on social media and messaging platforms. It’s impossible for a parent to read these stories without coming away alarmed about kids’ safety and well-being in the world of online gaming.

What the reporting so far has not made fully clear, however, is what can actually be done about the threat. Most of the companies mentioned in the Times story — including Roblox, Discord, Sony (maker of PlayStation), and Microsoft (maker of Xbox and Minecraft, among others) — pointed to at least some measures they have put in place to protect underage users, but few could demonstrate meaningful success. Nearly every approach discussed had obvious shortcomings. And Epic Games, maker of Fortnite, didn’t even respond to requests for comment from either the Times or OneZero. Supercell, maker of Clash of Clans, did not respond to OneZero, either.

Experts on content moderation, online sexual exploitation, and internet law told OneZero that there is hope for addressing the problem in meaningful ways. It just won’t be easy — and some argue it won’t happen without changes to the bedrock law that shields online platforms from many forms of liability.

Part of what makes policing gaming so tricky is that the interaction between predators and kids rarely stays on the gaming platforms. Often, the predators find and form a relationship with kids on a gaming site, only to move to more private chat apps such as Facebook Messenger or Kik to trade sexual images and blackmail them. Sarah T. Roberts, assistant professor of information studies at UCLA, compared unmoderated gaming chats to “a virtual playground with creeps hanging out all around it” — and no parents are present.

“I’m feeling more optimistic about what feels like kind of a piecemeal approach.” — Kat Lo

You can imagine multiple approaches to guarding such a venue. One would be to bring in responsible adults to watch over it — that is, moderation. Another would be to install surveillance cameras — providing oversight via automation. A third approach would involve checking the identities or ages of everyone who enters the playground — a category I’ll call verification. A fourth would be to make sure all the parents and kids in the community are aware of the playground’s risks, and to help them navigate it more safely — i.e., education. Finally, society could introduce laws restricting such playgrounds, or holding their creators liable for what happens on them: regulation.

If there were only one virtual playground, there might be a single correct answer. But there are many, and their features are distinct, making it impossible to craft a single effective policy. “I don’t see a grand technical solution,” said Kat Lo, an expert on online moderation and project lead for the Content Moderation Project at the nonprofit Meedan, which builds open-source tools for journalists and nonprofits.

That doesn’t mean the situation is hopeless. “I’m feeling more optimistic about what feels like kind of a piecemeal approach,” Lo added. The fixes that she, Roberts, and other experts suggested can be roughly divided into the five categories I outlined above.

1. Moderation

Perhaps the most obvious way to police a digital space is to bring in human oversight, whether by deputizing users as moderators (as Reddit does) or hiring contractors or employees to do the job. But putting an employee in every chat is costly for tech companies whose businesses are built on software and scale — and to users who want to talk smack or coordinate their gameplay without a hall monitor looking over their shoulders.

Moderation also doesn’t make much sense in the context of platforms specifically built for private messaging. Many massively multiplayer online games, such as Blizzard’s World of Warcraft, offer players the ability to privately message any other player at any time via a feature called “whisper.” Epic Games’ Fortnite lets you privately message players on your friend list, and offers small-group text and voice chats. Roberts suggested that such platforms “move to a model of less private messaging and more group-oriented messaging,” with monitoring by community managers. While access to private spaces is important to children’s development, she said, there’s no reason gaming platforms need to be among these spaces.

Of course, moderation is far from a perfect solution. Just ask Facebook, whose contracted moderators endure difficult work conditions and struggle to consistently apply its rules. There’s also the risk that limiting private messaging on a gaming platform such as Minecraft simply pushes users onto an even more private platform for chat, such as Discord. But given that a perfect solution doesn’t exist, more moderation in gaming chats would be a good start — assuming you can get the platforms to do it. (We’ll get to that challenge farther down.)

There’s also the option of simply limiting or removing chat features. Hearthstone, a Blizzard game released in 2014, allows players to communicate with matched opponents only via a menu of preset messages. The company told the gaming site Polygon at the time that the goal was “to keep Hearthstone feeling fun, safe and appealing to everyone.”
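
To make that design concrete, here’s a minimal sketch, in Python, of how a preset-message system differs from free-form chat. The message catalog and the send_emote function are invented for this illustration; they are not Blizzard’s actual code.

```python
# Illustrative sketch of a preset-message ("emote") system. The catalog and
# function are hypothetical, not Hearthstone's real implementation.
PRESET_MESSAGES = {
    1: "Greetings.",
    2: "Well played.",
    3: "Thanks.",
    4: "Oops.",
}

def send_emote(sender: str, opponent: str, message_id: int) -> str:
    """Players can only pick from the catalog; there is no free-text field
    for a predator to exploit."""
    if message_id not in PRESET_MESSAGES:
        raise ValueError("unknown preset message")
    # In a real game, this would be relayed to the opponent over the network.
    return f"{sender} -> {opponent}: {PRESET_MESSAGES[message_id]}"

print(send_emote("PlayerA", "PlayerB", 2))  # PlayerA -> PlayerB: Well played.
```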

2. Automation

Fortnite has some 250 million registered players, of whom more than a million may be online at any given time. Anytime scale is part of the problem — in this case because there are far too many people playing massively multiplayer online games at once for humans to watch over everything they say to each other — it’s worth at least considering whether automation could be part of the solution. And as it turns out, automation already is part of the solution, in some settings. It’s just overmatched.

In 2009, Microsoft released a free tool called PhotoDNA that scans still images for matches with known examples of child pornography, and it has been widely used by other companies since then. Last year, the company expanded the tool to cover video. And last week, Microsoft announced that it is releasing new technology called Project Artemis that uses machine learning to scan online conversations for indicators of child grooming and flag suspicious ones for human review. Microsoft developed the technology in collaboration with fellow tech companies, including Roblox and Kik, as well as the nonprofit Thorn, and will make it available for free to other platforms.
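
The core idea behind hash-matching tools like PhotoDNA is easy to sketch: compute a fingerprint of every uploaded image and compare it against a database of fingerprints of known illegal material. The Python sketch below uses an ordinary cryptographic hash as a stand-in; PhotoDNA’s actual algorithm is a proprietary perceptual hash that still matches after an image is resized or re-encoded, which this simplification does not.

```python
# Minimal sketch of hash-based image matching, the idea behind tools like
# PhotoDNA. Assumption: SHA-256 stands in for the fingerprint. Real systems use
# perceptual hashes that survive resizing and re-encoding; this one does not.
import hashlib

# Fingerprints of known illegal images, typically distributed by a clearinghouse
# such as NCMEC rather than the images themselves. Left empty here.
KNOWN_FINGERPRINTS: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def should_report(image_bytes: bytes) -> bool:
    """True if an upload matches a known fingerprint and should be reported
    and removed, as federal law requires once a platform has actual knowledge."""
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS
```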

Roblox, for its part, told OneZero it already applies filtering tools to all chats on its platform, with extra restrictions for children under 13. The filters block crude language, but also attempt to detect and block when a player is trying to lure another player off the game and into private communication, such as by asking for their phone number. Project Artemis will add another layer to Roblox’s systems, a spokesperson said.
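
As a rough illustration of that kind of rules-based filtering (not Roblox’s actual system), a filter might combine a blocklist of crude words with patterns that catch attempts to move a conversation off the platform, such as strings of digits that look like a phone number. The word list, patterns, and policy in this Python sketch are assumptions made for the example.

```python
# Rough, illustrative chat filter of the kind Roblox describes: block crude
# language and catch attempts to lure a player off the platform. Everything
# here (words, patterns, policy) is an assumption, not Roblox's actual rules.
import re

BLOCKED_WORDS = {"offensiveword"}  # a real blocklist would be far larger
PHONE_PATTERN = re.compile(r"\b(?:\d[\s\-.]?){6,}\d\b")  # digit runs that look like phone numbers
OFF_PLATFORM_HINTS = ("add me on", "text me", "kik", "snapchat", "whatsapp")

def check_message(text: str, sender_is_under_13: bool) -> str:
    lowered = text.lower()
    if any(word in lowered for word in BLOCKED_WORDS):
        return "block"  # crude language: hide the message
    if PHONE_PATTERN.search(lowered):
        return "block"  # sharing contact info: hide the message
    if any(hint in lowered for hint in OFF_PLATFORM_HINTS):
        # Steering a child toward private apps is a common grooming move;
        # a stricter policy applies to accounts registered as under 13.
        return "block" if sender_is_under_13 else "flag_for_review"
    return "allow"

print(check_message("text me at 555 123 4567", sender_is_under_13=True))  # block
```

Spelling out digits (“five five five…”) or using coded slang would slip right past a filter like this, which is exactly the evasion problem the experts describe.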

Meanwhile, Facebook has been building its own machine-learning software, including a tool that tries to identify examples of predators grooming children — that is, befriending them for purposes of soliciting sexual images later. Companies that use this sort of software generally partner with the National Center for Missing and Exploited Children, or NCMEC, to report material to law enforcement. But as the Times reported in a separate investigation of child pornography on platforms such as Facebook Messenger, NCMEC itself has been overwhelmed in recent years by the volume of reports.

Image-scanning tools such as PhotoDNA are less applicable on gaming platforms, because sexual images are usually exchanged on more private messaging services. An approach like Project Artemis that analyzes chats for suspicious patterns of speech could hold promise for gaming platforms, Lo said — but it depends on the context. In some online settings, that sort of monitoring might be viewed by users, and perhaps regulators, as an invasion of privacy. If the system isn’t sufficiently sophisticated and constantly updated, predators will learn how it works and use coded speech to get around it. And it could be skirted altogether on a platform such as Discord, which allows users to chat by voice as well as text.

Attempts to log and analyze voice communications in some settings may be constrained by privacy laws, noted Jeff Kosseff, assistant professor of cybersecurity law at the U.S. Naval Academy and the author of a book about Section 230 of the Communications Decency Act, a foundational piece of internet law. Additionally, if tech companies work too closely with law enforcement in monitoring their users, Kosseff said, those efforts could run afoul of Fourth Amendment restrictions on warrantless searches by the government. Gaming companies looking to do this sort of monitoring have to do so independently, and properly notify users to obtain their consent, such as through their terms of service.

Implementing this sort of A.I. can require resources that smaller game studios lack. That’s where industry cooperation and standards could help, Lo said. Even then, the most sophisticated systems probably can’t match the effectiveness of trained human moderators, she added. There’s still value in automated detection as a first layer — flagging suspicious interactions for human follow-up — but only if it functions as part of a more comprehensive approach.
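
Lo’s point about a “first layer” maps onto a familiar pipeline: software scores each conversation, and only the ones above a threshold ever reach a human moderator. Here is a minimal sketch, with a hypothetical grooming_risk_score model standing in for a system like Project Artemis and an arbitrary threshold.

```python
# Minimal sketch of automation as a "first layer": score conversations, then
# route only the riskiest to human moderators. The scoring model is a
# hypothetical stand-in for something like Project Artemis; the threshold is
# arbitrary for illustration.
REVIEW_THRESHOLD = 0.8  # above this, a trained human reviews the conversation

review_queue: list[tuple[float, str]] = []

def grooming_risk_score(messages: list[str]) -> float:
    """Placeholder: a real system would use a trained classifier, retrained
    continuously as predators adapt their language."""
    return 0.0

def triage(conversation_id: str, messages: list[str]) -> None:
    score = grooming_risk_score(messages)
    if score >= REVIEW_THRESHOLD:
        review_queue.append((score, conversation_id))
        review_queue.sort(reverse=True)  # most suspicious conversations first
    # Below the threshold, nothing happens automatically: the software only
    # decides what humans look at, not what gets acted on.
```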

3. Verification

One of the simplest approaches would be to separate kids from adults on online platforms. But there are plenty of games, like Minecraft, that appeal to both teens and grown-ups. To fully segregate them by age would be a drastic step, and perhaps an unrealistic one.

A more modest measure might be to limit certain features, like private chat, to adults. “When it comes to children and gaming, just because it’s possible to allow them to have unfettered messaging capability, should that be allowed?” said Roberts. “Children’s privacy is important to their development. But maybe not in every place at every time.”

Another possible course of action would be to require users to register under their real names. Facebook requires users to go by their real names, which limits some — though not all — of the anonymous trolling and abuse that plague platforms such as Twitter and Reddit.

But these policies require verification of people’s identities, which has proven to be a headache throughout the history of gaming and the social internet. Kosseff pointed out that attempts to limit anonymity via legislation or regulation can raise First Amendment concerns. For instance, a 1995 Supreme Court ruling defended anonymity as “a shield from the tyranny of the majority” in striking down an Ohio law that required people writing political campaign literature to identify themselves. In the absence of regulation, companies that require real names may scare off users, in addition to raising the stakes of any data breach, because users’ activities could be more easily tied to their identities. And Lo pointed to other problems.

“For example, a child who identifies as LGBT might want to have a separate, very private profile” from the one linked to their real identity, Lo said. “If their parent is abusive or doesn’t accept their identity,” then real-name policies “limit the ways in which they engage online, and could very much isolate communities that are already marginalized.”

“To keep your kids safe, the cost of that shouldn’t be to say you have to be a games expert.” — Mary Anne Franks

4. Education

Informing and preparing kids and their parents to deal with online child exploitation might be the least controversial approach. Some gaming platforms offer tutorials for users, while some nonprofits hold digital literacy courses. More of each could help.

“Parental oversight has to be a part of this, at the end of the day,” said Roberts. “The parents should be far better versed and more prepared to invoke parental privilege around restricting functionality on the platforms their kids use. But that would mean they need to have the sophistication to do that. Unfortunately, in most households, it’s the kids who are more sophisticated” when it comes to the workings of their favorite online platforms. One suggestion might be for parents to limit gaming to a public part of the home, she said. “It’s not, ‘you go in the room with your Nintendo Switch and close the door and God knows what happens.’”

Mary Anne Franks, a law professor at the University of Miami and president of the nonprofit Cyber Civil Rights Initiative, agreed that “education is always a good bet.” But she said that expecting parents to do all the work of protecting their kids online is unrealistic, and lets the platforms themselves off the hook for facilitating those interactions. “To keep your kids safe, the cost of that shouldn’t be to say you have to be a games expert,” she said.

The platforms could help with education, though, by building tutorials into their services that kids might actually use. Perhaps they could incentivize completing a tutorial on how to avoid being exploited by offering participants in-game rewards. Ultimately, however, “the conversation really has to be the effectiveness of these techniques,” Franks said. And no one is in a better position to figure out how to stop exploitation on gaming platforms than the companies that build and run them. Which is why Franks believes the only real answer is…

5. Regulation

“I think the reason we’re in this mess is because we kept treating online spaces as if they were so different from offline spaces,” said Franks. “Imagine a school that just allows random adults to come and go, have whatever conversations they want with the kids, and hope for the best.”

With some exceptions, Section 230 shields online spaces from being sued for what their users post. Part of the 1996 Communications Decency Act, the provision makes possible much of the modern internet. Without it, any website with a comment section, let alone a social media app, would face constant legal risk, whether or not it attempted to moderate its platform. One of the exceptions, however, is for violations of federal criminal law, including the distribution of child pornography. In fact, if platforms have actual knowledge of child porn on their servers, by law they have to report it and take it down. But attempts by adults to groom minors, or to persuade them to send sexual images via another channel, are not necessarily covered by federal criminal law.

The problem, critics say, is that Section 230 has often been interpreted by both courts and the industry as a sort of carte blanche for irresponsibility — for instance, as an excuse not to do anything about child grooming. That’s what needs to change, argued Franks, noting that the companies themselves are in a better position than regulators or nonprofits to know what measures would be most effective in deterring child predation on their own platforms. But until they’re held responsible under the law, they have little incentive to take any actions that could hurt their bottom line.

In theory, one possible remedy would be another carve-out from Section 230 focused specifically on child grooming. That’s the approach that lawmakers took in passing FOSTA-SESTA, the controversial 2018 bill that held platforms liable for violations of state as well as federal sex trafficking laws. But that bill has backfired in multiple ways, including making sex work more dangerous for the people it was meant to protect. Beyond that, Franks said she thinks a piecemeal approach to amending Section 230 amounts to “bad policy,” creating a confusing tangle of overlapping regulations with which only the largest companies, with their phalanxes of lawyers, could realistically comply.

Instead, Franks and Danielle Citron, a law professor at Boston University and vice president of the Cyber Civil Rights Initiative, are collaborating on ideas to amend Section 230 at a more fundamental level. Citron, who co-authored an influential 2017 proposal with Lawfare’s Benjamin Wittes, believes the best approach would be to condition Section 230’s legal protections on platforms being able to demonstrate “reasonable content moderation practice.”

In the case of gaming platforms and child grooming, that might mean at the very least offering reporting mechanisms for users — and acting on them. That would leave in place a strong shield for companies such as Microsoft and Roblox that are acting responsibly, even if they don’t catch every predator. But it would remove Section 230 protections for platforms that don’t take even basic steps to protect their users. And you might be surprised how many of those there are, Citron added. “We’ve seen a lot of deep negligence and recklessness,” she said, like dating apps that take reports of rape or child exploitation but never follow up on them. It’s hard to convince them to invest the resources necessary to change that when Section 230 provides an expectation of legal immunity.

Kosseff said he worries such changes, unless they’re implemented carefully, could curtail options for online speech by making small platforms too risky to operate. The largest tech companies, such as Facebook, would spend big on the moderators, A.I. systems, and legal teams needed to defend themselves against lawsuits for their users’ behavior. But others might simply shut down rather than risk a barrage of litigation they can’t afford.

Lo added that platforms faced with liability for their users’ speech would “overcompensate” by eliminating anonymity and invading privacy in other ways. She said there has to be a balance between compelling platforms to take problems such as child grooming seriously and allowing for the existence of online spaces that don’t aggressively monitor their users.

Reasonableness is obviously a subjective standard, Citron acknowledged. But it’s common in other realms of law. “The idea of reasonableness is to somehow find middle ground between strict liability and no liability.”

Lo and Citron agreed, however, that the responsibility lies foremost with the platforms, one way or another. Legislating particular technological approaches would create systems that quickly became obsolete as users adapted.

“These problems are constantly transforming” as online platforms evolve, Lo said — and the solutions have to do the same. Just because there’s no quick fix doesn’t mean we should accept a system this broken.

