
Instagram’s New Anti-Bullying Nudges Could Actually Work

Simple but effective, according to experts


Among the flood of Facebook news last week — from a major visual overhaul to merging the company’s messenger apps — a tiny nugget slipped through: Instagram has a plan to combat cyberbullying. And it involves politely asking everyone to just stop.

In a brief segment of the company’s F8 keynote last Tuesday, Instagram announced that it’s testing several new features to help combat bullying online. Some give power to users directly, like an Away Mode to temporarily leave Instagram if you’re going through a hard time, or tools that let you limit another user who’s being hostile toward you, without blocking them entirely.

Most notably, Instagram is also testing a more proactive tool with comment “nudges.” This feature will use machine learning to detect when a user is about to make a comment that’s aggressive or hostile, then lightly warn them not to do so. When reached for comment, a spokesperson for Instagram clarified that “the notification doesn’t impede posting and no additional taps are needed, but our intention is that it will encourage people to pause and reflect on a potentially hurtful comment before posting it.”
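To make the mechanics concrete, here is a minimal sketch in Python of how such a nudge could sit in a posting flow. The classifier, threshold, and names are assumptions made for illustration; Instagram has not published its implementation, and its real system relies on a trained machine learning model rather than the toy scoring shown here.

```python
# Hypothetical sketch of a comment "nudge" flow. The classifier, threshold,
# and names below are illustrative assumptions, not Instagram's actual code.

from dataclasses import dataclass


@dataclass
class NudgeResult:
    show_warning: bool  # whether to display the gentle prompt
    can_post: bool      # the nudge itself never blocks posting


def hostility_score(comment: str) -> float:
    """Stand-in for a trained classifier returning a 0-1 hostility score."""
    hostile_phrases = ("you are ugly", "fat kid", "trash")  # toy heuristic only
    return 1.0 if any(p in comment.lower() for p in hostile_phrases) else 0.0


def check_comment(comment: str, threshold: float = 0.8) -> NudgeResult:
    score = hostility_score(comment)
    # Key design choice: a high score triggers a warning, but posting is
    # never prevented, matching Instagram's description of the feature.
    return NudgeResult(show_warning=score >= threshold, can_post=True)


if __name__ == "__main__":
    result = check_comment("you are ugly")
    if result.show_warning:
        print("Are you sure you want to post this? It may be hurtful.")
```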

Surely, it can’t be that easy to prevent online bullying. Right?

At first blush, it seems that politely asking someone to behave online would be as effective as telling the tide not to come in. Yet according to Sameer Hinduja, co-director of the Cyberbullying Research Center and professor of criminology at Florida Atlantic University, it might be a more effective tool than we think.

“We all usually try to do the things that are socially desirable… We want to put our best foot forward, we want to appear likable, to others in general,” Hinduja says. “So if the app itself is messaging you and conveying, ‘This isn’t really in line with our standards or the type of kind community we’re trying to create’… Maybe they’ll second-guess that decision and erase that comment, or maybe soften it.”


It’s the machine learning-powered digital equivalent of the “Bro! Not cool” moment from Gillette’s famous ad. When communicating face-to-face, people tend to have an instinctive sense of what’s socially acceptable and will adhere to it, even if they’re tempted to say something aggressive or mean. Instagram’s nudges could serve a similar function, applying social pressure without being as directly confrontational as a block or ban.

If this were Instagram’s only anti-bullying tool, it probably wouldn’t be very effective. But as J. Nathan Matias, associate research scholar at Princeton’s Center for Information Technology Policy, points out, this is just the first step. “It’s important to remember that this isn’t Instagram’s only line of moderation. So, if people are posting hurtful, harmful comments, and the rest of the infrastructure is working, that helps.”

Those other lines of moderation already use the same machine learning technology. In 2017, Instagram started employing machine learning to identify offensive or bullying comments. According to Instagram, examples include “you are ugly 💩,” “fat kid,” or “all men are trash.” These comments are hidden from anyone with the filter on, and the filter is on by default: unless you turn it off in your settings, a particularly hurtful comment might never be seen by anyone except the person who wrote it.
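For contrast with the nudge, the filter’s behavior can be sketched the same way: a flagged comment stays visible to its author but is hidden from everyone else while the filter is on. Again, the data shapes and names here are assumptions for illustration, not Instagram’s implementation.

```python
# Hypothetical sketch of a hidden-by-default comment filter. The data shapes
# and names are illustrative assumptions, not Instagram's implementation.

def visible_comments(comments, viewer_id, filter_enabled=True):
    """Return the comments a given viewer should see.

    Each comment carries a 'flagged' field set upstream by the classifier.
    While the filter is on (the default), a flagged comment is hidden from
    everyone except the person who wrote it.
    """
    shown = []
    for comment in comments:
        if comment["flagged"] and filter_enabled and viewer_id != comment["author_id"]:
            continue  # hidden from every viewer except its author
        shown.append(comment)
    return shown


# The flagged comment is invisible to other users, but its author still sees it.
comments = [
    {"author_id": "alice", "text": "great photo!", "flagged": False},
    {"author_id": "bob", "text": "you are ugly", "flagged": True},
]
print([c["text"] for c in visible_comments(comments, viewer_id="carol")])  # ['great photo!']
print([c["text"] for c in visible_comments(comments, viewer_id="bob")])    # both comments
```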

Right now, users might not even be aware when their comments are flagged and hidden. The nudge feature would actively let people know when they’re being rude or hurtful, perhaps with unintended side effects. “Sometimes when you create a metric for something,” explains Matias, “and you hope that revealing that metric to people will cause them to reduce that behavior, it can sometimes turn into a game.” Instagram is also notified if an account repeatedly trips the filter, and users can report or block offending accounts.

As with any machine learning system, there’s also the risk of coding biases into the system itself. “Companies are constrained by the kinds of training data sets that they have. We’ve already seen in other scientific studies how biases in the underlying data can lead to cultural, racial, and gender discrimination in how these algorithms behave,” Matias explains. “You could easily imagine situations where an algorithm might unequally apply its warnings to one side or another side in a very sensitive conflict.”

OneZero asked Instagram if its existing filters have had any measurable impact on bullying, but the company declined to share specific numbers. Instead, a representative simply said, “A.I. has allowed us to find and remove significantly more bullying.”

While this sounds positive, Matias believes Instagram shouldn’t be the only party evaluating the results of its own research. “I think it’s also imperative that we expect companies to release those results, and also allow independent researchers to evaluate the social impact of what they’re doing now.”

Of course, comments are only one type of bullying. It can also take the form of images — another area Instagram wants to use machine learning to address — or sharing someone’s content without their permission.

Still, there’s a very real chance that a subtle nudge, alongside more severe punishments like blocks or bans, could convince some people on the platform to be kinder to each other. Every little bit helps.

Eric Ravenscraft is a freelance writer from Atlanta covering tech, media, and geek culture for Medium, The New York Times, and more.
