Instagram’s Latest Anti-Bullying Moves Still Fall Short
The image-sharing platform announced new tools and policies designed to curb harassment, but users still aren’t safe
Earlier this week, Instagram finally announced plans to launch a new anti-bullying feature. In a recent blog post, and as reported by the BBC, new Instagram head Adam Mosseri said, “We can do more to prevent bullying from happening on Instagram, and we can do more to empower the targets of bullying to stand up for themselves.”
No one could argue with that, but I do have concerns.
One is that this action has been too long in coming. The social media network has faced criticism over its bullying policy for years. The announcement follows renewed pressure to address the issue after British teenager Molly Russell took her own life, having used the platform to broadcast her intention to do so. Is what Instagram is proposing too little, too late? A cynical ploy to offset criticism that it didn’t act in time?
My other issue is that, while it’s commendable that Instagram is employing A.I. to recognize content that may be abusive or harmful, implementing a feature that simply asks, “Are you sure you want to post this?” is a lackluster response to a life-and-death issue. It’s the sort of thing I would expect my mobile network provider to ask me before sending a drunken text to an ex at 2 a.m. to spare my blushes the morning after!
As part of Instagram’s “Rethink” strategy, users who receive the cautionary notice are also invited to “Learn More.” Doing so brings up a message that reads: “We are asking people to rethink comments that seem similar to others that have been reported.” This hardly seems like an adequate warning or response.
Mosseri further explained to the BBC that “these tools are grounded in a deep understanding of how people bully each other and how they respond to bullying on Instagram, but they’re only two steps on a longer path.”
Mosseri is right. But Instagram could, and should, do more. Though Instagram claims that early tests of the tool have shown it “encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect,” the problem is that the user can choose to ignore the message and post the abusive content anyway.
Instagram has claimed that this tool will allow users to manage harassment more easily, without having to block other users, which could in turn lead to even more bullying. However, Instagram already has the technology for removing harmful content, as can be seen with its policy on nudity. It even went so far as to remove a painting of a naked woman that I once shared! They clearly don’t know art when they see it. Could the company not use this same technology to recognize abusive or harmful language? It could then actively restrict users’ ability to share this type of content.
Another approach the platform could take would be to give users the option of approving comment requests, as is already possible for direct message requests from users you don’t follow. This would give users more control over the type of content they see, and ultimately protect them from abuse.
This isn’t the first time Instagram has come under fire for how its content may be harmful to its users. The platform is often accused of being the worst of the social media channels for mental health. While this isn’t a viewpoint I agree with, as I stressed in a recent blog post, Instagram needs to do more than simply attempt to curb abuse on the network by providing a choice or suggesting we should be nicer to one another. Abuse simply should not be an option.