Earlier this month, Instagram debuted a new content moderation policy focused on “reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines” — or, to put it more bluntly, making it harder for users to promote and find content deemed by the platform to be violent, graphic, or sexually suggestive.
While the new policy may help reduce the spread of disturbing content within the app, it also has some users worried that they’ll find their posts, and potentially even their entire accounts, hidden away due to an overly broad application of the nebulous label “sexually suggestive.” While the policy won’t scrub content from the platform altogether, it will prevent certain imagery from surfacing in the app’s popular “Explore” tab, once called the “realest place on the web.”
Will Ruben, Instagram’s product lead overseeing Discovery, told TechCrunch that these new guidelines will be implemented by machine learning algorithms. Human content moderators will reportedly train the system to identify posts that push the boundaries of good taste.
Experts reached by OneZero voiced a number of concerns about the update, which seems destined to disproportionately affect women, especially body-positive influencers, women of color, and trans activists.
Suresh Venkatasubramanian, a professor at the University of Utah’s School of Computing, says there are multiple ways this sort of system could go wrong. Establishing consistent labels that an algorithm can learn and enforce is already a tricky task, and much more so when the label is something as vague and subject to debate as “sexually suggestive.” Different moderators may come to the project with wildly different ideas of what, exactly, “sexually suggestive” means, creating far more confusion than clarity.
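To see why inconsistent labels are a problem before any algorithm even enters the picture, machine learning practitioners often measure inter-annotator agreement. One standard statistic is Cohen’s kappa, which scores how often two raters agree beyond what chance alone would produce. The sketch below is purely illustrative — the moderator labels are invented, and nothing here reflects Instagram’s actual pipeline:

```python
# Hypothetical illustration: measuring how consistently two human
# moderators apply a vague label like "sexually suggestive".
# Cohen's kappa is 1.0 for perfect agreement and 0.0 when the
# raters agree no more often than random labeling would.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each rater labeled at random,
    # keeping their own observed label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[k] / n) * (freq_b[k] / n) for k in freq_a)
    return (observed - expected) / (1 - expected)

# Ten posts rated by two moderators (1 = "suggestive", 0 = not).
moderator_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
moderator_2 = [1, 0, 0, 0, 1, 1, 1, 0, 0, 0]
print(round(cohens_kappa(moderator_1, moderator_2), 2))  # → 0.4
```

A kappa of 0.4 means the moderators agree only modestly more than chance — and a classifier trained on such labels inherits that inconsistency, which is exactly the failure mode Venkatasubramanian describes.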
“Is [the question] merely, ‘Is it suggestive or not?’ Is there more nuance than this?”…