Instagram Now Helps You Avoid Making a Fool of Yourself (Unless You Want To)

The best digital wellness tools put us in control

Photo: Jaap Arriens/NurPhoto/Getty

We are in the throes of a technology backlash. Congressional Democrats have introduced legislation that would require software companies to ensure that their algorithms do not discriminate. The FTC wants to break up the major technology companies. Bold regulatory initiatives like these may ultimately be required to repair the sorry state of the online experience.

But once in a while, technology companies themselves come up with thoughtful solutions. So it is with a new feature that Instagram recently introduced: If you put up a comment similar to one that other users have reported as being problematic, a message appears that asks you to take a moment and rethink whether you really want to post it.

Instagram’s technological fix for offensive comments comes from a bit of A.I. software that monitors the jungle of 95 million photos and messages that users post each day. The software compares each of those to an existing pool of posts that users have flagged as potentially offensive. But rather than designing the algorithm to make the decision to block your post — essentially censorship — Instagram asks you, the user, to think about whether it is a good idea to share this particular sentiment with your followers.
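The mechanics can be pictured in a few lines of code. The sketch below is a loose illustration of the pattern described above, assuming a toy word-overlap similarity measure, a hand-picked threshold, and a couple of made-up flagged comments; it is not Instagram's actual classifier, which is far more sophisticated. The point is simply that the system checks a new comment against reported examples and, if it finds a resemblance, asks the user rather than acting for them.

```python
# A toy illustration of the "pause before you post" pattern described above.
# The flagged examples, the word-overlap similarity, and the threshold are
# all illustrative assumptions; this is not Instagram's actual system.
from collections import Counter
from math import sqrt

FLAGGED_EXAMPLES = [          # stand-in for comments other users have reported
    "you are so stupid",
    "nobody likes you just leave",
]
SIMILARITY_THRESHOLD = 0.6    # assumed cutoff for "resembles a flagged comment"


def _bag_of_words(text: str) -> Counter:
    """Very crude text representation: lowercase word counts."""
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(count * b[word] for word, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def resembles_flagged(comment: str) -> bool:
    """True if the comment looks like any previously reported example."""
    vec = _bag_of_words(comment)
    return any(_cosine(vec, _bag_of_words(f)) >= SIMILARITY_THRESHOLD
               for f in FLAGGED_EXAMPLES)


def submit_comment(comment: str, confirm) -> bool:
    """Post the comment; if it resembles flagged content, ask the user first."""
    if resembles_flagged(comment) and not confirm("Are you sure you want to post this?"):
        return False          # the user chose to hold back; nothing was censored
    print(f"Posted: {comment}")
    return True


if __name__ == "__main__":
    # The confirmation callback stands in for the prompt Instagram shows.
    submit_comment("you are so stupid lol",
                   confirm=lambda message: input(f"{message} [y/N] ").strip().lower() == "y")
```

In this framing, the design choice lives in submit_comment: the classifier supplies a judgment, but the final decision stays with the person typing.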

In some ways, this is like a classic nudge, a slight reordering of choices that helps people make better decisions. By design, nudges leave the freedom of choice intact. But the Instagram prompt goes beyond a nudge, and this is where it becomes more than just a technological fix. Not only does it let users decide whether to put up a potentially insulting, shaming, or otherwise problematic post, but it also encourages them to reflect on the issue. Such reflection lies at the heart of what philosophers call autonomy competencies — the ability to reflect on past choices, recognize current options, and consider what might be best in the future.

As we increasingly rely on A.I.-driven algorithms to help us navigate our world, there is a nagging worry that some of these autonomy competencies are under threat. Consider the recommendations we receive when purchasing a Kindle book from Amazon or watching a YouTube video. They are certainly convenient, but they are also a form of what Evan Selinger and Brett Frischmann call cheap bliss in their book, Re-Engineering Humanity. The idea is that the convenience of getting assistance from these algorithms might undermine what is most important about being a human. The Instagram prompt keeps humans in the loop while fostering the sorts of behaviors that are required for humans to flourish in the information age.

Instagram is not alone in designing a smart feature that respects human agency. As we have outlined in a forthcoming paper, technology companies have been releasing a spate of applications aimed at promoting digital wellness. Gmail’s Nudge feature moves emails that you haven’t replied to for some time to the top of the inbox. Apple’s Downtime feature allows you to indicate your goals for iPhone use and schedule time when selected apps and notifications will not interrupt you. Perhaps the leader of the pack is the Moment app, which helps you manage your phone use by reminding you when it has increased and encouraging you when it has decreased. (Apple’s Screen Time is more or less a carbon copy.) These apps all engage the user in the process of making decisions about how their digital lives will unfold, thereby supporting many of the same autonomy competencies as the Instagram prompt.

Only time will tell whether Instagram’s new prompt will be effective in reining in some of the problems plaguing the online world. As most everyone with a social media account already knows, we are in a fairly dark place with respect to the sorts of messages that flood the internet, and even small steps toward improving online discourse are welcome. But we should cheer — loudly — when technology companies come up with solutions that at once improve both the efficiency of the product and the quality of our lives. Indeed, we should demand that technology developers redouble their efforts to help us improve our cognitive abilities, strengthen the social compact, and become better humans. At the end of the day, isn’t that what we mean by digital wellness — the ability to integrate our technology use within the broader scheme of a life well-lived?

Peter Reiner is Professor of Neuroethics, University of British Columbia. Laura Specker Sullivan is Assistant Professor of Philosophy, College of Charleston.
