An Algorithm May Decide Who Gets Suicide Prevention

A recent study on Google’s search results raises questions about the faith we put in algorithms — and the tech companies that use them

Jake Pitre
OneZero


To anyone unfamiliar with the suffocating feeling of suicidal ideation, the significance of a phone call with a stranger might seem dubious. In that moment, the lowest of lows, speaking with someone who understands what you’re going through can — quite literally — mean the difference between life and death.

While it remains under debate just how effective suicide hotlines are at preventing suicide, there is ample anecdotal evidence that they have saved lives. On the strength of that evidence, in 2011, Google began showing suicide helpline numbers at the top of results for searches like “effective suicide methods.”

Like so much of our online lives, the decision to show this advice alongside results is determined by an algorithm. Facebook has done something similar since 2017, when it began using pattern-recognition algorithms to monitor posts for content about suicide or self-harm and to send those users relevant resources.
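To make that concrete, here is a minimal sketch of what pattern-based flagging might look like. It is purely illustrative: the keyword list, the flag_post function, and the helpline text are all invented for this example, and Facebook’s real classifiers are proprietary machine-learning models, not a fixed list of phrases. Even this toy version hints at the problem: an English-only pattern list does nothing for the same cry for help written in another language.

```python
import re
from typing import Optional

# A deliberately naive, English-only pattern list. Invented for illustration;
# not Facebook's actual (proprietary) classifier.
CRISIS_PATTERNS = [
    re.compile(r"\bkill myself\b", re.IGNORECASE),
    re.compile(r"\bend my life\b", re.IGNORECASE),
    re.compile(r"\bsuicid\w*\b", re.IGNORECASE),
]

# Placeholder resource text; a real system would localize this per region.
HELPLINE_MESSAGE = "Help is available. You can reach a suicide prevention helpline right now."

def flag_post(text: str) -> Optional[str]:
    """Return a helpline message if the post matches a crisis pattern, else None."""
    for pattern in CRISIS_PATTERNS:
        if pattern.search(text):
            return HELPLINE_MESSAGE
    return None

# An English-language post is flagged...
print(flag_post("I don't know how much longer I can go on. I want to end my life."))
# ...while the same sentiment in French slips through unflagged, one way a
# pattern-based system can behave inconsistently from one country to the next.
print(flag_post("Je veux mettre fin à ma vie."))
```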

Yet while these steps are helpful, the algorithms behind them do not perform consistently across the world…
