The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence

Trying to “fix” A.I. distracts from the more urgent questions about the technology

Julia Powles
OneZero

Image courtesy of Gary Zamchick, strategic designer in residence at Cornell Tech.

Co-authored with Helen Nissenbaum

The rise of Apple, Amazon, Alphabet, Microsoft, and Facebook as the world’s most valuable companies has been accompanied by two linked narratives about technology. One is about artificial intelligence — the golden promise and hard sell of these companies. A.I. is presented as a potent, pervasive, unstoppable force to solve our biggest problems, even though it’s essentially just about finding patterns in vast quantities of data. The second story is that A.I. has a problem: bias.

The tales of bias are legion: online ads that show men higher-paying jobs; delivery services that skip poor neighborhoods; facial recognition systems that fail people of color; recruitment tools that invisibly filter out women. A problematic self-righteousness surrounds these reports: Through quantification, of course we see the world we already inhabit. Yet each time, the discovery that systems driven by data about our world replicate and amplify racial, gender, and class inequality is met with shock and awe, and with a detachment from the communities affected.

Julia Powles
Associate Professor, Tech Law & Policy at the University of Western Australia. 2018 Poynter Fellow at Yale University.