The Algorithmic Auditing Trap

‘Bias audits’ for discriminatory tools are a promising idea, but current approaches leave much to be desired

Mona Sloane
OneZero



This op-ed was written by Mona Sloane, a sociologist and senior research scientist at the NYU Center for Responsible A.I. and a fellow at the NYU Institute for Public Knowledge. Her work focuses on design and inequality in the context of algorithms and artificial intelligence.

We have a new A.I. race on our hands: the race to define and steer what it means to audit algorithms. Governing bodies know that they must come up with solutions to the disproportionate harm algorithms can inflict.

This technology, with applications ranging from health care to welfare, hiring, and education, has disproportionate impacts on racial minorities, the economically disadvantaged, womxn, and people with disabilities. Here, algorithms often serve as statistical tools that analyze data about an individual to infer the likelihood of a future event, for example, the risk of becoming severely sick and needing medical care. That likelihood is quantified as a “risk score,” a method also used in the lending and insurance industries, and it serves as the basis for a decision in the present, such as how resources are distributed and to whom.
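To make the idea concrete, here is a minimal sketch of how such a risk score might be computed, assuming a simple logistic model with made-up feature names and weights; it is purely illustrative and not drawn from any real system discussed here.

```python
# Illustrative sketch only: a hypothetical "risk score" from a logistic model
# over a few invented features about an individual. Feature names, weights,
# and the bias term are assumptions for demonstration, not a real system.
import math

def risk_score(features, weights, bias):
    """Return a probability-like risk score in [0, 1] via a logistic function."""
    linear = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-linear))

# Hypothetical example: estimating the risk of needing intensive medical care.
weights = {"age": 0.03, "prior_hospitalizations": 0.8, "chronic_conditions": 0.5}
person = {"age": 67, "prior_hospitalizations": 2, "chronic_conditions": 1}
score = risk_score(person, weights, bias=-4.0)
print(f"risk score: {score:.2f}")  # a present-day decision may hinge on this number
```

A score like this is just a number between 0 and 1, yet it can determine who receives care, credit, or a job interview, which is why the stakes of auditing the models that produce it are so high.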
