“Non-white people are not outliers.”

Dave Gershgorn · Published in OneZero · 1 min read · Dec 10, 2020


Though we often think of white supremacy in its most violent and stark forms, writes A.I. researcher Deborah Raji in MIT Technology Review, it more often rears its head in nuanced and nefarious ways: cornerstone datasets that train algorithms to categorize Black people as drug addicts and criminals, or software that reinforces over-policing in communities of color.

In a beautiful and gutting piece, Raji, who worked alongside Joy Buolamwini and Timnit Gebru (the A.I. researcher currently in the headlines over her firing from Google), details how the ideals of white supremacy are built into software from the ground up.

The piece is both a crash course in modern algorithmic injustice and a deeply human look at working in a field unprepared for, and uninterested in, the work of fixing decades of racism encoded into software.

Raji writes:

The fact is that A.I. doesn’t work until it works for all of us. If we hope to ever address racial injustice, then we need to stop presenting our distorted data as “ground truth.” There’s no rational and just world in which hiring tools systematically exclude women from technical roles, or where self-driving cars are more likely to hit pedestrians with darker skin. The truth of any reality I recognize is not in these models, or in the datasets that inform them.

Read the rest of her story at MIT Technology Review.


Dave Gershgorn is a Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.