
How Bias Ruins A.I.

In the wake of the Banjo CEO revelations, bias in A.I. comes under new scrutiny

Dave Gershgorn · Apr 28, 2020 · 5 min read


Bias in artificial intelligence is everywhere. At one point, when you Googled “doctor,” the algorithm that powers Google’s namesake product returned 50 images of white men. But when governments use biased algorithms to dispatch police or surveil the public, the consequences can become a matter of life and death.

On Tuesday, OneZero reported that Banjo CEO Damien Patton had associated with members of the Ku Klux Klan in his youth and was involved in a shooting at a Tennessee synagogue.

Banjo’s product, which is marketed to law enforcement agencies, analyzes audio, video, and social media in real time, using artificial intelligence algorithms to determine what is worthy of police attention and what is not.

To what extent do the decisions of these types of algorithms reflect the conscious or unconscious biases of their creators?

The most common type of artificial intelligence used by tech companies today is called deep learning, which…
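Deep learning systems infer patterns from whatever training data they are given, so a skewed corpus produces skewed output. As a minimal sketch of that dynamic, consider a toy, hypothetical ranker that simply surfaces the most frequent label in its training set (this is illustrative data, not Google’s or Banjo’s actual system):

```python
from collections import Counter

# Hypothetical toy training set: labels mirror a skewed image corpus,
# analogous to the "doctor" search results described above.
training_labels = ["white_man"] * 45 + ["woman"] * 3 + ["man_of_color"] * 2

def top_prediction(labels):
    """A naive frequency-based model: it simply reproduces
    whatever imbalance exists in its training data."""
    return Counter(labels).most_common(1)[0][0]

print(top_prediction(training_labels))  # the overrepresented group dominates
```

No one wrote “prefer white men” into this code; the bias lives entirely in the data, which is exactly why it is so easy for creators to pass their blind spots along to a model unintentionally.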



Dave Gershgorn

Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.