How Bias Ruins A.I.
In the wake of the Banjo CEO revelations, bias in A.I. comes under new scrutiny
Bias in artificial intelligence is everywhere. At one point, a Google image search for “doctor” returned 50 images of white men. But when governments use biased algorithms to dispatch police or surveil the public, the consequences can become a matter of life and death.
On Tuesday, OneZero reported that Banjo CEO Damien Patton had associated with members of the Ku Klux Klan in his youth and was involved in the shooting of a Tennessee synagogue.
Banjo’s product, which is marketed to law enforcement agencies, analyzes audio, video, and social media in real time, using artificial intelligence algorithms to determine what is worthy of police attention and what is not.
To what extent do the decisions of these types of algorithms reflect the conscious or unconscious biases of their creators?
The most common type of artificial intelligence used by tech companies today is deep learning, a technique that breaks data down into smaller, simpler pieces and finds patterns among them. The bigger the dataset, the better an algorithm becomes at recognizing those patterns.
Say you’re developing a program to identify pets. If you’re a dog person, you might train the algorithm on a million pictures of dogs but only, say, 1,000 pictures of cats. The algorithm’s idea of what cats look like will be far less fully formed, increasing the likelihood that it will misidentify them. That, in a nutshell, is A.I. bias: poorly collected or poorly designed datasets that reflect human biases and eventually shape real-world outcomes.
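The dog-and-cat example can be made concrete with a minimal Python sketch. The dataset here is hypothetical (a million “dog” labels, a thousand “cat” labels), and the “model” is deliberately degenerate: it always guesses the majority class. Even so, it scores an accuracy that looks excellent on paper while failing every single cat, which is the core of the imbalance problem described above.

```python
# Toy illustration of dataset imbalance (hypothetical numbers,
# not any real system): a model can look accurate overall while
# completely failing the underrepresented class.
from collections import Counter

# Hypothetical skewed dataset: 1,000,000 dogs, only 1,000 cats.
labels = ["dog"] * 1_000_000 + ["cat"] * 1_000

# A degenerate "model" that always predicts the most common label.
majority = Counter(labels).most_common(1)[0][0]
predictions = [majority for _ in labels]

# Overall accuracy looks great because dogs dominate the data.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# But recall on cats is zero: every cat is misidentified.
cat_recall = sum(
    p == y for p, y in zip(predictions, labels) if y == "cat"
) / 1_000

print(f"accuracy:   {accuracy:.3%}")   # ~99.9%
print(f"cat recall: {cat_recall:.0%}")  # 0%
```

A real deep-learning model is not this crude, but the same pressure applies: optimizing for overall accuracy on skewed data rewards ignoring the minority class.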
“Personal prejudices are all present in the room where choices about which systems get built and which don’t are made, which data is used… and how to determine whether the system is working well or not,” says Meredith Whittaker, cofounder of AI Now, a research institution that…