This op-ed was written by Mona Sloane, a sociologist and senior research scientist at the NYU Center for Responsible A.I. and a fellow at the NYU Institute for Public Knowledge. Her work focuses on design and inequality in the context of algorithms and artificial intelligence.
We have a new A.I. race on our hands: the race to define and steer what it means to audit algorithms. Governing bodies know that they must come up with solutions to the disproportionate harm algorithms can inflict.
Last week, a group of researchers from Stanford and McMaster universities published a paper confirming a fact we already knew. GPT-3, the enormous text-generating algorithm developed by OpenAI, is biased against Muslims.
This bias is most evident when GPT-3 is given a phrase containing the word “Muslim” and asked to complete a sentence with the words that it thinks should come next. In more than 60% of cases documented by researchers, GPT-3 created sentences associating Muslims with shooting, bombs, murder, and violence.
We already knew this because OpenAI told us: In the paper announcing GPT-3 last year, it specifically noted…
Night after night, Fien de Meulder sat in front of her Linux computer flagging names of people, places, and organizations in sentences pulled from Reuters newswire articles. De Meulder and her colleague, Erik Tjong Kim Sang, worked in language technology at the University of Antwerp. It was 2003, and a 60-hour workweek was typical in academic circles. She chugged Coke to stay awake.
The goal: develop an open source dataset to help machine learning (ML) models learn to identify and categorize entities in text. At the time, the field of named-entity recognition (NER), a subset of natural language processing, was…
From car insurance quotes to which posts you see on social media, our online lives are guided by invisible, inscrutable algorithms. They help private companies and governments make decisions — or automate them altogether — using massive amounts of data. But despite how crucial they are to everyday life, most people don’t understand how algorithms use their data to make decisions, which means serious problems can go undetected. (Take, for example, research last year that showed anti-Black bias in a widely used algorithm that helps hospitals identify patients in need of extra medical care.)
Bias in artificial intelligence is everywhere. At one point, when you Googled “doctor,” the algorithm that powers Google’s namesake product returned 50 images of white men. But when biased algorithms are used by governments to dispatch police or surveil the public, it can become a matter of life and death.
On Tuesday, OneZero reported that Banjo CEO Damien Patton had associated with members of the Ku Klux Klan in his youth, and was involved in the shooting of a Tennessee synagogue.
Banjo’s product, which is marketed to law enforcement agencies, analyzes audio, video, and social media in real time, using…
Co-authored with Helen Nissenbaum
The rise of Apple, Amazon, Alphabet, Microsoft, and Facebook as the world’s most valuable companies has been accompanied by two linked narratives about technology. One is about artificial intelligence — the golden promise and hard sell of these companies. A.I. is presented as a potent, pervasive, unstoppable force to solve our biggest problems, even though it’s essentially just about finding patterns in vast quantities of data. The second story is that A.I. has a problem: bias.
It’s human nature to attempt to conceptualize the future. All civilizations, from the ancient to the modern, have had some tradition of divination or fortune telling; these days, we have data and statistical models to give us some insight into what’s to come. But left to our own devices, most of us are really bad at predicting what will happen—and that’s a consequence of the wiring of the human brain.
Collectively, the thought patterns that affect the way we think, make decisions, and interact with other people are known as cognitive biases. Many of these biases also distort our perceptions…