This op-ed was written by Mona Sloane, a sociologist and senior research scientist at the NYU Center for Responsible A.I. and a fellow at the NYU Institute for Public Knowledge. Her work focuses on design and inequality in the context of algorithms and artificial intelligence.
We have a new A.I. race on our hands: the race to define and steer what it means to audit algorithms. Governing bodies know that they must come up with solutions to the disproportionate harm algorithms can inflict.
This story was co-authored by Dr. Rumman Chowdhury, CEO of Parity, an enterprise algorithmic audit platform company.
“The algorithm did it” has become a popular defense for powerful entities who turn to math to make complex moral choices. It’s an excuse that recalls a time when the public was content to understand computer code as somehow objective. But the past few years have demonstrated conclusively that technology is not neutral: it reflects the values of those who design it, and it is fraught with the same shortcomings and oversights that mark our daily lives.
OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.
Bias in artificial intelligence is notoriously problematic. Algorithms have been found to misidentify people with darker skin more often in facial recognition; to flag poor people as high-risk and less eligible for public services; and to recommend resources to healthier white people before recommending the same resources to sicker Black people.
Now, new research shows that popular computer vision algorithms “automatically” learn similar racial, gender, and intersectional biases when trained on a widely used image dataset, according to a paper from Carnegie…
Last week, a group of researchers from Stanford and McMaster universities published a paper confirming a fact we already knew. GPT-3, the enormous text-generating algorithm developed by OpenAI, is biased against Muslims.
This bias is most evident when GPT-3 is given a phrase containing the word “Muslim” and asked to complete a sentence with the words that it thinks should come next. In more than 60% of cases documented by researchers, GPT-3 created sentences associating Muslims with shooting, bombs, murder, and violence.
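The probing protocol the researchers describe can be sketched in a few lines: feed a model prompts containing the word “Muslim,” collect its completions, and measure how often they mention violence. The `complete` function below is a hypothetical stand-in for a real model call, and the keyword list is illustrative; this is a sketch of the measurement idea, not the study’s actual method.

```python
# Toy sketch of a completion-bias probe. The model is represented by
# any callable `complete(prompt) -> str`; here we do NOT call GPT-3.
VIOLENT_WORDS = {"shooting", "bomb", "bombs", "murder", "violence", "attack"}

def is_violent(completion: str) -> bool:
    """Flag a completion that contains any violence-related keyword."""
    words = (w.strip(".,!?;:") for w in completion.lower().split())
    return any(w in VIOLENT_WORDS for w in words)

def violent_fraction(prompt: str, complete, n: int = 100) -> float:
    """Sample n completions of `prompt` and return the violent share."""
    completions = [complete(prompt) for _ in range(n)]
    return sum(map(is_violent, completions)) / n

# Usage with a canned stand-in "model":
fake_model = lambda prompt: "Two Muslims walked into a mosque."
print(violent_fraction("Two Muslims walked into a", fake_model))  # 0.0
```

A real replication would swap `fake_model` for repeated sampled calls to the language model and, ideally, a better classifier than keyword matching.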
We already knew this because OpenAI told us: In the paper announcing GPT-3 last year, it specifically noted…
In July, I performed an experiment to see how easy it was to run ads on Google that made false claims about Joe Biden.
First, in the Google Ads system, I bought the keyword “should I vote for Biden?” Then I told Google I wanted to run this ad:
Last year at the Conference on Neural Information Processing Systems (NeurIPS), one of the most well-respected computer science conferences in the world, the opening panel discussion on A.I. for social good didn’t go quite as people might have expected.
“I’m not usually in spaces like this, and I’m not entirely convinced that I haven’t surreptitiously walked into a terrorist den,” began Sarah T. Hamid, a community organizer based in Los Angeles and one of the core members of the Carceral Tech Resistance Network. As Hamid explained, “like terrorists, technologists in spaces like this have a concept of what social good…
On my recent birthday, only four of my 711 Facebook “friends” wrote on my wall. It was tempting to assume that people scrolling their news feeds saw it was my birthday and thought “Nah, not interested.”
My rational brain, however, knew it wasn’t my friends who lacked basic decency, but the algorithms that run their online social behavior. Because I’m only an occasional Facebook user, the algorithm doesn’t freely grant me visibility to others; part-timers like me have to work for it. So I played ball and posted a photo of myself enjoying my birthday. My motivations for doing this…
For years, A.I. research lab OpenAI has been chasing the dream of an algorithm that can write like a human.
Its latest iteration on that concept, a language-generation algorithm called GPT-3, has now produced fake writing convincing enough that a blog written by it fooled posters on Hacker News and became popular enough to top the site. (A telling excerpt from the post: “In order to get something done, maybe we need to think less. Seems counter-intuitive, but I believe sometimes our thoughts can get in the way of the creative process.”)
While OpenAI has released…
Join me in an experiment. We’re going to search for various times of day using Google’s Image Search. We’ll use a fresh Google Chrome Incognito window to ensure our results aren’t skewed. This is scientific, after all, and we want the most accurate results possible.
First, let’s try “sunrise.”
Night after night, Fien de Meulder sat in front of her Linux computer flagging names of people, places, and organizations in sentences pulled from Reuters newswire articles. De Meulder and her colleague, Erik Tjong Kim Sang, worked in language technology at the University of Antwerp. It was 2003, and a 60-hour workweek was typical in academic circles. She chugged Coke to stay awake.
The goal: develop an open source dataset to help machine learning (ML) models learn to identify and categorize entities in text. At the time, the field of named-entity recognition (NER), a subset of natural language processing, was…
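The annotation task described above can be made concrete with a small sketch. Datasets of this kind typically use an IOB-style tagging scheme, where each token is marked as beginning an entity (B-), continuing one (I-), or outside any entity (O). The example sentence and tags below are illustrative, not drawn from the actual dataset.

```python
# Minimal sketch of what NER training data encodes: per-token IOB2
# tags that group into labeled entity spans.
def extract_entities(tokens, tags):
    """Group IOB2-tagged tokens into (entity_text, entity_type) spans."""
    entities, current, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new entity begins
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)          # entity continues
        else:                              # "O" closes any open entity
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:                            # flush a trailing entity
        entities.append((" ".join(current), current_type))
    return entities

tokens = ["Erik", "Tjong", "Kim", "Sang", "worked", "at", "the",
          "University", "of", "Antwerp", "."]
tags = ["B-PER", "I-PER", "I-PER", "I-PER", "O", "O", "O",
        "B-ORG", "I-ORG", "I-ORG", "O"]
print(extract_entities(tokens, tags))
# [('Erik Tjong Kim Sang', 'PER'), ('University of Antwerp', 'ORG')]
```

Annotators like de Meulder produced exactly these kinds of token-level labels by hand, sentence after sentence, so that models could later learn the mapping automatically.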