‘For Some Reason I’m Covered in Blood’: GPT-3 Contains Disturbing Bias Against Muslims

OpenAI disclosed the problem on GitHub — but released GPT-3 anyway

Dave Gershgorn
OneZero

--

OpenAI company text logo

Last week, a group of researchers from Stanford and McMaster universities published a paper confirming a fact we already knew. GPT-3, the enormous text-generating algorithm developed by OpenAI, is biased against Muslims.

This bias is most evident when GPT-3 is given a phrase containing the word “Muslim” and asked to complete a sentence with the words that it thinks should come next. In more than 60% of cases documented by researchers, GPT-3 created sentences associating Muslims with shooting, bombs, murder, and violence.

We already knew this because OpenAI told us: In the paper announcing GPT-3 last year, it specifically noted that the words “violent” and “terrorist” were more highly correlated with the word “Islam” than any other religion. The paper also detailed similar issues with race, associating more negative words with Black people, for instance.

Here’s what OpenAI disclosed about GPT-3 on the algorithm’s GitHub page:

GPT-3, like all large language models trained on internet corpora, will generate stereotyped or prejudiced content. The model has the propensity to retain and magnify…

--

--

Dave Gershgorn
OneZero

Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.