GENERAL INTELLIGENCE

Men Wear Suits, Women Wear Bikinis: Image Generating Algorithms Learn Biases ‘Automatically’

The algorithms also picked up on racial biases linking Black people to weapons

Dave Gershgorn · Published in OneZero · 4 min read · Jan 29, 2021


Photo illustration; Image source: 3DSculptor/Getty Images

OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.

Bias in artificial intelligence is notoriously problematic. Facial recognition algorithms have been found to misidentify people with darker skin more frequently; other algorithms have flagged poor people as high-risk and less eligible for public services, and have recommended resources to healthier white people before recommending the same resources to sicker Black people.

Now, a paper from Carnegie Mellon University and George Washington University shows that popular computer vision algorithms "automatically" learn similar racial, gender, and intersectional biases when trained on a popular image dataset. The algorithms studied, OpenAI's iGPT and Google's SimCLRv2, are general purpose and can be adapted to nearly any task, from image generation to object recognition. The dataset, ImageNet, is arguably the most popular dataset in computer vision, and it kicked off the A.I. boom we're currently experiencing.
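Findings like these typically come from embedding association tests, which measure how much closer one group of image embeddings sits to one set of attribute embeddings than another in vector space. Below is a minimal sketch of a WEAT-style effect-size computation of that kind, assuming you already have embedding vectors in hand; the function names and the random toy data are illustrative, not taken from the paper's code.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Mean similarity of embedding w to attribute set A minus attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def effect_size(X, Y, A, B):
    """WEAT-style effect size: how much more strongly targets X associate
    with attributes A vs. B, compared with targets Y, in units of the
    pooled standard deviation of the association scores."""
    x_assoc = [association(x, A, B) for x in X]
    y_assoc = [association(y, A, B) for y in Y]
    pooled = np.std(x_assoc + y_assoc, ddof=1)
    return (np.mean(x_assoc) - np.mean(y_assoc)) / pooled

# Illustrative only: in a study like this, X and Y would be embeddings of
# images of two demographic groups, and A and B embeddings of attribute images.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 512))   # e.g., embeddings of images of men
Y = rng.normal(size=(8, 512))   # e.g., embeddings of images of women
A = rng.normal(size=(8, 512))   # e.g., embeddings of "career" attribute images
B = rng.normal(size=(8, 512))   # e.g., embeddings of "appearance" attribute images
print(effect_size(X, Y, A, B))  # near 0 for random vectors; a large value signals bias
```

On random vectors the score hovers near zero; the studies' concern is that real pretrained embeddings produce large, consistent effect sizes along race and gender lines.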


Dave Gershgorn is a senior writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Quartz, Popular Science, and The New York Times.