GENERAL INTELLIGENCE

Men Wear Suits, Women Wear Bikinis: Image-Generating Algorithms Learn Biases ‘Automatically’

The algorithms also picked up on racial biases linking Black people to weapons

Photo illustration; Image source: 3DSculptor/Getty Images

OneZero’s General Intelligence is a roundup of the most important artificial intelligence and facial recognition news of the week.

Bias in artificial intelligence is notoriously problematic. Facial recognition algorithms have been found to misidentify people with darker skin more frequently; other algorithms have flagged poor people as high-risk and less eligible for public services, and have recommended care resources to healthier white patients before offering the same resources to sicker Black patients.

Now, new research from Carnegie Mellon and George Washington University shows that popular computer vision algorithms “automatically” learn similar racial, gender, and intersectional biases when trained on a widely used image dataset. The algorithms studied, OpenAI’s iGPT and Google’s SimCLRv2, are general-purpose and can be adapted to nearly any use, from image generation to object recognition. The dataset, ImageNet, is arguably the most popular dataset in computer vision and kicked off the A.I. boom we’re currently experiencing.

To see how these algorithms associate male and female faces with body types and clothing, the researchers gave an algorithm the image of a person’s head and asked it to generate a low-resolution image of the rest of the body. The process works a lot like the auto-complete function on your phone, just with images.
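For readers who want the intuition in code, here is a minimal sketch of “auto-complete for images,” with a toy stand-in model rather than iGPT itself (the real system uses a large transformer trained on millions of images, and its preprocessing and interface differ):

```python
# Conceptual sketch only: a model sees the top of a low-resolution image (the
# head) and predicts the remaining pixels one at a time, the way a phone
# keyboard predicts the next word. ToyPixelModel is a trivial stand-in that
# just repeats the previous pixel; iGPT learns its predictions from data.
import numpy as np

class ToyPixelModel:
    def predict_next(self, pixels_so_far):
        return pixels_so_far[-1]

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(32, 32))       # a low-resolution image
prompt = image[:16].flatten().tolist()            # the given top half (the head)

model = ToyPixelModel()
completion = list(prompt)
for _ in range(16 * 32):                          # fill in the missing bottom half
    completion.append(model.predict_next(completion))

generated = np.array(completion).reshape(32, 32)  # head plus model-imagined body
```

Whatever the real model has learned to associate with the visible pixels is what it fills in below them.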

The original paper featured an example with New York Rep. Alexandria Ocasio-Cortez. The algorithm applied what it had learned about women from the ImageNet dataset it was trained on, pairing her face with the visual traits it most strongly associated with women. The result was images of the congresswoman in bikinis and revealing shirts.

The result fell in line with other tests, according to Ryan Steed, a Carnegie Mellon PhD student and co-author of the paper. About 40% of the time, male faces were completed with suits or business attire. More than 50% of the time, female faces were completed with sexualized images of bikinis or low-cut tops. In another test, the algorithm was given the face of a Black man and generated a body holding a gun.

The results were alarming enough to other researchers that the authors changed the AOC example in the paper after publication. Steed says that while public figures are a mainstay of computer science examples, he changed this one after publishing to avoid causing harm to a real person.

Racial biases were also learned by the image-generation algorithm. In a test where it had to determine whether a person was carrying a weapon or a tool, the algorithm was more likely to suggest that Black individuals were holding weapons.

“If we find that there’s a systematic association between the representations for weapons and Black people, compared to representations for white people and tools, then we should really be cautious if we’re designing some sort of surveillance application especially, whether those predictions are going to show up downstream,” Steed told OneZero.
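One way to picture a “systematic association” between representations is to compare how close a model’s embeddings for two groups of images sit to its embeddings for two concepts. The sketch below shows only that intuition, on random stand-in vectors rather than a real encoder’s output, and is not the paper’s actual test or data:

```python
# Illustration only: measuring an association gap between image embeddings.
# Real studies use a trained encoder's embeddings and a formal statistical
# test; here the vectors are random stand-ins, so the gap should be near zero.
import numpy as np

rng = np.random.default_rng(0)

def embed(n):
    """Stand-in for an encoder's image embeddings (n vectors of dimension 128)."""
    return rng.normal(size=(n, 128))

group_a, group_b = embed(20), embed(20)   # e.g. faces from two demographic groups
weapons, tools = embed(20), embed(20)     # e.g. images of weapons vs. tools

def mean_cosine(A, B):
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float((A @ B.T).mean())

# A positive gap would mean group A's embeddings lean toward "weapons" relative
# to "tools" more than group B's do: the kind of asymmetry Steed describes.
gap = (mean_cosine(group_a, weapons) - mean_cosine(group_a, tools)) \
    - (mean_cosine(group_b, weapons) - mean_cosine(group_b, tools))
print(gap)
```

With embeddings from a real model trained on ImageNet, a consistently lopsided gap is exactly the kind of signal the researchers flag as cause for caution.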

Understanding the way these algorithms learn “upstream” is important because general-purpose algorithms like GPT are then used to solve a whole host of different problems, and can bring their biases into each new use.

“If there’s bias in the upstream representations that these unsupervised models are producing, then it’s very likely that there could be bias in the downstream representation in the supervised models,” says Steed.

This research focused on unsupervised artificial intelligence algorithms, meaning those that are shown large amounts of data and tasked with figuring out on their own how the pieces of that data relate to one another. This is in contrast to supervised algorithms, which are given data already sorted into categories and have the relatively simpler task of discerning patterns among data already deemed to be similar in some way. A supervised algorithm would, for instance, be trained on hundreds of labeled images of cats and asked to find a cat in future images.
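In miniature, the distinction looks like this (a hedged sketch using scikit-learn on made-up feature vectors, not the models or data from the paper):

```python
# Supervised vs. unsupervised learning in miniature, on random stand-in features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))   # stand-ins for image feature vectors
labels = rng.integers(0, 2, size=200)   # human-provided categories, e.g. cat / not cat

# Supervised: the categories are given up front; the model learns to separate them.
classifier = LogisticRegression(max_iter=1000).fit(features, labels)

# Unsupervised: no labels at all; the model has to find structure on its own,
# for example by grouping images that look similar to one another.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(features)
```

The unsupervised model has no human-drawn categories to lean on, which is also why whatever associations it finds come straight from the data.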

A popular technique, like the one employed by OpenAI’s iGPT (an earlier version of the research lab’s image-generation A.I.), is to combine these two approaches. First, the algorithm is trained in an unsupervised fashion, learning the relationships between pictures of trees and cats and rocking chairs; then it is fine-tuned on a specific task using supervised learning. That specific, supervised task could be anything from identifying objects to separating a person from the background of a picture to generating a new image entirely.
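A minimal sketch of that two-stage recipe, written in PyTorch with stand-in data and a much simpler pretraining objective than iGPT or SimCLRv2 actually use, might look like this:

```python
# Stage 1: unsupervised pretraining on unlabeled images (here, by reconstructing
# each image from its own encoding). Stage 2: supervised fine-tuning of a small
# task head on labeled examples. Everything below is a toy stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())

# --- Stage 1: learn representations without labels ---
decoder = nn.Linear(256, 3 * 32 * 32)
unlabeled = torch.rand(64, 3, 32, 32)             # stand-in for unlabeled images
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
for _ in range(10):
    reconstruction = decoder(encoder(unlabeled))
    loss = F.mse_loss(reconstruction, unlabeled.flatten(1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: fine-tune on a specific, supervised task ---
head = nn.Linear(256, 2)                          # e.g. a toy two-class object classifier
labeled = torch.rand(16, 3, 32, 32)
targets = torch.randint(0, 2, (16,))              # human-provided labels
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()))
for _ in range(10):
    logits = head(encoder(labeled))
    loss = F.cross_entropy(logits, targets)
    opt2.zero_grad()
    loss.backward()
    opt2.step()
```

Whatever associations the encoder absorbs in the first stage come along for the ride in the second, which is the “upstream” to “downstream” path Steed describes.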

While the data used in this research is not representative of every image on the internet, or of the proprietary datasets used by tech companies large and small around the world, the findings show that algorithms can learn biases from a dataset even without humans choosing which photos should represent specific ideas.

But Steed notes that the images in ImageNet, and the stereotypes they reflect, are still out there on the internet.

“It’s not like they’ve gone away,” he said.

Senior Writer at OneZero covering surveillance, facial recognition, DIY tech, and artificial intelligence. Previously: Qz, PopSci, and NYTimes.
