The Bulimia Project, an organization dedicated to the treatment of eating disorders, put artificial intelligence (AI) image generators to the test to check how stereotypical their creations are. The results revealed that these systems replicate the biases also seen on social networks.
3 facts about The Bulimia Project
- As stated on the group’s official website, its purpose is to “provide educational information, research and resources on bulimia and any physical or mental health complications related to the eating disorder.”
- Without centralized sources of information, they say, learning about eating disorders or how to find help can be difficult and confusing.
- They indicate that the information they disclose comes directly from scientific studies and other credible institutions, and that it is thoroughly examined by doctors and experts before being published. “Bias has no place here,” they say. “We believe that every person, body, and experience is unique, and we write our content for anyone who may be struggling with these very real issues, keeping in mind the needs of all kinds of people.”
How do artificial intelligences imagine human beauty?
For their test of AI image generators, the group used three of the best-known systems in the field: DALL-E, developed by OpenAI, the same company behind ChatGPT; Midjourney; and Stable Diffusion, developed by a company that recently launched its own chatbot.
Read also: Sam Altman, director of the firm that develops ChatGPT: “My worst fear is causing great damage to the world”
Basically, these are programs that create images from instructions given as text prompts built around key concepts. For example, you can ask for a very strange image of the actor Keanu Reeves eating cement, and this is what it will produce.
Reeves gulping down fresh concrete in an AI-created photo. (Photo: Reddit/ Annual Celebrity Concrete Eating Contest)
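To give an idea of what “instructions indicated through key concepts” looks like in practice, here is a minimal, hypothetical sketch using the OpenAI Python SDK to request an image from DALL-E. The prompt text and parameters are illustrative assumptions, not the exact request used for the image above, and the available models and options may differ from what is shown.

```python
# Minimal sketch: asking a text-to-image model for a picture from a text prompt.
# Assumes the OpenAI Python SDK is installed and an API key is configured.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

result = client.images.generate(
    model="dall-e-3",                                   # hypothetical model choice
    prompt="Keanu Reeves eating wet cement, photo style",  # the "key concepts"
    n=1,
    size="1024x1024",
)

# The service returns a URL (or encoded data) for the generated image.
print(result.data[0].url)
```

Midjourney and Stable Diffusion work on the same principle: a short text description goes in, and the system synthesizes an image that matches it as closely as its training data allows.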
In this case, The Bulimia Project asked DALL-E, Stable Diffusion and Midjourney to create images of the “perfect” man and woman, with the intention of verifying how stereotyped the results are. According to that software, the ideal woman is blonde and olive-skinned, while the best-looking men are tall, with “chiseled cheekbones,” defined muscles, and dark hair.
(Photo: The Bulimia Project)
Behind the scenes, one fact stands out: the generators were trained on information taken from social networks. This means that the AIs are not stereotyped per se, but that they replicate biases typical of human beings, in this case expressed through social media posts.
(Photo: The Bulimia Project)
“In the age of Instagram and Snapchat filters, no one can reasonably reach the physical standards set by social media. So why try to live up to unrealistic ideals? It is mentally and physically healthier to keep body image expectations squarely in the realm of reality,” the organization that conducted the test concluded.
The biases of artificial intelligence are the same ones seen in society
It should be noted that AI systems do not act 100% autonomously: the basis of their operation is data. For example, the images released by The Bulimia Project were generated from photographs taken from social networks.
Since most of that information is produced by humans, it is fair to say that the biases of artificial intelligence replicate the biases that are, in turn, evident in societies.
Read also: The “godfather of artificial intelligence” resigns from Google over the launch of Bard: “It’s terrifying”
As we pointed out in Artificial intelligence and machismo: how to deconstruct technology, these systems learn from what they absorb. An eloquent case was that of an AI that generated images of men with drills and women with hair dryers. Why did it make those associations?
Researchers doctored images to see how skewed the gaze of the best-known photo recognition systems is. (Photo: BikoLabs)
As we noted on that occasion, if the vast majority of training images show that pattern, the algorithm will infer that it is the standard, and it will therefore associate gender with device. “Think about it: if you were a blank mind being taught, you would end up drawing the same conclusions, right?” data scientist Ana Laguna Pradas told us. “So, since our society presents certain obvious biases, and we will not be able to change that in two days, what our algorithms learn by looking around them will necessarily reflect that reality as well,” the specialist concluded.
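To make that mechanism concrete, here is a minimal, purely hypothetical sketch in Python (not code from any of the systems mentioned in this article). A trivial “model” that only counts co-occurrences in an imbalanced training set ends up reproducing exactly the gender-device association described above: the data, not the algorithm, carries the bias.

```python
from collections import Counter

# Hypothetical toy training set mirroring the drill / hair-dryer example:
# each record pairs the person label with the object seen in the image.
training_pairs = (
    [("man", "drill")] * 90 + [("woman", "drill")] * 10 +
    [("woman", "hair dryer")] * 90 + [("man", "hair dryer")] * 10
)

# Count how often each (person, object) pair appears in the data.
counts = Counter(training_pairs)

def predict_person(obj: str) -> str:
    """Return the person label most often seen with `obj` in the training data."""
    candidates = {person: counts[(person, obj)] for person in ("man", "woman")}
    return max(candidates, key=candidates.get)

print(predict_person("drill"))       # -> "man"
print(predict_person("hair dryer"))  # -> "woman"
```

A real image generator is vastly more complex, but the principle is the same: if 90% of its training examples show one pairing, that pairing becomes its default answer, which is why rebalancing or curating the training data matters as much as the model itself.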