Do Artificial Intelligences Generate Images Representative of Global Diversity?

Artificial intelligence tools capable of creating images from text have revolutionized visual creation. However, these technologies often reproduce the social stereotypes present in the data used to train them. A recent analysis examined how five popular models generate portraits of people from ten countries in the Americas. The results show systematic gaps between the representations produced and demographic reality.

Researchers prompted these systems with nationality-based descriptions to create portraits. They then compared the characteristics of the generated faces with official census data. The models emphasize certain traits and ignore others, depending on the country: Latin American countries are frequently associated with Indigenous archetypes, while Canada and the United States are mostly represented by white faces. These distortions appear both in direct analysis of the images and in how other AI systems interpret them.

The study also reveals an overrepresentation of men in most models, except for one that favors women. The cultural and demographic differences between nations are not respected, which poses a major ethical problem. Indeed, these biases can reinforce stereotypes and erase entire social groups, especially when these images are used in media, design, or education.

To assess these gaps, the researchers combined several methods. First, they used a classifier that groups images by visual similarity. Then, they measured the distance between each country's average representation and the representations of ethnic or gender groups. Finally, a language model analyzed the portraits to identify perceived attributes. These complementary approaches confirm that the biases concern not only how the images look but also how they are interpreted.
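The distance-measurement step described above can be sketched in code. The following is an illustrative example, not the authors' actual pipeline: it assumes face embeddings (e.g. from a vision model) and uses cosine distance to compare a country's average generated portrait with hypothetical demographic group centroids. All names and data here are synthetic placeholders.

```python
# Illustrative sketch (not the study's code): how far does a country's
# average generated-face embedding lie from demographic group centroids?
import numpy as np

def cosine_distance(a, b):
    """1 minus the cosine similarity of two vectors (0 = identical direction)."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)

# Hypothetical 128-dimensional embeddings for 50 portraits generated
# for one country; in practice these would come from a face-embedding model.
country_embeddings = rng.normal(size=(50, 128))
group_centroids = {
    "group_a": rng.normal(size=128),
    "group_b": rng.normal(size=128),
}

# The country's "average representation" across its generated portraits.
country_mean = country_embeddings.mean(axis=0)

# A systematically small distance to one group's centroid would suggest
# the model collapses that nationality onto a single archetype.
distances = {name: cosine_distance(country_mean, centroid)
             for name, centroid in group_centroids.items()}
for name, dist in sorted(distances.items(), key=lambda kv: kv[1]):
    print(f"{name}: {dist:.3f}")
```

Comparing these distances against the proportions reported in census data is what lets the study quantify the gap between generated imagery and demographic reality.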

The results highlight the importance of evaluating artificial intelligences by considering geographical and cultural contexts. Without this vigilance, systems risk perpetuating reductive and inaccurate visions of human diversity. Such an approach is essential for developing fairer and more transparent technologies, capable of faithfully reflecting the richness of identities around the world.


Bibliography

Study Source

DOI: https://doi.org/10.1007/s43681-026-01066-7

Title: Bias beyond borders: quantifying gender and ethnic stereotypes across countries in AI image generation

Journal: AI and Ethics

Publisher: Springer Science and Business Media LLC

Authors: Giovanni Franco; Regilene Aparecida Sarzi-Ribeiro; João Paulo Papa; Kelton Augusto Pontara da Costa; Felipe Mahlow
