Okay, let's talk about diversity, inclusion, and stereotypes in AI image generation!
In a previous post, I shared a use case where I used AI (specifically DALL·E and ChatGPT) to support my pitch presentation. Behind the scenes, each concept took several iterations, and I began noticing a pattern.
TL;DR: The initial outputs predominantly featured white people in privileged settings and authoritative roles, while Black and POC characters were portrayed as impoverished or cast as the ones being taught. I also almost always had to explicitly ask for female figures to be included. Since my work serves diverse, underprivileged communities, it's essential that I represent them, particularly when presenting to these very audiences. Educational opportunities must not discriminate, and it's crucial that we acknowledge these biases in AI outputs so we can consciously correct them and provide feedback that better reflects the diversity of the world around us.
Below are the iterations and reflections on these concepts:
Concept 1: A delivery person arriving at a learner's door with an ice cream tub
- 1st Iteration: Simplified design with fewer details, more diverse characters, and inclusion of a woman
- 2nd Iteration: More diverse characters, use of a bicycle, and a poorer setting
Reflection: Reinforcement of stereotypes
Concept 2: A trade school with older students
- 1st Iteration: Included more women
- 2nd Iteration: Added more people of color (though this wasn't fully achieved)
Reflection: Gender inclusivity challenges
Concept 3: An ice cream coach showing an adult how to eat ice cream at home
- 1st Iteration: Characters made more diverse and placed in a poorer setting (leading to unintended consequences)
- 2nd Iteration: Both characters were the same age and people of color (still not fully achieved)
Reflection: Stereotypes persisted
These reflections highlight the ongoing need to challenge and refine AI-generated outputs to ensure diversity and inclusivity in representation.
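For readers who iterate on image prompts programmatically, the pattern above can be sketched as a simple prompt-refinement step: start from the base concept, then layer explicit inclusivity instructions into the prompt instead of relying on the model's defaults. This is a hypothetical sketch, not the exact prompts or workflow from the original posts (which used the ChatGPT/DALL·E interface directly); the wording of the refinements is illustrative.

```python
# Hypothetical sketch of the iteration pattern described above: take a base
# concept and append explicit diversity constraints, since default outputs
# skewed toward white, male figures in privileged settings.

BASE_CONCEPT = "A delivery person arriving at a learner's door with an ice cream tub"

# Illustrative refinements, one per observed bias.
REFINEMENTS = [
    "Include both women and men.",
    "Show a diverse mix of Black, POC, and white characters in equal, non-stereotyped roles.",
    "Depict an ordinary, neutral setting; avoid visual cues of poverty or privilege.",
]


def build_prompt(concept: str, refinements: list[str]) -> str:
    """Combine the base concept with explicit inclusivity instructions."""
    return " ".join([concept + "."] + refinements)


prompt = build_prompt(BASE_CONCEPT, REFINEMENTS)
print(prompt)

# The refined prompt could then be sent to an image model, e.g. via the
# OpenAI Images API (assumed setup; requires an API key and network access):
# from openai import OpenAI
# client = OpenAI()
# image = client.images.generate(model="dall-e-3", prompt=prompt, size="1024x1024")
```

The point is not the specific wording but the practice: each iteration makes an implicit default explicit, so the bias is corrected deliberately rather than left to chance.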