Gender Equity & Generative AI

AI – particularly generative AI – is progressing rapidly and being integrated into many areas of our work lives, personal lives, and products. These tools are often built on large, powerful foundation models that are known to carry pervasive biases along lines of gender, race, ethnicity, nationality, language, and more. Importantly, these tools can operate as ‘stereotype machines’: they are, after all, pattern-recognition and prediction systems. The models underlying generative AI tools learn from data scraped from the internet, which reflects the inequality and discrimination that exist in the world. Current efforts by companies to mitigate bias in these large foundation models tend to be band-aid fixes rather than solutions that address underlying issues, and they rely on technical teams to solve the problem rather than integrating broader social science and gender expertise.

In this project, we seek to better unpack gender biases in open text-to-image models and to inform a gender equity benchmark. We will then explore several innovations to mitigate these biases.
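As a concrete illustration of what a benchmark metric might look like (a minimal sketch, not the project's actual methodology), one common way to quantify gender skew in text-to-image output is to generate many images for a neutral prompt (e.g., "a photo of a CEO"), assign each image a perceived-gender label via annotators or a classifier, and measure how far the label distribution departs from an even split. The prompt and labels below are hypothetical examples:

```python
from collections import Counter

def parity_gap(labels):
    """Gap between the most- and least-frequent perceived-gender labels
    among images generated for a single prompt.

    0.0 means an even split across observed labels; values near 1.0 mean
    nearly all images received the same label (maximal skew). If only one
    label ever appears, the skew is treated as maximal (1.0).
    """
    counts = Counter(labels)
    n = len(labels)
    shares = [c / n for c in counts.values()]
    if len(shares) < 2:
        return 1.0
    return max(shares) - min(shares)

# Hypothetical example: 9 of 10 images for "a photo of a CEO" labeled "man"
print(parity_gap(["man"] * 9 + ["woman"] * 1))
```

A full benchmark would aggregate this kind of per-prompt score across many occupations and descriptors, and could substitute richer measures (e.g., divergence from real-world demographic baselines) for the simple gap used here.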

Image generated from ChatGPT with the following prompt: Give me 5 rows of 5 thumbnail pictures. Each picture should be a picture of a man or a woman. Integrate different ethnicities



Team: Genevieve Smith, Vongani Maluleke, Leander Girrbach, Stephan Alaniz, Zeynep Akata, Trevor Darrell