Addressing racial and gender bias in generative AI
Artificial intelligence has come a long way since its inception, and generative AI is one of the most innovative and fascinating areas of AI today. It’s revolutionizing how we create software, websites, art, music, and almost everything else. However, there is a problem that is often overlooked in the development of generative AI: bias.
Introduction to generative AI and its applications
Generative AI is a subfield of AI that focuses on creating new content rather than classifying existing data. It uses machine learning algorithms to generate artistic images, realistic videos, audio, text, and more. Some examples of buzzworthy generative AI applications include:
- DALL-E. Creates realistic images from text prompts, including objects that do not exist in real life
- Stable Diffusion. Builds detailed images conditioned on a text description
- ChatGPT. Generates text that mimics human writing
And while these apps are exciting and new, they also raise serious problems, including deep concerns about racial and gender bias.
The problem of bias in AI and its impact on society
The problem of bias in AI has been a growing concern in recent years. Bias refers to the unfair or unequal treatment of particular groups of people based on their race, gender, or other characteristics. Bias in AI can have serious consequences, as it can reinforce the existing unfair treatment of marginalized groups and perpetuate harmful stereotypes. It can also limit the diversity of ideas and perspectives in the development of AI.
There are several sources of bias in generative AI, including the data used to train the machine learning algorithms and the algorithms themselves. Discrimination can also be introduced through the design of the system and the assumptions made by the developers. For example, if the data used to train a generative AI system is biased toward a particular race or gender, the system will likely produce biased results.
AI image generators routinely display gender and cultural bias
The use of AI in image generation has brought to light the issue of gender bias in technology.
A recent tool called Stable Diffusion Explorer, created by Hugging Face AI researcher Sasha Luccioni, highlights the biases in AI-generated images. The tool shows that when prompts for professions like “engineer” or “CEO” are used, the generated images are predominantly male. This is despite women making up around a fifth of people in engineering professions, according to the U.S. Bureau of Labor Statistics. OpenAI’s DALL-E 2 system has also been criticized for reinforcing stereotypes in its generated images.
Stable Diffusion is built on an image set containing billions of pictures scraped from the Internet, which has resulted in gender and cultural bias in the system’s outputs. The tool highlights how some professions are highly gendered, with no hint of a male-presenting nurse being displayed by Stable Diffusion’s system. Using stereotypically gendered adjectives in prompts, such as “assertive” or “sensitive,” also affects the generated images. Unlike DALL-E 2, Stable Diffusion is an open and less regulated platform, and its developers have not commented on the observed biases. Luccioni hopes this tool will help create a more reproducible way of examining biases in Stable Diffusion and other AI image-generation systems.
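The idea behind a probe like this can be sketched in a handful of lines: generate a batch of images from a neutral profession prompt, then use a second model to estimate how the results skew. The following is a minimal, illustrative sketch that assumes the Hugging Face diffusers and transformers libraries and a GPU; the model IDs, prompts, and the use of CLIP as a crude proxy for perceived gender are assumptions made for illustration, not how Stable Diffusion Explorer actually works.

```python
# Illustrative sketch: probe a text-to-image model for gender skew across profession prompts.
# Assumes a CUDA GPU plus the `diffusers`, `transformers`, and `torch` packages; the model IDs
# and the CLIP-based "perceived gender" probe are placeholder choices, not Stable Diffusion
# Explorer's real implementation.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a man", "a photo of a woman"]  # crude proxy for presentation

def gender_tally(prompt: str, n_images: int = 8) -> dict:
    """Generate n_images for the prompt and count which CLIP label each one matches best."""
    counts = {"male-presenting": 0, "female-presenting": 0}
    for _ in range(n_images):
        image = pipe(prompt).images[0]
        inputs = clip_proc(text=labels, images=image, return_tensors="pt", padding=True)
        probs = clip(**inputs).logits_per_image.softmax(dim=-1)[0]
        key = "male-presenting" if probs[0] > probs[1] else "female-presenting"
        counts[key] += 1
    return counts

for profession in ["a portrait of an engineer", "a portrait of a nurse", "a portrait of a CEO"]:
    print(profession, gender_tally(profession))
```

Even a rough probe like this makes the skew measurable: a prompt that should be gender-neutral but returns eight male-presenting portraits out of eight is easy to flag and track over time.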
Artificial intelligence tools like DALL-E 2, Stable Diffusion, and Images.AI have also struggled to create images of older couples of color until the word “poor” was added to the prompt. In an experiment comparing various AI tools’ ability to create realistic images of couples holding hands, every one of the 25 couples was white, regardless of the stylistic choices made. Even after adjusting the prompt to include the word “poor,” which produced the first brown couple, the couples generated were still overwhelmingly white.
Some researchers and companies are already searching for solutions, turning to artificial images of people of color. Proponents believe AI-powered generators can rectify diversity gaps by supplementing existing image datasets with synthetic images. For example, Generated Media and Qoves Lab use machine learning architectures to create entirely new portraits for their image banks, building faces of every race and ethnicity to ensure a “truly fair facial dataset.”
OpenAI’s chatbot generates biased responses, despite precautions
Despite their growing popularity and ability to provide personalized customer service, chatbots are not immune to the biases of their creators. The problem lies in the data sets used to train these chatbots, which may reflect the biases of their creators or perpetuate existing societal prejudices.
Davey Alba, writing for Bloomberg, points to several examples of chatbots that have been found to exhibit gender and racial biases, including one that repeatedly made sexist jokes and another that recommended higher-paying jobs to men over women.
Alba notes that there is no easy solution to this issue but suggests that transparency and diversity in the development process may help mitigate bias.
Fast Company published another study that explores ChatGPT’s ability to write job posts and performance feedback and how it performs regarding gender bias. When the tool was asked to generate real-world examples of workplace feedback, men were more likely to be described as ambitious and confident, while women were more likely to be described as collaborative, helpful, and opinionated. ChatGPT’s gender bias highlights the need for companies to improve their feedback practices and eliminate demographically polarized feedback.
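Audits like the one Fast Company describes can be approximated in a rough, do-it-yourself form: ask the model for feedback about employees referred to only by pronoun, then count stereotypically “agentic” versus “communal” adjectives in the replies. The sketch below assumes the openai Python package and an API key in the environment; the model name, prompt wording, and word lists are illustrative placeholders, not the study’s actual methodology.

```python
# Rough sketch of a gendered-language audit for chatbot-written feedback.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment; the model name,
# prompt, and word lists are placeholders rather than the published study's method.
from collections import Counter
from openai import OpenAI

client = OpenAI()

AGENTIC = {"ambitious", "confident", "assertive", "decisive"}
COMMUNAL = {"collaborative", "helpful", "supportive", "opinionated"}

def feedback_tally(pronoun: str, n_samples: int = 20) -> Counter:
    """Ask for feedback about an employee referred to only by pronoun, then count adjective types."""
    counts = Counter()
    for _ in range(n_samples):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{
                "role": "user",
                "content": (
                    "Write two sentences of performance feedback for an employee. "
                    f"Refer to the employee only as '{pronoun}'."
                ),
            }],
        )
        words = {w.strip(".,").lower() for w in resp.choices[0].message.content.split()}
        counts["agentic"] += len(words & AGENTIC)
        counts["communal"] += len(words & COMMUNAL)
    return counts

print("he: ", feedback_tally("he"))
print("she:", feedback_tally("she"))
```

Comparing the two tallies across a few hundred samples gives a crude but repeatable signal of whether the model’s language shifts with the pronoun it is given.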
Sadly, racism in chatbots isn’t new. Microsoft’s 2016 chatbot, Tay, was a prime example of the consequences of designing AI to imitate human behavior without considering potential harm. Tay’s quick descent into racist and offensive language revealed myopia bred by the tech industry’s lack of diversity. Tay was designed to win over young consumers, but its co-optation as a tool for harassment also highlighted the need for more women in technology and for the industry to listen to those already present.
Best practices for creating unbiased generative AI
Industry thought leaders and experts are converging on a set of best practices for creating fairer generative AI. These include:
- Using diverse data when training the machine learning algorithms (a minimal audit sketch appears after this list)
- Testing the system for bias before deploying it
- Ensuring that the system is transparent and explainable
- Using algorithms that are designed to be less biased
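For the first item on that list, checking whether training data is actually diverse can start with a simple distribution audit over dataset metadata, run before a model is ever trained. The sketch below is a minimal example that assumes a hypothetical metadata.csv with annotated demographic columns; the file name, column names, and threshold are placeholders.

```python
# Minimal pre-training audit: flag demographic groups that are underrepresented in a
# dataset's metadata. The metadata.csv file, its column names, and the 10% threshold
# are hypothetical placeholders.
import csv
from collections import Counter

THRESHOLD = 0.10  # flag any group below 10% of the dataset

def audit_column(path: str, column: str) -> None:
    """Print the share of each group in one metadata column and flag small groups."""
    with open(path, newline="") as f:
        values = [row[column] for row in csv.DictReader(f)]
    counts = Counter(values)
    total = sum(counts.values())
    for group, n in counts.most_common():
        share = n / total
        flag = "  <-- underrepresented" if share < THRESHOLD else ""
        print(f"{column}={group}: {n} samples ({share:.1%}){flag}")

audit_column("metadata.csv", "perceived_gender")
audit_column("metadata.csv", "perceived_ethnicity")
```

A report like this will not catch every problem, but it makes underrepresentation visible early, when adding or rebalancing data is still cheap.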
Workplace diversity also plays a crucial role in AI development. By including individuals from diverse backgrounds and perspectives in the development process, we can help ensure that AI systems better reflect the populations they serve and are less likely to perpetuate harmful stereotypes. Diversity can also lead to more innovative and effective solutions.
Future implications for generative AI and bias
The future implications of generative AI and bias are significant. As generative AI becomes more advanced and widespread, it is vital to ensure that it does not perpetuate harmful stereotypes or lead to the unfair treatment of individuals.
Bias in generative AI is a serious problem that must be addressed, and as we have seen in recent years, consumer demand can be a significant driver of change. The longer these issues are allowed to exist in the marketplace, the harder it will be to roll back the damage already done. So try some of these AI tools, and when you discover examples of bias, inform the developers of your disappointment.