GEMINI & BIAS

A few months after announcing the launch of Gemini AI, Google has had to block the component of Gemini that generates images of people after it portrayed Second World War German soldiers, Popes and Vikings as people of colour and of different genders. The glitch led to Google being accused of racism against white people.

Elon Musk, the founder of the competitor xAI, took the opportunity to criticise Google’s AI as “woke” and “racist” compared with the maximum “truth-seeking capacity” of xAI. It is an easy criticism to make, given that xAI’s chatbot Grok, released in November 2023, offers no image-generation service.

In response, Jack Krawczyk, a senior director on Google’s Gemini team, stated that the “model’s image generator needed adjustment… as it was missing the mark”.

While Gemini has been criticised for racism against white people, other AI image generators have faced the opposite criticism: creating racially biased images that over-represent white people, particularly in response to prompts for a “productive person” or an “attractive person”. To Google’s credit, it was trying to avoid the mistakes of previous image generators; it simply went too far the other way.

This situation raises the ongoing issue of bias in AI: how it is often implicitly built into AI models and frequently not picked up until the AI is in use in the public realm.

 

The ongoing problem of bias

Bias in AI is not a new issue.

In 2014, software engineers at Amazon built a program to review job applicants’ resumes. A year later, in 2015, they realised the system discriminated against women applying for technical roles, and it could not be used in a fair and non-discriminatory way. In 2019, San Francisco legislators voted against the use of facial recognition technology because it was prone to errors when used on people with dark skin or on women.

In 2023, an investigation into Stable Diffusion XL, a model offered by Stability AI, found that it depicted recipients of food stamps as primarily non-white or darker-skinned, despite 63% of food-stamp recipients in the US being white. A request for an image of a person “at social services” produced similar results.

Given that bias issues are not new, why do they keep recurring?

Andrew Rogoyski of the Institute for People-Centred AI at the University of Surrey believes bias is a “hard problem in most fields of deep learning and generative AI… and mistakes are likely to occur as a result”.

Perhaps bias is an inherent outcome of deep learning. Before considering this question, however, we must understand the types of bias developers encounter when building generative AI models.

 

Types of bias when developing generative AI models

When developing generative AI models, developers must be aware of various biases.

Implicit bias

Implicit bias is the most dangerous and problematic because the person holding it is unaware of it: the bias is unconscious. When these unconscious biases are carried into an AI model through its design and training data, they surface in the model’s output and can have negative consequences for the people who are discriminated against.

Sampling bias

This occurs when the random data selected from the population does not reflect the distribution of the population. When this happens, the sample data may be skewed towards a subset of the population.
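
As a rough, purely illustrative sketch (in Python with NumPy; the group labels and proportions are made up, loosely echoing the food-stamp statistic cited above), the snippet below shows how a sample drawn mostly from one subgroup misrepresents the wider population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 63% belong to group A, 37% to group B.
population = rng.choice(["A", "B"], size=100_000, p=[0.63, 0.37])

# Representative sample: drawn uniformly from the whole population.
fair_sample = rng.choice(population, size=1_000)

# Skewed sample: drawn mostly from records belonging to group B,
# e.g. because the data was scraped from a single source or region.
group_b_records = population[population == "B"]
biased_pool = np.concatenate([group_b_records, population[:5_000]])
biased_sample = rng.choice(biased_pool, size=1_000)

for name, sample in [("population", population),
                     ("fair sample", fair_sample),
                     ("biased sample", biased_sample)]:
    print(f"{name:>13}: {np.mean(sample == 'A'):.0%} group A")

# A model trained on the biased sample would learn a very different
# picture of the population than the true distribution warrants.
```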

Temporal bias

This is bias arising from the passage of time. In other words, developers can build a learning model that works well now but fails in the future because possible future changes were not considered when the model was built.

As discussed below, this is a significant difficulty when developing generative AI models. Technology is changing quickly, and in ways that are not fully understood, which makes building learning models that will still work effectively in the future a challenge.
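
A toy illustration of temporal bias (in Python with scikit-learn; the data and the drift in it are invented for the example) is to train a model on data from one period and then score it on later data where the underlying relationship has shifted:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical "current" data: the target grows with the feature at one rate.
x_now = rng.uniform(0, 10, size=500).reshape(-1, 1)
y_now = 2.0 * x_now.ravel() + rng.normal(0, 0.5, size=500)

# Hypothetical "future" data: the relationship has drifted to a steeper rate.
x_future = rng.uniform(0, 10, size=500).reshape(-1, 1)
y_future = 3.5 * x_future.ravel() + rng.normal(0, 0.5, size=500)

model = LinearRegression().fit(x_now, y_now)   # trained only on today's data

print("R^2 on current data:", round(model.score(x_now, y_now), 3))
print("R^2 on future data: ", round(model.score(x_future, y_future), 3))

# The score drops sharply on the later data: the model cannot anticipate
# a change it has never seen, which is temporal bias in miniature.
```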

Over-fitting to training data

With this bias, the AI model can accurately predict values from the training dataset provided to it. However, it cannot accurately predict new data. In other words, the model cannot generalise to the larger population.
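
The classic small-scale demonstration (sketched below in Python with NumPy; the data is synthetic and the polynomial degrees are arbitrary) is to fit both a modest and an over-flexible model to a handful of noisy points and compare their errors on unseen data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: a simple underlying curve plus noise.
x_train = np.linspace(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=15)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, size=100)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

for degree in (3, 12):  # a modest model versus an over-flexible one
    coeffs = np.polyfit(x_train, y_train, degree)
    print(f"degree {degree:2d}: train MSE {mse(coeffs, x_train, y_train):.3f}, "
          f"test MSE {mse(coeffs, x_test, y_test):.3f}")

# The high-degree polynomial fits the 15 training points almost perfectly
# but does far worse on unseen points: it has memorised the noise rather
# than learning the underlying pattern, i.e. it has overfitted.
```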

 

Biases and deep learning

Current generative AI development uses what is known as deep learning. As queried above, are biases inherent in deep learning?

What is deep learning?

Deep learning is a subset of machine learning that uses multilayered neural networks, known as deep neural networks (DNNs), to simulate the complex decision-making processes of the human brain. DNNs are trained on vast amounts of data to identify and classify phenomena, recognise patterns and relationships, evaluate possibilities, and make predictions.
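
To make “multilayered” concrete, here is a minimal sketch of a deep neural network in PyTorch (the framework, layer sizes and dummy data are illustrative assumptions, not anything specific to Gemini or the other systems discussed here):

```python
import torch
from torch import nn

# A minimal deep neural network: several stacked ("deep") layers, each
# applying a learned linear transformation followed by a non-linearity.
model = nn.Sequential(
    nn.Linear(784, 256),   # input layer, e.g. a flattened 28x28 image
    nn.ReLU(),
    nn.Linear(256, 128),   # hidden layers learn increasingly abstract features
    nn.ReLU(),
    nn.Linear(128, 10),    # output layer: scores for 10 possible classes
)

loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on a dummy batch: predictions are compared with labels,
# and every weight is nudged to reduce the error; this is the "learning".
images = torch.randn(32, 784)
labels = torch.randint(0, 10, (32,))

optimiser.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()
print("loss after one step:", loss.item())
```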

Deep learning has long been considered the “reigning monarch of AI”, as it is the dominant way of helping machines sense and perceive the world around them. Deep learning powers Alexa’s speech recognition and Google’s on-the-fly translations. The Chinese tech giant Baidu has over 2,000 engineers working on neural network AI.

Despite being the reigning monarch, deep learning’s very nature makes it susceptible to several of the biases described above. Because the models draw on current data, they are prone to temporal bias: they cannot accurately predict the future, because the data they would need in order to recognise those patterns and evaluate those possibilities does not yet exist.

Likewise, they are susceptible to overfitting their training data, ending up fitted so closely to what they have already seen that they cannot predict new data correctly.

Has deep learning reached its limit?

Some argue that deep learning is reaching its limit. 

The inherent difference between deep learning and human learning

Deep learning is basically self-education for machines. If enough information is provided, AI will eventually begin to discern patterns. However, humans are more than pattern recognisers.

Dileep George, one of the co-founders of Vicarious, states that humans are not just pattern recognisers; we also build models based on an understanding of cause and effect. Humans engage in reasoning and have a store of common-sense knowledge that helps them figure out new situations.

When AI models are entirely data-dependent, they cannot solve certain kinds of problem or explain how they reached a conclusion. It also means that if the data is flawed by systematic historical biases, those biases will be replicated at scale in the model.

Top-down deep learning

One of the reasons deep learning is so easily skewed by historical biases is that it is top-down learning.

In other words, biases arise not simply from individual developers but from how the system is built and the data used to train it. Most AI is built in developed nations, which means that the predominant perspectives of these systems are drawn from data produced by those same first-world cultures. The result can be a sampling bias that favours first-world perspectives.

Google’s glitch with the image-generating component of Gemini is another reminder of the challenges regarding AI and biases. AI and technology are rarely neutral. The biases that occur in AI are often inherent in how AI is trained to recognise patterns and relationships.

Perhaps the deep learning model of training AI is reaching its limit. The question is, what will replace the deep learning model that is currently being used?

The company Vicarious is backed by $250 million from investors such as Elon Musk, Jeff Bezos, Samsung and Mark Zuckerberg to develop AI with imagination. The impact of AI with imagination remains to be seen. For the foreseeable future, generative AI seems likely to continue relying on the deep learning model, which means we must remain vigilant in minimising biases and prejudices.

 
