AI is only as good as the data used to train it.

THE CREEPING INFLUENCE OF AI

AI is here, impacting our lives and influencing decisions in ways we rarely consider. The only attention we give it is when an exciting breakthrough makes the news, or when we watch another TV series imagining an AI apocalypse. Yet AI touches us every day: what we see on social media, the ads that appear as we scroll through Instagram, even the speeding fine we receive are all determined by AI.

What is AI?

AI is a computer program that uses complex code and processing power to sift through massive amounts of data, make decisions and take actions. It is more accurate to speak of artificial intelligences (AIs), because countless different programs fall under the AI umbrella.

From online dating and shopping to dealing with government departments, AI is becoming increasingly pervasive. In marketing, AI is used to target advertising to people based on their past and present shopping and browsing behaviour. From this data, predictive algorithms make inferences about the choices we are likely to make in the future. It is estimated that these algorithms drive 35% of what people buy on Amazon and 75% of what they watch on Netflix.

Is AI as neutral and objective as we think?

The sets of rules a computer follows to process data and arrive at conclusions are known as algorithms. An algorithm pursues the mathematical objective its designer sets for it. AI decisions are often viewed as the product of neutral, objective technology, and therefore as superior to decisions made by people.
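To make the designer's role concrete, here is a minimal sketch, in Python, of a content-ranking algorithm of the kind described above. Everything in it is an assumption invented for illustration: the sample posts, the predicted_clicks figures and the engagement_score objective are not any real platform's code.

```python
# Minimal sketch: an algorithm faithfully pursues whatever objective its
# designer sets. All data and names here are invented for illustration.

posts = [
    {"title": "Calm gardening tips",   "predicted_clicks": 0.04},
    {"title": "Celebrity feud erupts", "predicted_clicks": 0.11},
    {"title": "Local council update",  "predicted_clicks": 0.02},
]

def engagement_score(post):
    """The designer's chosen objective: maximise expected clicks."""
    return post["predicted_clicks"]

# The sorting machinery is neutral; the choice of what to maximise is not.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post["title"], post["predicted_clicks"])
```

Swap the objective for one that rewards accuracy or wellbeing and the same machinery produces a very different feed. The values live in the human choice of objective, not in the sorting.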

The reality is that AI is, from start to finish, the product of human choices: decisions influenced by human biases, shaped by human values and prone to human error. This means AI is only as good as the data used to train it. If the data is incorrect, biased or of poor quality, the decisions and actions taken by the AI will also be inaccurate, biased and of poor quality.

Centrelink's Robodebt system is a clear example of blind faith in the correctness of AI decisions persisting despite evidence of the real harm the system was causing to people.

The recent investigation by US lawmakers into Facebook's and other platforms' use of algorithms to push emotive, toxic content that amplifies depression, anger, hate and anxiety is further evidence that algorithms are not the ethically neutral, objective technology we may have assumed.

Every automated action on the internet, from ranking content and displaying search results to offering recommendations, is controlled by computer code written by engineers who are often white, well-educated and affluent [12].

A clear example is employment, where AI is used in hiring decisions. Men are often chosen over better-qualified women because of gender bias embedded in the program's training data, which the AI then reinforces [13].
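A toy sketch can show how this happens. The data below is entirely hypothetical; it simply mimics a history in which managers hired men at higher rates than better-qualified women, and shows how a naive model trained on that history inherits the bias.

```python
# A toy illustration (hypothetical data, not from any real hiring system)
# of how bias in historical records flows straight into an AI's predictions.

# Historical records: (gender, qualification_score, was_hired).
# Past managers favoured men, so the labels already encode that bias.
history = [
    ("m", 6, True), ("m", 5, True), ("m", 7, True), ("m", 4, False),
    ("f", 9, False), ("f", 8, True), ("f", 7, False), ("f", 8, False),
]

def hire_rate(gender):
    """Fraction of past applicants of this gender who were hired."""
    outcomes = [hired for g, _, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

# A naive model "learns" that being male predicts being hired, even though
# the women in this data are, on average, better qualified.
print("hire rate, men:  ", hire_rate("m"))   # 0.75
print("hire rate, women:", hire_rate("f"))   # 0.25
```

The model never sees the word "bias"; it simply reproduces the pattern it was given. That is exactly why representative design teams and scrutinised training data matter.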

Ethics and AI

AI will continue to shape our lives and the decisions made about us. The question, then, is how AI can be used ethically and in ways that add value, rather than destructively, as in Robodebt.

  • Structural changes

Organisations developing and implementing AI need to ensure that the teams building the program represent the wider community. This means having equal representation of women, people with disabilities, and people from diverse cultural and economic backgrounds involved in program design, to counter the unconscious biases introduced when algorithms are set predominantly by white, middle-class, educated males.

  • Accountability

Who is accountable for automated decisions, and to whom are they responsible [14]? It is easy for the algorithm makers, or for the decision-makers who chose to implement an AI program, to blame "the system". However, as outlined in this article, the "system" is generated by people, complete with their biases, assumptions and beliefs.

Companies like Google have established principles for "responsible AI" to guide their practices, covering issues such as fairness, privacy and security [15]. AI systems also need to be accountable to end-users: the people affected by the decisions an AI program takes. Those decisions need to be easily explainable to the end-user; otherwise the result will be growing distrust and a sense of alienation.
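What "explainable to the end-user" can look like in practice is sketched below. The claim fields, rules and wording are all hypothetical assumptions for illustration; the point is only that a system can return plain-language reasons alongside every decision.

```python
# Minimal sketch of an explainable automated decision: every outcome is
# paired with human-readable reasons. Rules and thresholds are invented.

def assess_claim(claim):
    """Return a decision plus the plain-language reasons behind it."""
    reasons = []
    if claim["reported_income"] != claim["declared_income"]:
        reasons.append("Reported income does not match your declared income.")
    if claim["missing_documents"]:
        reasons.append("Required documents have not been received.")
    decision = "review required" if reasons else "approved"
    return {"decision": decision, "reasons": reasons}

result = assess_claim({
    "reported_income": 52000,
    "declared_income": 48000,
    "missing_documents": False,
})
print(result["decision"])
for reason in result["reasons"]:
    print("-", reason)
```

A person told "review required" along with the reasons can respond to them; a person who receives only an unexplained debt notice cannot.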

  • Remedy

When AI makes the wrong decision, there should be a transparent remedy process for the people affected. As Robodebt demonstrated, the toll on people's mental health when they feel they are battling an impersonal algorithm can be immense.

Technology is never neutral. While AI can make our lives easier and streamline our choices, we need to ensure that the way AI is used is inclusive and builds communities, rather than alienating people through unconscious biases or assumptions baked into how programs are developed.
