In a previous article, we considered the creeping influence of AI, an influence that makes our lives easier. For example, Netflix suggests films and series we may enjoy based on our previous choices, making selection more efficient. AI could write this article for us (it isn't, but it could), leaving our team only to rearrange paragraphs and insert a few tweaks to make it more personal.
In the two examples above, our thinking is outsourced to AI. We no longer have to think about what genre of film or Netflix series we want to watch. We choose from a list already prepared by AI. If this article were to be written by AI, we would no longer have to think about word composition, clarity of ideas, and the main point. We would simply rearrange the paragraphs provided by AI.
While outsourcing our thinking to AI on the minor details of our lives is useful and time efficient, the risk is that we develop a subtle reliance on AI that becomes problematic when we need to make more complex and nuanced decisions or judgements.
This risk is compounded because the accuracy of AI in presenting choices and prompts that simplify our decision-making on small issues biases us to trust AI in more complex decisions, when we should instead be questioning and exercising caution.
Decision-making in ethical and moral areas relies not only on our ability to reason between two or more situations but also on the capacity to imagine outcomes, reflect, evaluate, and be compassionate [1]. Centrelink's Robo-Debt scheme is a prime example of decision-making outsourced to AI with no imagination applied to the potential outcomes, no reflection on or evaluation of its impact, and no compassion.
The other concerning aspect of the Robo-Debt debacle was that, despite being presented with clear evidence of its impact on people's lives, decision-makers believed in the supremacy of AI over the lived experience of people.
Decision-makers and politicians chose to trust AI, the what, over the lived experiences of people, the who. They exercised blind faith in a flawed system.
Why did this happen? What was it that drove this blind faith in AI?
The factors that drove the politicians and decision-makers in this example are the factors that drive all of us.
1. INFORMATION OVERLOAD
Given the increasing amount of information we are presented with every day, we face information overload. As a result, we look for ways to simplify and streamline the data coming at us. AI does this for us, seamlessly and quickly.
2. UNCONSCIOUS BIAS
Because AI is seamless, quick, and effective, we have an unconscious bias to trust it. We don't question that trust; we do not need to, because the results of AI are proof in themselves. We feel healthier because our wearables remind us when to move and increase our steps. The aircon cools the house for us when we get home, and the task of remembering what to buy at the shops is covered by AI.
Our biases make us forget that complex and ethical decision-making requires imagination, the ability to reflect, and compassion.
The other aspect of bias is that once we commit to a decision or take action, we discount evidence that we have made a wrong or poor decision and accentuate the factors that support our original choice.
We are biased to believe we make good decisions; hence, when presented with evidence to the contrary, we discount it, believing the problem is not us but lies elsewhere.
We see this bias at work in daily interviews with politicians. When journalists present clear evidence that a decision is having negative repercussions, the information is denied or the journalist is attacked as incompetent.
Our creeping use of AI in our lives, our desire to simplify the information overload, and our unconscious biases predispose us to trust AI in complex situations when doubt and questioning would be more valuable and effective.
When we lose the ability to doubt and question what we trust, we are on the verge of exercising blind faith, which is dangerous. While AI may simplify our decision-making in some areas of our lives, we cannot outsource our thinking without it having a detrimental impact.
Perhaps it isn't so much what we should do as what we should stop doing. Evgeny Morozov writes about technological solutionism: the practice of finding solutions to problems that are not problems [2].
In Western culture, we buy into the belief in eternal improvement. The self-help movement is an industry built on the idea that you can improve and achieve your goals, dreams, and fantasies. People invest time, money, and energy to become a better version of themselves. The gym industry buys into the same theme of eternal improvement: no longer is exercise about improving your physical health; it is about crafting a body that is your ideal self.
Technology and AI facilitate this myth of eternal improvement. If you cannot reach it by yourself, there is always a new wearable with better features to make it easier to achieve your ideal body. Or there is a better app you can download on your phone that will develop your ability to meditate and be centred.
Our desire to keep improving becomes a series of problems to be solved, and if technology and AI can help solve those problems, we are closer to our ideal.
Perhaps rather than creating problems, we should be present to our current reality. Life is not a problem to be solved. It is not a series of problems to be solved. Life is to be lived. Lived in the messiness of the present.
Perhaps it is by living in the messiness of the present that we develop imagination, the ability to reflect, and the capacity to hold the doubt we feel without fearing it or trying to solve it. Perhaps with these qualities, we can use AI as a tool without kneeling in blind trust before it.