ChatGPT, launched in 2022 by OpenAI, has created discussion, debate and interest like many previous technological advances.
Some see this advance in AI as a further erosion of academia, opening the floodgates to plagiarism and cheating by students. At the same time, others feel that the technology should be embraced and utilised to make life easier.
Technology, and AI in particular, is a double-edged sword; its impact depends on what it is used for and on the ethics of the people using it.
ChatGPT stands for Chat Generative Pre-trained Transformer. It is a revolutionary technology because it has been trained to interpret what we mean when we ask a question, which allows it to answer complex questions conversationally.
It is essentially a large language model (LLM), trained on massive amounts of text to predict which word comes next in a sentence. On top of this, ChatGPT uses Reinforcement Learning from Human Feedback (RLHF), an additional layer of training in which human feedback teaches the model to follow instructions and generate responses that people find satisfactory. This extra training makes the model behind ChatGPT considerably more capable than earlier GPT models.
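To make the idea of next-word prediction concrete, here is a minimal sketch. It assumes the small, freely available GPT-2 model accessed through the Hugging Face transformers library, not ChatGPT itself, and it leaves out the RLHF stage described above: the model scores every possible next token for a prompt, and we simply pick the most likely one.

```python
# Minimal illustration of next-word (next-token) prediction with an LLM.
# Uses the open GPT-2 model as a stand-in; ChatGPT's model is far larger
# and has the additional RLHF training discussed above.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Artificial intelligence is a double-edged"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # a score for every possible next token

next_token_id = int(logits[0, -1].argmax())  # the single most likely next token
print(tokenizer.decode([next_token_id]))     # likely " sword"
```

ChatGPT does essentially this, token after token, at enormous scale; the RLHF stage then steers which of the plausible continuations it prefers.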
It is the strength of this AI, its ability to interpret questions and generate conversational-style responses, that is raising concern in academic and educational institutions.
We have previously written about the impact of AI on our lives, particularly around the complexity of ethical judgements.
In that article, we made the point that decision-making in ethical and moral areas relies not only on the ability to reason between two or more situations but also on the capacity to imagine outcomes, reflect, evaluate and be compassionate.
AI is never neutral. It is built and programmed with the unconscious biases of programmers and technicians embedded into it. Likewise, how it is used depends on the ethics of the individual using it.
If our view of people around us is that they are untrustworthy, likely to cheat to get ahead and will use any means at their disposal to win, then we will view ChatGPT as another tool people can use to succeed by cheating. For example, in academia, we will be concerned that students will use this AI to cheat on assessments.
If our view of humanity is more compassionate, then the advantage of this AI to people who have learning disabilities and may need some assistance becomes apparent. This AI may assist in providing a level playing field for people with learning disabilities or for whom English is a second language.
An investigation by Time Magazine found that the multi-billion-dollar owner of ChatGPT (OpenAI) employed workers in Kenya for $2 an hour to weed out the most offensive and sensitive content on the internet for this tool. Workers reportedly had to sift through sexually explicit, racist, and abusive content for hours a day, with many saying they experienced long-term mental health effects and PTSD [1].
This report from Time Magazine raises several ethical questions about using ChatGPT.
ChatGPT is trained on content that already exists on the web. As powerful as it is, it can only compile, re-arrange and reformat what it has already seen; it cannot produce new, original material that is not already out there.
ChatGPT may reorganise material creatively, but it will never produce original material because it can only re-create what already exists. At this point, developing something genuinely original still requires the human brain.
Originality requires the ability to think and reflect.
There is a nuanced difference between thinking and learning. Education is often reduced to learning: learning facts and skills that will assist in the workplace. Thinking is knowing how to apply what has been learned to different situations, situations that can be complicated and nuanced. Effective thinking is using that knowledge to achieve positive outcomes.
Many students learn academic facts that theoretically set them up for the profession of their choice, only to find after two or three years that they are struggling in their careers. Why? One reason is that they have not learned to think through how to apply what they have learned to real-world situations.
Another aspect of thinking that sets humans apart from ChatGPT is the ability to hypothesise about the future.
ChatGPT draws only on information that has already been written or documented. As we think and reflect on information, however, we can draw parallels between past and present and hypothesise about future possibilities.
Futurists, for example, are authors and thinkers who try to understand the future by engaging in interdisciplinary and systems thinking about emerging trends and possible scenarios.
As advanced as the current ChatGPT is compared with earlier versions, it is a tool to be used wisely, just like other AI tools. Some will wish to use ChatGPT to give themselves an immediate advantage. However, any immediate advantage must be weighed against the long-term benefits of reflective thinking, the skill that equips us to apply knowledge effectively to complex situations. We must decide whether we will outsource our creativity or harness it in subtle and nuanced ways.
We must also weigh the ethical issues raised in this article, for if humanity is all connected, then we have a responsibility to those who were underpaid and traumatised by the racism, violence, and abuse they were exposed to while cleaning up the web so that we can use ChatGPT.