Will ChatGPT be the fad of 2023, or will it revolutionise the future of businesses?


In the last article, we discussed ChatGPT, what makes it different, and some ethical concerns around its use. In this article, we consider the use of ChatGPT in business.

Will ChatGPT revolutionise the future for businesses, or is it another advance in AI technology where the hype is overblown against reality? Is ChatGPT the Metaverse of 2023? 

The metaverse was one of the big stories of 2022, driven largely by Mark Zuckerberg’s rebranding of Facebook as Meta. The rebranding came as Facebook faced numerous allegations about privacy and safety, particularly after Frances Haugen’s testimony against the organisation. Renaming the company Meta shifted the focus away from Facebook and fed the hype around the metaverse.

As excitement about the metaverse took off, people bought penthouses in a place called Uphoria and property in a world called TCG. Yet as 2022 progressed, the hype around the metaverse subsided, and Zuckerberg’s transformation of Meta faced challenges as many businesses found the transition caused problems.



81% of businesses consider AI mainstream within their operations. For example, chatbots that give customers browsing a business’s website an immediate connection and information are increasingly common.

Gartner predicted in 2019 that by this year (2023), organisations using AI for digital commerce would achieve at least a 25% improvement in customer satisfaction, revenue, or cost reduction [1]. Given rising interest rates and inflation, a 25% improvement in revenue or cost reduction may represent the difference between a business remaining viable and having to close.

Given that so many businesses are already using AI, how can companies use ChatGPT effectively?


The average employee is productive for about 60% of their work day [2]. This may be at the high end of productivity, particularly post-COVID. 

As the business community emerged from COVID, articles began to appear about “quiet quitting”: reducing the effort we devote to our jobs while remaining employed. The phenomenon has grown post-COVID as many people struggle with anxiety, online-meeting fatigue, and burnout.

Using ChatGPT to automate routine tasks can free up employees’ time to concentrate on work that engages them and requires intricate strategic or conceptual skills. This can increase productivity and deter quiet quitting, as staff engage in work that calls for their creativity and talent.

ChatGPT can do routine tasks: generate reports, handle customer complaints, and create content marketing materials such as email campaigns [3].



While ChatGPT can free up staff members from completing routine and mundane tasks and allow them to concentrate on challenging tasks to increase productivity, it is essential to remember that this technology is not a panacea.

ChatGPT, while more potent than other AIs, still produces incorrect answers, particularly when responding to complex questions or material it is still learning [4]. As stated in the previous article, the human component remains crucial when dealing with artificial intelligence: people are still required to check the information ChatGPT provides.


One of the challenges of ChatGPT is how to check its answers for mistakes. ChatGPT is a Large Language Model (LLM) that draws on multiple information sources. When answering a question, it composes a response without attribution, citation, or benefit to those who published the original content [5].

This raises two issues.

Firstly, checking the accuracy and truthfulness of a document becomes problematic if it has no citations or sources. How do we know whether what ChatGPT has produced is accurate and truthful if the source documents cannot be checked? Furthermore, not all sources carry the same authority: peer-reviewed documents are generally considered to have greater veracity and authority than non-peer-reviewed documents or an article on an individual’s personal blog. When scraping the web for information, LLMs do not differentiate between the authority of the sources they draw on.

The second issue concerns plagiarism and intellectual property law. Intellectual property law is designed to protect the intellectual property of an individual or organisation, yet it has not kept pace with technological advances, particularly artificial intelligence such as ChatGPT. The application of intellectual property law to AI remains untested and potentially problematic.

Given these potential issues, businesses should be cautious about relying on information, particularly complex information, produced by ChatGPT.


I wrote above that ChatGPT could free staff from routine and mundane tasks, allowing them to focus on creative, strategic work. That is one side of the coin; the other is that staff can grow reliant on AI tools to do the work for them.

Creativity requires discipline, routine, and a commitment to persevere through failure. In the heat of the creative moment, inspiration and determination flower into an idea, concept, or event that captures both our imagination and that of others.

Why spend time and energy being creative and original and persevering through failure when ChatGPT can produce an article or a social media post in minutes? 

Rather than ChatGPT giving staff the time to engage in strategic and creative work, staff may take the path of least resistance and become lazy by relying on it to do their work.

Does it matter if that happens?

There are two reasons why it does.

Firstly, at a time when anxiety, depression, and boredom are increasing in society, and the links between mental health and social media are well documented, we need to do what we can to protect our mental health. One way is to be conscious of when we use technology and what we use it for. Using technology and AI simply because it is easier is not necessarily good for our mental health.

Secondly, according to a study by Microsoft, people now have an attention span of eight seconds, down from twelve seconds in 2000 and a second less than that of a goldfish, which has an attention span of nine seconds. Furthermore, the study suggests that human attention spans continue to decline year on year, a finding echoed by the Statistic Brain Research Institute.

We need our attention, concentration, and ability to think deeply about complex and ethical issues. Thinking is like a physical muscle that must be used to be effective. Over-reliance on AI can reduce our ability to think effectively and in-depth.

While ChatGPT, like other forms of AI, has its uses in the workplace, we need to think about where it is used, how it is used, and how it is monitored and checked. Businesses thrive best when they marry AI’s strengths with their staff’s strengths and effectiveness to create a constructive work environment that meets the needs of their clients.

[1] https://www.gartner.com/smarterwithgartner/top-10-trends-in-digital-commerce

[2] https://addepto.com/blog/how-can-you-use-chatgpt-in-business/

[3] https://addepto.com/blog/how-can-you-use-chatgpt-in-business/

[4] https://edition.thewest.com.au/html5

[5] https://www.searchenginejournal.com/is-chatgpt