Value Creation via NFTs
In April 2021, we wrote about the growing awareness of NFT technology after digital artist Mike Winkelmann sold an NFT artwork for an eye-watering US$69 million. Since then, the use of NFTs has been expanding from the art world into areas unforeseen seven months ago.
The Creeping of AI
AI is here, shaping our lives and influencing decisions in ways we rarely consider. We tend to notice AI only when an exciting breakthrough makes the news or when we watch another TV series depicting an AI apocalypse. Yet AI touches us every day: it determines what we see on social media, which ads appear as we scroll through Instagram, even the speeding fine we receive.
What is AI?
AI is a computer program that uses complex code and powerful processors to sift through massive amounts of data, make decisions and take actions. It is more accurate to speak of artificial intelligences (AIs), because countless different programs fall under the AI umbrella.
From online dating and shopping to dealing with government departments, AI is becoming increasingly pervasive. In marketing, AI is used to target advertising to people based on their past and present shopping and browsing preferences. From this data, predictive algorithms make inferences about the choices we are likely to make in the future. It is estimated that these algorithms drive 35% of what people buy on Amazon and 75% of what they watch on Netflix.
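As a toy illustration of how such predictive algorithms can work, the sketch below recommends products from co-purchase patterns. The data and the `recommend` helper are entirely fabricated for this example; real systems at Amazon or Netflix are vastly more sophisticated.

```python
# Minimal co-occurrence recommender: suggest items that were most often
# bought alongside a given item. Purchase histories are fabricated.
from collections import Counter

baskets = [
    {"novel", "bookmark"},
    {"novel", "reading lamp"},
    {"novel", "bookmark", "tea"},
    {"tea", "mug"},
]

def recommend(item, baskets, top=1):
    """Return the `top` items most frequently co-purchased with `item`."""
    co = Counter()
    for basket in baskets:
        if item in basket:
            co.update(basket - {item})  # count everything bought with it
    return [i for i, _ in co.most_common(top)]

print(recommend("novel", baskets))  # ['bookmark']
```

Even this crude counting scheme "predicts" a likely future purchase purely from past behaviour, which is the basic inference the article describes.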
Is AI as neutral and objective as we think?
The code that sifts through large amounts of data takes the form of algorithms: the sets of rules a computer follows to process the data and arrive at conclusions. An algorithm pursues the mathematical objective set by its designer. AI decisions are often viewed as the product of neutral, objective technology and therefore superior to decisions made by people.
The reality is that AI is, from start to finish, the product of human choice and decisions that are influenced by human biases, shaped by human values, and prone to human errors. This means AI is only as good as the data used to train it. If the data is incorrect, biased or of poor quality, the decisions and actions taken by the AI will also be inaccurate, biased and of poor quality.
Centrelink’s Robo-debt scheme is a clear example of AI decisions being assumed correct despite evidence of the real harm they were causing people.
The recent investigation by US lawmakers into Facebook’s and other platforms’ use of algorithms to push emotive and toxic content that amplifies depression, anger, hate and anxiety is further evidence that algorithms are not the ethically neutral, objective technology we may have assumed.
Every automated action on the internet, from ranking content and displaying search results to offering recommendations, is controlled by computer code written by engineers who are often white, well-educated and affluent [12].
One example is in employment, where AI technology is used in hiring decisions. Men are often chosen over better-qualified women because of gender bias embedded in the program, which the AI then reinforces [13].
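A minimal sketch of how this happens, using fabricated data and a deliberately naive "model": trained only to reproduce past hiring decisions, it ignores qualifications entirely and rejects a qualified woman because the historical outcomes it learned from were biased.

```python
# Toy illustration: a "hiring model" trained on historically biased
# outcomes simply reproduces that bias. All data here is fabricated.
from collections import Counter

# Historical records: (gender, qualified, hired)
history = [
    ("M", True, True), ("M", False, True), ("M", True, True),
    ("F", True, False), ("F", True, False), ("F", False, False),
]

# "Training": estimate the historical hire rate per gender. Note that
# qualifications never enter the objective -- the model is only asked
# to predict past decisions, biases included.
hires, totals = Counter(), Counter()
for gender, _qualified, hired in history:
    totals[gender] += 1
    if hired:
        hires[gender] += 1

def predict_hire(gender: str) -> bool:
    """Predict using the majority historical outcome for this gender."""
    return hires[gender] / totals[gender] >= 0.5

print(predict_hire("M"))  # True
print(predict_hire("F"))  # False -- a qualified woman is still rejected
```

The point of the sketch is that nothing in the code is malicious; the bias enters entirely through the training data and the choice of objective, exactly as the article argues.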
Ethics and AI
AI will continue to shape our lives and the decisions we make. The question, then, is how AI can be used ethically and in ways that add value, rather than destructively, as with Robo-debt.
- Structural changes
Organisations developing and implementing AI need to ensure that the teams building the program represent the wider community. This means having equal representation of women, people with disabilities, and people from diverse cultural and economic backgrounds involved in program design, to counter the unconscious biases that arise when algorithms are set predominantly by white, middle-class, educated males.
- Accountability
Who is accountable for automated decisions, and to whom are they responsible [14]? It is easy for the algorithm makers, or the decision-makers who chose to implement an AI program, to blame “the system”. However, as outlined in this article, the “system” has been built by people who have biases, assumptions and beliefs.
Companies like Google have established principles for “responsible AI” to guide their practices, including consideration of issues like fairness, privacy and security [15]. AI systems also need to be accountable to the end-user, that is, the people affected by the decisions the AI program takes. These decisions need to be easily explainable to the end-user; otherwise, the result will be increased distrust and a sense of alienation.
- Remedy
When AI makes the wrong decision, there should be a transparent remedy process for affected people. As demonstrated with Robo-debt, the impact on people’s mental health when they feel they are battling an impersonal algorithm can be immense.
Technology is never neutral. While AI can make our lives easier and streamline our choices, we need to ensure that AI is used inclusively and builds communities, rather than alienating them through the unconscious biases and assumptions baked into how programs are developed.
[1] https://www.abc.net.au/news
[2] ibid
[3] https://theconversation.com/ethics-by-numbers
[4] https://theconversation.com/how-marketers-use
[5] ibid
[6] https://theconversation.com/ethics-by-numbers
[7] https://www.abc.net.au/news
[8] ibid
[9] ibid
[10] ibid
[11] https://www.bloomberg.com/news/articles
[12] https://www.abc.net.au/news/
[13] ibid
[14] ibid
[15] ibid
Facebook’s Corporate Citizenship
The business pages of the West Australian on Wednesday, 27 October, reported that Zuckerberg was riled by the bad press Facebook received after Frances Haugen provided internal documents to the US Congress and the Securities and Exchange Commission.
Understanding your data
Do you know what your organisation’s digital business strategy is? Are you clear about the value a digital business strategy offers your organisation?
Nudge Theory
Choosing an apple or a banana when paying for petrol because the fruit is displayed at the counter, or finding vending-machine snacks replaced by healthy options, are examples of customers being nudged in their decision-making.
- Published in Data, marketing, Psychology
Chess vs Checkers
Are you more of a chess or a checkers (draughts) player when it comes to planning and implementing strategy in your organisation? These board games, generally played amongst friends unless you play chess competitively, can help us reflect on our style of leadership and how we implement strategy.
Customer Life-Time Value
We pay lip service to the importance of customers and clients. For businesses hit hard by COVID, whether B2B or B2C, maintaining and indeed growing a solid base of loyal clients will be the difference between success and failure. It is therefore crucial to know the value of our most important asset.
Where to hang your NFT?
Finding hanging space for your recently purchased NFT artwork is not something you need to worry about, for your artwork will in all likelihood never need to be hung.
5 stages of data analytics
Data analytics is the process of examining data sets to analyse and draw conclusions from historical outcomes; increasingly, this is done with the aid of software such as dashboards.
There are five stages of data analytics, which we will explore in this article.
- Published in Data
Decision-making traps using data

The nature of descriptive analytics exposes decision-makers to an array of pitfalls. By developing an awareness of common cognitive traps, we can use our understanding of the process to make more balanced and meaningful decisions.
In this article, we reflect on the individual traps in the data decision-making process, learn how to frame a problem so that we can use insights to make better decisions, and understand these golden rules around decision-making pitfalls:
- The way we make decisions is far from the rational model
- Decision biases affect every step of the decision-making process
- Analytics can offer multiple tools to overcome biases and decision traps
Decision trap 1: Availability bias

Availability bias is a heuristic whereby people judge the likelihood of an event by how easily an example, instance or case comes to mind. We all experience availability bias in everyday dialogue, especially in marketing, when we hear: “but, you know, it’s much harder to get data about that”, and this is often true. The lesson is to acknowledge that the data being used may not necessarily answer your question.
How much effort this deserves depends on how important the question you are trying to answer is: for a high-stakes decision, you should be creative about collecting varied sources of data to avoid this trap.
Decision trap 2: Short-term Emotions

Every decision we make is influenced by our emotions, and in high-pressure situations, especially group environments, it is very easy for our decision-making judgement to be clouded by them.
This obvious trap simply emphasises the importance of taking a moment to assess the external view. By allowing some distance from short-term pressures and emotions, we can assess the data without our judgement being clouded or distracted. These windows create an opportunity for emotional intelligence (EQ) to flourish, where awareness and mindfulness act as key deterrents to this trap.
There is room for both emotions and data, particularly in creative environments. However, it’s often the case that the former can outweigh the latter when the pressure is very high, and that’s precisely what we need to avoid.
Decision trap 3: Confirmation Bias

Confirmation bias is the tendency to process information by looking for, or interpreting, information that is consistent with one’s existing beliefs. This biased approach to decision-making is largely unintentional and often results in ignoring inconsistent information.
We are all very likely to fall into confirmation bias for the simple reason that it makes us feel intelligent. When we pursue a risky avenue, we seek out people who will give us positive feedback and encouragement, which is precisely what we want for building confidence. But this should not be mistaken for objective evidence, and it is not necessarily what we need to make a good decision.
What we actually need is disconfirming evidence: evidence that refutes an opinion or forecast. Interestingly, disconfirming evidence is regularly overlooked and is widely considered one of the most underused tools in data analysis.
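One way to make this concrete in an analysis is to flip the query: rather than collecting cases that support a claim, deliberately search the data for cases that refute it. The regions and figures below are fabricated purely for illustration.

```python
# Seeking disconfirming evidence: one counterexample refutes a universal
# claim, no matter how many supporting cases exist. Data is fabricated.
sales_change = {"North": 0.12, "South": 0.08, "East": -0.03, "West": 0.05}

# Claim under test: "the campaign lifted sales in every region".

# Confirming view: plenty of supporting evidence exists...
supporting = {r: d for r, d in sales_change.items() if d > 0}

# ...but a single counterexample is enough to refute the claim.
disconfirming = {r: d for r, d in sales_change.items() if d <= 0}

print(supporting)     # three regions support the claim
print(disconfirming)  # {'East': -0.03} -- the claim is false
```

The habit the trap warns against is running only the first query; the second, disconfirming query is the one that actually tests the forecast.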
Decision trap 4: Overconfidence

Overconfidence is pervasive in communications, where it is arguably required to validate pitches and concepts. In the data science world, however, it is a pitfall, even for people whose job is to look at data objectively. We do want optimism, since without it we would never have breakthrough findings, but at the same time we need to stay objective and understand when something works and when it does not. This is precisely why being able to use data in the right way is so critical.
Decision trap 5: Narrow Framing

Psychologically, the framing effect is a cognitive bias whereby people decide between options based on whether those options are presented with positive or negative connotations. When we use analytics, we can clearly define a project’s objectives and boundaries from the start. This is important not just for our own understanding but also for our peers, so that everybody builds a common consensus about what we are trying to achieve, how we measure success, and the limitations of our analysis.
When we suffer from narrow framing, big data and advanced analytics can help us widen our frame: they let us explore directions or hypotheses we normally could not, simply because we would not otherwise have the opportunity to collect the data and infer patterns from it.
Summary

The general consensus is that the main obstacle to good decision-making is being unaware of our own biases. Consider common workplace behaviour, such as seeking out positive feedback and recalling only that: it shows how susceptible our thinking is to all these traps. Yet, as we know, avoiding negative feedback means missing important insight for continued learning and development, a sentiment echoed strongly in data analysis.
- Published in Data, Psychology