Sentiment Analysis

Sentiment analysis, also known as emotion AI, is a technique in natural language processing (NLP) that identifies and categorises opinions and feelings expressed in text—such as customer reviews, social media posts, or survey responses—to determine whether the sentiment behind the text is positive, negative, or neutral.
Sentiment analysis gives organisations insight into customer opinions about their products and services, enabling them to improve their offerings and enhance the overall customer experience. It benefits marketing, customer service, and brand reputation management, where understanding public perception is critical.
How Sentiment Analysis Works
Several key steps are necessary if emotions and opinions expressed in text are to be analysed effectively. These steps, illustrated in the code sketch after the list, include:
1. Text Preprocessing. This is cleaning the text by removing unnecessary punctuation, special characters, and stop words. Stop words are common words that carry little meaning on their own, such as “and”, “the”, and “of”.
Text preprocessing also involves breaking the text down into smaller units such as words or phrases (tokenisation) and reducing words to their base or root form—a process known as stemming or lemmatisation.
2. Feature Extraction. In this step, the text data is converted into numerical features that can be used for analysis. Common techniques include:
· Bag of Words (BoW), where text is represented as a collection of word counts, ignoring word order.
· Word Embeddings, where neural network-based algorithms like Word2Vec or GloVe learn dense vector representations that capture semantic meaning.
3. Model Building. In this step, a machine learning model is chosen and trained to classify the sentiment. Simpler approaches such as Naïve Bayes or support vector machines (SVMs) rely on statistical learning. Deep learning models such as LSTMs (Long Short-Term Memory networks) are recurrent neural networks designed to process sequential data like natural language; they can also classify finer-grained emotions such as anger, joy, or sadness, or sentiment towards specific aspects such as product features.
4. Classification. Once the model has been trained, it can classify text into sentiment categories such as positive, negative, or neutral.
5. Evaluation and Iteration. It is essential to evaluate the model’s predictions using metrics such as accuracy, precision, and recall, share insights with stakeholders, and continuously improve based on feedback and new data.
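To make these steps concrete, here is a minimal sketch of the full pipeline using scikit-learn: stop-word removal and TF-IDF feature extraction, a Naïve Bayes classifier, prediction on new text, and a held-out evaluation. The tiny dataset is invented purely for illustration; a deep learning approach would replace the vectoriser and classifier with an LSTM or transformer trained on far more data.

```python
# Minimal sentiment classification sketch with scikit-learn.
# The six example texts are invented purely for illustration; a real
# system would be trained on thousands of labelled reviews.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

texts = [
    "Absolutely love this product, works perfectly",
    "Terrible experience, it broke after one day",
    "Delivery was fine, nothing special",
    "Great value for money and fast shipping",
    "Very disappointed with the customer service",
    "It does the job, neither good nor bad",
]
labels = ["positive", "negative", "neutral",
          "positive", "negative", "neutral"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels
)

# Steps 1-2: preprocessing and feature extraction. The vectoriser
# lowercases text, tokenises away punctuation, drops English stop words,
# and produces TF-IDF weighted bag-of-words features.
# Step 3: a Naive Bayes classifier is trained on those features.
model = make_pipeline(
    TfidfVectorizer(stop_words="english", lowercase=True),
    MultinomialNB(),
)
model.fit(X_train, y_train)

# Step 4: classify unseen text into sentiment categories.
print(model.predict(["The update is awful and keeps crashing"]))

# Step 5: evaluate precision, recall and F1 on held-out data.
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```

In production, the same pipeline would be trained on far more labelled examples, and the evaluation step would feed back into further data collection and retraining.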
The benefits of sentiment analysis for businesses
Applying sentiment analysis in your business can provide valuable insights into customer opinions, improve decision-making, and enhance overall customer experience. Some practical applications include:
1. Customer Feedback Analysis
Sentiment analysis enables businesses to monitor reviews or survey responses on platforms like Google, Yelp, or product review sites. It helps businesses understand customer satisfaction levels and identify key pain points or areas for improvement.
2. Brand Monitoring
Analysing sentiment on social media mentions (Twitter, Instagram, etc.) or blog posts enables companies to track how their brand is perceived online. They can measure brand sentiment over time and assess the impact of marketing campaigns, PR crises, or new product launches.
3. Competitor Analysis
Sentiment analysis can help businesses understand how consumers feel about competitors. Tracking sentiment for competing brands can reveal areas where a company can improve its products or services to become more competitive.
4. Market Research and Product Improvement
Analysing sentiment around competitors and broader industry trends can provide valuable insight into market dynamics and consumer preferences, which is essential information when shaping marketing strategies.
Customer feedback on product features such as design or specific components assists businesses in understanding which features need further development and innovation to meet customers’ needs.
5. Crisis Management
When a brand faces negative press or a crisis, sentiment analysis can alert businesses to a spike in negative sentiment, enabling them to respond quickly with corrective actions and manage the damage to their reputation.
While sentiment analysis provides valuable business insights, it carries risks and challenges. Understanding these risks is crucial for making informed decisions and using sentiment analysis effectively.
The challenges to consider when using sentiment analysis
Several challenges must be kept in mind when using sentiment analysis. These are:
1. Inaccuracy in Interpreting Sentiment
There are three areas where this is particularly relevant.
· Sarcasm and irony. Sentiment analysis often struggles to detect sarcasm, irony, or nuanced language, leading to incorrect sentiment classification. For example, the sentence “Oh great, another update that crashes my phone!” might be incorrectly labelled as positive when it is negative.
· Contextual Understanding. Sentiment analysis models may fail to grasp the context of a sentence or text. Words that are positive in one context can be negative in another. For instance, “light” could be positive for a laptop’s weight but negative when describing its performance.
· Ambiguity. Some statements are inherently ambiguous, making it hard to assign clear sentiment. For example, “It works fine for now” could be neutral or carry hidden dissatisfaction.
2. Cultural and Linguistic Bias
Different languages, dialects, or slang can affect the accuracy of sentiment analysis. A tool trained primarily on English text might not perform well when analysing non-English languages or regional variations, which may lead to misinterpretations. Likewise, sentiment analysis systems may not fully capture the subtleties of different cultures, potentially leading to misunderstandings in a global market.
3. Over-reliance on Automation
Sentiment analysis works most effectively when there is human oversight and input. Relying solely on sentiment analysis tools without that oversight can lead to poor or incorrect decision-making. Sentiment analysis is not infallible and can miss important nuances that only human judgment can capture.
Sentiment analysis results must be considered part of a broader data set rather than the sole factor driving strategic actions.
4. Data Quality and Bias
Like any automated system, the quality and integrity of the results depend on the quality and accuracy of the data being entered. If the text preprocessing is insufficient and there is still “noise” in the data, such as irrelevant comments, spam or off-topic discussions, this can skew the results and lead to inaccurate sentiment assessments.
Likewise, if the algorithm has been trained on biased or incomplete datasets, it can produce skewed results. For example, if the training data is more representative of a particular demographic or user base, the sentiment analysis might be less accurate for other groups.
5. Privacy and Ethical Concerns
Sentiment analysis often involves analysing large amounts of text data from social media, reviews, or emails, raising privacy concerns. Businesses must ensure that they use data responsibly and comply with privacy regulations.
If businesses misuse sentiment analysis to manipulate public opinion or present an overly favourable view of their brand by downplaying negative feedback, this can lead to ethical issues and damage trust.
6. Limited Emotion Detection
While sentiment analysis typically focuses on broad categories like positive, negative, and neutral, it may not capture the specific emotions (e.g., joy, anger, frustration) driving the sentiment. A negative label doesn’t reveal whether the customer feels mild disappointment or outright anger, a distinction that can be crucial for customer service.
How to manage the challenges
Despite these challenges, sentiment analysis remains a valuable tool for businesses, and businesses can mitigate the risks in various ways.
1. Combine Sentiment with Human Analysis
As mentioned above, over-reliance on sentiment analysis can lead to poor or incorrect decision-making. Automated sentiment analysis must be combined with human review and oversight to ensure a more nuanced and accurate interpretation of the data.
2. Custom Models and Industry Tuning
Sentiment analysis models trained in one domain, such as general product reviews, may not perform well in another domain, such as legal or technical. Hence, it is essential to tailor sentiment models to fit specific industries or domains to improve accuracy and ensure the model is trained on relevant data to capture the right sentiments and emotions.
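As a rough illustration of industry tuning, the sketch below measures how a model trained on general reviews performs on domain-specific text and refits it on in-domain data when accuracy falls short. It assumes scikit-learn; the legal-tech examples, labels, and accuracy threshold are all invented for illustration, and a real workflow would hold out a separate in-domain test set.

```python
# Sketch of domain tuning: check how a model trained on general reviews
# performs on industry-specific text, and refit it on in-domain examples
# if accuracy is poor. All data here is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# A model trained on general product reviews (toy data).
general_texts = ["love it", "works great", "hate it", "completely broken"]
general_labels = ["positive", "positive", "negative", "negative"]
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(general_texts, general_labels)

# Hypothetical labelled examples from a specialised domain (legal tech),
# where everyday vocabulary carries different weight.
domain_texts = [
    "the contract review feature flagged every critical clause",
    "the audit trail is incomplete and non-compliant",
    "clause extraction missed the indemnity terms",
    "discovery search returned exactly the precedents we needed",
]
domain_labels = ["positive", "negative", "negative", "positive"]

baseline = accuracy_score(domain_labels, model.predict(domain_texts))
print(f"General model accuracy on domain text: {baseline:.2f}")

# If the general model underperforms, refit the same pipeline on in-domain
# data so its vocabulary and class boundaries reflect the new domain.
# In practice, evaluate the refitted model on a held-out in-domain set.
if baseline < 0.8:  # threshold chosen purely for illustration
    model.fit(domain_texts, domain_labels)
    tuned = accuracy_score(domain_labels, model.predict(domain_texts))
    print(f"Accuracy after refitting on domain data: {tuned:.2f}")
```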
3. Monitor Bias and Continuously Train Models
It is essential to periodically review the data used to train sentiment models to reduce the risk of bias. Models should also be continuously updated to adapt to new language trends, cultural shifts, and user feedback.
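One practical way to monitor bias is to slice evaluation results by group. The sketch below uses invented records and hypothetical group tags to compare accuracy across regions or language varieties; a persistent gap between groups is a cue to gather more representative data and retrain.

```python
# Sketch of bias monitoring: compare model accuracy across groups such as
# regions or language varieties. The records below are invented; in
# practice they would come from a periodically refreshed, labelled sample
# of real predictions.
from collections import defaultdict
from sklearn.metrics import accuracy_score

records = [
    {"group": "en-US", "true": "positive", "pred": "positive"},
    {"group": "en-US", "true": "negative", "pred": "negative"},
    {"group": "en-IN", "true": "positive", "pred": "negative"},
    {"group": "en-IN", "true": "negative", "pred": "negative"},
    {"group": "en-AU", "true": "neutral", "pred": "neutral"},
    {"group": "en-AU", "true": "positive", "pred": "positive"},
]

# Collect true labels and predictions per group.
by_group = defaultdict(lambda: {"true": [], "pred": []})
for r in records:
    by_group[r["group"]]["true"].append(r["true"])
    by_group[r["group"]]["pred"].append(r["pred"])

# A large accuracy gap between groups signals the need for more
# representative training data or retraining for the under-served group.
for group, d in sorted(by_group.items()):
    acc = accuracy_score(d["true"], d["pred"])
    print(f"{group}: accuracy {acc:.2f} on {len(d['true'])} examples")
```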
4. Transparency and Ethical Use
One of the challenges with using sentiment analysis is privacy and ethical concerns. Therefore, it is essential to ensure transparency in how sentiment data is collected and analysed and handle customer data ethically to avoid privacy violations.
By recognising these risks and taking proactive steps, businesses can leverage sentiment analysis more effectively and responsibly.
Sentiment analysis will continue to be an essential tool for businesses, particularly as further developments in AI and machine learning (ML) enable sentiment analysis models to be more adept at understanding context, idiomatic expressions, and cultural nuances.
With the rise of real-time data from social media and other platforms, sentiment analysis will continue to evolve. This will enable businesses to respond quickly to customer feedback and market trends, monitor brand health, and make data-driven decisions to meet customer preferences.
Driving Digital Breakthrough for Non-Profits in 2025
Many non-profit organisations find it increasingly challenging to have their message heard by the wider community and by potential donors and funders. The impact of technology, which once enabled organisations to amplify good-news stories and the change they brought to clients, seems to be decreasing as people scroll past those stories.
2024 presented challenges for not-for-profit organisations communicating their message. These challenges centred on organisations’ ability to keep up with technological trends, societal shifts and audience behaviour. It is essential to review the challenges of 2024 and then consider how organisations can improve how they communicate with the broader community in 2025 to ensure their message has the potential to be heard.
The impact of technology in 2024
There were several trends in technology in 2024 that impacted non-profit organisations.
New Technologies
In 2024, the influence and use of AI and automation became more mainstream for many individuals and for-profit organisations. However, many non-profit organisations, particularly small to medium-sized ones, have struggled to implement AI due to a lack of resources, time pressure, and uncertainty about balancing AI use with maintaining confidentiality and privacy.
Rapidly changing technologies require financial and time investment to set up and check systems. The majority of non-profit organisations lack the financial resources needed and are time-poor. This means that many non-profit organisations are using older technology that limits what can be achieved regarding marketing and messaging.
Social Media Challenges
These challenges include:
● Algorithm Biases. Social media platforms prioritise engaging viral content, often sidelining non-profit messages that focus on serious or more complex issues.
● Pay-to-Play Environment. Organic reach has continued to be challenging on platforms like Facebook and Instagram. The alternative to organic reach is paid advertising. However, many non-profit organisations are reluctant to pay for advertising, particularly when facing budget uncertainty.
● Platform Fragmentation. With established platforms like X losing relevance and the emergence of new platforms like Threads and BeReal, non-profit organisations must continually adapt to these new platforms and how they can be used to reach the broader community.
Digital overload and declining attention spans
Throughout 2024, people have been continually inundated with messages from various sources. It is estimated that in 2024 the average person processed around 100,000 words per day across social media, emails, news, ads, streaming services, and work-related information.
The result is information fatigue, difficulty focusing on meaningful content and a preference for bite-sized, easily digestible content, preferably under 30 seconds. This makes it challenging for many non-profit organisations to convey meaningful messages quickly.
Societal Shifts
Societal shifts have included:
Cost of living pressures
For many people, rising cost-of-living pressures in 2024 reduced disposable income, leading to donor fatigue and lower levels of engagement by community members. This, in turn, impacted donations.
Misinformation and scepticism
With the rise of social media, there appears to have been an increase in false and misleading information. When misinformation concerns social issues, it can undermine the credibility of legitimate not-for-profit organisations and dilute public understanding of or support for their work.
False and misleading information can increase scepticism within the community, which can be a barrier to the message of non-profit organisations.
Audience Behaviour
Changes to audience behaviour include generational shifts and balancing the messaging requirements of different generations. For example, Gen Zs need authentic, values-driven communication that is quick and impactful. Traditional methods, such as the lengthy reports that may reach Baby Boomers, are ineffective with Gen Zs.
Tailoring messages that resonate across generations is an ongoing challenge for many non-profit organisations.
The other element of audience behaviour is fatigue, which takes two forms.
Ask Fatigue
As mentioned under societal shifts, many people feel donor fatigue amid increased cost-of-living pressures and constant fundraising appeals from multiple organisations. However, another form of fatigue also impacts non-profit organisations.
Crisis Fatigue
With the ongoing prevalence of crises, such as climate emergencies, the ongoing impact of COVID and the increase of regional conflicts, many within the community feel desensitised or overwhelmed. There is also a sense that problems are too complex. These feelings reduce people’s willingness to be engaged and assist in an ongoing capacity.
Given these challenges, what steps can non-profit organisations take in 2025 to give themselves the best possible opportunity to have their message heard?
Optimising communication in 2025
Not-for-profit organisations can take several steps to ensure they optimise their communications in 2025 despite the challenges and changes that emerged in 2024.
Embrace and leverage technology.
Many not-for-profit organisations avoid adopting new technology, keeping up with emerging platforms and trends such as short-form video, or using AI to create interactive experiences for their audiences. The result is that their message is not heard, or is ignored because it is drowned out by other messages.
Part of this avoidance is due to financial and time constraints; however, a large part of the issue is the leaders’ mindset. Leaders and Management boards often do not understand the value and importance of embracing and using technology effectively. When leaders do get enthusiastic about the possibility of using technology effectively, the enthusiasm is often short-lived. Hence, it never becomes a consistent priority that achieves positive results over the long term.
When leaders begin to understand the potential impact technology can achieve in getting the organisation’s message out, we begin to see the following shifts within an organisation.
1. There is a budget allocation for social media and technology use.
2. There is a requirement for the CEO to report to the Board on the effectiveness of the budget allocation in getting the organisation’s message out into the community.
3. SEO and content marketing are prioritised and optimised for voice and AI-powered search tools to improve discoverability.
4. Technological innovation is encouraged, such as using AI-powered chatbots for instant communication and FAQs on websites and social media platforms.
The priority is the personal.
The days when sending out generic information was acceptable have long gone. Audiences now expect that if an organisation is going to market to them, it will have done enough research and collected enough data to understand their preferences, behaviour, and demographics, and will therefore target its outreach accordingly.
This expectation extends to not-for-profit organisations. If NFPs market to potential donors, current donors, and/or community members, they must have enough data to understand their targets’ preferences, behaviour, and demographics.
In other words, the messaging and marketing must be personalised to the people they are targeting.
For example, if the organisation is going to target Gen Zs and Millennials, it will need to develop a completely different message from the one it would use for Baby Boomers. To effectively target Gen Zs and Millennials, an organisation needs to demonstrate the impact of its work on social justice, sustainability, or other causes relevant to young people in these generations.
Another area where personalisation is crucial is accessibility and inclusion. Content must be accessible to people with disabilities, such as screen-reader-friendly video captions. Translation tools are also essential when communicating effectively with diverse communities.
The importance of impact
What is the impact of what the organisation does? Can the impact be measured? Can the impact be measured in dollar figures?
Many organisations, particularly small to medium-sized not-for-profit organisations, struggle to clarify their service’s impact and how to measure it effectively. Becoming clear about the impact the organisation is having and communicating it effectively with compelling stories that demonstrate the impact in practical ways is a powerful way to build trust within the broader community.
The essential factor is trust.
Improving communication in 2025 is about building trust.
Trust is built over time in personal relationships. The more you get to know a person, the more time you spend with them, the greater the possibility of trust developing.
Trust is also built when the other person relates to you in a personal way. When someone relates to you in general, non-specific ways, it is harder to build trust, just as it is challenging to build trust when the other person has a negative impact on you.
The same principles apply to getting your message heard in 2025.
● You need to use and leverage technology consistently. The importance of SEO cannot be stressed enough, particularly SEO optimised for AI-powered search tools. SEO allows your message to be found, and trust develops when the messaging is consistent. Many not-for-profits post on an ad hoc basis, and that lack of consistency negatively impacts the organisation.
● The importance of the personal. The challenge with messaging using social media and technology is that we must clearly demonstrate in our posts that we understand our audience. If people don’t feel understood, they will keep scrolling, and our message will be lost.
● Share your impact in a personal way. Impact has two aspects. First, we must clearly articulate the organisation’s impact on the community. Second, we must be consistent in how often we repeat that message. The clearer the message, and the more frequently we repeat it, the more impact we will have.
As we improve our communication tactics, more people will trust our message and what the organisation is achieving.
By thinking creatively and developing a consistent, coherent communication strategy, not-for-profit organisations can continue to make meaningful connections to their communities and audiences in 2025.
Building NGOs AI Confidence
Does your non-profit organisation have a clear strategy and policies on artificial intelligence (AI) that cover:
● The areas in which AI will be used,
● The protocols the organisation will follow for the use of AI, and
● Clear procedures that will be followed if and when things go wrong?
Many non-profit organisations lack a clear strategy and basic policies and procedures in this area for several reasons.
Reasons for a lack of clear strategy and policies in the AI area
These include:
Executive expertise gap
Many leaders in non-profit organisations lack basic knowledge or understanding of AI’s benefits and risks. This is because:
● Many executives’ training and background are in people skills, such as psychology or social work, rather than AI or IT.
● The current demands of their role leave little time to prioritise understanding the basics of AI. Many executives are juggling increasing government and statutory reporting requirements, rising client demand, and growing complexity of client problems, all with decreasing financial resources. For small to medium-sized non-profit organisations, much of the executive’s time is spent applying for and trying to secure funding. This means AI has a lower priority and never gets addressed.
● Lack of incentive due to how funding is structured and provided. Most of the funding for non-profit organisations is provided for direct service delivery, and the funders do not provide sufficient funding for technology or AI.
The lack of time and incentive for executives to prioritise considering the role of AI in the organisation, combined with the rapid pace of technological advancements in this area, means the expertise gap widens.
Lack of investment in IT and AI
This is linked to the third dot point above. While government departments give lip service to the importance of functional and efficient technology and IT within a service, they do not see it as the government’s responsibility to fund IT development in non-profit organisations. This is despite the clear link between efficient IT services, including AI, and effective service delivery.
Consequently, most non-profits have old technology and IT systems that negatively impact service delivery.
Many Management boards are also reluctant to invest in the organisation’s IT systems because they fail to understand the link between efficient technology and effective service delivery.
Outdated view of service delivery
Many management boards still view service delivery in the non-profit sector as a service provided by one individual to another individual or group of individuals.
However, in a technological society, service delivery must include services provided through and with technology, including AI. If government funders and Boards were to recognise and accept this fact, there would be greater investment in technology and understanding of AI’s importance.
The stories of where it all goes wrong
There are always stories of things going wrong. These stories increase people’s aversion to seriously considering AI and developing risk mitigation strategies. Fear and risk aversion are often increased by a lack of understanding and the abovementioned expertise gap.
However, as AI becomes more important and influential in all aspects of our lives, non-profit organisations can no longer dismiss AI as too overwhelming or complex to understand and try to ignore. Management Boards and executives need to move beyond fear and apprehension and develop a holistic AI strategy that covers the responsible and ethical use of AI across the organisation.
Why is it essential to have clear policies covering AI?
There are several reasons why non-profits need to have policies and protocols for using AI.
Good Governance
Having a clear strategy for how the organisation will use AI is part of good governance and an essential component of the Board’s risk management responsibilities.
A Board of Management must consider many risk areas in its governance responsibilities. For example:
● Risk of insufficient funding or being defunded.
● Risks associated with service delivery can range from client complaints to confidentiality breaches or client harm caused by staff action or inaction.
● Risk of bad publicity.
● Risks of non-compliance with financial reporting requirements.
These are just a few of the many risks that the Board of Management must consider. Just as Boards accept the need to think through and have clear policies about these risks, the same applies to AI. It is no longer acceptable for Boards to ignore this area, claiming it is too complicated.
AI is already being used by staff in the organisation
As staff members increasingly use AI in their private lives, they bring these skills and how they use AI into the workplace.
However, using AI in a personal capacity differs from how it should be used in an organisational setting. Without clear policies, the Board is derelict in its duties because it fails to provide clear guidance to staff, who may inadvertently use AI in ways that put the organisation at risk.
Organisations are already using AI
Outlook, Gmail, Word documents, and Google search all have an AI component. This means any non-profit with IT and an email account already uses AI, whether the board or executive staff are consciously aware of it.
This is a further reason why the organisation must develop clear AI policies.
Best practice for developing a clear strategy and policies for using AI in your organisation
There are several steps Boards and executives can take to develop strategies and policies for how AI will be used within the organisation.
1. Consider the spheres of AI
Three spheres must be considered.
A) AI in products that are currently being used.
As mentioned above, many IT tools organisations use, such as Outlook, Gmail, Adobe, and Word, already have AI embedded in them. The AI in these tools enhances staff productivity and is essential for the organisation’s efficient operations.
B) AI for specific uses.
Examples of specific uses are:
i) Using AI to develop fundraising campaigns. Many non-profit organisations recognise the benefits of using AI for ethical fundraising. These benefits include:
● Targeted campaigns.
● Freeing up administrative time previously spent developing fundraising material, so it can be redirected to direct service delivery.
● Being able to analyse data that shows the effectiveness of any campaign. This data analysis is essential when reporting to Boards of Management or funders. By analysing previous campaigns, future campaigns can be more targeted to donors and the organisation’s needs.
ii) Data Analysis.
AI allows organisations to analyse data in ways that previously were not possible. Effective data analysis is essential for non-profit organisations in several ways:
● Accurate data analysis reveals trends in client services, enabling the organisation to respond nimbly and provide the required services.
● Accurate data is essential when applying for more or new funding.
C) Utilisation of publicly available AI tools.
Using publicly available AI tools can increase productivity and efficiency. However, this must be balanced with careful consideration of these tools’ privacy and ethical use.
It is this third area that most non-profit organisations have concerns about.
The advantage of considering the spheres of AI is that it allows Boards and executives to break down AI into segments and ensure a clear strategy and policy for each segment rather than feeling overwhelmed by the totality of AI.
Invest in training
With budgets that leave little room for additional expenditure and a general lack of funding, most non-profit organisations do not allocate money for training.
This is short-sighted and has detrimental flow-on effects for the organisation and staff.
The organisation is setting staff up to fail
Providing tools that will enable staff to perform their roles with greater efficiency and effectiveness but not providing the necessary training sets staff up to fail or to use AI in ways that put the organisation at risk.
Impact on staff
If staff don’t feel confident using AI, they will revert to the work practices they are comfortable with. These practices are generally less efficient and take more time. The result is that the gap between demand for services and the ability of staff to meet that demand will widen.
In this situation, most non-profit organisations fall into the trap of employing more staff to meet increased demand. However, having more staff adds further costs to the budget, and new staff use the same inefficient methods that caused the problem in the first instance.
Resistance to change
When staff have experienced an organisation bringing in new methods but not providing sufficient training and support, they grow to distrust any new initiative from senior management. This has broader implications when non-profit organisations try to implement a change management process. The lack of trust over time makes any change management process more challenging.
Find the staff member who is the AI mentor
In addition to providing sufficient budget for training, it is also essential to find and allocate a staff member who will be the AI mentor for other staff.
Implementing AI safely and effectively in an organisation involves training staff and building their confidence in using AI. A mentor is essential for this.
For many staff, having a mentor they can approach with their concerns is much easier than encouraging them to approach a manager or supervisor. Staff are often concerned about appearing incompetent or ‘bothering’ a manager with something they feel is unimportant. Providing staff with an AI mentor means staff can clarify their concerns without these anxieties.
Non-profit organisations cannot ignore AI. It is embedded in many of the organisation’s tools, and staff increasingly use AI in the workplace as they see the benefits in their private lives.
Therefore, the Board of Management and executives must build their confidence in and understanding of AI, and develop a clear strategy and policies that protect the organisation and give staff confidence and direction when using AI.