BUILDING NGOs' AI CONFIDENCE

Does your non-profit organisation have a clear strategy and policies on artificial intelligence (AI) that cover:

      The areas in which AI will be used.

      The protocols the organisation will follow for the use of AI, and

      Clear procedures that will be followed if and when things go wrong?

Many non-profit organisations lack a clear strategy and basic policies and procedures in this area for several reasons.

Reasons for the lack of a clear AI strategy and policies

These include:

Executive expertise gap

Many leaders in non-profit organisations lack basic knowledge or understanding of AI's benefits and risks. This is because:

      Many executives' training and background are in people skills, such as psychology or social work, rather than AI or IT.

      The current demands of their role leave little time to prioritise understanding the basics of AI. Many executives are juggling increasing government and statutory reporting requirements, rising client demand and greater complexity of client problems, all with decreasing financial resources. For small to medium-sized non-profit organisations, much of the executive's time is spent applying for and trying to secure funding. As a result, AI has a lower priority and never gets addressed.

      Lack of incentive due to how funding is structured and provided. Most of the funding for non-profit organisations is provided for direct service delivery, and the funders do not provide sufficient funding for technology or AI.

The lack of time and incentive for executives to prioritise considering the role of AI in the organisation, combined with the rapid pace of technological advancements in this area, means the expertise gap widens.

Lack of investment in IT and AI

This is linked to the third dot point above. While government departments pay lip service to the importance of functional and efficient technology and IT within a service, they do not see it as the government's responsibility to fund IT development in non-profit organisations. This is despite the clear link between efficient IT services, including AI, and effective service delivery.

Consequently, most non-profits have old technology and IT systems that negatively impact service delivery.

Many management boards are also reluctant to invest in the organisation's IT systems because they fail to understand the link between efficient technology and effective service delivery.

Outdated view of service delivery

Many management boards still view service delivery in the non-profit sector as a service provided by one individual to another individual or group of individuals.

However, in a technological society, service delivery must include services provided through and with technology, including AI. If government funders and Boards were to recognise and accept this fact, there would be greater investment in technology and understanding of AI's importance.

The stories of where it all goes wrong

There are always stories of things going wrong. These stories increase people's aversion to seriously considering AI and developing risk mitigation strategies. Fear and risk aversion are often increased by a lack of understanding and the abovementioned expertise gap.

However, as AI becomes more important and influential in all aspects of our lives, non-profit organisations can no longer dismiss AI as too overwhelming or complex to understand, or try to ignore it. Management Boards and executives need to move beyond fear and apprehension and develop a holistic AI strategy that covers the responsible and ethical use of AI across the organisation.

Why is it essential to have clear policies covering AI?

There are several reasons why non-profits need to have policies and protocols for using AI.

Good Governance

Having a clear strategy for how the organisation will use AI is part of good governance and an essential component of the Board’s risk management responsibilities.

A Board of Management must consider many risk areas in their governance responsibilities. For example:

      Risk of insufficient funding or being defunded.

      Risks associated with service delivery, ranging from client complaints to confidentiality breaches or client harm caused by staff action or inaction.

      Risk of bad publicity.

      Risks of non-compliance with financial reporting requirements.

These are just a few of the many risks that the Board of Management must consider. Just as Boards accept the need to think through these risks and have clear policies covering them, so they must with AI. It is no longer acceptable for Boards to ignore this area, claiming it is too complicated.

AI is already being used by staff in the organisation

As staff members increasingly use AI in their private lives, they bring these skills and how they use AI into the workplace.

However, using AI in a personal capacity differs from how it should be used in an organisational setting. Without clear policies, the Board is derelict in its duties because it fails to provide clear guidance to staff, who may inadvertently use AI in ways that put the organisation at risk.

Organisations are already using AI

Outlook, Gmail, Word documents, and Google search all have an AI component. This means any non-profit with IT and an email account already uses AI, whether the board or executive staff are consciously aware of it.

This is a further reason why the organisation must develop clear AI policies.

Best practice for developing a clear strategy and policies for using AI in your organisation

There are several steps Boards and executives can take to develop strategies and policies for how AI will be used within the organisation.

1. Consider the spheres of AI

Three spheres must be considered.

A) AI in products that are currently being used.

As mentioned above, many IT tools organisations use, such as Outlook, Gmail, Adobe, and Word, already have AI embedded in them. The AI in these tools enhances staff productivity and is essential for the organisation's efficient operations.


B) AI for specific uses.

Examples of specific uses are:

i) Using AI to develop fundraising campaigns. Many non-profit organisations recognise the benefits of using AI for ethical fundraising. These benefits include:

      Targeted campaigns.

      Freeing up administrative time previously spent developing fundraising material, so that it can be redirected to direct service delivery.

      Being able to analyse data that shows the effectiveness of any campaign. This data analysis is essential when reporting to Boards of Management or funders. By analysing previous campaigns, future campaigns can be more targeted to donors and the organisation's needs.

ii) Data analysis.

AI allows organisations to analyse data in ways that previously were not possible. Effective data analysis is essential for non-profit organisations in several ways:

      Accurate data analysis reveals trends in client services, enabling the organisation to respond nimbly and provide the required services.

      Accurate data is essential when applying for more or new funding.


C) Utilisation of publicly available AI tools.

Using publicly available AI tools can increase productivity and efficiency. However, this must be balanced with careful consideration of privacy and the ethical use of these tools.


It is this third area that causes most non-profit organisations concern.

The advantage of considering the spheres of AI is that it allows Boards and executives to break down AI into segments and ensure a clear strategy and policy for each segment rather than feeling overwhelmed by the totality of AI.

2. Invest in training

With tight budgets that leave little room for additional expenditure, most non-profit organisations do not allocate funding for training.

This is short-sighted and has detrimental flow-on effects for the organisation and staff.

The organisation is setting staff up to fail.

Providing tools that will enable staff to perform their roles with greater efficiency and effectiveness but not providing the necessary training sets staff up to fail or to use AI in ways that put the organisation at risk.

Impact on staff

If staff don’t feel confident using AI, they will revert to the work practices they are comfortable with. These practices are generally less efficient and take more time. As a result, the gap between the demand for services and the ability of staff to meet that demand will widen.

In this situation, most non-profit organisations fall into the trap of employing more staff to meet increased demand. However, having more staff adds further costs to the budget, and new staff use the same inefficient methods that caused the problem in the first place.

Resistance to change

When staff have experienced an organisation bringing in new methods but not providing sufficient training and support, they grow to distrust any new initiative from senior management. This has broader implications when non-profit organisations try to implement a change management process. The lack of trust over time makes any change management process more challenging.

3. Find the staff member who will be the AI mentor

In addition to providing sufficient budget for training, it is also essential to find and allocate a staff member who will be the AI mentor for other staff.

Implementing AI safely and effectively in an organisation involves training staff and building their confidence in using AI. A mentor is essential for this.

For many staff, having a mentor they can approach with their concerns is much easier than encouraging them to approach a manager or supervisor. Staff are often concerned about appearing incompetent or ‘bothering’ a manager with something they feel is unimportant. Providing staff with an AI mentor means staff can clarify their concerns without these anxieties.

Non-profit organisations cannot ignore AI. It is embedded in many of the organisation's tools, and staff increasingly use AI in the workplace as they see the benefits in their private lives.

Therefore, the Board of Management and executives must build their confidence and understanding of AI and develop a clear strategy and policies that protect the organisation and give staff confidence and direction when using AI.
