
The state of play

Wednesday 15 – Friday 17 May 2024 | WP3368


Participants discussed the current trends in AI technological progress, various AI models, the pace of change, regulatory spaces, and emerging risk-based approaches with implications for the humanitarian sector.

The humanitarian sector is facing some of its biggest challenges in 2024, with 293 million people estimated by the UN to need humanitarian assistance and protection. UN support aims to reach only 60% of them. Despite more than 12,000 organisations responding to humanitarian crises across the globe, the sector's overall capacity falls short.

With the humanitarian sector operating under huge resource constraints, the search for effective and innovative solutions inevitably turns to technology. AI offers the potential to help address this gap and unlock rapid progress. It can be used to predict when crises will happen and what their impact on populations is likely to be. It can potentially improve the breadth, depth, and effectiveness of humanitarian responses.

The humanitarian sector is built on principles of humanity, neutrality, impartiality, and independence.  Currently, humanitarian, development, peacebuilding and human rights organisations are staking out positions on AI, its application and governance, and their perspectives on shared global ownership. 

AI is being applied in different ways in the humanitarian sector, for example, to predict natural disasters, displacement, famine, and air strikes, to identify crop pests, and to provide support to vulnerable people through chatbots.

“When we talk about AI, think of humanitarian principles of humanity, neutrality, impartiality, and independence.”

While developments in AI carry huge potential to transform the humanitarian sector, participants identified major risks, challenges, and societal consequences in this uncertain space. Questions emerged around how AI might be developed and used by a range of stakeholders, including malign actors; whether the humanitarian sector has enough resources to understand and harness AI; and how to build AI models that centre populations who are often regarded as peripheral to humanitarian and development efforts.

Agreements between technology companies and humanitarian organisations already exist, yet there are major concerns around data storage, ownership, and use. Tension exists between collecting and owning data and protecting humanitarian principles.

“We don’t know how to be as effective as we can while also being responsible.”

How can the sector be as effective as possible in meeting its humanitarian goals through AI, while also acting responsibly? The principle of 'first do no harm' is important. Questions emerge around AI, human rights, and risk: for example, the rights of populations at risk of harm, the right to security, and the right to freedom of speech. Conflicts between these rights are exacerbated by AI.

Governance and regulation

Governance and regulation of AI in the humanitarian sector was a major concern. It is important to get the language of regulation right: regulatory frameworks use terms like 'interested parties', 'customers', and 'affected persons', whereas the humanitarian sector uses terms like 'beneficiaries' and 'recipients'. The position of people trying to survive a major disaster is not that of a 'customer' or 'interested party'.

A range of global and regional regulations exist which provide standards for best practice, such as the EU's General Data Protection Regulation (GDPR) and the new EU AI Act, which sets out a range of risk levels from high to low. This is therefore not a legal vacuum, and the humanitarian sector needs to consider how any new regulations on AI can square with what already exists nationally and globally. However, caution is needed: GDPR, for example, has customer-focused incentives rather than considering highly vulnerable populations.

The moral underpinnings of humanitarian action have produced digital ethics frameworks in the past and, in today's climate, ethics and regulation for AI in the humanitarian field are urgently needed. Industry standards can be helpful and can shape production and procurement.

Participants discussed governance and regulation with a view that there is no one single framework to address the problem or meet the varied outcomes the sector is seeking to achieve. In this initial discussion, key themes of data, assurance and ethics, and governance and regulation emerged, to be explored further throughout the meeting.

