Participants worked in small groups to analyse three different scenarios that set out possible states of how AI might impact humanitarian action by 2030. This session was created and facilitated in collaboration with The Government Office for Science. The scenarios informed a discussion on the potential impact that different opportunities and risks may have on humanitarian delivery, and on the actions most likely to steer humanity towards a positive future.
Wild West of AI
In the groups that discussed the scenario of a ‘Wild West of AI’, concerns inevitably focused on the harms caused by the proliferation of unregulated and ungoverned AI, including misinformation, erosion of trust, and the creation or escalation of conflict and war. Duplicate and unused solutions to problems would lead to inefficiency and the wastage of precious resources. Ownership and control of AI, and the ability to validate it and its impact, could easily fall into the hands of malign actors.
However, the opportunities are huge, with a catalogue of potential solutions to use, learn from, and build upon as new actors and alternative power dynamics emerge. Realising them relies on sector collaboration characterised by transparency, knowledge sharing, and reusable applications, along with strong global governance and interoperability. An AI Global Compact could provide principles and a framework to guide a network of solutions and effective collaboration, with standards for evaluation.
“We have a window of opportunity to shape the future trajectory of AI in the humanitarian sector. But this window has already narrowed since 2021.”
In this scenario, participants questioned whether humanitarian principles are as central as they should be. What do these principles mean in the modern world of AI? Humanitarian actors are at the mercy of huge power imbalances with technology companies and infrastructure providers, and the humanitarian sector lacks the financial resources to obtain the most capable AI tools.
Is there an opportunity for a new type of public partnership, tailored to different risk positions, that allows the sector to reframe these relationships? Can the sector explore open-source data in a marketplace that allows for joint ownership and gives all humanitarian actors equal access to data?
“We must look at AI from a local, community-oriented perspective.”
AI on a Knife Edge
Other groups discussed ‘AI on a Knife Edge’, a scenario in which Artificial General Intelligence (AGI) has been achieved and, although there are early developments in safety and regulation, the risks of harm remain high. In this scenario, local NGOs are using and relying on AI for multiple operations, AI labs are developing their own humanitarian assistance, and AGI can devise its own subgoals.
“Conversations about safe and responsible AI are not held strongly enough by the humanitarian sector.”
The key focus of discussion was the tension between delivering humanitarian aid as efficiently as possible and what it means to be human. Participants identified activities that only humans can provide, including relating to others through emotional intelligence and interacting with vulnerable communities in the ways those communities prefer. Although AI systems may support negotiation and conflict resolution, there remains an element of humanity in these domains that surpasses technology.
Using AI to make humanitarian back-office functions and processes more efficient would free up staff time, allowing staff to engage more in human activities with affected populations; however, donor expectations might change, and people’s time in this area may not be funded.
“There is a constant mention of the need for localisation and to be embedded in communities. Why is it so difficult to do?”
A major question was whether the use of AI is good for localisation and the decolonisation of aid. The sector is not good at listening to local voices. Could AI provide the tools to do this better? If so, how can the co-creation of AI with local actors, tech developers, and communities be ensured? It is important to build a strong business model that identifies the underlying economic incentives and sustainability concerns, and that prioritises locally identified needs and humanitarian principles.
Safety was a major consideration, with suggestions for a categorisation of risks that mirrors the EU AI Act, and for auditing to avoid harm and AI hallucinations further down the line.
AI disappoints
Other groups discussed a scenario where ‘AI disappoints’ and AI capabilities have developed more slowly than expected. The humanitarian sector is disappointed with the results of using AI, with bad decisions hurting groups of vulnerable people, and donors are withdrawing funding.
Groups reflected that this scenario is a real possibility for AI in the humanitarian sector, with concerns that expectations are too high and that the sector should not allow the hype around AI to drive engagement. Clear issues arise around capacity and capability in the sector, with concerns over safety, ownership, and access, and over the constraints of working with a private sector driven by revenue and profit.
To make meaningful progress, it is important to build communities of practice in the sector, including learning from failures and holding continuing dialogues among trusted stakeholders. There is a need to reduce and remove interagency competition as much as possible, and to create a joint donor fund for the development of AI practices and learning.