AI will have profound implications for humanitarians. If its potential is realised, experts suggest it could help to address chronic issues that have hindered effective and accountable humanitarian action. AI could transform how humanitarian actors coordinate, how decision-makers access critical information, or how accountability is provided to communities affected by humanitarian disasters. Predictive analytics models and image processing algorithms for satellite imagery are already helping humanitarians to respond faster and more effectively.
Conversely, AI may turbocharge risks. Irresponsible use of AI could unintentionally exacerbate bias in delivery and exclude vulnerable groups or individuals from lifesaving support, or unintentionally spread false information. Affected communities may not be involved in decisions about how AI tools that impact them are used, undermining both their privacy and the effectiveness of delivery. The use of AI may also have systemic impacts, for example disadvantaging smaller, local organisations that lack the resources to use it, shifting power and resources away from local actors. Acute harms may come from malign actors deploying AI-powered tools to disrupt humanitarian delivery through cyber-attacks or disinformation. Given the vulnerability of humanitarian populations, these risks may be felt particularly starkly.
Collaboration amongst humanitarian actors and with industry experts is key to realising a positive vision for AI. The use of AI will increasingly shape all parts of the humanitarian system as organisations rely on AI-powered systems to drive efficiency gains. Many challenges will therefore be shared, and tackling them jointly will deliver better outcomes. In some cases, such as data responsibility, we can build on good practice where the humanitarian community has previously come together. However, there will be novel challenges with no precedent of broad collaboration. Building a shared, deep understanding of the impacts of AI on humanitarian crises is therefore key to fostering conversations on where further engagement would be most valuable, and where redlines and minimum standards may be appropriate.
The recent UK AI Safety Summit provided a forum for industry and governments to come together to build consensus on how AI can safely be used for good. This event will bring a humanitarian lens to the impact of AI, drawing together the views of leading experts from local and national NGOs, INGOs, industry, academia and governments. Discussion now, whilst the application of AI to humanitarian action is still in a formative period, can shape the use and development of AI to be consistent with humanitarian ethics, principles, and standards. It will also identify the responsibilities of different actors and determine pathways for further collaboration.