
Creating the enabling environment for safe AI uptake

Wednesday 15 – Friday 17 May 2024 | WP3368


Participants discussed the constraining factors and enabling environment for safe AI uptake, including infrastructure requirements such as access to data, AI models, procurement skills, and technical expertise.

Exchange and learning

A review of the use of AI in the humanitarian sector revealed that AI conversations are increasingly polarised (a ‘silver bullet’ versus the need to ‘close the gate’). Charting a meaningful future requires relinquishing overly simplistic mental models. The humanitarian sector and technology groups need more productive conversations, moving from raising broad concerns to taking practical steps.

Coordination around initiating pilots, reducing duplication, and limiting competition was a common area of discussion. Participants were keen to consolidate case studies, use cases, and learning; to share experiences, including failures; and to come together as partners with a common interest to classify lessons, avoid duplication, and work together.

Another problem is the lack of transparency. Beyond the hype, it is difficult to understand the full extent of pilot projects and who is doing what from a big-picture perspective. As a result, the same mistakes may be repeated time and time again.

Evidence-based AI in humanitarian contexts is a work in progress, and the sector needs to invest human time to understand how staff are interacting with AI and using it, along with an analysis of power and politics in its application.

“Where does agency stand between human and the machine?”

Locally-led action and meeting needs

Digital divides remain a concern globally, with digital gaps (in both hardware and software) between generations, for women and girls, and for other intersectional communities. Protection and freedom from violence, along with human rights in a digital age, are all serious considerations.

An AI literacy gap exists in local communities. Are local populations sufficiently knowledgeable to deal with the complexity of risks and biases around AI? AI can deepen divides, biases, exclusions, and censorship.

AI literacy, capacity, and capability within the humanitarian sector are a major issue. Digital literacy skills are difficult to find, and when the capacity is not in-house, how far and how fast are organisations falling behind? Ensuring due diligence without in-house AI capacity is problematic.

“Are we at risk of bringing in culture debt, process debt, and reproducing our own failings as we go forward?”

The humanitarian sector still needs to ask the basic and fundamental question: which humanitarian challenges is AI appropriate for? Many AI initiatives do not appear to be guided by meaningful engagement with communities.

Some participants voiced that the use of AI in the humanitarian sector is inherently extractive: organisations are extracting data from vulnerable people, systems are being designed by those in power, and little genuine engagement with communities is occurring.

Consent around data collection at the local level is a chronic problem. Broader discussions about AI have criticised business models as data theft; the humanitarian sector has lessons to learn here, and it is urgent and important to ensure community engagement and participatory co-design of data collection and use.

“Don’t expect tech companies to prioritise ethics. The humanitarian sector should be doing this.”

Data concerns

The importance of high-quality data for AI cannot be overstated. Barriers and challenges to accessing or using quality data for AI in humanitarian contexts must be solved to enable good, efficient, and responsible AI.

One major challenge is that sometimes the data simply does not exist. Many AI-powered tools will only work where local data is available, which requires community knowledge and data that has not yet been collected.

A people-centred approach is important here, to identify data that may be invisible to technologists who are building for, rather than with, communities. Definitions also matter in data collection: if the category for gender is binary, for example, anyone who does not identify with those categories will be invisible in the data.

Where data does exist, organisations may not be willing to share it with others, and sometimes organisations may not have the capability to use the data that is available. Both are barriers.

Data selection and interoperability are crucial considerations, as it is important to blend local and other data. A shared key is needed to match records across datasets; however, data is often not standardised to enable this.
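To make the point concrete, the sketch below joins a hypothetical local needs-assessment table to a population baseline on a shared, standardised place code. The file names, column names, and use of pandas are illustrative assumptions, not a recommended toolchain.

```python
# Minimal sketch (hypothetical files and columns): matching records across
# datasets depends on a shared, standardised key such as an admin place code.
import pandas as pd

needs = pd.read_csv("needs_assessment.csv")          # assumed: admin2_pcode, people_in_need
population = pd.read_csv("population_baseline.csv")  # assumed: admin2_pcode, total_population

# The join only works because both tables carry the same standardised key;
# without it, records cannot be matched across datasets.
merged = needs.merge(population, on="admin2_pcode", how="left")

# Rows that fail to match are a common symptom of non-standardised data.
unmatched = merged[merged["total_population"].isna()]
print(f"{len(unmatched)} areas could not be matched on the shared key")
```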

Data lives in many places and with varying quality; the use of data repositories such as the Humanitarian Data Exchange (HDX) for standardisation could drive interoperability and support better understanding of the limitations of the data. Data-sharing agreements are also critical, to preserve privacy and confidentiality and to agree gradients of sharing, such as sharing raw data versus sharing insights, which can mitigate risk.
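As a minimal sketch of one such gradient of sharing, assuming a hypothetical record-level survey file, an organisation might release only aggregated insights rather than raw records; the column names and suppression threshold below are illustrative assumptions.

```python
# Minimal sketch (hypothetical data): sharing aggregated insights instead of raw records.
import pandas as pd

# Record-level survey data that should not leave the organisation.
raw = pd.read_csv("household_survey.csv")  # assumed: admin2_pcode, household_size, food_insecure

# Aggregate to area level before sharing: counts and rates only, no raw records.
insights = (
    raw.groupby("admin2_pcode")
       .agg(households=("household_size", "count"),
            food_insecurity_rate=("food_insecure", "mean"))
       .reset_index()
)

# Suppress small groups so individual households cannot be singled out
# (the threshold of 20 is an illustrative choice, not a standard).
insights = insights[insights["households"] >= 20]
insights.to_csv("shared_insights.csv", index=False)
```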

Data for analysis needs to be locally relevant, representative, standardised, and of high quality. In exploring how generative AI relies upon such data, there is relative opacity regarding what went into the training of the massive models, and it is hard to identify biases created at that foundational stage.

Data labelling and annotation can also introduce human biases at this stage, which can have negative impacts on populations. Synthetic data, which can mitigate some issues such as privacy, is not a panacea.

Working with the private sector

Humanitarian actors should be careful when adopting tools and supply chains from tech companies without a deeper understanding of where the data comes from, what decisions were made, and therefore what the model can and cannot do. Only then can humanitarians identify the parameters of use, the likely outcomes, and any restrictions.

A blended, harm-mitigation approach to understanding the inherent biases within models is important. A common view was that it is necessary to work with tech companies so that they explain their processes and actions, and that transparency and regulation should be enforced upon companies.

Equitable outcomes may not emerge from the AI sector on their own. Most of the world’s population does not have access to AI in their native language, and market pressures mean serving those languages often lacks commercial viability. Small language models may be critical for equitable outcomes, and tech organisations have an important part to play here.

The humanitarian sector also faces the challenge of due diligence capabilities and compliance with standards across governments and the private sector. Tech providers who work with the humanitarian sector also work with governments on surveillance and the military, and the sector must ask whether this is a conflict of interest.

There are also geopolitical dimensions to AI, with differing views among China, the US, and the EU, which force the sector to think about AI in the concrete, not the abstract. For example, are there risks if a humanitarian actor relies on tools that suddenly become unavailable due to wider geopolitical shifts? It is important to have conversations about monitoring and evaluation, and about why outcomes are positive or negative. There is a huge gap between the tech developer and the end user, and a ‘black hole’ around transparency, accountability, and advocacy.

“Informing ourselves in the humanitarian sector is vital. Listen to podcasts, read tech correspondence and support investigative journalism.”
