Bert Maerten, International Program Director for Oxfam Denmark, recently described the challenges many NGOs face as they navigate the rapidly evolving landscape of Generative AI. In a conversation about how to harness this technology effectively, he summed up the common sentiment among his peers: “We’re all on our own AI journey. Many of us are already experimenting with it, and everyone has a learning curve.” The pressing questions for organizations like Oxfam are how to advance strategically and how to distinguish valuable innovations from mere hype.
Despite the buzz around AI, many within the aid and development sectors feel they are lagging behind. A recent webinar highlighted that most organizations remain in the “Experimentation” phase of AI adoption, with many still grappling with basic awareness of its potential. The situation echoes the digital transformation wave of a decade ago, when organizations faced a similar sense of urgency to adopt new technologies. The difference now is the speed of change, with AI advances arriving in months rather than years.
Beyond the surface-level excitement, a closer look reveals that many organizations are cautiously testing AI technologies. Observations from recent conferences and personal experience point to a range of activities across five levels of AI engagement. The first layer is personal use: employees are already using AI tools such as ChatGPT and Claude to draft reports and analyze data, often without formal recognition from their organizations. This informal adoption can yield significant productivity gains, but it also introduces risks when left unmanaged.
The next layer is institutional use, where NGOs adopt AI tools for internal processes such as Knowledge Management and Monitoring, Evaluation, and Learning (MEL). More advanced applications are emerging, including systems that use Retrieval Augmented Generation (RAG) to ground answers in an organization’s internal documents. However, many of these initiatives remain isolated within specific teams and lack a cohesive organizational strategy.
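To make the RAG idea concrete, the sketch below shows the core retrieve-then-prompt loop over a handful of internal documents. It is a minimal illustration, not a description of any specific NGO system: the sample documents are invented, TF-IDF similarity stands in for the embedding models and vector stores a production setup would use, and the assembled prompt would be sent to whichever language model the organization has approved.

```python
# Minimal Retrieval Augmented Generation sketch (illustrative only).
# Assumptions: documents are short plain-text snippets (invented here),
# TF-IDF similarity approximates the embedding-based retrieval a real
# deployment would use, and the prompt is handed to an external LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "2023 MEL report: cash transfers reached 12,000 households in the pilot region.",
    "Knowledge base: partner onboarding requires a signed data-sharing agreement.",
    "Evaluation note: WhatsApp outreach doubled response rates compared with SMS.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(docs + [query])          # last row is the query
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [docs[i] for i in ranked]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved material."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using only the context below. Say so if the context is insufficient.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What did WhatsApp outreach achieve?", documents))
```

The instruction to answer only from the retrieved context is what keeps the model’s output traceable to internal material, which is the main appeal of RAG for knowledge management and MEL work.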
Some organizations are beginning to apply AI externally, engaging communities with tools like WhatsApp bots for information dissemination and AI-driven diagnostics for humanitarian response. Yet, this raises ethical considerations about bias, transparency, and data security, particularly regarding data collection from marginalized groups.
As NGOs venture into this new territory, several key challenges have emerged. First, the governance frameworks in many organizations are not equipped to keep pace with the rapid advancements in AI technology. This disconnect can result in outdated policies that fail to address emerging risks. Additionally, much of the AI usage occurs in a “shadow” capacity, where staff leverage tools without oversight, complicating risk management and organizational learning.
Skills gaps further complicate matters, as many staff members are thrust into roles demanding AI-related decision-making without adequate training. This skills deficit extends to leadership, where boards often lack the expertise to guide AI-related strategies effectively. Furthermore, weak data infrastructure can undermine the effectiveness of AI systems, as the quality of outputs is directly tied to the quality of the input data.
Despite these challenges, there are opportunities for NGOs. Maerten suggests turning informal AI use into a learning opportunity by encouraging staff to share their experiences with AI tools. By doing so, organizations can identify useful applications and address risks collaboratively. Establishing robust governance that adapts to the fast-paced nature of AI is another crucial step. This entails defining clear, flexible red lines and ensuring community input in decision-making processes.
Finally, fostering AI literacy across all staff levels is essential. As AI becomes increasingly integrated into daily operations, understanding its implications will become a foundational skill, akin to proficiency in spreadsheets or email. By investing in training and resources, NGOs can better prepare their teams to navigate the complexities of AI.
Oxfam’s journey into AI reflects a broader narrative within the NGO sector. The challenge lies not in shunning AI but in embracing its potential while addressing the inherent risks. As organizations begin to openly engage with this technology, they will not only enhance their operational capabilities but also contribute to a more responsible and ethical application of AI in the development sector.