As frontier artificial intelligence systems evolve at a remarkable pace, global policymakers are confronted with the urgent need to establish governance mechanisms that can keep up. At the India AI Impact Summit 2026, the session “International AI Safety Coordination: What Policymakers Need to Know” gathered ministers, multilateral leaders, and AI safety experts to discuss how developing economies can shape global AI safety frameworks proactively, rather than merely adhering to fragmented rules set by others.
This closing dialogue of the International AI Safety Coordination track focused on practical strategies to align AI innovation with public trust, fundamental rights, and long-term global stability. Speakers emphasized that for the Global South, collaboration on AI safety is an economic and technological necessity rather than an option.
With AI already integrated into critical sectors such as public health, agriculture, education, social protection, and public service delivery, the urgency for nations to move from isolated national approaches to a coordinated strategy has never been clearer. Participants noted that the next phase of AI governance will hinge on institutions’ ability to build capacity and operationalize common standards at a speed that matches rapid technological advancement.
Josephine Teo, Minister for Digital Development and Information in Singapore, emphasized the need for evidence-based policymaking and globally interoperable standards. Drawing parallels to aviation safety, she argued that AI governance should rely on rigorous testing and simulation rather than intuition. Without international coordination, she warned, “fragmentation will persist, trust will weaken, and the safe scaling of frontier technologies will become far more difficult.”
Echoing these sentiments, Gobind Singh Deo, Malaysia’s Minister of Digital, stressed that credible regional cooperation hinges on strong domestic capacities. He highlighted the importance of middle powers bolstering their enforcement capabilities, building domestic AI governance expertise, and developing institutional capacity. Platforms such as the ASEAN AI Safety Network were identified as essential mechanisms for translating shared commitments into operational risk-sharing and preparedness systems.
Mathias Cormann, Secretary-General of the OECD, underscored that public trust is critical to AI’s long-term trajectory. “Trust in AI is built through inclusion and objective evidence,” he stated. He called for coordinated action across governments, industry, and civil society to bridge the growing gap between innovation and oversight, suggesting that in certain instances, it may be necessary “to slow down, test, monitor and share information” to ensure that systems respect fundamental rights.
Sangbu Kim, Vice President for Digital and AI at the World Bank, focused on the importance of embedding safety into AI systems from the design phase, especially in low-capacity environments. He described AI as both “the spear and the shield,” asserting that effective risk management requires ongoing learning and structured global partnerships prior to large-scale deployment.
Jaan Tallinn, an AI investor and Co-Founder of the Future of Life Institute, contextualized the discussion within the competitive dynamics of frontier AI development. He cautioned that the intense rivalry among leading labs renders unilateral restraint unlikely. However, he noted that the concentration of compute and capital in advanced AI development could actually facilitate governance—if global alignment is achieved. He stressed the necessity for heightened political awareness and coordinated international action at this critical juncture.
The session distilled a pragmatic operational agenda for the next 12 to 18 months that included establishing shared safety benchmarks, creating structured information-sharing mechanisms, building coordinated institutional capacity, strengthening South–South collaboration, and transitioning from high-level principles to actionable cooperation.
Speakers emphasized that for developing economies, collective action is essential in shaping AI governance frameworks, moving beyond mere adaptation to rules set by others. The discussion highlighted a pivotal moment in global AI governance, underscoring the imperative for safety coordination to evolve in tandem with accelerating capabilities.
For the Global South, the message was unequivocal: collaboration is not just about alignment—it is a matter of agency. By pooling expertise, evidence, and institutional capacity, developing economies can influence how AI scales, thereby enhancing public trust, protecting fundamental rights, and supporting long-term global stability.