On January 22, 2026, Singapore unveiled the Model AI Governance Framework for Agentic AI at the World Economic Forum in Davos, Switzerland. This pioneering Framework offers organizations guidance on the responsible deployment of AI agents, emphasizing the importance of human accountability while recommending both technical and non-technical measures to mitigate associated risks. The initiative aligns with Singapore’s practical and balanced strategy towards AI governance, ensuring that safety measures coexist with opportunities for innovation.
The Framework was developed by the Info-communications Media Development Authority of Singapore (IMDA) and builds on the governance foundations established by the Model Governance Framework for AI, which was launched in 2020. It is tailored for organizations looking to deploy agentic AI—an advanced form of artificial intelligence capable of taking actions, adapting to new information, and interacting with other systems to execute tasks on behalf of humans.
Agentic AI can significantly enhance productivity by automating repetitive tasks, particularly in customer service and enterprise settings. However, these capabilities also introduce new risks. AI agents’ access to sensitive data and their ability to execute transactions can lead to unauthorized actions or errors. The autonomy of these agents raises challenges related to human oversight and accountability, including increased automation bias—where organizations may overly trust AI systems based on past performance. Thus, it is crucial for organizations to understand these risks and implement governance measures that maintain effective human control over AI agents.
The Framework provides a structured overview of the risks associated with agentic AI and outlines best practices for managing these risks. Organizations are advised to undertake an upfront assessment of potential risks posed by AI agents and adapt their internal processes accordingly. This includes setting boundaries on the scope and impact of AI agents, such as limiting their access to external systems and ensuring that their actions are traceable through effective identity management.
Moreover, the Framework stresses the importance of meaningful human accountability. Organizations need to clearly define the roles and responsibilities of stakeholders both internally and with external vendors. This involves establishing significant checkpoints in the agentic workflow that necessitate human approval for high-stakes or irreversible actions. Regular audits of human oversight are also recommended to ensure that this accountability remains effective over time.
To enhance the safe operationalization of AI agents, organizations are encouraged to implement technical controls throughout the AI agents’ lifecycle. This includes embedding technical measures during the development phase to address new risks arising from advanced functionalities. Prior to deployment, organizations should conduct thorough testing of AI agents to ensure baseline safety and reliability, with new testing methodologies required to evaluate their performance effectively.
End-user responsibility is another crucial aspect highlighted in the Framework. Organizations should ensure that users are aware of the AI agent’s capabilities and the data it can access, along with their own responsibilities in managing interactions with the agents. Providing training to employees can further equip them with the necessary knowledge to oversee these human-agent interactions effectively.
The IMDA views the Framework as a living document, open to refinement based on feedback from both governmental bodies and private sector stakeholders. As AI continues to evolve rapidly, the IMDA encourages the submission of case studies that can demonstrate the practical application of the Framework for responsible agentic AI deployment.
In this context, the Model AI Governance Framework for Agentic AI is positioned to play a pivotal role in shaping the future of AI governance, ensuring that the benefits of this transformative technology can be realized without compromising safety and accountability.
See also
OpenAI’s Rogue AI Safeguards: Decoding the 2025 Safety Revolution
US AI Developments in 2025 Set Stage for 2026 Compliance Challenges and Strategies
Trump Drafts Executive Order to Block State AI Regulations, Centralizing Authority Under Federal Control
California Court Rules AI Misuse Heightens Lawyer’s Responsibilities in Noland Case
Policymakers Urged to Establish Comprehensive Regulations for AI in Mental Health



















































