The Monetary Authority of Singapore (MAS) has proposed new guidelines requiring financial institutions to manage the risks associated with artificial intelligence (AI), including generative AI and AI agents. The initiative, outlined in a consultation paper published last week, aims to mitigate potential financial losses, operational disruptions, and reputational damage stemming from AI-related activities.
The proposed Guidelines on Artificial Intelligence Risk Management will apply to all financial institutions operating in Singapore, tailored to the size and nature of their operations, the extent of their AI use, and their individual risk profiles. Stakeholders are invited to submit feedback by January 31, 2026, and a 12-month transition period will apply once the guidelines are finalized.
The guidelines build on MAS’s earlier principles for AI use in the financial sector, established in 2018. They assign responsibility for AI governance to the boards and senior management of financial institutions, requiring them to approve AI governance approaches and to establish cross-functional committees when overall AI risk exposure is deemed material.
Financial institutions will also be required to maintain accurate inventories of their AI applications and to perform risk materiality assessments covering the potential impact, complexity, and degree of reliance on these technologies. Third-party AI products and services will require rigorous testing against the specific use cases in which they are deployed. Institutions remain accountable for ensuring fairness in outcomes and must assess concentration risks arising from over-reliance on key providers.
This initiative follows a series of MAS information papers released over the past two years, addressing various AI-related risks. In July 2024, MAS highlighted cyber risks linked to generative AI, followed by a focus on AI model risk management in banks in December 2024, and concerns regarding cyber risks associated with deepfakes in September 2025.
To assist in implementing the proposed guidelines, an industry consortium is currently developing an AI Risk Management Handbook, expected to be published by January 2026. This handbook is intended to serve as a companion guide, providing additional insight and practical recommendations for financial institutions navigating the complexities of AI risk management.
The MAS’s proactive approach reflects a growing recognition of the critical need for robust risk management frameworks as financial entities increasingly incorporate advanced technologies into their operations. As AI continues to evolve, the implications for the financial sector are profound, necessitating a comprehensive strategy to ensure that innovation does not compromise the integrity and stability of financial systems.