The rapid adoption of artificial intelligence (AI) in the financial services sector has introduced new and complex risks that traditional Model Risk Management (MRM) frameworks are ill-equipped to address. According to a recent PwC study, more than 70% of financial institutions have adopted generative AI (GenAI) and sophisticated machine learning (ML) techniques, embedding advanced models in critical applications such as trading, credit scoring, and fraud detection.
While these advanced AI models promise transformative benefits, the Bank for International Settlements (BIS) has cautioned that their use may exacerbate existing risks. These include model risk, characterized by a lack of explainability that complicates assessments of AI model appropriateness, as well as data-related risks encompassing privacy, security, and bias. Unlike deterministic models that yield predictable and comprehensible outcomes, advanced AI models often operate probabilistically, producing plausible yet sometimes inaccurate results that can create fair-lending risk and other adverse outcomes for consumers.
A January 2025 report from the Consumer Financial Protection Bureau analyzed underwriting practices based on advanced AI algorithms. The findings revealed a concerning trend: models relying on over 1,000 variables exhibited disproportionately high rates of adverse outcomes due to overfitting. This underscores the urgent need for improved oversight.
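To see why variable count matters, consider how a validation team might surface this failure mode: compare training and holdout performance as variables are added and watch the gap. The sketch below uses scikit-learn on synthetic data; it illustrates the overfitting mechanism only and is not the CFPB's analysis.

```python
# Illustrative only: synthetic data, not the CFPB's methodology. A widening
# gap between training and holdout AUC as variables are added is the classic
# overfitting signature a validation team would flag.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

for n_features in (50, 500, 1000):
    # Only 20 variables carry signal; the rest are noise
    X, y = make_classification(n_samples=2_000, n_features=n_features,
                               n_informative=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    train_auc = roc_auc_score(y_tr, model.predict_proba(X_tr)[:, 1])
    test_auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{n_features:>5} variables: train AUC {train_auc:.3f}, "
          f"holdout AUC {test_auc:.3f}, gap {train_auc - test_auc:.3f}")
```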
Integrating MRM with AI Governance
Both the Basel Committee on Banking Supervision (BCBS) and the U.S. Federal Reserve have emphasized the necessity of weaving AI into existing MRM frameworks. The Federal Reserve’s SR 11-7 guidance explicitly encompasses advanced algorithms, advocating for rigorous governance, validation, and an “effective challenge” to these models. BCBS publications, including its 2022 AI and Machine Learning Newsletter and 2024 Digitalization of Finance Report, have similarly warned that AI introduces new dimensions of model risk, governance, and financial-stability risks, primarily through its opacity and potential for data bias.
These regulatory developments indicate a shift in supervisory expectations: financial institutions need to modernize their MRM frameworks to adapt to the complexities and scale of AI systems. Regulators expect firms to integrate AI models into MRM frameworks, applying robust testing, documentation, and oversight to ensure reliability and accountability.
Global Regulatory Developments
Globally, regulators are coalescing around a unified principle: AI risks must be managed with the same rigor as traditional model risk, with an added focus on transparency, bias mitigation, and accountability. While the Federal Reserve’s SR 11-7 and SR 13-19 serve as foundational documents for MRM, additional guidance highlights the importance of AI model interpretability and third-party oversight.
The EU AI Act, adopted in 2024, categorizes financial AI systems as “high-risk,” necessitating ongoing monitoring, documentation, and human oversight. The Bank of England’s AI Model Governance Principles (2025) require firms to maintain AI model inventories, bias assessment logs, and validation reports akin to traditional MRM documentation. Similarly, regulators in Singapore and Hong Kong have implemented AI Ethics and Governance Codes, mandating firms demonstrate MRM alignment for any AI-based decision-making processes. This shift is prompting a transition from model validation to model assurance, where MRM must ensure not only technical soundness but also ethical and regulatory compliance throughout the AI lifecycle.
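To make the inventory expectation concrete, the sketch below shows one way a firm might structure an AI model inventory record in Python. The schema and field names are hypothetical illustrations, not a format prescribed by the Bank of England or any other regulator.

```python
# Hypothetical model-inventory schema; field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryRecord:
    """One illustrative entry in a firm's AI model inventory."""
    model_id: str                 # unique identifier within the firm
    owner: str                    # accountable business owner
    use_case: str                 # e.g. "retail credit underwriting"
    risk_tier: str                # firm-defined tier, e.g. "high"
    is_third_party: bool          # vendor-supplied models need added oversight
    last_validation: date         # most recent independent validation
    bias_assessments: list[str] = field(default_factory=list)  # log references

record = ModelInventoryRecord(
    model_id="CR-0042",
    owner="Retail Credit Risk",
    use_case="retail credit underwriting",
    risk_tier="high",
    is_third_party=False,
    last_validation=date(2025, 3, 1),
    bias_assessments=["BA-2025-017"],
)
```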
The Evolution of MRM
MRM has historically focused on validating models to mitigate risk and ensure explainability. However, as models dynamically retrain on streaming data, MRM must evolve into a continuous assurance function. Modern MRM frameworks must incorporate Explainable AI (XAI) to clarify how black-box models generate outputs, and AI-driven tools should monitor model drift, bias, and data quality issues in real time. Third-party oversight is essential for managing vendor risks, while governance, cybersecurity, and data privacy measures, including data lineage, encryption, and security validation, must be integrated into all MRM processes.
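As a concrete example of such drift monitoring, the sketch below computes the Population Stability Index (PSI), a metric long used in credit-score monitoring, and applies a conventional rule-of-thumb alert threshold. The score distributions and thresholds here are illustrative assumptions.

```python
# A minimal drift-monitoring sketch using the Population Stability Index (PSI).
# All data and thresholds are illustrative, not prescribed by any regulator.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline (expected) and a recent (actual) score sample."""
    # Bin edges come from the baseline distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip recent scores into the baseline range so every value lands in a bin
    actual = np.clip(actual, edges[0], edges[-1])

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Floor the proportions to avoid log(0) in empty bins
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(42)
baseline = rng.beta(2.0, 5.0, 10_000)  # stand-in for validation-time scores
recent = rng.beta(2.6, 5.0, 2_000)     # stand-in for this week's production scores

psi = population_stability_index(baseline, recent)
# Common rule of thumb: < 0.10 stable, 0.10-0.25 watch, > 0.25 investigate
status = "investigate" if psi > 0.25 else "watch" if psi > 0.10 else "stable"
print(f"PSI = {psi:.3f} ({status})")
```

In production, a check like this would typically run on a schedule against logged model scores, with breaches feeding the firm's MRM escalation process rather than a simple print statement.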
Such an integrated approach is increasingly necessary to keep MRM effective in a rapidly evolving technological landscape, and some enterprises are already collaborating to reduce these risks and to develop explainable AI for MRM.
As AI technologies advance, leveraging them to enhance MRM is not merely a compliance exercise; it is about building resilience, ensuring trust, and equipping financial institutions for a future of intelligent, data-driven decision-making. Companies should also consider establishing a dedicated AI Risk Management Policy that fosters responsible, ethical technology use and keeps pace with evolving regulations and standards.
Ultimately, MRM is transforming from a compliance checkpoint into a strategic differentiator. Firms that view MRM as a dynamic, AI-enabled framework rather than a static control function will be better positioned to build resilience, trust, and competitive advantage in the marketplace. With regulators, investors, and customers demanding greater transparency in AI decision-making, MRM is set to become the cornerstone of responsible AI adoption in finance.























































