The UK government is considering the introduction of standardised testing for artificial intelligence models used by banks, amid growing regulatory concern that oversight of their deployment is insufficient. The proposal, reported by the Financial Times, was put forward last month by Harriet Rees, chief information officer at Starling Bank, and submitted to the Department for Science, Innovation and Technology as policymakers seek to strengthen safeguards around widely adopted AI systems.
Rees, who also serves as a government financial services AI “champion,” argued that implementing independent evaluations would help address significant gaps in current practices. “Lots of firms are using [AI models] and we can assume that [they] have done the necessary due diligence and, therefore, hopefully we’re happy. But we’ve not done that independent assessment,” she stated.
The proposal follows warnings from the Bank of England’s Prudential Regulation Authority, which said in October meetings, according to presentation materials, that banks’ monitoring of AI models was “not frequent enough.” Regulators have increasingly scrutinised how banks manage third-party technologies that are critical to their operations.
Rees elaborated that a centralised testing approach could reduce redundancy and establish uniform standards across the sector. “Given our reliance on US models, it would give [the government] the comfort that they’ve at least looked at [the models] and they know that they all are at a certain standard,” she explained.
There is currently no legal requirement for AI models to be assessed before they are deployed in regulated industries in the UK. While companies such as OpenAI and Anthropic have voluntarily submitted models for review by the government’s AI Security Institute, those assessments have focused primarily on frontier risks rather than the routine commercial use of AI in banking.
Rees said an independent testing regime would act as a “fail-safe” rather than a replacement for the internal controls firms already have in place. She cautioned against handing responsibility to a sector-specific regulator, given that general-purpose AI is used across industries. She identified the AI Security Institute as the “most obvious body” to lead the initiative and reported positive discussions with its director-general, Ollie Ilott, who acknowledged that no comparable framework exists to date.
However, a spokesperson for the government indicated that ministers are not currently planning to broaden the institute’s mandate. “The AI Security Institute is focused on frontier-AI security research, and we are not exploring expanding its remit into assurance or any testing of third-party AI models,” the spokesperson conveyed to the Financial Times.
The debate over AI regulation in banking reflects broader unease within the financial sector about the adoption of new technologies. As AI models become more integral to banking operations, standardised testing and tighter oversight are likely to remain a focus for regulators, and the outcome of these discussions could shape how AI is assured across UK financial services, with implications for consumer trust and the risk parameters within which such systems operate.