

UK Government Considers Standardised AI Testing for Banks Amid Regulatory Concerns

UK government considers standardised AI testing for banks to enhance oversight as Starling Bank’s Harriet Rees advocates for independent evaluations amid rising regulatory concerns.

The UK government is considering the introduction of standardised testing for artificial intelligence models used by banks, following rising regulatory concerns about insufficient oversight of their deployment. The proposal, reported by the Financial Times, was initiated last month by Harriet Rees, chief information officer at Starling Bank, and submitted to the Department for Science, Innovation and Technology as policymakers seek to bolster safeguards surrounding widely adopted AI systems.

Rees, who also serves as a government financial services AI “champion,” argued that implementing independent evaluations would help address significant gaps in current practices. “Lots of firms are using [AI models] and we can assume that [they] have done the necessary due diligence and, therefore, hopefully we’re happy. But we’ve not done that independent assessment,” she stated.

This proposal follows warnings from the Bank of England’s Prudential Regulation Authority, which indicated during October meetings that banks’ monitoring of AI models was “not frequent enough,” as noted in presentation materials. Regulators have increasingly scrutinised how banks manage third-party technologies that are vital to their operations.

Rees elaborated that a centralised testing approach could reduce redundancy and establish uniform standards across the sector. “Given our reliance on US models, it would give [the government] the comfort that they’ve at least looked at [the models] and they know that they all are at a certain standard,” she explained.

Notably, there is currently no legal requirement in the UK for AI models to be assessed before they are deployed in regulated industries. While companies such as OpenAI and Anthropic have voluntarily submitted models for review by the government's AI Security Institute, those assessments have focused primarily on frontier risks rather than the routine commercial use of AI in banking.

Rees asserted that an independent testing regime would serve as a “fail-safe” rather than replacing the internal controls that firms already have in place. She warned against assigning responsibility to a sector-specific regulator, given the cross-industry application of general-purpose AI. Rees identified the AI Security Institute as the “most obvious body” to spearhead this initiative and reported positive discussions with its director-general, Ollie Ilott, who acknowledged the absence of similar frameworks to date.

However, a spokesperson for the government indicated that ministers are not currently planning to broaden the institute’s mandate. “The AI Security Institute is focused on frontier-AI security research, and we are not exploring expanding its remit into assurance or any testing of third-party AI models,” the spokesperson conveyed to the Financial Times.

This ongoing dialogue about the regulation of AI in banking reflects a broader concern within the financial sector regarding the adoption of new technologies. As AI models become increasingly integral to banking operations, the need for stringent oversight and standardised testing mechanisms is likely to become a focal point for regulators. The outcome of these discussions could significantly influence the future landscape of financial services, potentially enhancing consumer trust and ensuring that AI systems operate within acceptable risk parameters.

Written by the AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.