
UK Government Proposes Common AI Testing Standards for Banks Amid Model Concerns

The UK government is considering a common testing regime for AI models used by banks, after Starling Bank’s Harriet Rees called for independent assessments and uniform standards.

The UK government is considering the introduction of a common testing regime for general-purpose AI systems used by lenders, following concerns raised by the Bank of England (BoE) regarding the assessment of such models. This initiative was suggested by Harriet Rees, Chief Information Officer of Starling Bank and the government’s financial services AI champion, during discussions with the Department for Science, Innovation and Technology last month, as reported by the Financial Times.

Rees, who co-chairs the BoE’s AI task force, noted the widespread use of AI models across financial institutions. “Lots of firms are using [AI models] and we can assume that [they] have done the necessary due diligence and, therefore, hopefully we’re happy. But we’ve not done that independent assessment,” she stated. The proposed regime aims to reduce redundancy among firms, ensure uniformity in testing, and confirm that algorithms developed in the US meet required benchmarks.

This discussion follows two meetings held in October by the BoE’s Prudential Regulation Authority, which oversees lenders. During these sessions, banks were informed that AI model monitoring was “not frequent enough,” according to presentation slides. In a statement to the Financial Times, Rees emphasized the importance of independent assessments, particularly given the UK’s reliance on US AI models. “It would give [the government] the comfort that they’ve at least looked at [the models] and they know that they all are at a certain standard,” she remarked.

Currently, there is no legal mandate for AI systems to undergo assessments prior to deployment in regulated sectors, although banks perform internal reviews. Companies such as OpenAI and Anthropic have voluntarily submitted their models, like ChatGPT and Claude, to the AI Security Institute (AISI), a governmental unit focused on testing advanced AI systems and investigating associated risks.

Rees argued that responsibility for examining general-purpose models should rest with an independent body rather than a single sector regulator, since their applications extend well beyond financial services. She identified AISI as the “most obvious body” to take on the role. Following a meeting in early March, Rees reported that Ollie Ilott, the director-general for AI who founded AISI, had received the proposal positively. “They agreed that there was nothing else out there like this today,” she noted.

However, a government spokesperson indicated that AISI is unlikely to expand its remit to include the testing of third-party AI models. “The AI Security Institute is focused on frontier AI security research, and we are not exploring expanding its remit into assurance or any testing of third-party AI models,” the spokesperson stated.

Despite this, Rees maintained that oversight from an independent entity would not replace the checks that lenders currently perform. Instead, she argued that it would act as a “fail-safe” and provide reassurance regarding the inner workings of these AI systems. The BoE declined to comment on the discussions surrounding the proposed testing regime.

The UK’s potential move to establish a standardized testing framework for AI models reflects growing recognition of the complexities and risks associated with the integration of AI in finance. As the sector increasingly depends on advanced technologies, ensuring robust oversight and accountability will be essential in maintaining trust among consumers and stakeholders alike.
