The UK government is considering the introduction of a common testing regime for general-purpose AI systems used by lenders, following concerns raised by the Bank of England (BoE) regarding the assessment of such models. This initiative was suggested by Harriet Rees, Chief Information Officer of Starling Bank and the government’s financial services AI champion, during discussions with the Department for Science, Innovation and Technology last month, as reported by the Financial Times.
Rees, who co-chairs the BoE’s AI task force, noted the widespread use of AI models across financial institutions. “Lots of firms are using [AI models] and we can assume that [they] have done the necessary due diligence and, therefore, hopefully we’re happy. But we’ve not done that independent assessment,” she stated. The proposed regime aims to reduce redundancy among firms, ensure uniformity in testing, and confirm that algorithms developed in the US meet required benchmarks.
This discussion follows two meetings held in October by the BoE’s Prudential Regulation Authority, which oversees lenders. During these sessions, banks were informed that AI model monitoring was “not frequent enough,” according to presentation slides. In a statement to the Financial Times, Rees emphasized the importance of independent assessments, particularly given the UK’s reliance on US AI models. “It would give [the government] the comfort that they’ve at least looked at [the models] and they know that they all are at a certain standard,” she remarked.
Currently, there is no legal mandate for AI systems to undergo assessment prior to deployment in regulated sectors, although banks perform internal reviews. Companies such as OpenAI and Anthropic have voluntarily submitted their models, including ChatGPT and Claude, to the AI Security Institute (AISI), a governmental unit focused on testing advanced AI systems and investigating associated risks.
Rees argued that responsibility for examining general-purpose models should rest with an independent body rather than a single sector regulator, since their applications extend well beyond financial services. She identified AISI as the “most obvious body” to take on this responsibility. Following a meeting in early March, Rees reported that Ollie Ilott, the director-general for AI, who founded AISI, received the proposal positively. “They agreed that there was nothing else out there like this today,” she noted.
However, a government spokesperson indicated that AISI is unlikely to expand its remit to include the testing of third-party AI models. “The AI Security Institute is focused on frontier AI security research, and we are not exploring expanding its remit into assurance or any testing of third-party AI models,” the spokesperson stated.
Despite this, Rees maintained that oversight from an independent entity would not replace the checks that lenders currently perform. Instead, she argued that it would act as a “fail-safe” and provide reassurance regarding the inner workings of these AI systems. The BoE declined to comment on the discussions surrounding the proposed testing regime.
The UK’s potential move to establish a standardized testing framework for AI models reflects growing recognition of the complexities and risks associated with the integration of AI in finance. As the sector increasingly depends on advanced technologies, ensuring robust oversight and accountability will be essential in maintaining trust among consumers and stakeholders alike.