April 12 (Reuters) – UK financial regulators are engaging in urgent discussions with the government’s cyber security agency and major banks to evaluate risks associated with the latest artificial intelligence model from Anthropic, as reported by the Financial Times on Sunday.
Officials from the Bank of England, the Financial Conduct Authority (FCA), and HM Treasury are collaborating with the National Cyber Security Centre to scrutinize potential vulnerabilities in essential IT systems identified by Anthropic’s new AI model, known as Claude Mythos Preview. These talks aim to assess how the model may impact the financial sector’s cyber security landscape.
Representatives from leading British banks, insurance companies, and exchanges are slated to receive briefings on the cyber security risks posed by the AI model in a meeting with regulators expected in the coming weeks, according to two people familiar with the discussions. The initiative reflects a proactive approach by UK financial authorities to preemptively address concerns raised by the model’s capabilities.
Reuters could not immediately verify the Financial Times report. Anthropic did not respond to requests for comment, the Bank of England declined to comment, and the Treasury, NCSC, and FCA did not immediately respond to requests for comment.
The UK’s regulatory response follows a similar gathering led by U.S. Treasury Secretary Scott Bessent, who met with major Wall Street banks to evaluate the model’s potential cyber risk implications. The parallel efforts underscore growing international concern over the impact of advanced AI technologies on financial stability and security.
Anthropic has said that Claude Mythos Preview is part of a controlled initiative called Project Glasswing, under which select organizations can use the unreleased model specifically for defensive cyber security applications. According to the company, the model has already identified “thousands” of significant vulnerabilities across operating systems, web browsers, and other commonly used software.
The regulators’ stance reflects the growing overlap between financial technology and national security as financial institutions increasingly rely on digital infrastructure. The assessment of Anthropic’s model is expected to inform both internal measures at financial institutions and broader regulatory frameworks as they adapt to rapidly evolving technology.
As discussions continue, the response to Anthropic’s model may set precedents for how emerging technologies are managed in the finance sector, underscoring the need for regulators to balance innovation with safeguards so that advances in AI do not compromise the integrity and security of financial systems.
Reporting by Mihika Sharma in Bengaluru. Editing by Bernadette Baum and Christina Fincher.