
Anthropic Halts Mythos AI Release Amid Unauthorized Access and Cybersecurity Threats

Anthropic halts the release of its advanced AI model Mythos after unauthorized access raises cybersecurity threats, prompting heightened scrutiny from major banks and regulators.

Anthropic has decided against releasing its latest artificial intelligence model, Mythos, citing significant threats to global cybersecurity. The US tech startup, known for its Claude chatbot, announced the decision on Wednesday and simultaneously confirmed an investigation into reports of unauthorized access to Mythos. The incident underscores growing concerns about the rapid development of advanced AI technologies and the challenges tech companies face in safeguarding their most sensitive products.

Mythos, unveiled on April 7, is characterized by Anthropic as an AI model with the potential to identify previously unknown flaws in IT systems. The company emphasized that these vulnerabilities could be exploited by malicious actors, thereby escalating the risk for organizations. According to Anthropic, Mythos possesses the capacity to detect and exploit “zero-day” vulnerabilities across major operating systems and web browsers upon request. Zero-day vulnerabilities are particularly perilous because they remain unpatched and unknown to developers and organizations until exploited.

Anthropic referred to the creation of Mythos as a “watershed moment for cybersecurity,” noting that some of the flaws it can identify have been overlooked for decades. The startup has permitted select tech companies and banks, including giants like Apple and Goldman Sachs, to evaluate the model’s risks through an initiative called Project Glasswing, launched on April 8.

The concerns surrounding Mythos reflect broader anxieties voiced by cybersecurity experts. According to the UK’s AI Security Institute (AISI), the model epitomizes the disruptive potential of advanced AI technologies. Since the emergence of OpenAI’s ChatGPT in 2022, experts have warned that AI could inflict substantial harm in the real world. Compounding these concerns is the rapid pace of AI advancement: sophisticated models can quickly be replicated by other firms, including those developing open-source alternatives. UK technology secretary Liz Kendall and security minister Dan Jarvis recently urged businesses to prepare for accelerated growth in AI capabilities over the coming year. While AI can also be harnessed to defend against cyber threats, the risks it poses are becoming increasingly apparent.

While Anthropic has withheld Mythos from public release to mitigate risks, fears have emerged that the model may still fall into the wrong hands. This week, the company confirmed that a small group of users on a private online forum had gained access to the model, raising alarms about its potential misuse. Questions also linger over the significance of the numerous vulnerabilities flagged by Mythos: identifying these flaws is alarming in itself, but the distinction between identifying a vulnerability and actually exploiting it is crucial.

The AISI has evaluated Mythos, concluding that it represents a notable escalation in terms of cybersecurity threats compared to prior models. Among its capabilities, Mythos can conduct complex, multi-step cyber-attacks and identify vulnerabilities autonomously. In a notable test, the model successfully executed a 32-step simulation of a cyber-attack. However, the AISI could not assess its effectiveness against well-defended systems. The institute cautioned that AI systems are likely to become increasingly sophisticated over time.

By contrast, some experts argue that Mythos represents more of an evolution than a revolution in AI cybersecurity capabilities. Aisle, a firm specializing in this domain, examined Anthropic’s claims regarding the discovery of zero-day vulnerabilities and concluded that other, less expensive models were also capable of identifying similar flaws, suggesting Mythos’s impact may be less dramatic than Anthropic’s urgent narrative implies. Experts also reiterated that most cyber breaches still stem from established risks, such as weak authentication and unpatched vulnerabilities.

The involvement of major tech companies and financial institutions in the evaluation of Mythos is significant. Approximately 40 organizations, including Google and JP Morgan, have been granted early access through Project Glasswing. This initiative aims to enable companies to integrate the AI model into their cybersecurity frameworks. While these organizations have yet to disclose their assessments of Mythos’s capabilities, speculation regarding its potential impact is rife, particularly in the banking sector. Should Mythos’s capabilities be as pronounced as Anthropic suggests, its misuse could disrupt banking operations and threaten the broader financial system.

In anticipation of potential risks posed by Mythos, US Treasury Secretary Scott Bessent convened a meeting with leaders from major American banks, including Goldman and Citi, earlier this month. UK regulators have also placed Mythos on the agenda of high-level discussions among senior bankers and officials from critical financial bodies, including the Treasury, Bank of England, Financial Conduct Authority, and National Cyber Security Centre. As the rapid evolution of AI technology continues, the implications of models like Mythos will remain a focal point of concern throughout the tech and financial sectors.

Written by AiPressa Staff

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.