A leaked draft from Anthropic suggests that its upcoming artificial intelligence model, *Mythos*, could enable unprecedented cyberattack capabilities, raising significant alarms among security teams globally. This revelation comes amidst growing concerns that the use of AI in cybercrime could outpace anything previously observed in both scale and speed.
The details emerged from a draft blog post that was reportedly published prematurely due to a human error in the company’s content management system. Although Anthropic did not respond to specific inquiries, the draft indicated that *Mythos* and related systems could exploit vulnerabilities at a speed that current defenders may struggle to mitigate. “Although *Mythos* currently far outpaces other models in cyber capabilities, this is a prophecy of a new wave of models that will be able to exploit vulnerabilities in ways defenders cannot,” the document stated.
Security experts have voiced their concerns about the potential impact of AI on cybersecurity. A December warning from OpenAI highlighted that future models pose a high cybersecurity risk, as AI’s ability to rapidly create new software exploits could exacerbate existing threats. The emergence of AI-based agents—autonomous systems that operate without human oversight—could further elevate these risks: such agents can inspect systems and exploit vulnerabilities more quickly and reliably than many human hackers combined.
“Agent attackers are coming. This is a watershed moment in cybersecurity,”
– Shlomo Kramer
Industry sources indicate that Anthropic is proactively allowing select organizations to test *Mythos* ahead of its official release, aiming to bolster defenses against potential cyber threats. The company has also warned government officials about the likelihood of large-scale attacks connected to the new model.
“Behind *Mythos* lies the next OpenAI model, and behind them—the next Google Gemini; a few months later open Chinese models appear,”
– Shlomo Kramer
Evan Penya, Chief Security Officer at Armadin, emphasized that the rapid pace of AI development allows attackers to exploit vulnerabilities almost immediately upon discovery. However, he also noted that these models have limitations, particularly in understanding context and identifying which information is valuable to a specific organization.
“AI gives attackers ‘superpowers’ by making the technical knowledge needed to exploit systems more accessible,”
– Evan Penya
In a worrying case from February, a hacker utilized the *Claude* model and China’s *DeepSeek* to compromise over 600 devices worldwide by exploiting a widely used firewall. The attacker reportedly communicated with *Claude* in Russian to create a control panel for hundreds of targets, as shown in chat logs reviewed by sources.
“In some scenarios *Claude* and *DeepSeek* tailor the attack to specific targets,”
– Eyal Sela
Joe Lin, co-founder and CEO of Twenty, pointed out that AI could lower the barriers for hackers of varying skill levels, emphasizing the necessity for maintaining human oversight in decision-making. “We need to ensure that we are building weapons systems where humans remain in control of the decisions and their outcomes, because as long as the machine acts, the human is always responsible for the outcomes,” he noted.
The escalating role of AI in cyberattacks poses significant challenges for defenders. As open-source developers and private labs face growing threats, defenders do gain some advantage from AI-assisted threat tracking and faster incident response. However, attackers only need one entry point, while defenders must secure every potential vulnerability. The cybersecurity landscape therefore demands advancements that are rapid yet cautious enough to keep the implications of these technologies under control.
Looking ahead, the development of AI in cybersecurity demands a balanced and responsible approach. It is imperative to preserve the human role in decision-making while carefully weighing the associated risks and ethical dilemmas. The future of cybersecurity will depend on integrating new capabilities with security and accountability requirements, marking a crucial juncture in the ongoing battle against cyber threats.
See also
Anthropic’s Claims of AI-Driven Cyberattacks Raise Industry Skepticism
Anthropic Reports AI-Driven Cyberattack Linked to Chinese Espionage
Quantum Computing Threatens Current Cryptography, Experts Seek Solutions
Anthropic’s Claude AI exploited in significant cyber-espionage operation
AI Poisoning Attacks Surge 40%: Businesses Face Growing Cybersecurity Risks