An AI startup is making headlines with a bold claim: it has identified the world’s first hacking campaign led by artificial intelligence. This assertion, made by Anthropic, has elicited a range of reactions from cybersecurity experts, with some expressing alarm over the implications while others remain skeptical about the details.
In its recent report, Anthropic stated that its AI assistant, Claude Code, was manipulated to execute between 80% and 90% of a “large-scale” and “highly sophisticated” cyberattack, requiring human involvement only “sporadically.” The targeted entities reportedly included government agencies, financial institutions, tech firms, and chemical manufacturing companies. However, Anthropic noted that the operation was only partially successful.
The company, based in San Francisco, attributed the attack to state-sponsored hackers from China, yet it did not disclose how it uncovered the operation or identify the “roughly” 30 entities that were targeted.
Roman V. Yampolskiy, an AI and cybersecurity expert at the University of Louisville, acknowledged the serious threat posed by AI-assisted hacking but said the specifics of Anthropic’s claims remain difficult to verify. “Modern models can write and adapt exploit code, sift through huge volumes of stolen data, and orchestrate tools faster and more cheaply than human teams,” Yampolskiy explained. “They lower the skills barrier for entry and increase the scale at which well-resourced actors can operate. We are effectively putting a junior cyber-operations team in the cloud, rentable by the hour.” Yampolskiy anticipates that AI will increase not only the frequency of attacks but also their severity.
Conversely, Jaime Sevilla, director of Epoch AI, remarked that while AI-assisted attacks are feasible and likely to become more common, he saw little novelty in Anthropic’s report. He noted that medium-sized businesses and government agencies may be particularly vulnerable: they have historically been overlooked as targets and have often underinvested in cybersecurity. He suggested that these organizations would likely adapt by hiring cybersecurity specialists and launching vulnerability-reward programs.
Given the seriousness of the claims, many analysts have called for more transparency from Anthropic. Senator Chris Murphy has warned that AI-led attacks could “destroy us” if regulation is not made a priority. However, Yann LeCun, Meta’s chief AI scientist, criticized Murphy’s warnings, accusing him of being “played” by a company aiming to secure regulatory advantages. He suggested that Anthropic is leveraging sensational claims to campaign against open-source models.
A spokesperson for the Chinese embassy in Washington, D.C., countered by stating that China “consistently and resolutely” opposes cyberattacks, urging relevant parties to approach such incidents with a “professional and responsible attitude.”
Toby Murray, a computer security expert at the University of Melbourne, expressed doubt about Anthropic’s claim that attackers leveraged Claude AI for highly complex tasks with minimal oversight. While acknowledging that some AI assistants can perform such tasks effectively, he noted that hard evidence for the specific actions taken during the reported attack was lacking. “I don’t see AI-powered hacking changing the kinds of hacks that will occur,” Murray said, “but it might usher in a change of scale. We should expect to see more AI-powered hacks in the future, and for those hacks to become more successful.”
As AI poses increasing risks to cybersecurity, experts agree that it will also play a crucial role in enhancing defenses. Fred Heiding, a research fellow at Harvard University specializing in computer security and AI security, believes that AI will provide a “significant advantage” to cybersecurity professionals over time. He highlighted the current shortage of human cyber-professionals, suggesting that AI could help address this bottleneck by enabling comprehensive testing of systems at scale.
However, Heiding cautioned that hackers could take advantage of the gap between the rapid evolution of AI technology and the slower adaptation of security practices. “Unfortunately, the defensive community is likely to be too slow to implement the new technology into automated security testing and patching solutions,” he warned. If this occurs, attackers might exploit vulnerabilities with ease before defenses can be properly established.