Recently, Anthropic disclosed a sophisticated cyber-espionage campaign executed by hackers linked to the Chinese state. This operation reportedly leveraged Claude models to automate a significant portion of its activities, affecting at least 30 major corporations and government agencies. Although the media has characterized this as the first “AI-run spy mission,” suggesting that artificial intelligence operated autonomously, it is crucial to recognize that the actions were directed by humans. As the report emphasizes, while the technology facilitated the execution of the hack, it was human intent that guided it.
Humans selected the targets, crafted the root prompts, and strategized the campaign. The AI merely handled operational tasks efficiently and at a scale unimaginable in prior years. The essence of the threat lies not in runaway intelligence but in the widespread access to powerful tools.
Key Insights from the Hack
In its investigation, Anthropic found that the attackers used Claude and Claude Code to automate 80-90% of the operational work in the September campaign: reconnaissance, writing custom malware, generating phishing lures, and processing stolen data, all while human operators maintained strategic oversight.
This incident follows an earlier report from August, where Anthropic noted a similar group employing Claude models for data theft and extortion targeting at least 17 organizations. These findings highlight a recurring trend: agentic AI is transforming cyber operations into a high-efficiency assembly line.
Though state actors like China are at the forefront, such tools quickly become available to others. In a landscape where anyone can prompt an AI to create malicious tools, the scale of potential attacks has shifted dramatically. The interaction might look like this:
- Human: “Claude, research these organizations for vulnerabilities.”
- Human: “Claude, write malicious code to exploit those vulnerabilities.”
- Human: “Execute the code, steal and store data, spear-phish targets, and so on.”
While AI conducted the majority of the operational tasks, humans orchestrated the whole effort.
The Cost Revolution in Cyberattacks
To grasp the significance of this shift, consider what a cyberattack required a decade ago: specialized skills, a team of engineers, and often the backing of a well-funded organization. Today, it is possible to launch an attack with little more than a smartphone, an internet connection, and a capable model.
Agentic AI effectively eliminates traditional barriers, including:
- Cost of Expertise: Models now provide the necessary technical know-how.
- Cost of Labor: Agents operate continuously without fatigue, allowing a single individual to execute large-scale operations.
- Cost of Speed: Automated systems can perform tasks at unprecedented speeds.
- Cost of Coordination: Automated workflows enable complex, multi-step tasks to be executed seamlessly.
The evolving threat landscape is increasingly defined not just by capability but by intent. We are witnessing the rise of AI-generated malware kits on platforms like Telegram, fully automated fraud rings, and synthetic extortion operations utilizing deepfakes and voice clones.
The Existential Risk of Super-Empowerment
While concerns about AI evolving beyond human control capture much attention, the more pressing issue appears to be the growing democratization of these powerful tools. As the report articulates, “A violent AI is not the threat. A violent human with an agent is.”
Unlike weapons of mass destruction, which can be controlled at the point of production, general-purpose AI resists such restriction. The question then becomes: how do we address the existential risks that come with broad access to this technology? The potential for small, discontented groups to inflict significant harm is unprecedented.
As agentic AI becomes the ultimate tool of asymmetric retribution, the historical model of restricting access to prevent mass harm looks increasingly obsolete. The focus must shift from building ever-larger firewalls to fostering conditions under which empowered individuals have little cause to do harm.
Rethinking National Security
National security strategies typically emphasize software defenses, treaties, and budgets. However, destabilization often arises from social discontent rather than mere access to technology. As the report suggests, “the best defense is fewer enemies.”
To mitigate the risks associated with agentic AI, we must urgently address social disparities. Initiatives aimed at enhancing community development, fostering empathy, and creating economic opportunities can reduce the motivation for destabilizing actions.
Policymakers must balance algorithmic safeguards with investments in education, healthcare, and civic infrastructure. Tech companies should design AI systems with built-in deterrents against misuse. Local governments and businesses need to view social cohesion as integral to cybersecurity efforts.
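What a "built-in deterrent" might look like in practice is an open design question. One common pattern is a policy gate that sits between an agent's proposed action and its execution: low-risk actions proceed automatically, while high-risk ones are logged and held for human review. The sketch below is illustrative only, assuming a toy agent framework; the `ToolCall` type, `HIGH_RISK_TOOLS` list, and `classify_risk` heuristic are hypothetical stand-ins for a real misuse classifier, not any vendor's actual API.

```python
from dataclasses import dataclass

# Hypothetical tool names an agent might request; a real system would
# enumerate its actual tool surface.
HIGH_RISK_TOOLS = {"execute_shell", "send_email", "network_scan"}

@dataclass
class ToolCall:
    tool: str       # which capability the agent wants to invoke
    argument: str   # the payload it wants to pass

def classify_risk(call: ToolCall) -> str:
    """Toy heuristic: a production system would use a trained misuse
    classifier here, not simple set membership."""
    return "high" if call.tool in HIGH_RISK_TOOLS else "low"

def policy_gate(call: ToolCall, human_approved: bool = False) -> bool:
    """Execute low-risk calls automatically; require explicit human
    sign-off for high-risk calls, and log every decision."""
    risk = classify_risk(call)
    print(f"audit: tool={call.tool} risk={risk} approved={human_approved}")
    if risk == "high" and not human_approved:
        return False  # refuse and surface to a human reviewer
    return True

# Example: the gate blocks an unapproved shell command but lets a
# benign lookup through.
assert policy_gate(ToolCall("execute_shell", "rm -rf /")) is False
assert policy_gate(ToolCall("search_docs", "quarterly report")) is True
```

The audit log matters as much as the refusal: a deterrent works partly because every high-risk request is recorded and attributable, which changes the calculus for anyone probing the system.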
Ultimately, the future hinges not on AI’s capabilities but on the choices we make as a society. By prioritizing human flourishing and tackling inequality, we can better harness AI’s potential to drive positive change rather than destruction.