Researchers at Anthropic have revealed that three AI agents can autonomously exploit vulnerabilities in smart contracts, with simulated thefts totaling approximately $4.6 million. Their findings, published in a blog post on Monday, highlight the growing capability of artificial intelligence (AI) to target weaknesses in blockchain technology.
The study evaluated three AI models, Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5, all of which proved adept at identifying and exploiting flaws in smart contracts deployed after March 2025, producing substantial simulated financial losses. The AI agents not only replicated existing vulnerabilities but also discovered new ones in recently launched contracts.
Among the identified flaws was a defect that let attackers manipulate a public “calculator” function, originally designed to determine token rewards, in order to inflate token balances. Another vulnerability allowed attackers to withdraw funds by submitting fraudulent beneficiary addresses; GPT-5 carried out this exploit at an operational cost of just $3,476 in a simulated environment.
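To make these two vulnerability classes concrete, here is a minimal, hypothetical Python sketch of a simplified token vault. It is not the code of any contract Anthropic studied, and it is not Solidity; the `RewardVault` class, its caller-supplied reward multiplier, and the unauthenticated `beneficiary` parameter are illustrative assumptions meant only to show how an attacker-controlled reward calculation and an unchecked payout address can drain pooled funds.

```python
# Hypothetical, simplified model of the two flaw classes described above.
# Illustrative Python only; real contracts live on-chain and look different.

class RewardVault:
    def __init__(self, funds: float):
        self.funds = funds                 # pooled deposits
        self.balances = {}                 # address -> token balance

    def calculate_reward(self, address: str, multiplier: float) -> float:
        # FLAW 1: a "public calculator" that trusts a caller-supplied
        # multiplier lets anyone inflate their own token balance.
        reward = 10.0 * multiplier
        self.balances[address] = self.balances.get(address, 0.0) + reward
        return reward

    def withdraw(self, caller: str, beneficiary: str, amount: float) -> None:
        # FLAW 2: funds are paid out to whatever beneficiary the caller
        # names, with no check that the caller is entitled to the payout.
        if self.balances.get(caller, 0.0) >= amount:
            self.balances[caller] -= amount
            self.funds -= amount
            print(f"sent {amount} to {beneficiary}")

# An attacker first inflates a balance, then withdraws to an address they control.
vault = RewardVault(funds=1_000_000.0)
vault.calculate_reward("attacker", multiplier=100_000.0)                    # balance: 1,000,000
vault.withdraw("attacker", beneficiary="attacker_wallet", amount=1_000_000.0)
print(vault.funds)  # 0.0 -- the pool is drained
```

A hardened version of this sketch would restrict who can trigger reward accounting and verify that withdrawals only go to registered, authorized beneficiaries; both checks are omitted here to mirror the reported flaws.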
Set against the $4.6 million in simulated theft, that low cost underscores how feasible and cost-effective AI-driven cyberattacks could be, and how profitable and attractive they may become to criminals.
These results are not a one-off; they reflect a broader and rapidly escalating trend. Over the past year, the amount stolen in attacks of this kind has doubled approximately every 1.3 months, a trajectory of sharply increasing profitability. As AI models become better at detecting vulnerabilities and executing attacks efficiently, organizations face mounting challenges in safeguarding their digital assets.
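As a rough back-of-the-envelope extrapolation, based only on the 1.3-month doubling period cited above and not on any additional figure from the study, that rate, if sustained, implies roughly nine doublings per year:

```python
# Implied annual growth if losses double every 1.3 months (figure from the text).
doubling_period_months = 1.3
doublings_per_year = 12 / doubling_period_months      # ~9.2 doublings
annual_growth_factor = 2 ** doublings_per_year        # ~600x if the rate held
print(round(doublings_per_year, 1), round(annual_growth_factor))
```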
What is particularly alarming is the ability of AI to conduct these attacks autonomously, with minimal human oversight. Anthropic’s research marks a pivotal moment in cybersecurity, as it illustrates that AI can not only identify vulnerabilities but also autonomously develop and implement exploit strategies. The implications of these advancements extend well beyond the realm of cryptocurrency, threatening any software system with inadequate security measures, including enterprise applications and financial services.
As the threat landscape continues to evolve, organizations must contend with the reality that AI-driven exploits are becoming more prevalent and more sophisticated. The continued development of these technologies points to a future in which defending against digital vulnerabilities will require increasingly advanced measures to maintain security and limit losses.