
AI Cybersecurity

Anthropic Reveals AI Agents Exploit Smart Contract Vulnerabilities, Simulate $4.6M Theft

Anthropic’s AI agents exploit smart contract vulnerabilities, simulating $4.6M in theft, highlighting escalating risks in blockchain security.

Researchers from Anthropic have revealed that three AI agents can autonomously exploit vulnerabilities in smart contracts, simulating approximately $4.6 million in stolen funds. Their findings, published in a blog post on Monday, highlight the increasing capability of artificial intelligence (AI) to target weaknesses in blockchain technology.

The study focused on three AI models—Claude Opus 4.5, Sonnet 4.5, and GPT-5—demonstrating their proficiency in identifying and exploiting flaws in smart contracts deployed after March 2025, leading to substantial simulated financial losses. The AI agents not only replicated existing vulnerabilities but also discovered new ones in recently launched contracts.

One identified flaw let attackers manipulate a public “calculator” function, originally designed to compute token rewards, to inflate token balances. Another allowed attackers to withdraw funds by submitting fraudulent beneficiary addresses; GPT-5 carried out this exploit at an operational cost of just $3,476 in a simulated environment.
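The two flaw patterns can be sketched in simplified form. What follows is an illustrative Python model, not the actual contracts from Anthropic's study; the class name, reward formula, balances, and addresses are all invented for illustration:

```python
class VulnerableVault:
    """Hypothetical, simplified model of the two flaw patterns described
    above. NOT the contract code from the study; everything here is
    invented for illustration."""

    def __init__(self, pooled_funds=1_000_000):
        self.balances = {}          # address -> token balance
        self.funds = pooled_funds   # pooled funds backing withdrawals

    def calculate_reward(self, address, stake):
        # Flaw 1: a public "calculator" meant only to *report* a reward
        # also credits it, so anyone can call it repeatedly to inflate
        # their balance without ever staking real funds.
        reward = stake * 2
        self.balances[address] = self.balances.get(address, 0) + reward
        return reward

    def withdraw(self, account, beneficiary, amount):
        # Flaw 2: the payout goes to a caller-supplied beneficiary with
        # no check that it belongs to the account being debited.
        if self.balances.get(account, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[account] -= amount
        self.funds -= amount
        return beneficiary, amount


# An attacker chains the two flaws: inflate a balance for free,
# then drain the pool to an address they control.
vault = VulnerableVault()
vault.calculate_reward("attacker", stake=500)   # credits 1000 tokens
drained = vault.withdraw("attacker", "attacker_wallet", 1000)
```

In real Solidity, the first flaw corresponds to a function that should have been declared `view` (read-only) but instead mutates state, and the second to a withdrawal routine missing an authorization check on the payout address.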

This low cost, compared to the staggering $4.6 million in simulated theft, underscores the potential for AI-driven cyberattacks to be both feasible and cost-effective. Such findings raise alarm bells regarding the profitability and attractiveness of these cyber threats to potential criminals.
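Using the article's own figures, the economics can be put in rough numbers; this is a back-of-the-envelope estimate, not a figure from the study itself:

```python
# Figures reported in the article; the ratio is an illustrative estimate.
simulated_theft_usd = 4_600_000
attack_cost_usd = 3_476
payoff_ratio = simulated_theft_usd / attack_cost_usd
# Roughly 1,300 simulated dollars "stolen" per dollar of compute spent.
```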

These AI-driven exploits are not a one-off occurrence; they reflect a broader and rapidly escalating trend. Over the past year, the amount stolen in such attacks has doubled approximately every 1.3 months, a concerning trajectory of increasing profitability. As AI models grow better at detecting vulnerabilities and executing attacks efficiently, organizations face mounting challenges in safeguarding their digital assets.
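To put the doubling rate in perspective, the implied multiplier over a full year can be computed directly; this is a purely illustrative extrapolation of the reported trend, not a prediction:

```python
# If losses double every 1.3 months, the implied 12-month multiplier
# is 2 ** (12 / 1.3), i.e. roughly a 600x increase year over year.
doubling_period_months = 1.3
annual_growth_factor = 2 ** (12 / doubling_period_months)
```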

What is particularly alarming is the ability of AI to conduct these attacks autonomously, with minimal human oversight. Anthropic’s research marks a pivotal moment in cybersecurity, as it illustrates that AI can not only identify vulnerabilities but also autonomously develop and implement exploit strategies. The implications of these advancements extend well beyond the realm of cryptocurrency, threatening any software system with inadequate security measures, including enterprise applications and financial services.

As the landscape of cyber threats continues to evolve, organizations must contend with the reality that AI-driven exploits are becoming more rampant and sophisticated. The ongoing development of these technologies suggests a future where the risks associated with digital vulnerabilities will require increasingly advanced defensive measures to ensure security and mitigate potential losses.

Written By Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.