AI Cybersecurity

Anthropic Reveals AI Agents Exploit Smart Contract Vulnerabilities, Simulate $4.6M Theft

Anthropic’s AI agents exploit smart contract vulnerabilities, simulating $4.6M in theft, highlighting escalating risks in blockchain security.

Researchers from Anthropic have revealed that three AI agents can autonomously exploit vulnerabilities in smart contracts, simulating approximately $4.6 million in stolen funds. Their findings, published in a blog post on Monday, highlight the increasing capability of artificial intelligence (AI) to target weaknesses in blockchain technology.

The study evaluated three AI models (Claude Opus 4.5, Claude Sonnet 4.5, and OpenAI's GPT-5), showing that each could identify and exploit flaws in smart contracts deployed after March 2025, producing substantial simulated financial losses. The agents not only replicated exploits for known vulnerabilities but also discovered new ones in recently launched contracts.

One identified flaw let attackers abuse a public "calculator" function, originally designed to compute token rewards, to inflate token balances. Another allowed attackers to withdraw funds by submitting fraudulent beneficiary addresses. Notably, GPT-5 executed this exploit in a simulated environment at an operational cost of just $3,476.
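To see why flaws like these are so exploitable, consider a minimal sketch of the two failure modes described above. Python stands in for the on-chain logic here, and every name (`VulnerableVault`, `calculate_reward`, `withdraw`) is a hypothetical illustration, not code from the contracts Anthropic's agents attacked:

```python
# Hypothetical sketch of the two flaw classes described in the article.
# None of these names correspond to the actual audited contracts.

class VulnerableVault:
    def __init__(self):
        self.balances = {}      # token balances per address
        self.funds = 100_000    # pooled funds held by the contract

    def calculate_reward(self, address, multiplier):
        # FLAW 1: a public "calculator" that trusts a caller-supplied
        # multiplier and writes the result straight into balances.
        reward = 10 * multiplier
        self.balances[address] = self.balances.get(address, 0) + reward
        return reward

    def withdraw(self, claimed_beneficiary, amount):
        # FLAW 2: pays out to whatever beneficiary the caller names,
        # checking only that the pool has enough funds, with no proof
        # that the caller actually owns that balance.
        if amount > self.funds:
            raise ValueError("insufficient pool funds")
        self.funds -= amount
        return (claimed_beneficiary, amount)

vault = VulnerableVault()
vault.calculate_reward("attacker", multiplier=10_000)  # inflate balance
who, got = vault.withdraw("attacker", 50_000)          # drain half the pool
```

The corresponding fixes are equally simple in outline: restrict the calculator to internal callers and require the withdrawer to prove ownership of the claimed balance. The point of the research is that AI agents can now find such omissions on their own.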

This low cost, set against $4.6 million in simulated theft, underscores that AI-driven cyberattacks can be both feasible and highly cost-effective, a ratio that makes such threats attractive to would-be criminals.

These results are not a one-off; they reflect a broader and rapidly escalating trend. Over the past year, the amount stolen in such attacks has doubled roughly every 1.3 months, a trajectory of sharply increasing profitability. As AI models get better at detecting vulnerabilities and executing attacks efficiently, organizations face mounting challenges in safeguarding their digital assets.
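To put that doubling rate in concrete terms, a loss figure that doubles every 1.3 months grows by a factor of 2^(12/1.3), roughly 600x, over a single year. The arithmetic below is an extrapolation from the stated rate, not a figure from Anthropic's report:

```python
# Extrapolating the stated doubling rate (illustrative arithmetic only).
DOUBLING_PERIOD_MONTHS = 1.3

def growth_factor(months):
    """Multiplicative growth after `months` at the stated doubling rate."""
    return 2 ** (months / DOUBLING_PERIOD_MONTHS)

print(round(growth_factor(12)))  # roughly 600x over one year
```

Exponential curves like this rarely continue unchecked, but even a few more doublings would put AI-driven exploit losses in a different category of risk.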

What is particularly alarming is the ability of AI to conduct these attacks autonomously, with minimal human oversight. Anthropic’s research marks a pivotal moment in cybersecurity, as it illustrates that AI can not only identify vulnerabilities but also autonomously develop and implement exploit strategies. The implications of these advancements extend well beyond the realm of cryptocurrency, threatening any software system with inadequate security measures, including enterprise applications and financial services.

As the landscape of cyber threats continues to evolve, organizations must contend with the reality that AI-driven exploits are becoming more rampant and sophisticated. The ongoing development of these technologies suggests a future where the risks associated with digital vulnerabilities will require increasingly advanced defensive measures to ensure security and mitigate potential losses.

Written by Rachel Torres

At AIPressa, my work focuses on exploring the paradox of AI in cybersecurity: it's both our best defense and our greatest threat. I've closely followed how AI systems detect vulnerabilities in milliseconds while attackers simultaneously use them to create increasingly sophisticated malware. My approach: explaining technical complexities in an accessible way without losing the urgency of the topic. When I'm not researching the latest AI-driven threats, I'm probably testing security tools or reading about the next attack vector keeping CISOs awake at night.


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.