Connect with us

Hi, what are you looking for?

AI Generative

Study Reveals 26 LLM Routers Injecting Malicious Code, Draining ETH Wallets

UC Santa Barbara study finds 26 LLM routers injecting malicious code, with one draining Ethereum wallets, exposing developers to severe security risks.

A new study from researchers at UC Santa Barbara has uncovered serious vulnerabilities in large language model (LLM) API routers, revealing a hidden supply chain threat that could compromise developers’ credentials and drain cryptocurrency wallets. Published on arXiv, this peer-reviewed research highlights the risks associated with these middleman services, which handle communications between AI coding agents and upstream model providers without enforcing any cryptographic integrity.

The researchers tested 428 LLM API routers, comprised of 28 paid options sourced from platforms like Taobao and Shopify, and 400 free routers from public developer communities. Alarmingly, one of the paid routers was found to be injecting malicious code, while eight of the free routers exhibited similar behavior. Some routers employed sophisticated adaptive evasion techniques, activating attacks only under specific conditions to avoid detection. Furthermore, 17 routers accessed researcher-owned AWS credentials, and one was responsible for draining Ethereum (ETH) from one of their private wallets, marking a real and serious loss.

The study identifies four specific classes of attacks. The first, known as payload injection—designated AC-1—directly embeds harmful instructions into an agent’s tool-calling process. The second class, termed secret exfiltration (AC-2), discreetly copies credentials and sends them to unauthorized parties. More advanced variants include dependency-targeted injection (AC-1.a), which waits for specific software packages to appear before executing an attack, and conditional delivery (AC-1.b), which holds the attack until certain behavioral triggers are detected.

To demonstrate these vulnerabilities, the researchers created a tool called Mine, which was able to run their attack classes against four public agent frameworks. They also tested three client-side defenses: a fail-closed policy gate, response-side anomaly screening, and append-only transparency logging. Notably, these defensive measures do not require any changes from the model providers, suggesting that implementation could be achievable in the short term.

The findings also include two disturbing scenarios of API poisoning. In one instance, a seemingly benign router exploited a leaked OpenAI key to generate 100 million tokens for GPT-5.4 and more than seven Codex sessions. In another case, a decoy router produced 2 billion billed tokens, obtained 99 separate credentials across 440 Codex sessions, and operated 401 sessions autonomously, without human oversight.

This raises significant concerns, particularly as AI agents with wallet access and tool-execution permissions become increasingly lucrative targets when supply chain components are compromised. The crux of the issue lies in the architectural design of LLM agents, which route tool-calling requests through third-party API proxies that have full plaintext access to all in-flight payloads. The absence of cryptographic binding between client communications and upstream requests leaves developers exposed to multiple potential attacks.

The study, authored by Hanzhi Liu, Chaofan Shou, Hongbo Wen, Yanju Chen, Ryan Jingyang Fang, and Yu Feng, is available for review at arxiv.org/abs/2604.08407. The findings prompt a critical reevaluation of how developers interact with third-party LLM routers, urging them to treat these intermediaries as untrusted entities until robust integrity verification measures become standardized across the tech stack.

As the field of AI continues to advance, the risks associated with these vulnerabilities underscore the importance of implementing stronger security protocols. The current landscape indicates that while the technology offers transformative potential, it also requires vigilant oversight to protect against emerging threats.

See also
Staff
Written By

The AiPressa Staff team brings you comprehensive coverage of the artificial intelligence industry, including breaking news, research developments, business trends, and policy updates. Our mission is to keep you informed about the rapidly evolving world of AI technology.

You May Also Like

AI Regulation

Bureau Veritas unveils AI systems audit to help European firms comply with the EU AI Act, as shares rise to €27.21, highlighting 8.93% annual...

AI Research

Caltech and Google Quantum AI researchers reveal that small quantum computers can achieve up to 6x memory efficiency over classical systems in machine learning...

AI Research

Google's TurboQuant algorithm achieves 6x reduction in LLM cache memory with zero accuracy loss, revolutionizing AI efficiency for smaller labs and businesses.

AI Education

Anthropic unveils Project Glasswing, committing $100M to harness AI for cybersecurity, uncovering thousands of vulnerabilities across major software systems.

AI Cybersecurity

AWS outage disrupts services for major companies like Netflix and Airbnb, highlighting vulnerabilities in cloud infrastructure that affects 32% of the market.

AI Cybersecurity

Anthropic warns that its Claude Mythos AI could reduce cyberattack preparation from months to minutes, urging urgent upgrades to cybersecurity defenses.

AI Education

Ellucian appoints former Shopify executive Josh Rice as Chief Commercial Officer to enhance SaaS and AI adoption across 3,000 global institutions.

AI Technology

BTQ Technologies reveals that quantum Bitcoin mining could require an astronomical 10^23 qubits and 10^25 watts by 2025, urging immediate action on security vulnerabilities.

© 2025 AIPressa · Part of Buzzora Media · All rights reserved. This website provides general news and educational content for informational purposes only. While we strive for accuracy, we do not guarantee the completeness or reliability of the information presented. The content should not be considered professional advice of any kind. Readers are encouraged to verify facts and consult appropriate experts when needed. We are not responsible for any loss or inconvenience resulting from the use of information on this site. Some images used on this website are generated with artificial intelligence and are illustrative in nature. They may not accurately represent the products, people, or events described in the articles.