Anthropic Accuses Moonshot AI of 3.4M Unauthorized Claude Exchanges Amid U.S. State Department Response

Anthropic accuses Moonshot AI of 3.4M unauthorized exchanges with its Claude chatbot, prompting a global U.S. State Department campaign against IP theft.

The U.S. State Department has initiated a worldwide campaign to address alleged intellectual property theft by Chinese companies, particularly in the field of artificial intelligence. This effort follows a complaint from the AI startup Anthropic, which accused three prominent Chinese firms, including Moonshot AI, of illicitly using its Claude chatbot to enhance their own AI models. The cable, dispatched on Friday to U.S. diplomatic posts globally, emphasizes the dangers posed by foreign adversaries extracting American AI models and aims to foster discussions with foreign counterparts about these concerns.

The State Department’s cable outlines risks associated with AI models derived from unauthorized distillation campaigns, emphasizing that these foreign models may appear competitive on certain benchmarks but lack the full performance capabilities of the original systems. This effort comes amid rising tensions between the U.S. and China over technological advancements and intellectual property protection.

In its blog post, Anthropic described sophisticated operations by DeepSeek, Moonshot AI, and MiniMax to extract capabilities from its Claude model. The complaint alleges that these companies generated over 16 million interactions with the chatbot through approximately 24,000 fraudulent accounts, violating terms of service and regional access restrictions.

Anthropic’s analysis reveals a structured approach by these firms to exploit Claude’s functionalities. “The three distillation campaigns followed a similar playbook, using fraudulent accounts and proxy services to access Claude at scale while evading detection,” the blog stated. It added that the campaigns were characterized by abnormal usage patterns indicative of systematic capability extraction rather than legitimate use.

Moonshot AI, led by CEO Zhilin Yang, reportedly accounted for over 3.4 million exchanges with Claude, focusing on areas such as agentic reasoning, coding and data analysis, computer vision, and the development of computer-use agents. The operation utilized various types of fraudulent accounts, complicating detection efforts. Through meticulous analysis of request metadata, Anthropic linked some of these accounts to senior personnel at Moonshot AI.

Moreover, the State Department’s cable warned that models created through these unauthorized processes might strip essential security protocols and could potentially undermine the integrity of AI systems. The cable suggests that these models not only replicate certain functionalities but also compromise the ideological neutrality and truth-seeking mechanisms that are hallmarks of the original technology.

The allegations against Moonshot AI have broader implications, as they highlight increasing concerns in the U.S. about the competitive landscape of AI technology and the lengths to which some companies may go to advance their capabilities. This incident underscores a growing scrutiny of foreign engagements in American tech ecosystems, particularly as advancements in AI continue to shape various sectors, from healthcare to finance.

As the U.S. government proceeds with its diplomatic efforts, the case serves as a crucial bellwether for other tech firms and stakeholders in the industry. The implications of potential theft threaten not only individual companies but also the broader innovation landscape in the United States. With AI technology rapidly evolving, safeguarding proprietary models will be paramount to maintaining a competitive edge in the global market.

Written by the AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.