The U.S. State Department has launched a worldwide campaign to address alleged intellectual property theft by Chinese companies, particularly in the field of artificial intelligence. The effort follows a complaint from the AI startup Anthropic, which accused three prominent Chinese firms, including Moonshot AI, of illicitly using its Claude chatbot to enhance their own AI models. In a cable dispatched on Friday to U.S. diplomatic posts globally, the department emphasizes the dangers posed by foreign adversaries extracting American AI models and aims to open discussions with foreign counterparts about these concerns.
The State Department’s cable outlines the risks of AI models produced through unauthorized distillation campaigns, noting that such foreign models may appear competitive on certain benchmarks while lacking the full performance capabilities of the original systems. The campaign comes amid rising tensions between the U.S. and China over technological advancement and intellectual property protection.
In a blog post, Anthropic described sophisticated operations by DeepSeek, Moonshot AI, and MiniMax to extract capabilities from its Claude model. The complaint alleges that the companies generated more than 16 million interactions with the chatbot through approximately 24,000 fraudulent accounts, violating Anthropic’s terms of service and regional access restrictions.
Anthropic’s analysis reveals a structured approach by these firms to exploit Claude’s functionalities. “The three distillation campaigns followed a similar playbook, using fraudulent accounts and proxy services to access Claude at scale while evading detection,” the blog stated. It added that the campaigns were characterized by abnormal usage patterns indicative of systematic capability extraction rather than legitimate use.
Moonshot AI, led by CEO Zhilin Yang, reportedly accounted for over 3.4 million exchanges with Claude, focusing on areas such as agentic reasoning, coding and data analysis, computer vision, and the development of computer-use agents. The operation utilized various types of fraudulent accounts, complicating detection efforts. Through meticulous analysis of request metadata, Anthropic linked some of these accounts to senior personnel at Moonshot AI.
The State Department’s cable also warned that models created through these unauthorized processes may strip out essential security protocols and could undermine the integrity of AI systems. The cable suggests that such models not only replicate certain functionalities but also compromise the ideological neutrality and truth-seeking mechanisms that are hallmarks of the original technology.
The allegations against Moonshot AI have broader implications, highlighting growing U.S. concern about the competitive landscape of AI technology and the lengths to which some companies may go to advance their capabilities. The incident underscores intensifying scrutiny of foreign engagement with American tech ecosystems, particularly as AI continues to reshape sectors from healthcare to finance.
As the U.S. government proceeds with its diplomatic efforts, the case serves as a bellwether for other tech firms and industry stakeholders. Potential theft threatens not only individual companies but the broader innovation landscape in the United States. With AI technology evolving rapidly, safeguarding proprietary models will be paramount to maintaining a competitive edge in the global market.
See also
71% of Aussies Use Generative AI, Yet Only 36% Trust Its Implementation, Says Expert
US Defense Partners with Anthropic, OpenAI, and Tech Giants for AI-First Military Initiative
AI Technology Enhances Road Safety in U.S. Cities
China Enforces New Rules Mandating Labeling of AI-Generated Content Starting Next Year
AI-Generated Video of Indian Army Official Criticizing Modi’s Policies Debunked as Fake