Anthropic has accused three Chinese artificial intelligence companies—DeepSeek, Moonshot AI, and MiniMax—of conducting a coordinated effort to steal proprietary capabilities from its Claude models. The allegations, made public by the San Francisco-based firm on February 24, 2026, describe large-scale operations involving approximately 24,000 fake accounts that generated over 16 million exchanges with Claude, circumventing the access restrictions that apply in China.
In a statement, Anthropic characterized these activities as “industrial-scale campaigns,” asserting that such operations violate its terms of service. This claim aligns with similar allegations made by rival company OpenAI, which recently informed the U.S. House Select Committee on China about analogous tactics employed by Chinese entities to exploit frontier models. OpenAI described these methodologies as “sophisticated, multi-stage pipelines” designed to bypass access controls.
The technique at the core of these allegations is known as distillation, a common machine learning approach wherein a smaller model is trained on the outputs of a more powerful one. While AI labs typically utilize this method to create more compact and cost-efficient versions of their own models, Anthropic argues that the accused companies manipulated this technique to illicitly extract proprietary features.
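At its core, distillation trains the smaller "student" model to match the output distribution of the larger "teacher." A common formulation (from the standard machine learning literature, not anything disclosed by the companies involved) softens both models' outputs with a temperature and minimizes the divergence between them. The sketch below is illustrative only; the function names and toy logits are invented for the example:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, softened by temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student outputs.

    The student is trained to drive this loss down, absorbing the teacher's
    behavior without access to the teacher's weights or training data.
    """
    p = softmax(teacher_logits, temperature)   # soft targets from the teacher
    q = softmax(student_logits, temperature)   # student's current predictions
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# Toy illustration: a student whose outputs resemble the teacher's scores
# lower (better) than one whose outputs do not.
teacher = np.array([[4.0, 1.0, 0.5]])
student_close = np.array([[3.5, 1.2, 0.4]])
student_far = np.array([[0.1, 3.0, 2.5]])
print(distillation_loss(student_close, teacher) < distillation_loss(student_far, teacher))
```

In practice a lab would collect many prompt–response pairs from the teacher and run gradient descent on a loss like this; the dispute here is over whose model served as the teacher, and under what terms.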
Ironically, Anthropic itself has faced multiple lawsuits over copyright infringement, including suits brought by Concord Music Group and Reddit. Both Anthropic and OpenAI are entangled in numerous class action lawsuits, underscoring the complex legal landscape surrounding AI development. Despite these challenges, both companies frame the alleged actions of the Chinese labs as a national security threat.
Among the three firms, MiniMax has been identified as the largest perpetrator, generating over 13 million exchanges and focusing on Claude's agentic coding and tool orchestration. Agentic reasoning, which allows a model to autonomously execute multi-step tasks, was a particular target of MiniMax's activities. Anthropic said it detected MiniMax's operations while they were still underway, giving it a rare view of the attack's full lifecycle.
Moonshot AI, known for its Kimi model series, also engaged in significant activity, recording over 3.4 million exchanges. The company employed hundreds of fraudulent accounts and various access pathways to mask the coordinated nature of its campaign. Anthropic reported that the metadata of the requests matched the public profiles of senior staff at Moonshot. In a subsequent phase of its operation, the company pivoted towards a more targeted approach aimed at reconstructing Claude’s reasoning traces.
DeepSeek’s involvement, while smaller in scale with over 150,000 exchanges, drew attention for its unique methodologies. The prompts it used instructed Claude to reconstruct its internal reasoning, essentially generating training data that enhances models’ problem-solving capabilities. Anthropic traced some of these requests back to specific researchers at DeepSeek. Concerns were also raised regarding DeepSeek’s attempts to train its models to navigate politically sensitive topics without triggering censorship.
To evade Claude’s access restrictions, the Chinese companies utilized commercial proxy services through what Anthropic described as “hydra cluster” architectures. These complex networks of fraudulent accounts operated simultaneously across Anthropic’s API and third-party cloud platforms, complicating efforts to detect and dismantle the operations.
Beyond the business implications, Anthropic warns that the unauthorized distillation of its models poses significant national security risks. These models, lacking the safety measures built into U.S.-based AI systems, could potentially be integrated into military or intelligence applications without appropriate oversight. The company emphasized that foreign labs gaining access to American models could facilitate the development of capabilities that pose risks in domains like bioweapons and offensive cyber operations.
Anthropic has also entered the broader discourse on export controls for advanced semiconductors, arguing that the rapid advancements in China’s AI sector cannot solely be attributed to domestic efforts but are significantly influenced by capabilities extracted from American models. This argument is complicated by a recent report from the Forecasting Research Institute, which predicts that the performance gap between U.S. and Chinese AI models will narrow significantly by 2031, with a potential parity by 2041.
In response to these challenges, Anthropic has developed behavioral fingerprinting systems and classifiers to detect distillation patterns in API traffic. It has tightened verification requirements for account types most commonly exploited and is collaborating with other AI labs and cloud providers to share technical indicators. Furthermore, the company is creating safeguards to mitigate the risk of its outputs being used for unauthorized training while still serving legitimate users.