Anthropic Alleges Chinese AI Labs Copied Its Claude Model

Summary
– Anthropic accuses three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) of creating over 24,000 fake accounts to illicitly access its Claude model, generating more than 16 million exchanges to copy its specialized capabilities.
– The alleged method used is “distillation,” a training technique that competitors can exploit to mimic another lab’s AI models, as also highlighted in a recent OpenAI complaint against DeepSeek.
– Anthropic links these large-scale distillation attacks to the need for advanced AI chips, arguing the incidents reinforce the rationale for maintaining strict U.S. export controls on such technology.
– The company warns that illicitly distilled models may lack crucial safety safeguards, posing national security risks by proliferating AI capabilities for bioweapons, cyberattacks, or disinformation without protective restrictions.
– In response, Anthropic is investing in better defenses and calling for a coordinated industry and policy response to address these threats to U.S. AI dominance and security.

Anthropic, the AI safety and research company, has leveled serious accusations against three prominent Chinese AI firms, alleging they systematically copied its flagship Claude model through a technique known as distillation. The company claims that DeepSeek, Moonshot AI, and MiniMax collectively created over 24,000 fake accounts to interact with Claude, generating more than 16 million exchanges. According to Anthropic, these efforts specifically targeted Claude’s most advanced and distinctive features, including agentic reasoning, sophisticated tool use, and complex coding abilities.
This controversy emerges amid a heated debate over U.S. export controls on advanced semiconductors, a policy designed to limit China’s progress in artificial intelligence. While distillation is a legitimate method for refining a company’s own models, using it on a competitor’s system is viewed as a form of intellectual property theft. The scale of the alleged operation is significant. Anthropic reports tracking over 150,000 exchanges from DeepSeek focused on foundational logic and alignment, particularly seeking ways to handle policy-sensitive queries. Moonshot AI allegedly engaged in more than 3.4 million exchanges targeting reasoning, coding, and computer vision. MiniMax’s activity was the most voluminous, with roughly 13 million exchanges aimed at agentic coding and tool orchestration.
The situation highlights the intense global competition in AI development. DeepSeek previously gained attention for releasing a high-performing, open-source reasoning model at a remarkably low cost, and its upcoming DeepSeek V4 is rumored to outperform leading Western models in coding tasks. Anthropic observed MiniMax redirecting nearly half of its traffic to extract capabilities from the latest Claude model immediately upon its launch. In response to these alleged attacks, Anthropic stated it will enhance its technical defenses but is also advocating for a broader, coordinated industry and policy response.
Anthropic directly links these activities to the debate over chip exports, arguing that the massive computational power required for such large-scale distillation “requires access to advanced chips.” The company contends that this incident validates the rationale for strict export controls, as limiting chip access hinders not just direct model training but also the capacity for this type of extraction. Security experts echo this concern. Dmitri Alperovitch, chairman of the Silverado Policy Accelerator, noted that rapid advances by Chinese AI models have long been suspected to involve the distillation of U.S. models, and this revelation provides concrete evidence. He argues it strengthens the case for refusing AI chip sales to the implicated companies.
Beyond commercial competition, Anthropic warns of profound national security implications. The company builds extensive safeguards into its systems to prevent malicious uses, such as developing bioweapons or executing cyberattacks. Models created through illicit distillation likely bypass these critical safety protocols, potentially allowing dangerous capabilities to proliferate without restraint. Anthropic also points to the risk of authoritarian governments deploying such powerful, unshackled AI for offensive cyber operations, disinformation, and surveillance, risks that are magnified if the models are made open-source. The accused Chinese companies have been contacted for comment regarding these allegations.
(Source: TechCrunch)