Anthropic Alleges Chinese AI Firms Used Claude for Training

Summary
– Anthropic accuses three Chinese AI companies (DeepSeek, MiniMax, and Moonshot) of misusing its Claude model through large-scale campaigns involving fraudulent accounts.
– The alleged misuse involved “distillation,” a method in which a smaller AI model is trained to mimic the outputs of a more advanced one, which Anthropic says can be exploited to acquire capabilities quickly and cheaply.
– Anthropic warns that illicitly distilled models likely lack crucial safeguards, potentially enabling authoritarian governments to use such AI for cyber operations, disinformation, or surveillance.
– Specifically, DeepSeek is accused of targeting Claude’s reasoning capabilities and generating politically safe alternatives to sensitive questions, according to the allegations.
– Anthropic is urging the AI industry, cloud providers, and lawmakers to address this issue, noting that restricted chip access could limit such illicit activities.

A significant allegation has emerged in the increasingly competitive world of artificial intelligence. Anthropic, the creator of the Claude AI, has publicly accused three prominent Chinese AI firms of orchestrating what it describes as “industrial-scale campaigns” to misuse its technology. The company asserts that DeepSeek, MiniMax, and Moonshot created tens of thousands of fraudulent accounts to engage in millions of unauthorized interactions with Claude. The apparent goal was a process known as distillation, in which a less advanced model is trained by mimicking the outputs of a more sophisticated one.
While acknowledging distillation can be a legitimate research technique, Anthropic contends these specific actions crossed into illicit territory. The company argues that this method allows foreign labs to acquire powerful AI capabilities developed by American companies in a fraction of the time and cost required for independent development. A critical concern raised is that models created through such unauthorized distillation often fail to incorporate the original model’s built-in safety protocols and ethical guardrails.
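For context on the technique itself: in its legitimate research sense, distillation usually means training a student model against a teacher's temperature-softened output distribution rather than against hard labels. A minimal NumPy sketch of that soft-target objective follows; the function names and toy logits are illustrative only, not drawn from any lab's actual pipeline.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened outputs,
    the standard soft-target distillation objective."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy illustration: a student whose logits track the teacher's
# incurs a lower distillation loss than a mismatched one.
teacher = np.array([4.0, 1.0, 0.5])
close_student = np.array([3.5, 1.2, 0.4])
far_student = np.array([0.2, 3.0, 2.5])
assert distill_loss(close_student, teacher) < distill_loss(far_student, teacher)
```

In practice the student's weights are updated by gradient descent on this loss over many teacher queries, which is why distillation at scale requires a very large volume of exchanges with the teacher model, consistent with the account volumes Anthropic describes.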
This omission, according to Anthropic, presents a serious national security risk. The firm warns that unprotected AI capabilities could be integrated into military, intelligence, and surveillance systems by authoritarian governments. This could potentially enable offensive cyber operations, sophisticated disinformation campaigns, and enhanced mass surveillance. The allegations suggest a direct attempt to bypass the ethical frameworks that Western AI developers strive to implement.
Detailing the scale of the alleged activity, Anthropic reported that DeepSeek, known for its efficient models, conducted over 150,000 exchanges specifically targeting Claude’s reasoning abilities. Furthermore, DeepSeek is accused of using Claude to generate responses to politically sensitive questions that would comply with state censorship requirements. This echoes similar concerns raised recently by OpenAI, which also pointed to DeepSeek’s practices.
The other two companies named, Moonshot and MiniMax, are said to have engaged in even larger volumes of interaction, tallying millions of exchanges with the Claude system. In response to these events, Anthropic is urging collaboration across the AI industry, including cloud service providers and policymakers, to establish clearer boundaries and consequences for such actions. The company also suggested that restricted access to advanced semiconductor chips could serve as a practical limitation on the scale at which illicit model distillation can occur.
(Source: The Verge)