What to know
- U.S. AI developer Anthropic alleges Chinese AI labs mined Claude at industrial scale.
- The technique involved is distillation, a common method that is legitimate when done transparently.
- DeepSeek, Moonshot, and MiniMax are named in the accusations.
- There’s no public admission from the Chinese firms that they did this.
You’re hearing about this because Anthropic, the company behind the Claude family of AI models, publicly accused several Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) of using Claude’s publicly accessible output, obtained in violation of regional access restrictions, to bolster their own systems.
What exactly is being alleged?
Anthropic says these firms created tens of thousands of fraudulent accounts and used them to generate millions of conversations with Claude, designed not for normal use but to extract Claude’s reasoning, coding, and other advanced capabilities.
The company describes this as an industrial-scale distillation attack — collecting a massive body of Claude’s outputs and using them to train smaller or competitor models.

Distillation itself is not a bad or unusual technique. In machine learning, it’s a standard approach: you take outputs from a larger “teacher” model to help train a smaller “student” model that can run faster or cheaper.
The controversy here is how it was done: Anthropic claims there was no transparent licensing agreement or partnership, and that the scale and pattern of access violated its terms of service and regional access restrictions.
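To make the legitimate version of the technique concrete, here is a minimal sketch of the core distillation objective: the student is trained to match the teacher's softened output distribution. This is an illustrative textbook example, not Anthropic's or any named lab's pipeline; the logits and temperature values are made up for demonstration.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    This is the quantity the student minimizes during training; a higher
    temperature exposes more of the teacher's relative preferences among
    non-top answers ("dark knowledge").
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student); the T^2 factor keeps gradient magnitudes
    # comparable across temperatures.
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
    return float(kl * temperature ** 2)

teacher = np.array([4.0, 1.0, 0.5])  # teacher's logits for one example
student = np.array([2.0, 1.5, 1.0])  # student's logits for the same example
loss = distillation_loss(student, teacher)
```

In a real training loop this loss would be backpropagated through the student model over many examples; a perfectly matched student drives the loss to zero.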
Are these firms confirmed to be using Claude data in their published models?
No. As of now, there’s no confirmed public admission from DeepSeek, Moonshot AI, or MiniMax that they trained their AI models using Claude outputs. The claims come from Anthropic’s internal detection and public statements.
That means:
- These are accusations of unauthorized extraction, not official confirmed training pipelines.
- The Chinese companies have not issued detailed responses acknowledging the practice.
- Independent verification outside of Anthropic’s reports remains limited.

Why is this a big deal?
1. Intellectual property and AI safety:
Anthropic frames the situation not just as a business conflict, but as a security concern. Models trained from someone else’s outputs — especially at scale — risk inheriting advanced capabilities without retaining the safety guardrails the original developer built in.
2. Technological competition:
These accusations are unfolding alongside intensifying competition between U.S. and Chinese AI labs. Export controls on advanced AI chips, research collaborations, and standards for model safety all factor into the larger context of global AI leadership.
3. Industry norms and ethics:
Distillation in academic research and within single companies is common and legitimate. The media and policy debate centers on whether similar techniques, when used covertly or without permission, cross ethical or legal boundaries.
What’s being done in response?
Anthropic says it is developing defensive measures to detect and block these kinds of extraction patterns and is calling for broader industry cooperation. It has also pledged support for stricter export controls on advanced AI chips, which it says could reduce the ability of foreign labs to carry out such campaigns at scale.
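Anthropic has not published how its detection works, so the following is purely a hypothetical sketch of the kind of usage-pattern heuristic such defenses might start from: flagging accounts whose query volume and topic breadth look more like automated capability extraction than normal use. All names, fields, and thresholds here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AccountStats:
    account_id: str
    daily_queries: int
    distinct_topics: int  # e.g., number of prompt-embedding clusters observed

def flag_suspicious(accounts, volume_threshold=5000, topic_breadth=50):
    """Flag accounts with very high query volume AND unusually broad topic
    coverage, a combination more consistent with systematic extraction than
    with ordinary individual or application use. Thresholds are arbitrary
    placeholders, not real values."""
    return [
        a.account_id
        for a in accounts
        if a.daily_queries > volume_threshold and a.distinct_topics > topic_breadth
    ]

accounts = [
    AccountStats("user_a", 120, 5),        # typical user: low volume, narrow
    AccountStats("bot_farm", 80000, 400),  # extraction-like pattern
]
suspects = flag_suspicious(accounts)  # ["bot_farm"]
```

Real defenses would be far more sophisticated (behavioral fingerprinting, coordinated-account detection, rate limiting), but the basic idea of scoring accounts against expected usage patterns is the same.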
You should see this less as a simple question of Chinese firms “using Claude” to train models in a straightforward, sanctioned way, and more as a growing dispute about unauthorized data extraction, model security, and competitive practices in AI development.