Cybersecurity

Anthropic Exposes Large-Scale AI Model Theft by Chinese Firms

Anthropic reveals significant unauthorized use of its AI model, Claude, by Chinese companies to enhance their own technologies.

Anthropic has disclosed that three Chinese AI firms—DeepSeek, Moonshot AI, and MiniMax—ran large-scale distillation attacks to extract the capabilities of its language model, Claude, in violation of its terms of service. The attacks involved more than 16 million interactions with Claude through roughly 24,000 fraudulent accounts. Anthropic described the activity as an industrial-scale campaign to improve the competitors' own models by harvesting Claude's advanced capabilities without authorization.

For businesses, the incident underscores the need for robust cybersecurity and vigilance against intellectual property theft, particularly in the fast-moving AI sector. Organizations should assess their exposure to similar tactics and consider stricter access controls and monitoring to protect proprietary technology. The episode also carries broader implications for the AI landscape: unauthorized replication of sophisticated models not only harms original developers like Anthropic but can also flood the market with less secure, inferior systems, eroding trust and safety in AI applications.

---

*Originally reported by [The Hacker News](https://thehackernews.com/2026/02/anthropic-says-chinese-ai-firms-used-16.html)*