Anthropic says DeepSeek, MiniMax, and Moonshot violated its ToS by prompting Claude a combined 16M+ times and using distillation to train their own products
The allegations mirror those of OpenAI, which told House lawmakers that DeepSeek used ‘distillation’ to improve models
Wall Street Journal
Related Coverage
- Chinese companies used Claude to improve own models, Anthropic says (Reuters · Juby Babu)
- Anthropic Says DeepSeek, MiniMax Distilled AI Models for Gains (Bloomberg)
- Detecting and preventing distillation attacks (Anthropic)
- Chinese AI companies ‘distilled’ Claude to improve own models, Anthropic says (iTnews)
- US AI giants accuse Chinese rivals of mass data theft (Livemint)
- Anthropic alleges industrial-scale Claude attacks by DeepSeek and other Chinese AI rivals (Crypto Briefing · Estefano Gomez)
- Anthropic accuses Deepseek, Moonshot, and MiniMax of stealing Claude's AI data through 16 million queries (The Decoder · Matthias Bastian)
- Anthropic announces proof of distillation at scale by MiniMax, DeepSeek, Moonshot (Hacker News)
- Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI (The Verge · Emma Roth)
- Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports (TechCrunch · Rebecca Bellan)
- Anthropic Claude Under Large Scale Distillation Attacks By Chinese AI Labs with 13 Million Exchanges (Cyber Security News · Balaji N)
- Anthropic Says Chinese Firms Used Claude Data to Improve Models (The Information · Stephanie Palazzolo)
- Anthropic accuses three Chinese AI labs of abusing Claude to improve their own models (Engadget · Jackson Chen)
- Distillation Attacks on Claude Are Real. So Is the Lobbying Campaign. (Implicator.ai · Harkaram Grewal)
- Anthropic says DeepSeek and other Chinese AI companies fraudulently used Claude (Business Insider · Brent D. Griffiths)
- Anthropic Says Chinese AI Firms Illegally Extracted Claude Outputs (The Asia Business Daily · Ryu Hyunseok)
- 🗞️ Anthropic launched Claude Code Security to scan your code repositories for bugs and suggest security patches. (Rohan's Bytes · Rohan Paul)
- Critics Mock Anthropic's Claims Chinese AI Labs Are Stealing Its Data (Decrypt · Jason Nelson)
- Anthropic accuses DeepSeek, other Chinese AI developers of ‘industrial-scale’ copying … (Tom's Hardware · Anton Shilov)
- Anthropic Accuses 3 Chinese Companies of Harvesting Its Data (New York Times · Cade Metz)
- Anthropic Slams China for AI Theft, But Critics Say the Outrage Is Hypocritical (PCMag · Michael Kan)
- Anthropic accuses Chinese labs of trying to illicitly take Claude's capabilities (CyberScoop · Tim Starks)
- Anthropic says DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude (VentureBeat · Michael Nuñez)
Discussion
- Tae Kim (@firstadopter) on X: Large swaths of the media and here on social media played the role of the useful idiot gushing over DeepSeek's prowess. Good job everyone!
- Tae Kim (@firstadopter) on X: I wrote about this before, how China's advances are fraudulent. Now confirmed by both OpenAI and Anthropic. Yes, DeepSeek is a copycat, copy paste fraud “We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract
- @sigkitten on X: good job, deepseek, moonshot, minimax. please do more
- Tim Duffy (@timfduffy) on X: Also, https://z.ai/ is a surprising omission, are they not training on Claude or is there just not as strong evidence? I think I've heard folks say GLM sounds Claudey before.
- Tim Duffy (@timfduffy) on X: Personally I think it's probably good that it's possible to use distillation to help catch up to the frontier, makes it harder for any one lab to pull ahead.
- Tim Duffy (@timfduffy) on X: This has been long suspected, but I think this is the first official accusation, right? I wonder if OpenAI has also seen distillation by those labs using their models.
- @abcampbell on X: remember when the doomers told us china was too concerned about control to compete at the frontier? rationalist epistemics in shambles rn but everyone too busy raising money for their pet ngos to care https://x.com/...
- @alexpalcuie on X: just a sample of my workday
- Alek Dimitriev (@tensor_rotator) on X: I can finally publicly state one reason I've not been bullish on open source catching up and overtaking the frontier labs: we observed several of the top open source models distilling from Claude. Leapfrogging happens through innovation, not distillation.
- @anthropicai on X: These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community. Read more: https://www.anthropic.com/...
- @anthropicai on X: Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems.
- @anthropicai on X: We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
- Michael McLean (@mclean) on Bluesky: LLM Distillation is really underdiscussed. Fascinating to me that third-party groups are better at distilling the frontier models than the frontier labs themselves lol. [embedded post]
- r/singularity on Reddit: Anthropic is accusing DeepSeek, Moonshot AI (Kimi) and MiniMax of setting up more than 24,000 fraudulent Claude accounts, and distilling training information from 16 million exchanges.
- @theprimeagen on X: wait... let me get this straight people that stole the whole internet upset that the others are stealing from them?
- Erik Meijer (@headinthebox) on X: Any system, software, hardware, AI model, ..., that can be observed, can be cloned.
- Gergely Orosz (@gergelyorosz) on X: Anthropic scrapes copyrighted materials online; creates a model that they charge $$ for; doesn't compensate for use - apparently this is fair? Now Anthropic complains about other companies paying for model access, to create free models anyone can use - and this is not fair??
- Elon Musk (@elonmusk) on X: @tetsuoai Banger 🤣🤣 How dare they steal the stuff Anthropic stole from human coders??
- Rohit (@krishnanrohit) on X: This is interesting. The article says Deepseek had 150k exchanges, Moonshot 3.4m and MiniMax 13m. That's a difference of 100x between Deepseek and MiniMax, were they doing the same thing? Also, from this, seems using Claude as “llm as a judge” seems to violate the policy too? [image]
- Morgan (@morqon) on X: they'll spin it differently, but deepseek isn't the problem here: “150,000 interactions” is only 0.9% of the detected distillations [image]
- @suhail on X: Seems fair tbh. Anthropic has done industrial scale scraping of everyone's stuff 🤷🏾♂️
- Armin Ronacher (@mitsuhiko) on X: Distillation is great! We need more of it.
- Patagucci Perf Papi (@kenwheeler) on X: damn that's crazy they stole your ip and are trying to resell it for a profit? what kind of complete fucking asshole would do that!?
- Wes Bos (@wesbos) on X: Oh noooo, the company that extracted our data for their models is having others extracting data for their models
- Tetsuo (@tetsuoai) on X: I can't believe someone would just steal from Anthropic like this. The millions of man-hours Anthropic spent hand-writing code, text, art, books, etc. to generate enough data for training must be taken into consideration here. Where is the respect for IP?
- Jack Friks (@jackfriks) on X: maybe this is why anthropic been so worried about people using their claude code subs for things other than claude code trying to stop this
- Jack Ellis (@jackellis) on X: Anthropic: Trains it's models using other people's data Also Anthropic: China is stealing our data!
- Morgan (@morqon) on X: just in time for the deepseek narrative window
- Vas (@vasuman) on X: Company that trained on everyone's data without asking is upset that someone trained on its data without asking 2026 is the year of open source for a reason
- @hsvsphere on X: Wow, based. I will use the Chinese models more, I can even use it for sensitive topics as they're open source.
- Jared L Kubin (@jaredkubin) on X: It's almost like they need... durable battle tested security products
- † Lucia Scarlet (@luciascarlet) on X: 🥺 oh noooooo 🥺,,, anyway
- @tekbog on X: wow someone trained on your work? that's crazy
- @teknium on X: Ohhh nooo not my private IP how dare someone use that to train an AI model, only Anthropic has the right to use everyone elses IP nooooo, this cannot stand!
- @mert on X: silicon valley was a documentary damn it jian yang [image]
- Ahmad (@theahmadosman) on X: Cry me a river, you pirated humanity's knowledge and trained your models on it!
- r/BetterOffline on Reddit: Anthropic accuses Chinese companies of “copying” its models through mass industrial distillation.
- r/technology on Reddit: Anthropic claims to have identified industrial-scale distillation attacks by DeepSeek, Moonshot AI, and MiniMax.
- Peter Girnus (@gothburz) on X: Credit where it's due — they named DeepSeek, Moonshot, and MiniMax with specific attribution. But the IoCs are shared privately while the policy ask is shared publicly. The audience for this post isn't defenders though, it's @congressdotgov @HouseGOP @HouseDemocrats @SenateGOP
- Laurie Voss (@seldo.com) on Bluesky: The *audacity* it takes the big model trainers to complain that somebody else scraped their work and is capturing value from it without permission. The sheer chutzpah. The staggering lack of self-awareness. It's gobsmacking. [embedded post]
- George Pearkes (@peark.es) on Bluesky: Do NOT recommend reading this at face value but it does have some interesting anecdotes in it about how Anthropic is able to detect and undermine distillation attacks.