Moonshot says Kimi K2.5 provides “the foundation” for Cursor's Composer 2 model and that Cursor accesses Kimi K2.5 via Fireworks AI
Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support.
@kimi_moonshot
Discussion
-
@fynnso
Fynn
on x
was messing with the OpenAI base URL in Cursor and caught this: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast. so composer 2 is just Kimi K2.5 with RL. at least rename the model ID [image]
-
@quinnypig
Corey Quinn
on x
Moonshot is being incredibly diplomatic here considering Cursor spelled “Kimi K2.5” as “continued pre-training and reinforcement learning.” The model ID was literally kimi-k2p5-rl-0317-s515-fast. They forgot to file the serial numbers off. Nailed it.
-
@leerob
Lee Robinson
on x
Since people really want me to say this: “KIMI K2.5” ‼️ Yes, that is the base we started from. And we are following the license through inference partner terms (e.g. Fireworks). I'm thankful for OSS models personally, good for the ecosystem.
-
@teortaxestex
@teortaxestex
on x
Class. Notice how well they've played this. Engineers reacted, deleted tweets, the community did everything else, mounting pressure until Cursor attributed the model. No need for scandal. [image]
-
@snwy_me
@snwy_me
on x
can everyone shut the fuck up now and stop acting like cursor just stole a baby from a hospital lmfao
-
@gergelyorosz
Gergely Orosz
on x
Cursor keeps showing poor judgment with comms - behaving not like a $10B+ company, but like an early-stage startup.
- Hikes prices for many enterprise customers without notice, comms, or transparency
- Big bang Composer 2.0 release w/o sharing that it's based on Kimi 2.5
-
@yampeleg
Yam Peleg
on x
I don't get why people go after cursor for fine tuning an open source model, this is exactly what they are for.
-
@leerob
Lee Robinson
on x
I'm a big believer in open source, especially as AI improves. It was a miss to not mention the Kimi base in our blog from the start. We'll fix that for the next model 🙏 Their team clarified our usage was licensed in the tweet below. https://x.com/...
-
@amanrsanger
Aman Sanger
on x
We've evaluated a lot of base models on perplexity-based evals and Kimi k2.5 proved to be the strongest! After that, we do continued pre-training and high-compute RL (a 4x scale-up). The combination of the strong base, CPT and RL, and Fireworks' inference and RL samplers make
-
@timhaldorsson
Tim Haldorsson
on x
Cursor wrapped the kimi model and called it their own model. this is the beauty of open-source - that you can just build and ship things
-
@sundeep
Sunny Madra
on x
The technology ecosystem has made significant progress by building upon open-source, great to see this continuing.
-
@leerob
Lee Robinson
on x
Here's confirmation the license is correct from the Kimi team. Agree with the feedback we should have mentioned the base up front, we will do that for the next model! https://x.com/...
-
@clementdelangue
Clem
on x
Looks like it's confirmed Cursor's new model is based on Kimi! It reinforces a couple of things:
- open-source keeps being the greatest competition enabler
- another validation for chinese open-source that is now the biggest force shaping the global AI stack
- the frontier is no
-
@yuchenj_uw
Yuchen Jin
on x
People dunk on Cursor like: “it's just Kimi K2.5,” “look inside, it's a Chinese model.” There's no shame in building on top of strong base models and doing your own post-training or RL (as long as you respect the license). In most cases you don't need to pretrain from scratch.
-
@elonmusk
Elon Musk
on x
@fynnso Yeah, it's Kimi 2.5
-
@ns123abc
Nik
on x
🚨NEWS: Cursor's $50B “in-house model” is literally Kimi K2.5 with RL on top. Got caught in 24 hours
>be Moonshot AI
>spend hundreds of millions training Kimi K2.5
>1 trillion parameters, 15 trillion tokens, agent swarm architecture
>beat GPT-5.2 and Opus 4.5 on real benchmarks [i…
-
@cryptopunk7213
@cryptopunk7213
on x
lmfao the new cursor model is actually a chinese model (kimi k2.5) they didn't even change the model name 😂 [image]
-
@yuchenj_uw
Yuchen Jin
on x
Cursor's Composer 2 is likely built on Kimi K2.5. The model URL + tokenizer are strong signals. I love this direction: companies mid-train and post-train on top of OSS LLMs. Prediction: open-source model labs will monetize by taking a cut when others build on top of their
-
@leerob
Lee Robinson
on x
Yep, Composer 2 started from an open-source base! We will do full pretraining in the future. Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training. This is why evals are very different. And yes, we are following the license through
-
@auchenberg
Kenneth Auchenberg
on x
So let me get this right:
1. MoonshotAI distills Claude from Anthropic with 3.4 million exchanges on their API.
2. Releases Kimi K2.5 as OSS.
3. Cursor RL fine-tunes Kimi K2.5 into Composer 2.
Cursor now delivers Opus 4.6 performance for a fraction of the price. [image]
-
@thdxr
Dax
on x
whether this is true or not, it's going to cause every company producing open source models to re-evaluate if they should continue to do so. that is incredibly frustrating
-
@jenzhuscott
Jen Zhu
on x
The responses from @Kimi_Moonshot & @cursor_ai both are gracious. So wonderful to see.
1. There is absolutely no shame in building on top of open source model. Way better than repeating the same stacks in walled garden. Open-source > closed
2. Chinese open source models are [imag…
-
@tmychow
Trevor
on x
“1/4 of the compute spent on the final model came from the base, the rest is from our training”
k2 and k2.5 (a continued pretrain of k2) each used 15T tokens
k2 is 32B active; by 6np, kimi did 5.76e+24 flops
if that's 1/4, cursor did 1.7e25 flops i.e. same as gpt-4 flops
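Trevor's back-of-envelope arithmetic above uses the standard 6ND training-compute approximation (FLOPs ≈ 6 × active parameters × training tokens). A quick sketch to reproduce it, taking the parameter and token counts, and the "two 15T-token runs" reading of K2 + K2.5, directly from the tweet:

```python
# 6ND approximation: training FLOPs ≈ 6 * N (active params) * D (tokens).
active_params = 32e9    # K2's active parameters, per the tweet
tokens_per_run = 15e12  # tokens each for K2 and K2.5 (a continued pretrain)

# Two pretraining runs from Moonshot: K2, then K2.5 on top of it.
kimi_flops = 2 * 6 * active_params * tokens_per_run
print(f"Kimi base compute:   {kimi_flops:.2e} FLOPs")   # ~5.76e+24

# If the base accounts for 1/4 of total compute, Cursor's training
# (CPT + RL) is the remaining 3/4, i.e. 3x the base.
cursor_flops = 3 * kimi_flops
print(f"Cursor's compute:    {cursor_flops:.2e} FLOPs")  # ~1.73e+25
```

That 1.7e25 figure is what the tweet compares to GPT-4-scale training compute.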