Chronicles

The story behind the story


Chinese startup Moonshot releases Kimi K2.5, saying the model can process text, images, and videos simultaneously and beats its open-source peers in some tests

Alibaba Group Holding Ltd.-backed Moonshot AI released an upgrade of its flagship model, heating up a domestic arms race ahead …

Bloomberg

Discussion

  • @scaling01 on x
    I think this is the order in which I like to use the models (purely usability/usefulness): Kimi 2.5 >> GLM 4.7 > MiniMax M2.1 > DeepSeek V3.2 > Qwen3 235B Qwen just feels very slop and last gen by now. Both GLM and MiniMax absolutely destroy it. DeepSeek V3.2 is a strong model
  • @scaling01 on x
    Kimi is still the most usable open-weights model Moonshot is honestly the Anthropic of China. A focus on taste and agentic behaviour.
  • @theo on x
    K2's been my default model in T3 Chat for awhile. Great writer. Only issue was the lack of image recognition. Did not expect this. Genuinely hyped to play with it.
  • @kimi_moonshot on x
    Here's a short video from our founder, Zhilin Yang. (It's his first time speaking on camera like this, and he really wanted to share Kimi K2.5 with you!) [video]
  • @haoningtimothy Wu Haoning on x
    We are really taking a long time to prove this: everyone is building big macs but we bring you a kiwi🥝 instead. You have multimodal with K2.5 everywhere: chat with visual tools, code with vision, generate aesthetic frontend with visual refs...and most basically, it is a SUPER
  • @eliebakouch Elie on x
    Kimi K2.5 is NOT just a small iteration on top of k2, it's now have fully multimodal understanding INCLUDING video! [image]
  • @bindureddy Bindu Reddy on x
    Pretty cool - Kimi 2.5 just dropped - ahead of the new DeepSeek model Will be on LiveBench tomorrow alongside Qwen Max Thinking Open source tsunami! 🌊 [image]
  • @kimi_moonshot on x
    Introducing Kimi Code, an open-source coding agent under the Apache 2.0 License. 🔹 Python-based, easy to extend. 🔹 Fully transparent — clear, safe, reliable. 🔹 Seamlessly integrates with VS Code, Cursor, JetBrains, Zed, and more. 🔹 Fully-featured & out-of-the-box ready. [image]
  • @chujiezheng Chujie Zheng on x
    This is our best, best model so far (I love it so so much). We have integrated adaptive reasoning, search and CI into it, and put massive efforts on improving real-world user experience. Also, with the closure of Qwen3, it won't be long before the launch of Qwen3.5. Stay tuned!
  • @eliebakouch Elie on x
    very nice release by the kimi team, benchmarks are on par with opus 4.5, gpt 5.2 xhigh, gemini 3.0 pro there is also some nice details on the parallel RL part in the tech blog explaining how they build K2.5 agent swarm [image]
  • @kimi_moonshot on x
    Kimi K2.5 has arrived! 🥝 Here are 2 things to know: Aesthetic Coding x Agent Swarm. [video]
  • r/LocalLLaMA on reddit
    Introducing Kimi K2.5, Open-Source Visual Agentic Intelligence
  • r/LocalLLaMA on reddit
    Kimi K2.5 Released !
  • @teortaxestex on x
    > built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base ...It's essentially a totally new model with new abilities. 30T tokens @ Muon. «Kimi K2.5 represents a meaningful step toward AGI for the open-source community» wow o…
  • @teortaxestex on x
    Huh. Indeed. Kimi-Thinking has been quietly updated to 2.5 and it's multimodal. [image]
  • @kimmonismus on x
    Really impressive release by MoonshotAI: Kimi K2.5 is SOTA in HLE with 50%, and Agents Benchmark wih 77%. It comes with an agent swarm mode and seems overall like a really really impressive release. going to check it out now [image]
  • @kimiproduct Kimi Product on x
    One-shot “Video to code” result from Kimi K2.5 It not only clones a website, but also all the visual interactions and UX designs. No need to describe it in detail, all you need to do is take a screen recording and ask Kimi: “Clone this website with all the UX designs.” [video]
  • @dedene Peter Dedene on x
    The gap between closed-source and open-weight keeps getting closer. Fast.
  • @scaling01 on x
    You sleep for 5 fucking hours and both DeepSeek and Kimi are dropping models without you 🥺 [image]
  • @fireworksai_hq on x
    Kimi K2.5 is now live on Fireworks with full parameter RL tuning support! This is @Kimi_Moonshot Moonshot AI's flagship agentic model, a new SOTA open VLM that unifies vision, text, thinking, and multi-agent execution. Kimi K2.5 demonstrates that open source models are now [image…
  • @chetaslua on x
    I had access to this beast for the last 7 days Many of you guys guessed it right, Agents swarms - 100 subagents working in parallel, 1500 tools call Sota on HLE - 50.2% and Browse Comp- 74.9% [image]
  • @haoningtimothy Wu Haoning on x
    https://www.kimi.com/... https://huggingface.co/... should be the most powerful ‘image-text-to-text’ on @huggingface now
  • @modelscope2022 on x
    🚀 Meet Kimi K2.5! 🌙 This is Kimi's most intelligent and versatile model to date, achieving SOTA performance across coding, vision, and agentic workflows. Model: https://modelscope.cn/... Paper: https://www.kimi.com/... Highlights: ✅ Native Multimodal Architecture: Seamlessly [ima…
  • @gm8xx8 on x
    KIMI K2.5 VISUAL AGENTIC INTELLIGENCE AT 1T SCALE Kimi K2.5 is Moonshot's open-source successor to K2: ~15T mixed vision+text continual pretraining plus a real scale-out agent stack (Agent Swarm) across Instant / Thinking / Agent modes. KIMI K2.5: MoE backbone (1T total / 32B [im…
  • @testingcatalog on x
    BREAKING 🚨: Kimi K2.5 open-source model is now live on Kimi Chat and APIs with a leading 50% score on HLE benchmark! It comes along with an Agentic Swarm feature, where up to 100 sub-agents would be working on a problem in parallel (Available in beta for some customers) [video]
  • @kimi_moonshot on x
    🥝 Meet Kimi K2.5, Open-Source Visual Agentic Intelligence. 🔹 Global SOTA on Agentic Benchmarks: HLE full set (50.2%), BrowseComp (74.9%) 🔹 Open-source SOTA on Vision and Coding: MMMU Pro (78.5%), VideoMMMU (86.6%), SWE-bench Verified (76.8%) 🔹 Code with Taste: turn chats, [image]
  • @youjiacheng You Jiacheng on x
    Interesting price change. cc @zephyr_z9 [image]
  • @casper_hansen_ Casper Hansen on x
    1T parameter model and multimodal!! Honestly insane how much Kimi is pushing forward the frontier Weights are also on Huggingface, released with INT4 quantization
  • @zephyr_z9 on x
    Now this is really good [image]
  • @garyfung on x
    Kimi 2.5. A class of its own among Chinese open weights when not even bothering to bench compare w/ other Chinese models anymore. Gunning straight for frontier models from the West Kimi has been my favourite in SOTA creative writing. What'd coding + writing agent enable 🤔
  • @sungkim Sung Kim on bluesky
    Moonshot AI, why are you making my life more complicated? — Now, I will need to revisit MQ, Kafka, and this stateless, client-side orchestration loop. — www.kimi.com/blog/kimi-k2... [image]
  • r/singularity on reddit
    Kimi K2.5 Released!!!
  • @mweinbach Max Weinbach on x
    KIMI K2.5 WEIGHTS ARE LIVE 1T total parameter MoE, fully multimodal It's competitive with Claude 4.5 Opus thinking https://huggingface.co/... [image]
  • @teortaxestex @teortaxestex on x
    > clearly they are leaking the model weights back to China Holy shit this means we can download Claude from HF
  • @basedtorba Andrew Torba on x
    China just released Kimi K2.5 and like clockwork the performance is on par with American frontier AI models. Just tested it for the first time and it identifies as Claude with a simple “hi” prompt lol. American AI companies all hire foreigners and clearly they are leaking the [im…