TEXXR
Chronicles

The story behind the story


MiniMax releases M2.5, claiming the model delivers on the “intelligence too cheap to meter” promise, priced at $0.30/1M input tokens and $1.20/1M output tokens

Today we're introducing our latest model, MiniMax-M2.5.  —  Extensively trained with reinforcement learning

MiniMax

Discussion

  • @bytebot Colin Charles on x
    I recommend MiniMax - I use it with Opencode all the time. Great workhorse. You want to try M2.5
  • @minimax_ai @minimax_ai on x
    Love seeing the benchmarks out in the wild 🚀 MiniMax M2.5 was built for real-world, long-horizon agent workloads - Reliability + performance both matter. Thanks for sharing! 🙌
  • @gneubig Graham Neubig on x
    To be honest, I'm a bit of a skeptic of claims that models are on par with Claude/GPT, but this is definitely one that I feel is getting there. Especially for tasks that focus on code (as opposed to other things like writing, math, etc.) More in the thread above.
  • @gneubig Graham Neubig on x
    MiniMax-M2.5 is a surprising new step in open coding models. The first model where I've been able to independently confirm that it's better than the most recent Claude Sonnet. It showed up in our benchmarks below, and in my vibe checks it felt strong and diverse.
  • @fanjiewang Frank Wang on x
    Zen × MiniMax M2.5 - free for a week wait what.. 2.5?! yes sir.. feels like M2.1 just dropped yesterday and now it's already M2.5?! better try it before M2.67 shows up next month
  • @openhandsdev @openhandsdev on x
    @MiniMax_AI At 230B parameters (10B active), it's also relatively lightweight for a frontier-class model. This is the size where local deployment is feasible as well. [image]
  • @openhandsdev @openhandsdev on x
    @MiniMax_AI M2.5 performed particularly well on long-running tasks like building apps from scratch, an area where smaller models have traditionally struggled. Also strong on issue resolution and software testing. [image]
  • @openhandsdev @openhandsdev on x
    @MiniMax_AI The cost-performance tradeoff is remarkable. At ~13x cheaper than Opus, M2.5 opens up use cases that weren't practical before. It's essentially a two-horse race for API-available models at the moment: Opus for max capability, M2.5 for high capability at low cost. [ima…
  • @openhandsdev @openhandsdev on x
    Big news for open models: @MiniMax_AI M2.5 is out and it's an excellent+affordable coding model. It ranks 4th in our benchmarks, the first open model to beat Claude Sonnet. Only Claude Opus and GPT-5.2 Codex score higher. Details on scores and limited-time free access below 🧵 [im…
  • @altryne Alex Volkov on x
    BREAKING: MiniMax just dropped official M2.5 benchmarks and they're going HEAD TO HEAD with Opus 4.6, GPT-5.2, and Gemini 3 Pro 🤯 And Olive Song from @MiniMax_AI is joining ThursdAI LIVE in ~30 min to break it all down @ThursdAI_pod Here are the numbers 👇 [image]
  • @openrouterai @openrouterai on x
    MiniMax M2.5 is live now on OpenRouter! @MiniMax_AI's update to their powerful agentic model M2.1 comes with improved reliability and performance on long running tasks. It's become a powerful general agent, capable of much more than writing code. [image]
  • @thdxr Dax on x
    interesting thing about minimax 2.5 is it's a smaller model considering it's very usable it's a great candidate for home labs also would love to see inference providers try and max out its tokens/s can probably do something crazy [image]
  • @skylermiao7 Skyler Miao on x
    my favorite part about M2.5: SOTA without the speed tax. SWE-bench 74 → 80.2, and the best thing — 30% faster. no trade-off.
  • @minimax_ai @minimax_ai on x
    Introducing M2.5, an open-source frontier model designed for real-world productivity. - SOTA performance at coding (SWE-Bench Verified 80.2%), search (BrowseComp 76.3%), agentic tool-calling (BFCL 76.8%) & office work. - Optimized for efficient execution, 37% faster at complex [i…
  • @eliebakouch Elie on x
    wtf, minimax M2.5 benchmark are insane and it's probably the same base model so only 10B active parameters??? [image]
  • @thdxr Dax on x
    minimax 2.5 is now generally available and free for 7 days in opencode i'm going to try and switch to it as my default so i can get a sense of how it works golden era for opensource models right now
  • @adonis_singh Adi on x
    this might be the year of open weight models all we need now is a whale comeback 🐋
  • @zephyr_z9 @zephyr_z9 on x
    This is fucking crazy Input price $0.30/M and output is $1.20/M [image]
  • @itetnaa @itetnaa on x
    There is no fucking way. The Minimax-M2.5 benchmarks are INSANE. [image]
  • @askokara @askokara on x
    China won. This is another deepseek moment MiniMax 2.5 is now the best model in the world > On par with opus 4.6 > SOTA in coding, excel data analysis, deep research, document generation and summarization > Optimized thinking efficiency + 100 tps to achieve 3x faster than opus [i…
  • @initjean @initjean on x
    the new MiniMax-M2.5 model is open source, almost on par with Opus 4.6, and way cheaper ($1.20/MTok out) [image]
  • @synthwavedd Leo on x
    MiniMax M2.5 is here and it goes toe to toe with Opus 4.6 & GPT-5.2! [image]
  • @chatgpt21 Chris on x
    MiniMax 2.5 is a coding monster 🤯 80.2% on SWE Bench Verified 55.4% on SWE Bench Pro [image]
  • @askokara @askokara on x
    MiniMax M2.5 one-shotted this It's so over for frontend devs [video]
  • @scaling01 @scaling01 on x
    Minimax-M2.5 SWE-Bench Verified: 80.2% Multi-SWE-Bench: 51.3% BrowseComp: 76.3% [image]
  • r/LocalLLaMA on reddit
    Minimax M2.5 Officially Out
  • @skylermiao7 Skyler Miao on x
    With M2.5 we're also shipping MiniMax Experts. general agents sound nice until you realize they don't know your stack, your domain, or how you actually work. now you can build experts that do. share yours, grab others'. community already made: clawbot assistant, crypto trading
  • @minimaxagent @minimaxagent on x
    🧠 Meet Expert Collection from MiniMax Agent A team of specialized AI experts — office productivity in docs, Excel, PDFs & slides, finance in research & McKinsey-style decks, and coding — working together inside MiniMax Agent. Test directly with our showcase queries or bring [vide…
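As a rough illustration of the headline pricing ($0.30 per 1M input tokens, $1.20 per 1M output tokens), the per-request cost works out as sketched below. The token counts in the example are hypothetical, chosen only to show the arithmetic, and are not figures from any of the posts above:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.30, output_rate: float = 1.20) -> float:
    """Cost in USD for one request, given per-1M-token rates.

    Default rates are the listed MiniMax M2.5 prices:
    $0.30/1M input tokens, $1.20/1M output tokens.
    """
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Hypothetical agentic coding turn: 50k tokens in, 10k tokens out.
cost = request_cost(50_000, 10_000)
print(f"${cost:.4f}")  # $0.0270
```

At these rates, even a long-horizon agent session consuming millions of tokens stays in the range of a few dollars, which is the cost profile the "~13x cheaper than Opus" comments above are reacting to.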