MiniMax releases M2.7, a proprietary “self-evolving” LLM that the company used to build, monitor, and optimize the model's own reinforcement learning harnesses
In the last few years, Chinese AI startup MiniMax has become one of the most exciting companies in the crowded global AI marketplace …
VentureBeat · Carl Franzen
Related Coverage
- MiniMax M2.7: Early Echoes of Self-Evolution · MiniMax
- MiniMax launches M2.7 model on MiniMax Agent and APIs · TestingCatalog · Erin
Discussion
- @artificialanlys on X:
MiniMax has released MiniMax-M2.7, delivering GLM-5-level intelligence for less than one third of the cost MiniMax-M2.7 from @MiniMax_AI scores 50 on the Artificial Analysis Intelligence Index, an 8-point improvement over MiniMax-M2.5, which was released one month ago. This is [i…
- @minimax_ai on X:
Introducing MiniMax-M2.7, our first model which deeply participated in its own evolution, with an 88% win-rate vs M2.5 - Production-Ready SWE: With SOTA performance in SWE-Pro (56.22%) and Terminal Bench 2 (57.0%), M2.7 reduced intervention-to-recovery time for online incidents […
- @kimmonismus on X:
MiniMax M2.7 released! And it's a big one. Highlights: Self-evolving - first model that helped build itself, running 100+ autonomous optimization loops during its own RL training (30% internal improvement). Strong coder - 56.2% on SWE-Pro (near Opus 4.6), 55.6% on VIBE-Pro, [image]
- @ollama on X:
MiniMax-M2.7 is now available on Ollama's cloud. Made for coding and agentic tasks 🖥️ Try it inside Claude Code: ollama launch claude --model minimax-m2.7:cloud 🦞 Use it with OpenClaw: ollama launch openclaw --model minimax-m2.7:cloud If you already have OpenClaw
- @minimax_ai on X:
During the iteration process, we also realized that the model's ability to recursively evolve its harness is equally critical. Our internal harness autonomously collects feedback, builds evaluation sets for internal tasks, and based on this continuously iterates on its own [image…
- Erik Voorhees (@erikvoorhees) on X:
MiniMax M2.7 is now live in Venice (both API and web) Potentially the best cost/performance model for your @openclaw
- @arena on X:
MiniMax M2.7 is ranked #8 in Code Arena. It's also the most cost-efficient of the top 10 at $0.30 / $1.20 per MToken. Congrats to the team at @MiniMax_AI 👏 [image]
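For context on what those Code Arena rates ($0.30 per million input tokens, $1.20 per million output tokens) mean per request, here is a minimal back-of-envelope sketch; the token counts in the example are illustrative assumptions, not figures from the source:

```python
# Quoted M2.7 rates from the Code Arena post: USD per million tokens.
INPUT_RATE = 0.30 / 1_000_000   # USD per input (prompt) token
OUTPUT_RATE = 1.20 / 1_000_000  # USD per output (completion) token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Illustrative example: a 10k-token prompt with a 2k-token completion.
print(f"${request_cost(10_000, 2_000):.4f}")  # → $0.0054
```

At these rates, even a fairly large agentic request costs well under a cent, which is the "most cost-efficient of the top 10" claim in concrete terms.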
- @openrouter on X:
MiniMax M2.7 from @MiniMax_AI is live on OpenRouter! M2.7 sees a large jump in agentic and tool calling capabilities. [image]
- @kimmonismus on X:
Ngl, that's really fascinating: MiniMax M2.7 participated in its own development. They had the model run 100+ autonomous loops, analyzing failure trajectories, modifying scaffold code, running evals, and deciding what to keep or revert. Result: 30% performance improvement on [ima…
- @arena on X:
MiniMax M2.7 - the latest from @MiniMax_AI is ready for you in the Text and Code Arena! Let's see how it stacks up to real-world use. In Text Arena, we'll soon be able to compare its performance across multiple key categories like: Math, Coding, Creative Writing, Expert and [imag…