2025-12-23
MiniMax M2.1 is officially live 🚀 Built for real-world coding and AI-native organizations — from vibe builds to serious workflows. A SOTA 10B-activated OSS coding & agent model, scoring 72.5% on SWE-multilingual and 88.6% on our newly open-sourced VIBE-bench, exceeding leading [video]
MiniMax
China's MiniMax releases M2.1, an upgrade to its open-source M2 model that it says has “significantly enhanced” coding capabilities in Rust, Java, and others
MiniMax has been continuously transforming itself in a more AI-native way. The core driving forces of this process are models …
SOTA across SWE-Verified, SWE-Multilingual, Multi-SWE, VIBE-Bench, and Terminal-Bench 2.0. [image]
2025-06-17
Day 1/5 of #MiniMaxWeek: We're open-sourcing MiniMax-M1, our latest LLM — setting new standards in long-context reasoning. - World's longest context window: 1M-token input, 80k-token output - State-of-the-art agentic use among open-source models - RL at unmatched efficiency: [image]
Bloomberg
Shanghai-based MiniMax open sources MiniMax-M1, a model for complicated productivity tasks that supports 1M input tokens and that it says beats DeepSeek's R1-0528
Chinese AI upstart MiniMax released a new large language model, joining a slew of domestic peers inspired to surpass DeepSeek in the field of reasoning AI.
3️⃣ Visualizations Prompt: Create an HTML page with a canvas-based animated particle background. The particles should move smoothly and connect when close. Add a central heading text over the canvas. Canvas+JS, and the visuals slap.👇 [video]
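For reference, the simulation core such a prompt asks for is small. Below is a minimal sketch (hypothetical helper names, not MiniMax's output): particles with random velocities bounce inside a box, and any pair closer than a threshold gets connected by a line. The canvas drawing itself is omitted so the logic stands alone.

```javascript
// Create n particles with random positions and velocities inside a w×h box.
function createParticles(n, w, h) {
  return Array.from({ length: n }, () => ({
    x: Math.random() * w,
    y: Math.random() * h,
    vx: (Math.random() - 0.5) * 2,
    vy: (Math.random() - 0.5) * 2,
  }));
}

// Advance each particle one step, bouncing off the box edges.
function step(particles, w, h) {
  for (const p of particles) {
    p.x += p.vx;
    p.y += p.vy;
    if (p.x < 0 || p.x > w) p.vx = -p.vx;
    if (p.y < 0 || p.y > h) p.vy = -p.vy;
  }
}

// Return index pairs of particles closer than maxDist — these get lines.
function closePairs(particles, maxDist) {
  const pairs = [];
  for (let i = 0; i < particles.length; i++) {
    for (let j = i + 1; j < particles.length; j++) {
      const dx = particles[i].x - particles[j].x;
      const dy = particles[i].y - particles[j].y;
      if (dx * dx + dy * dy < maxDist * maxDist) pairs.push([i, j]);
    }
  }
  return pairs;
}
```

In a browser you would call `step` inside a `requestAnimationFrame` loop, draw each particle with `ctx.arc` and each pair from `closePairs` with `ctx.moveTo`/`ctx.lineTo`, and position the heading over the canvas with absolute CSS positioning.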