Mistral releases Small 4, its first model to unify the reasoning, multimodal, and coding capabilities of its flagship Magistral, Pixtral, and Devstral models
Today, we are announcing Mistral Small 4, the next major release in the Mistral Small family.
Mistral AI
Related Coverage
- Mistral releases Mistral Small 4 model under Apache 2.0 licence TestingCatalog · Erin
- Introducing Mistral Small 4. Big new release from Mistral today (despite the name) … Simon Willison's Weblog · Simon Willison
- Mistral AI Releases Mistral Small 4: A 119B-Parameter MoE Model that Unifies Instruct, Reasoning, and Multimodal Workloads MarkTechPost · Asif Razzaq
- 🔥 Mistral Small 4 is here — one model, multiple modes. — Mistral just released Mistral Small 4, a unified model designed to handle everything from fast responses to deep reasoning. … Gayathri G
- Mistral's new Small 4 model punches above its weight with 128 expert modules The Decoder · Jonathan Kemper
- Leanstral: Open-Source foundation for trustworthy vibe-coding Mistral AI
- Leanstral: Open-source agent for trustworthy coding and formal proof engineering Hacker News
- Leanstral: Open-Source foundation for trustworthy vibe-coding | Mistral AI Lobsters
- We released Leanstral, the first open-source code agent designed for Lean 4. Please give it a go and share feedback, we will continue to work hard to make it very useful for our users. … Indraneel Mukherjee
- Mistral boasts code-proofing agent offers champagne performance on a budget bière The Register · Thomas Claburn
- Nvidia expands open AI model portfolio and enlists partners for frontier development SiliconANGLE · Paul Gillin
- Nvidia brings together AI labs to build the next generation of open base models The New Stack · Frederic Lardinois
- Nvidia's Nemotron coalition brings eight AI labs together to build open frontier models Tom's Hardware · Luke James
Discussion
- r/LocalLLaMA on reddit: Mistral Small 4 | Mistral AI
- r/MistralAI on reddit: Introducing Mistral Small 4
- @testingcatalog on x: Mistral AI announced a new open-source Mistral Small 4 model under the Apache 2.0 licence. A new model is now available on Mistral Playground. “One model to do it all” 👀 [image]
- David Hendrickson (@teksedge) on x: This was unexpected. @MistralAI released a new small model today. Have to compare it with Gemma4 (when it releases) and with Qwen3.5-27B, 35B and 122B. [image]
- Prince Canuma (@prince_canuma) on x: Day-0 support on MLX for Mistral Small 4 🚀 Congratulations to the @MistralAI team on the release. [image]
- @vllm_project on x: 🎉 Congrats to @MistralAI on releasing Mistral Small 4 — a 119B MoE model (6.5B active per token) that unifies instruct, reasoning, and coding in one checkpoint. Multimodal, 256K context. Day-0 support in vLLM — MLA attention backend, tool calling, and configurable reasoning [imag…
- Ara (@arafatkatze) on x: You know Mistral has lost the race when they only benchmark against themselves.
- @mistraldevs on x: 🎮 Try it now: - Mistral API and AI Studio: https://console.mistral.ai/ - Hugging Face Repository: https://huggingface.co/... - Developers can prototype with Mistral Small 4 for free on NVIDIA GPUs at https://build.nvidia.com/, Mistral Small 4 is also available day-0 as an NVIDIA …
- Rayan A Cader (@rayanabdulcader) on x: Mistral Small 4 is 119B parameters but only activates a fraction at a time, so you get flagship-level reasoning at 3x the throughput and 40% faster than their previous models. 256k context window, configurable reasoning, fully open source. One model that replaces their whole lineup 🔥
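The "only activates a fraction" claim can be sanity-checked against the figures quoted elsewhere in this roundup (119B total parameters, roughly 6.5B active per token, per the vLLM post). A quick back-of-envelope sketch:

```python
# Back-of-envelope check on the MoE figures quoted in this roundup:
# 119B total parameters, ~6.5B active per token (per the vLLM post).
total_params_b = 119.0   # total parameters, in billions
active_params_b = 6.5    # active parameters per token, in billions

active_fraction = active_params_b / total_params_b
print(f"Active fraction per token: {active_fraction:.1%}")  # prints 5.5%
```

So only about 5-6% of the weights participate in any single forward pass, which is the mechanism behind the throughput and latency gains claimed for the model.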
- @mistraldevs on x: 🔥 Meet Mistral Small 4: One model to do it all. ⚡ 128 experts, 119B total parameters, 256k context window ⚡ Configurable Reasoning ⚡ Apache 2.0 ⚡ 40% faster, 3x more throughput Our first model to unify the capabilities of our flagship models into a single, versatile model. [image…
- @kimmonismus on x: Mistral Small 4 released; a big jump for Mistral, especially compared to their previous models [image]
- @mistraldevs on x: 🧠 With the new reasoning_effort parameter, users can dynamically adjust the model's behavior - from fast, lightweight responses to powerful, step-by-step reasoning - delivering a significant performance leap over previous generations. [image]
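A minimal sketch of how such a reasoning_effort knob might be passed in a chat-completions request body. Only the parameter name comes from the post above; the model identifier, payload shape, and accepted values ("low"/"medium"/"high") are assumptions for illustration, not confirmed API documentation.

```python
# Hypothetical sketch: building a chat-completions payload with a
# reasoning-effort hint. Model name and the "low"/"medium"/"high"
# value set are assumptions, not taken from official API docs.
import json

def build_request(prompt: str, reasoning_effort: str = "low") -> dict:
    """Assemble a chat-completions payload with a reasoning-effort hint."""
    allowed = {"low", "medium", "high"}  # assumed value set
    if reasoning_effort not in allowed:
        raise ValueError(f"reasoning_effort must be one of {sorted(allowed)}")
    return {
        "model": "mistral-small-4",            # assumed model identifier
        "reasoning_effort": reasoning_effort,  # parameter named in the post
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Prove that 17 is prime.", reasoning_effort="high")
print(json.dumps(payload, indent=2))
```

The point of the sketch is the trade-off it exposes: the same endpoint and prompt can be dialed between fast, lightweight responses and slower step-by-step reasoning by changing a single request field.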
- Wessel van Rensburg (@wildebees) on bluesky: Brilliant that Mistral keeps releasing proper open source models under Apache 2.0. European tech sovereignty requires alternatives to American AI monopolies. The hardware requirements are refreshingly transparent: minimum 4x H100s, recommended 4x H200s. No hidden dependencies,…
- r/LocalLLaMA on reddit: Leanstral: Open-Source foundation for trustworthy vibe-coding
- r/math on reddit: Leanstral: First open-source code agent for Lean 4