OpenAI launches GPT-5.4 mini and nano, aimed at agents, coding, and multi-modal workflows, and offering near GPT-5.4-level performance at a much lower cost
ZDNET's key takeaways:
- GPT-5.4 mini runs more than twice as fast as GPT-5 mini.
- The new models aim at agents, coding, and multi-modal workflows.
ZDNET · David Gewirtz
Related Coverage
- Introducing GPT-5.4 mini and nano OpenAI
- OpenAI Just Revealed Cheaper Versions of Its Flagship Model. Here's How to Use Them Inc · Ben Sherry
- GPT-5.4 mini brings some of the smarts of OpenAI's latest model to ChatGPT Free and Go users Engadget · Igor Bonifacic
- Leanstral: Open-Source foundation for trustworthy vibe-coding Mistral AI
- Mistral's new Small 4 model punches above its weight with 128 expert modules The Decoder · Jonathan Kemper
- Introducing Mistral Small 4. Big new release from Mistral today (despite the name) … Simon Willison's Weblog · Simon Willison
- 🔥 Mistral Small 4 is here — one model, multiple modes. — Mistral just released Mistral Small 4, a unified model designed to handle everything from fast responses to deep reasoning. … Gayathri G
- Leanstral: Open-source agent for trustworthy coding and formal proof engineering Hacker News
- Leanstral: Open-Source foundation for trustworthy vibe-coding | Mistral AI Lobsters
- OpenAI Launches GPT-5.4 mini and nano iPhone in Canada · Usman Qureshi
- GPT-5.4 mini and GPT-5.4 nano, which can describe 76,000 photos for $52 Simon Willison's Weblog · Simon Willison
- OpenAI Releases GPT-5.4 Mini and Nano, Which Could Be More Useful Than the Big Model Decrypt · Jose Antonio Lanz
- OpenAI's GPT-5.4 mini and nano are built for the subagent era The New Stack · Frederic Lardinois
- OpenAI's Latest AI Models Are Built for Speed CNET · Jon Reed
- OpenAI releases GPT-5.4 mini and nano, its ‘most capable small models yet’ 9to5Mac · Zac Hall
- OpenAI ships GPT-5.4 mini and nano, faster and more capable but up to 4x pricier The Decoder · Matthias Bastian
- OpenAI adds GPT-5.4 mini to ChatGPT, nano goes API-only The Mac Observer · Rajat Saini
- OpenAI's new GPT-5.4 cuts size, boosts speed The Deep View · Sabrina Ortiz
- ChatGPT's free tier gets GPT 5.4 mini model with improved coding capabilities 9to5Google · Ben Schoon
- GPT 5.4 Mini & 5.4 Nano: OpenAI Built a Team of AI Interns for Your AI Boss TheNeuron · Grant Harvey
Discussion
-
@altryne
Alex Volkov
on x
Very interesting release! OpenAI ships 2 tiny / fast models: GPT 5.4 Mini (and Nano) that are MUCH cheaper (1/3 the price in Codex) and can do very comparable tasks, faster. Great for sub-agents, great for Heartbeat tasks inside your Claws and especially great for Browser [image…
-
@scaling01
@scaling01
on x
GPT-5.4-mini looks really good for computer-use [image]
-
@scaling01
@scaling01
on x
GPT-5.4-mini and nano on SWE-Bench-Pro are looking pretty good GPT-5.4-nano-xhigh beats GPT-5.4-low [image]
-
@openai
@openai
on x
GPT-5.4 mini is available today in ChatGPT, Codex, and the API. Optimized for coding, computer use, multimodal understanding, and subagents. And it's 2x faster than GPT-5 mini. https://openai.com/... [image]
-
@testingcatalog
@testingcatalog
on x
Mistral AI announced a new open-source Mistral Small 4 model under the Apache 2.0 licence. The new model is now available on Mistral Playground. “One model to do it all” 👀 [image]
-
@teksedge
David Hendrickson
on x
This was unexpected. @MistralAI released a new small model today. Have to compare it with Gemma4 (when it releases) and with Qwen3.5-27B, 35B, and 122B. [image]
-
@prince_canuma
Prince Canuma
on x
Day-0 support on MLX for Mistral Small 4🚀 Congratulations to the @MistralAI team on the release. [image]
-
@vllm_project
@vllm_project
on x
🎉 Congrats to @MistralAI on releasing Mistral Small 4 — a 119B MoE model (6.5B active per token) that unifies instruct, reasoning, and coding in one checkpoint. Multimodal, 256K context. Day-0 support in vLLM — MLA attention backend, tool calling, and configurable reasoning [imag…
-
@arafatkatze
Ara
on x
You know Mistral has lost the race when they only benchmark against themselves.
-
@mistraldevs
@mistraldevs
on x
🎮 Try it now: - Mistral API and AI Studio: https://console.mistral.ai/ - Hugging Face Repository: https://huggingface.co/... - Developers can prototype with Mistral Small 4 for free on NVIDIA GPUs at https://build.nvidia.com/, Mistral Small 4 is also available day-0 as an NVIDIA …
-
@rayanabdulcader
Rayan A Cader
on x
Mistral Small 4 is 119B parameters but only activates a fraction at a time, so you get flagship-level reasoning at 3x the throughput and 40% faster than their previous models. 256k context window, configurable reasoning, fully open source: one model that replaces their whole lineup 🔥
-
@mistraldevs
@mistraldevs
on x
🔥 Meet Mistral Small 4: One model to do it all. ⚡ 128 experts, 119B total parameters, 256k context window ⚡ Configurable Reasoning ⚡ Apache 2.0 ⚡ 40% faster, 3x more throughput Our first model to unify the capabilities of our flagship models into a single, versatile model. [image…
-
@kimmonismus
@kimmonismus
on x
Mistral small 4 released; big jump for mistral, especially compared to their previous models [image]
-
@mistraldevs
@mistraldevs
on x
🧠 With the new reasoning_effort parameter, users can dynamically adjust the model's behavior - from fast, lightweight responses to powerful, step-by-step reasoning - delivering a significant performance leap over previous generations. [image]
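The `reasoning_effort` parameter described in the post above might look like this in a request payload. This is only a sketch: the field name and the idea of discrete effort levels come from the post, while the specific level names (`"low"`, `"medium"`, `"high"`) and the model id are assumptions, not confirmed documentation.

```python
def build_request(prompt: str, effort: str = "low") -> dict:
    """Build a chat-completions-style payload with a reasoning-effort knob.

    `reasoning_effort` is the parameter named in the announcement; the
    allowed values and the model id below are illustrative assumptions.
    """
    if effort not in ("low", "medium", "high"):
        raise ValueError(f"unknown reasoning effort: {effort!r}")
    return {
        "model": "mistral-small-4",        # assumed model id
        "reasoning_effort": effort,        # fast replies vs. step-by-step reasoning
        "messages": [{"role": "user", "content": prompt}],
    }
```

In practice the same model can then serve both quick, lightweight replies (`effort="low"`) and slower deliberate reasoning (`effort="high"`) without switching checkpoints, which is the "one model to do it all" pitch.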
-
@wildebees
Wessel van Rensburg
on bluesky
Brilliant that Mistral keeps releasing proper open source models under Apache 2.0. European tech sovereignty requires alternatives to American AI monopolies. The hardware requirements are refreshingly transparent: minimum 4x H100s, recommended 4x H200s. No hidden dependencies,…
-
r/LocalLLaMA
on reddit
Mistral Small 4 | Mistral AI
-
r/MistralAI
on reddit
Introducing Mistral Small 4
-
r/LocalLLaMA
on reddit
Leanstral: Open-Source foundation for trustworthy vibe-coding
-
r/OpenAI
on reddit
Introducing GPT-5.4 mini and nano
-
@simonw
Simon Willison
on x
Couldn't resist getting OpenAI Codex to render me a pelican for every combination of model and reasoning effort - I do think gpt-5.4 xhigh came out the best, the pelican has a fish in its beak! [image]
-
@simonw
Simon Willison
on x
Notes and pelicans for today's GPT-5.4 mini and nano releases - the nano model looks like it could describe every image in my 76,000 photo library for $52 total https://simonwillison.net/...
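The "$52 for 76,000 photos" figure above implies a strikingly small per-image cost; a quick back-of-envelope check (using only the two numbers from the post, no assumed token prices):

```python
# Sanity-check the claim: nano could describe a 76,000-photo library for ~$52.
photos = 76_000
total_usd = 52.0
per_image = total_usd / photos
print(f"${per_image:.6f} per image")  # roughly $0.000684, i.e. ~15 images per cent
```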
-
@openaidevs
@openaidevs
on x
We're introducing GPT-5.4 mini and nano, our most capable small models yet. GPT-5.4 mini is more than 2x faster than GPT-5 mini. Optimized for coding, computer use, multimodal understanding, and subagents. For lighter-weight tasks, GPT-5.4 nano is our smallest and cheapest [image…
-
@davis7
Ben Davis
on x
Haiku 4.5 has finally been dethroned by GPT 5.4 mini: - faster - cheaper - smarter - better at tool calling Gemini 3.0 flash still has a place in “turn this video/audio/blob of text into json”, but for any small model task that requires an agent loop (sub agents, search, quick [v…
-
@openrouter
@openrouter
on x
The new @OpenAI GPT-5.4 mini and nano are available now on OpenRouter! In our early testing, mini's increase in speed was useful for staying in the loop for coding agents, and results in better UX for chat apps that allow the models to perform agent tasks mid conversation. [image…
-
@suvanshsanjeev
Suvansh Sanjeev
on x
original vision for nano reasoning was hundreds of subagents working in harmony. with 5.4-nano, you can get that for a dollar. I'll be watching for your demos 👀 insane amount of work on this from @qingquan_song, @astonzhangAZ, Alex Efremov, Sijia Chen, etc to push the
-
@astonzhangaz
Aston Zhang
on x
GPT-5.4, you have company now. Today we're releasing GPT-5.4 Mini and Nano 🚀, bringing many of the strengths of GPT-5.4 to faster, more efficient models. Despite their size, they shine on coding and subagent workflows and can make a big difference for high-volume workloads. [imag…
-
@jetbrains
@jetbrains
on x
GPT-5.4 mini and nano are now available in the AI chat of your JetBrains IDE. Our Developer Advocate gave GPT-5.4 mini a creative challenge - and it responded with a full spring animation. Not bad for something called “mini.” Your turn. Try it in your IDE! [video]
-
@sherwinwu
Sherwin Wu
on x
We're now in a world where you can get 54.4% SWE-Bench PRO performance, and 60% (!) T-Bench 2.0 performance: - at 2x the speed of GPT-5-mini - for 7.5x to 10x cheaper than GPT-5.4 Wild!
-
@mercor_ai
@mercor_ai
on x
We evalled @OpenAI GPT-5.4 mini and nano on APEX-Agents. With xhigh reasoning, mini scores 24.5% Pass@1. It outperforms other lightweight models like Gemini 3.1 Flash Lite (12.8%) as well as midweight models like Sonnet 4.6 (23.7% Pass@1) - but the token $ is just ¼. [image]
-
@scaling01
@scaling01
on x
GPT-5.4-mini is 2.25 times more expensive than GPT-5-mini: $0.75 input / $4.50 output, 400k context. [image]
-
@windsurf
@windsurf
on x
GPT-5.4 mini is now available in Windsurf!
-
@poe_platform
Poe
on x
OpenAI's GPT-5.4-Nano and GPT-5.4-Mini are now live on Poe. GPT-5.4-Nano is a strong fit for fast, high-volume tasks like summarizing transcripts, labeling tickets, rewriting content, quick RAG answers, and running @openclaw flows with Poe where latency and cost matter most. [ima…
-
@dkundel
Dominik Kundel
on x
GPT-5.4-mini is a wildly capable model and gives you ~3.3x more usage on Codex tasks compared to GPT-5.4. It's excellent for spinning up new subagents! [video]
-
@openaidevs
@openaidevs
on x
GPT-5.4 mini is available today in the API, Codex, and ChatGPT. In the API, it has a 400k context window. In Codex, it uses only 30% of the GPT-5.4 quota, letting you handle simpler coding tasks for about one-third of the cost. GPT-5.4 nano is only available in the API.
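The two posts above quote the same Codex fact two ways: mini uses only 30% of the GPT-5.4 quota, which is where the "~3.3x more usage" figure comes from. The arithmetic, for anyone checking:

```python
# GPT-5.4 mini consumes 30% of the GPT-5.4 quota per task in Codex,
# so the same quota stretches to about 1 / 0.30 = 3.33x as many tasks.
quota_fraction = 0.30
usage_multiplier = 1 / quota_fraction
print(f"{usage_multiplier:.1f}x more usage")  # prints "3.3x more usage"
```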