OpenAI debuts a research preview of GPT-5.3-Codex-Spark, a smaller version of GPT-5.3-Codex that it claims generates code 15 times faster, for ChatGPT Pro users
ZDNET · David Gewirtz
Related Coverage
- 🗞️ OpenAI shipped blazing-fast GPT-5.3-Codex-Spark coding model Rohan's Bytes · Rohan Paul
- OpenAI's Codex and Anthropic's Claude spark coding revolution as developers say they've abandoned traditional programming Fortune · Beatrice Nolan
- OpenAI debuts Codex-Spark powered by Cerebras infra TestingCatalog · Alexey Shabanov
- OpenAI GPT-5.3-Codex-Spark Now Running at 1K Tokens Per Second on BIG Cerebras Chips ServeTheHome · Patrick Kennedy
- ChatGPT-5.3-Codex Is Also Good At Coding — OpenAI is back with a new Codex model, released the same day as Claude Opus 4.6. Don't Worry About the Vase · Zvi Mowshowitz
- OpenAI launches GPT-5.3-Codex-Spark on Cerebras chips — marks AI giant's first production deployment away from Nvidia Tom's Hardware · Luke James
- OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips Ars Technica · Benj Edwards
- OpenAI unveils GPT-5.3 Codex Spark, a coding model built for speed Business Today
- OpenAI released GPT-5.3-Codex-Spark, a real-time coding model Help Net Security · Anamarija Pogorelec
- Introducing GPT-5.3-Codex-Spark. OpenAI announced a partnership with Cerebras on January 14th. … Simon Willison's Weblog · Simon Willison
- OpenAI introduces GPT-5.3-Codex-Spark, an ultra-fast coding model powered by Cerebras Neowin · Pradeep Viswanathan
- OpenAI unveils ultra-fast GPT 5.3 Codex Spark model for real-time coding Digit · Ayushi Jain
- Introducing GPT-5.3-Codex-Spark — An ultra-fast model for real-time coding in Codex. OpenAI
- OpenAI's rapid GPT-5.3-Codex model moves beyond simple coding tasks SiliconANGLE · Mike Wheatley
- OpenAI Releases a Research Preview of GPT-5.3-Codex-Spark: A 15x Faster AI Coding Model Delivering Over 1000 Tokens Per Second on Cerebras Hardware MarkTechPost · Asif Razzaq
- OpenAI says GPT-5.3-Codex-Spark is its first AI model that runs on Cerebras chips, after they signed a $10B+ deal in January; Codex has 1M+ weekly active users Bloomberg Law · Rachel Metz
- A new version of OpenAI's Codex is powered by a new dedicated chip TechCrunch · Lucas Ropek
- OpenAI Expands Line with Faster Codex-Spark Model Techstrong.ai · Jon Swartz
- Introducing OpenAI GPT-5.3-Codex-Spark Powered by Cerebras Cerebras · James Wang
- OpenAI's new Codex Spark model is built for speed The New Stack · Frederic Lardinois
- OpenAI deploys Cerebras chips for ‘near-instant’ code generation in first major move beyond Nvidia VentureBeat · Michael Nuñez
- Developers are still weighing the pros and cons of AI coding agents Fast Company · Mark Sullivan
- 🔥 OpenAI unveils GPT-5.3-Codex-Spark — an ultra-fast, real-time AI coding model in research preview. … Gayathri G
- Today we launched GPT-5.3-Codex-Spark! — This is the first release in our partnership with OpenAI. — Codex-Spark is powered by the Cerebras Wafer-Scale Engine, making it really fast! … Sean Lie
- My mind was blown the first time I saw how fast Codex Spark is for two reasons. — 1) It's incredibly fast. … Bryant McCombs
- super excited to see what people build with codex-spark! — the team at Cerebras has been awesome to build with, and the codex team is obviously on an absolute tear. … Anuj Saharan
- I'm really excited about this one. — Today, we're releasing a research preview of GPT-5.3-Codex-Spark, a smaller version of GPT-5.3-Codex, and our first model designed for real-time coding. … Julien Simiand
Discussion
- Sam Altman (@sama) on X: GPT-5.3-Codex-Spark is launching today as a research preview for Pro. More than 1000 tokens per second! There are limitations at launch; we will rapidly improve.
- @openai on X: GPT-5.3-Codex-Spark is now in research preview. You can just build things—faster. [video]
- Andrew Feldman (@andrewdfeldman) on X: Just one month after announcing our partnership with @OpenAI, we're launching our first model together: OpenAI Codex-Spark, powered by @cerebras. Codex-Spark is built for real-time software development. In coding, responsiveness is the product. It is not a nice to have. [image]
- @scaling01 on X: GPT-5.3-Codex-Spark size: ~700B@30B. OpenAI's new GPT-5.3-Codex-Spark is the first model for which we can somewhat reliably estimate its size... GPT-5.3-Codex-Spark gets “over 1000 tokens/s”, so probably 1000-1100 tokens/s... Based on that GPT-5.3-Codex-Spark should be: ~30B activ…
- Kevin Weil (@kevinweil) on X: Coding at 1000 tokens/sec is a mind-expanding experience. You have to try this.
- Pietro Schirano (@skirano) on X: Been using this model for a bit now, the combination of speed and intelligence is insane. It genuinely feels like a new paradigm shift. Excited to plug it into more specialized coding pipelines.
- @openaidevs on X: Introducing GPT-5.3-Codex-Spark, our ultra-fast model purpose built for real-time coding. We're rolling it out as a research preview for ChatGPT Pro users in the Codex app, Codex CLI, and IDE extension. [video]
- Dan Shipper (@danshipper) on X: BREAKING: @OpenAI just launched a new Codex model, Spark—it serves at 1,000 tokens per second. It's blow your hair back fast. It's their first model publicly released on Cerebras hardware, and you can see the difference. We've been testing internally @every for the last week o…
- Elie (@eliebakouch) on X: ok this is very interesting, this is not the same perf than gpt5.3, and might not be the same arch as well? > Codex-Spark marks the first milestone in our partnership with Cerebras. Codex-Spark is optimized to feel near-instant when served on ultra-low latency hardware (from [ima…
- Romain Huet (@romainhuet) on X: Hello GPT-5.3-Codex-Spark! ✨ Our first real-time coding model. It is... FAST. 1,000+ tokens per second. Once you experience latency this low, it's hard to go back. This is an exciting first milestone in our partnership with @Cerebras. [video]
- Brent Schooley (@heccbrent) on X: This was so fun to work on. GPT-5.3-Codex-Spark built a snake game so fast that I was able to start setting high scores in about 9 seconds. If you have ChatGPT Pro you're going to want to check this out today! [video]
- Simon Smith (@_simonsmith) on X: We've seen how much speed affects people's model preferences recently (e.g. the arena @swyx is running), so I think Codex Spark will be well-received. Also interesting that this initial release is a step towards combining long-horizon and real-time agents, including delegating to…
- Alexander Embiricos (@embirico) on X: ✨GPT-5.3-Codex-Spark✨ We're rolling out our first @cerebras model to Pro users today. It's fast! Rollout will be slow and very capacity constrained. Excited to roll out to more folks, and improve it with your feedback.
- Derrick Choi (@derrickcchoi) on X: One of the top pieces of feedback we get about @OpenAI Codex: “make it faster”. We addressed it in a big way with our ultra fast Codex-Spark model (research preview for Pro). Available in the latest Codex app, CLI, and IDE extension. Here's Spark vs 5.3-Codex-Low side-by-side [vi…
- Max Weinbach (@mweinbach) on X: GPT 5.3 Codex Spark! It's a smaller version of GPT 5.3 Codex running at over 1000 tokens per second on Cerebras hardware. GPT 5.3 Codex was trained for GB200, on GB200. I wonder what this is? Maybe GPT 5.3 Codex that used Cerebras Reap?
- Thomas Ricouard (@dimillian) on X: I've been playing with GPT-5.3-Codex-Spark this week, and it's really a ✨ experience. Basically, using it for smaller tasks, context scanning, quick analysis, and smaller code edits. It feels so natural and instant; it's really hard to go back to other models.
- @openai on X: Rolling out today to ChatGPT Pro users in the Codex app, CLI, and IDE extension. https://openai.com/...
- Sung Kim (@sungkim) on Bluesky: OpenAI has released GPT-5.3-Codex-Spark: 1000 tokens per second — openai.com/index/introd... [embedded post]
- r/singularity on Reddit: Introducing GPT-5.3-Codex-Spark. An ultra-fast model for real-time coding in Codex
- r/OpenAI on Reddit: Introducing GPT-5.3-Codex-Spark