Chronicles

The story behind the story

GPT-5.3-Codex-Spark is OpenAI's first AI model to run on chips from Nvidia rival Cerebras; OpenAI says Codex has more than 1M weekly active users

OpenAI is releasing its first artificial intelligence model that runs on chips from semiconductor startup Cerebras Systems Inc.

Bloomberg Rachel Metz

Discussion

  • @openaidevs on X
    GPT-5.3-Codex-Spark is the first milestone in our partnership with @cerebras. It provides a faster tier on the same production stack as our other models, complementing GPUs for workloads where low latency is critical. https://openai.com/...
  • @openaidevs on X
    Introducing GPT-5.3-Codex-Spark, our ultra-fast model purpose built for real-time coding. We're rolling it out as a research preview for ChatGPT Pro users in the Codex app, Codex CLI, and IDE extension. [video]
  • @kylebrussell Kyle Russell on X
    I thought this was going to come like next year, not now
  • @benbajarin Ben Bajarin on X
    As the world moves to inference, dedicated inference designs will be prominent. Great customer case for @cerebras
  • @cerebras on X
    OpenAI Codex-Spark, powered by Cerebras. You can now just build things faster, at 1,000 tokens/s. [video]
  • @mweinbach Max Weinbach on X
    Codex Spark was trained on GPUs for Cerebras hardware, but OpenAI added support to their inference framework for Cerebras, meaning they're ready to load future models onto it too. GPUs are still foundational for inference and training, though. [image]