OpenAI says GPT-5.3-Codex-Spark is its first AI model to run on Cerebras chips, at 1,000 tokens/s, after the two signed a $10B+ deal in January; Codex has 1M+ weekly active users
[video] @openaidevs: Introducing GPT-5.3-Codex-Spark, our ultra-fast model purpose-built for real-time coding. We're rolling it out as a research preview for ChatG...
OpenAI debuts a research preview of GPT-5.3-Codex-Spark, a smaller version of GPT-5.3-Codex that it claims generates code 15 times faster, for ChatGPT Pro users
ZDNET's key takeaways: OpenAI targets "conversational" coding, not slow batch-style agents; big latency wins include an 80% faster roundtrip and 50% faster time-to-first-token.
GPT-5.3-Codex-Spark is OpenAI's first AI model to run on chips from Nvidia rival Cerebras; OpenAI says Codex has more than 1M weekly active users
OpenAI is releasing its first artificial intelligence model that runs on chips from semiconductor startup Cerebras Systems Inc. …
How Anthropic's bet on enterprise users is paying off; sources say Anthropic's investor guidance claims annualized revenue will exceed $30B by the end of 2026
Start-up's bet on enterprise users is paying off as coding tools fuel revenue and investor frenzy
Anthropic rolls out a fast mode for Claude Opus 4.6 in research preview, saying it offers the same model quality 2.5x faster but costs 6x more
excited that we're making it available outside Anthropic too. Dean W. Ball / @deanwball: I would like to know more about the experimental Claude scaffold that caused Opus 4.6 to more ...
Opus is usually $5/million input and $25/million output. The new fast mode is $30/million input and $150/million output!
AI chipmaker Cerebras raised a ~$1B Series H led by Tiger Global at a $23B valuation, up from $8.1B in September 2025; Benchmark, Fidelity, and AMD invested too
AI chip provider Cerebras Systems Inc. has raised about $1 billion in a new funding round, bolstering the company's efforts to compete with Nvidia Corp.
OpenAI strikes a multibillion-dollar, three-year deal to buy 750 MW of computing capacity from Cerebras, which Sam Altman has backed; sources: it's a $10B+ deal
The ChatGPT-maker is racing to secure more computing power, especially for responding to user queries
Cognition releases SWE-1.5, a new coding model in Windsurf, saying it partnered with Cerebras to serve SWE-1.5 at speeds up to 13x faster than Claude Sonnet 4.5
lots of new paradigms/UX to figure out. @cognition: Today we're releasing SWE-1.5, our fast agent model. It achieves near-SOTA coding performance while setting a new standard for ...