VOICE ARCHIVE

@cerebras
13 posts
2026-02-13
OpenAI Codex-Spark powered by Cerebras You can now just build things faster—at 1,000 tokens/s. [video]
2026-02-13 View on X
Bloomberg

OpenAI says GPT-5.3-Codex-Spark is its first AI model that runs on Cerebras chips, after they signed a $10B+ deal in January; Codex has 1M+ weekly active users

OpenAI is releasing its first artificial intelligence model that runs on chips from semiconductor startup Cerebras Systems Inc. …

at 1,000 tokens/s. [video] @openaidevs: Introducing GPT-5.3-Codex-Spark, our ultra-fast model purpose-built for real-time coding. We're rolling it out as a research preview for ChatG...

OpenAI Codex-Spark powered by Cerebras You can now just build things faster—at 1,000 tokens/s. [video]
2026-02-13 View on X
ZDNET

OpenAI debuts a research preview of GPT-5.3-Codex-Spark, a smaller version of GPT-5.3-Codex that it claims generates code 15 times faster, for ChatGPT Pro users

ZDNET's key takeaways — OpenAI targets “conversational” coding, not slow batch-style agents. — Big latency wins: 80% faster roundtrip, 50% faster time-to-first-token.

2026-02-09
Fast inference = 6x markup? Don't be giving us any ideas 😼
2026-02-09 View on X
Financial Times

How Anthropic's bet on enterprise users is paying off; sources say Anthropic's investor guidance claims annualized revenue will exceed $30B by the end of 2026

Start-up's bet on enterprise users is paying off as coding tools fuel revenue and investor frenzy

Fast inference = 6x markup? Don't be giving us any ideas 😼
2026-02-09 View on X
Simon Willison's Weblog

Anthropic rolls out a fast mode for Claude Opus 4.6 in research preview, saying it offers the same model quality 2.5x faster but costs 6x more

excited that we're making it available outside Anthropic too. Dean W. Ball / @deanwball: I would like to know more about the experimental Claude scaffold that caused Opus 4.6 to more ...

Opus is usually $5/million input and $25/million output. The new fast mode is $30/million input and $150/million output!
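The "6x markup" in the post follows directly from those listed rates. A minimal sketch that checks the arithmetic — the per-million-token prices come from the excerpt above, while the request size (20k tokens in, 2k out) is a made-up example:

```python
# Claude Opus 4.6 list prices from the excerpt, in dollars per million tokens.
STANDARD = {"input": 5.0, "output": 25.0}
FAST = {"input": 30.0, "output": 150.0}  # new fast mode

def cost(prices, input_tokens, output_tokens):
    """Dollar cost of one request at the given per-million-token rates."""
    return (input_tokens * prices["input"]
            + output_tokens * prices["output"]) / 1_000_000

# Hypothetical request: 20k input tokens, 2k output tokens.
std = cost(STANDARD, 20_000, 2_000)   # $0.15
fast = cost(FAST, 20_000, 2_000)      # $0.90
print(f"standard: ${std:.2f}, fast: ${fast:.2f}, markup: {fast / std:.0f}x")
```

Because both input and output rates are scaled by the same factor, the markup is 6x regardless of the input/output mix.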

2026-02-05
Cerebras Systems today announced the closing of a $1 billion Series H financing at a post-money valuation of approximately $23 billion. The round was led by Tiger Global, with participation from Benchmark, Fidelity Management & Research Company, Atreides Management, Alpha Wave
2026-02-05 View on X
Bloomberg

AI chipmaker Cerebras raised a ~$1B Series H led by Tiger Global at a $23B valuation, up from $8.1B in September 2025; Benchmark, Fidelity, and AMD invested too

AI chip provider Cerebras Systems Inc. has raised about $1 billion in a new funding round, bolstering the company's efforts to compete with Nvidia Corp.

2026-01-15
OpenAI🤝Cerebras https://openai.com/... [image]
2026-01-15 View on X
Wall Street Journal

OpenAI strikes a multibillion-dollar, three-year deal to buy 750 MW of computing capacity from Cerebras, which Sam Altman has backed; sources: it's a $10B+ deal

The ChatGPT-maker is racing to secure more computing power, especially for responding to user queries

2025-10-31
Today, @cognition released SWE-1.5 - the world's fastest coding agent, powered by Cerebras. SWE-1.5 achieves frontier-level coding ability, comparable to Sonnet 4.5 and surpassing GPT-5. Cerebras and Cognition engineers worked hand in hand over the past few weeks, training a [image]
2025-10-31 View on X
Cognition

Cognition releases SWE-1.5, a new coding model in Windsurf, saying it partnered with Cerebras to serve SWE-1.5 at speeds up to 13x faster than Claude Sonnet 4.5

lots of new paradigms/UX to figure out. @cognition: Today we're releasing SWE-1.5, our fast agent model. It achieves near-SOTA coding performance while setting a new standard for ...

Sonnet but 10x faster 🟧
2025-10-31 View on X
Cognition

Cognition releases SWE-1.5, a new coding model in Windsurf, saying it partnered with Cerebras to serve SWE-1.5 at speeds up to 13x faster than Claude Sonnet 4.5