
Chronicles


Google and UCB researchers detail “inference-time search”, which some call a fourth AI scaling law, though experts are skeptical of its usefulness in many cases

Eric Zhao: But Can It Deliver? Why We Can't Escape Brute-Force Search

Bluesky: Dave Lee / @davelee.me: An AI that adds "but there's reason to be skeptical" to the end of every sentence in a story about AI [embedded post]

X: Ethan Mollick / @emollick: So it looks like there's a third scaling law: you can make models better by training them with more compute, by having them "think" for longer about an answer, or by generating large numbers of answers in parallel and picking good ones. Each might be increased independently.

X: Eric Zhao / @ericzhao28: Thinking for longer (e.g. o1) is only one of many axes of test-time compute. In a new @Google_AI paper, we instead focus on scaling the search axis. By just randomly sampling 200x & self-verifying, Gemini 1.5 ➡️ o1 performance. The secret: self-verification is easier at scale! [image]
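The "sampling 200x & self-verifying" idea described above is a best-of-N search: draw many candidate answers independently, score each with a verifier, and keep the highest-scoring one. The sketch below is a minimal, hypothetical illustration of that loop, not the paper's implementation; `generate` and `verify` are stand-ins (here a toy random generator and an exact-match checker) for a model's sampling and self-verification calls.

```python
import random


def generate(prompt, rng):
    # Hypothetical stand-in for sampling a model answer: noisy,
    # so only a fraction of candidates are correct.
    return 42 if rng.random() < 0.1 else rng.randint(0, 100)


def verify(prompt, answer):
    # Hypothetical stand-in for self-verification: scores a candidate.
    # In the setup described above, the model checks its own output.
    return 1.0 if answer == 42 else 0.0


def best_of_n(prompt, n=200, seed=0):
    """Sample n candidate answers and keep the one the verifier
    scores highest (best-of-N / search-axis scaling)."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda a: verify(prompt, a))
```

The point of the toy numbers: a single sample is right only ~10% of the time, but with 200 independent samples the chance that none is correct is vanishingly small, so a reliable verifier turns a weak generator into a strong system, which is why verification quality becomes the bottleneck as N grows.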

TechCrunch