VOICE ARCHIVE

Andy Hall

@ahall_research
3 posts
2026-01-07
As prediction markets scale, reliable and credibly neutral resolution mechanisms will become critical. The evolution of social media content moderation is a very tight analogue for this and points to four likely developments: (1) The development of internal expertise in the
Financial Times

Polymarket is disputing that the US mission to capture Nicolás Maduro constituted an “invasion”, refusing to pay out bets on a contract with $10.5M in wagers

Prediction market disputes US raid amounted to an invasion in fight over more than $10.5mn in wagers

2025-12-02
This is a super interesting and deep document from Anthropic detailing Claude's values and charge. You can see some conceptual stretching going on here where “safe” is being recast to justify reducing refusals because it would be “unsafe” to be “unhelpful” to users. This seems
Simon Willison's Weblog

A Claude user gets Claude 4.5 Opus to generate a 14K-token document that Claude calls its “Soul overview”; an Anthropic employee confirms the doc's validity

This appeared to be a document that, rather than being added to the system prompt, was instead used to train the personality of the model during the training run.

2025-11-14
Asking an AI to grade itself for political bias is not the right way to assess political bias.
Axios

Anthropic open sources a method to score AI model political evenhandedness; Gemini 2.5 Pro got 97%, Grok 4 96%, Claude Opus 4.1 95%, GPT-5 89%, and Llama 4 66%

Ina Fried / Axios