VOICE ARCHIVE

Jan Kulveit

@jankulveit
3 posts
2026-02-24
Very nicely written summary of the understanding of the “simulators/personas” ontology as understood by the “frontier in understanding” ~2 years ago. (Great the post does not claim originality!) Also it is somewhat obsolete now, by ~1-2 years.
Anthropic

Anthropic introduces “persona selection model”, a theory to explain AI's human-like behavior, and details how AI personas form in pre-training and post-training

AI assistants like Claude can seem surprisingly human.  They express joy after solving tricky coding tasks.

2026-02-02
Great new paper on power dynamics in human-AI interactions. Often deep, informative, and sometimes funny/bizarre. Some of my favourite bits / thoughts: 1. Before the “AI period”, we have the “cyborg period”. However there is a very wide spectrum of what the human role is in
Ars Technica

Anthropic and UofT researchers detail “disempowerment patterns in real-world LLM usage” where AI potentially distorts a user's reality, beliefs, or actions

At this point, we've all heard plenty of stories about AI chatbots leading users to harmful actions, harmful beliefs, or simply incorrect information.

2026-01-02
Great econ thinking, as I'd expect from @pawtrammell. Yet I have close to zero trust in the conclusions when read as futurism / directly applied to “how the world will look.” Several crucial considerations seem missing: CC1: Capital will likely end up owned by AIs, not humans.
Philosopher Count

How AI automation can fulfill Thomas Piketty's predictions on rising economic inequality, and why highly progressive taxes on capital can help slow the spiral

Piketty was wrong about the past.  He's probably right about the future.  —  1. Introduction X: @briancalbrecht , @saikatc , @harryh , @jankulveit , @daniel_271828 , @krishnanrohit...