2026-02-24
A very nicely written summary of the "simulators/personas" ontology as understood at the "frontier of understanding" ~2 years ago. (Great that the post does not claim originality!) It is also somewhat obsolete now, by ~1-2 years.
Anthropic
Anthropic introduces “persona selection model”, a theory to explain AI's human-like behavior, and details how AI personas form in pre-training and post-training
AI assistants like Claude can seem surprisingly human. They express joy after solving tricky coding tasks.
2026-02-02
Great new paper on power dynamics in human-AI interactions. Often deep and informative, and sometimes funny/bizarre. Some of my favourite bits / thoughts: 1. Before the "AI period", we have the "cyborg period". However, there is a very wide spectrum of what the human role is in
Ars Technica
Anthropic and UofT researchers detail “disempowerment patterns in real-world LLM usage” where AI potentially distorts a user's reality, beliefs, or actions
At this point, we've all heard plenty of stories about AI chatbots leading users to harmful actions, harmful beliefs, or simply incorrect information.
2026-01-02
Great econ thinking, as I'd expect from @pawtrammell. Yet I have close to zero trust in the conclusions when read as futurism / directly applied to "how the world will look." Several crucial considerations seem missing: CC1: Capital will likely end up owned by AIs, not humans.
Philosopher Count
How AI automation can fulfill Thomas Piketty's predictions on rising economic inequality, and why highly progressive taxes on capital can help slow the spiral
Piketty was wrong about the past. He's probably right about the future. — 1. Introduction