2026-02-24
A common mental model for AI development is that pre-training teaches LLMs to simulate “personas” and post-training selects over these personas. New blog post: We describe this perspective in more detail, survey the evidence, and discuss consequences for AI development.
Anthropic
Anthropic introduces “persona selection model”, a theory to explain AI's human-like behavior, and details how AI personas form in pre-training and post-training
AI assistants like Claude can seem surprisingly human. They express joy after solving tricky coding tasks.
2025-12-04
I strongly recommend that interpretability researchers take a look at this thoughtful post on the GDM interp team's recent research philosophy! Their approach emphasizes making measurable progress on carefully-selected downstream tasks.
AI Alignment Forum
Google DeepMind's mechanistic interpretability team details why it shifted from fully reverse-engineering neural nets to a focus on “pragmatic interpretability”
Neel Nanda / @neelnanda5 : The GDM mechanistic interpretability team has pivoted to a new approach: we're calling it "pragmatic" interpretability. Our pos...
2025-10-08
Very exciting: Anthropic is releasing an open-source version of an alignment auditing agent we use internally. Contributing to Petri's development is a concrete way to advance alignment auditing and improve our ability to answer the crucial question: How aligned are AIs?
Anthropic
Anthropic releases Petri, an open-source tool that uses AI agents for safety testing, and says it observed multiple cases of models attempting to whistleblow
2025-07-23
Subliminal learning: training on model-generated data can transmit traits of that model, even if the data is unrelated. Think: "You can learn physics by watching Einstein do yoga." I'll discuss how this introduces a surprising pitfall for AI developers 🧵 https://x.com/...
Anthropic
Anthropic and other researchers detail “subliminal learning”, where LLMs learn traits from model-generated data that is semantically unrelated to those traits
We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits.
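The surprising part is the mechanism: in the paper's theoretical setting, when student and teacher share an initialization, a distillation step on the teacher's outputs pulls the student toward the teacher in weight space regardless of what the data is, so traits ride along. Here is a minimal NumPy sketch of that intuition; it is an illustrative toy, not the authors' code or setup, and the linear model, the "trait" direction, and all names in it are assumptions made for the demo:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 50

    theta0 = rng.normal(size=d)        # shared initialization
    trait = rng.normal(size=d)
    trait /= np.linalg.norm(trait)     # a "trait" direction in weight space
    teacher = theta0 + 0.5 * trait     # teacher = init nudged along the trait

    def logits(theta, X):
        return X @ theta               # toy linear model

    def trait_score(theta):
        return (theta - theta0) @ trait  # how far weights moved along the trait

    # Distillation data: random inputs, nothing to do with the trait direction.
    X = rng.normal(size=(1000, d))
    y_teacher = logits(teacher, X)

    # Student starts at the shared init and regresses onto the teacher's outputs.
    student, lr = theta0.copy(), 0.1
    for _ in range(200):
        grad = X.T @ (logits(student, X) - y_teacher) / len(X)
        student -= lr * grad

    print("teacher trait score:", round(trait_score(teacher), 3))  # 0.5
    print("student trait score:", round(trait_score(student), 3))  # ~0.5: trait transmitted

One caveat on the toy: in a linear model, any full-rank data identifies the teacher exactly, which overstates the effect. The paper's point is subtler: the pull toward the teacher, trait included, persists even on data filtered to be semantically unrelated to the trait.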