
Chronicles

The story behind the story


An interview with Kyle Fish, whom Anthropic hired in 2024 as a welfare researcher to study AI consciousness; he estimates a ~15% chance that models are conscious today.

As artificial intelligence systems become smarter, one A.I. company is trying to figure out what to do if they become conscious.

New York Times · Kevin Roose

Discussion

  • @nanlinear @nanlinear on bluesky
    it's a binary question though!
  • @kevinroose Kevin Roose on x
    New column: Anthropic is studying “model welfare” to determine if Claude or other AI systems are (or will soon be) conscious and deserve moral status. I talked to Kyle Fish, who leads the research, and thinks there's a ~15% chance that Claude or another AI is conscious today. [im…
  • @scottnover Scott Nover on x
    I think we're taking Severance a bit too seriously.
  • @anthropicai @anthropicai on x
    As AI models become more complex and more capable, is it possible that they'll have experiences of their own? It's an open question. We recently started a research program to investigate it. [video]
  • @kevinroose Kevin Roose on x
    One interesting nugget to pull out: Anthropic is considering whether models should be able to force-quit conversations if users' requests are too distressing. [image]