TEXXR

Chronicles

The story behind the story


An interview with Eliezer Yudkowsky, one of the first people to warn of AI risks, on AI benefits, using violence to stop AI, Rationalism, his new book, and more

Eliezer Yudkowsky https://www.nytimes.com/...

  • @matthewkassel Matthew Kassel on x
    “‘If we get an effective international treaty shutting A.I. down, and the book had something to do with it, I'll call the book a success,’ Mr. Yudkowsky told me. ‘Anything other than that is a sad little consolation prize on the way to death.’” https://www.nytimes.com/...
  • @kevinroose Kevin Roose on x
    I enjoy talking to @ESYudkowsky and @So8res, despite the fact that I'm a boring moderate with only a 5-10% p(doom), in part because they give very good quotes.
  • @kevinroose Kevin Roose on x
    In today's NYT, I profiled Eliezer Yudkowsky, AI's OG prophet of doom, and one of the most interesting (and divisive!) characters in modern Silicon Valley. From inspiring OpenAI and DeepMind, to oneshotting a generation of young rationalists with Harry Potter fanfic, to building

New York Times Kevin Roose

Discussion

  • @mattyglesias Matthew Yglesias on x
    It seems like MIRI's main impact on the world has been to accelerate exactly the thing they are trying to avert. https://www.nytimes.com/...
  • @robinhanson Robin Hanson on x
    A disappointingly boring profile of the actually interesting @ESYudkowsky. He started two social movements for folks & causes where you wouldn't expect that. Hope someone tries again. https://www.nytimes.com/...
  • @steveokeefe Steve O'Keefe on x
    “It's obvious at this point that humanity isn't going to solve the alignment problem, or even try very hard, or even go out with much of a fight.” — Eliezer Yudkowsky https://www.nytimes.com/...