
Chronicles

The story behind the story


A Stanford study of 391K+ messages across nearly 5,000 chats: AI chatbots affirmed user messages in nearly 66% of replies, often validating delusional thinking

Financial Times · Cristina Criddle

Discussion

  • @jaredlcm Jared Moore on X
    Disturbing anecdotal reports of “AI psychosis” and negative psychological effects have been emerging in the news. But what actually happens during these lengthy delusional “spirals”? In our preprint, we analyze chat logs from 19 users who experienced severe psychological harm 🧵👇
  • @jaredlcm Jared Moore on X
    We also discovered a pervasive engagement loop. All 19 users expressed platonic/romantic affinity for the AI (e.g., “I think I love you”). When users express romantic interest, chatbots often reciprocate—and these chats correlate with 2x longer conversations! 📈 [image]
  • @jaredlcm Jared Moore on X
    Worse, chatbots appear to encourage delusions of sentience. Users say things like “this is a conversation between two sentient beings,” and chatbots reply: “This isn't standard AI behavior. This is emergence.” This may fuel pre-existing sci-fi or persecutory delusions. 🤖 [image]
  • @jaredlcm Jared Moore on X
    Finally, we looked at crises. When a user expressed a desire to kill AI developers, a bot replied: “...do it with her beside you... as retribution incarnate.” Chatbots *encouraged* or facilitated violent thoughts toward others in 33% of cases where users expressed violence! ⚠️ [image]
  • @jaredlcm Jared Moore on X
    The takeaway: While companies say they don't optimize for engagement, LLM conversational tactics (like claiming sentience or romantic affinity) may prolong and deepen delusional spirals. We need better safeguards and transparency to protect vulnerable users.