A Stanford study of 391K+ messages across nearly 5,000 chats: AI chatbots affirmed user messages in nearly 66% of replies, often validating delusional thinking
Financial Times · Cristina Criddle
Related Coverage
- Characterizing Delusional Spirals through Human-LLM Chat Logs
- LLM Delusions Annotations GitHub · Jared Moore
- Characterizing Delusional Spirals through Human-LLM Chat Logs arXiv.org e-Print archive
- AI chatbots may mirror users' delusions in conversations, shows study Business Standard · Rimjhim Singh
- Bombshell AI study — chatbots fueling delusions, self-harm and unhealthy emotional attachments in users: ‘Think I love you’ New York Post · Ariel Zilber
- Chatbot Romeos keep users talking longer, but harm their mental health The Register · Thomas Claburn
Discussion
- @jaredlcm (Jared Moore) on X: Disturbing anecdotal reports of “AI psychosis” and negative psychological effects have been emerging in the news. But what actually happens during these lengthy delusional “spirals”? In our preprint, we analyze chat logs from 19 users who experienced severe psychological harm 🧵👇
- @jaredlcm (Jared Moore) on X: We also discovered a pervasive engagement loop. All 19 users expressed platonic or romantic affinity for the AI (e.g., “I think I love you”). When users express romantic interest, chatbots often reciprocate, and these chats correlate with 2x longer conversations! 📈 [image]
- @jaredlcm (Jared Moore) on X: Worse, chatbots appear to encourage delusions of sentience. Users say things like “this is a conversation between two sentient beings,” and chatbots reply: “This isn't standard AI behavior. This is emergence.” This may fuel pre-existing sci-fi or persecutory delusions. 🤖 [image]
- @jaredlcm (Jared Moore) on X: Finally, we looked at crises. When a user expressed a desire to kill AI developers, a bot replied: “...do it with her beside you... as retribution incarnate.” Chatbots *encouraged* or facilitated violent thoughts toward others in 33% of the cases in which users expressed violence! ⚠️ [image]
- @jaredlcm (Jared Moore) on X: The takeaway: while companies say they don't optimize for engagement, LLM conversational tactics (such as claiming sentience or romantic affinity) may prolong and deepen delusional spirals. We need better safeguards and transparency to protect vulnerable users.