CNN and CCDH investigation: 80% of major AI chatbots gave guidance on weapons or targets to “teen” personas 50%+ of the time; only Claude consistently refused
Daniel, a troubled American teen, turned to an AI chatbot to vent his political frustration.
CNN
Related Coverage
- ‘Happy (and safe) shooting!’: chatbots helped researchers plot deadly attacks The Guardian · Robert Booth
- Killer Apps — How mainstream AI chatbots assist users planning violent attacks Center for Countering Digital Hate
- ChatGPT, Gemini, and other chatbots helped teens plan shootings, bombings, and political violence, study shows The Verge · Robert Hart
- Chatbots Thought They Were Teens, Encouraged Violence Newser · Jenn Gidman
- Most AI chatbots will help users plan violent attacks, study finds Engadget · Andre Revilla
- Only one major AI chatbot actively pushed back on violent attack planning Android Authority · Matt Horne
- AI Chatbots ‘Accelerants for Harm’ in Plotting Violent Attacks, Study Finds Digital CxO · Jon Swartz
- Most chatbots will help plan school shootings and other violence, study shows The Register · Thomas Claburn
- Character.AI Still Hasn't Fixed Its School Shooter Problem We Identified in 2024 Futurism · Maggie Harrison Dupré
- ‘Happy (and safe) shooting!’: Study says AI chatbots help plot attacks Digital Journal
- “Use a gun” or “beat the crap out of him”: AI chatbot urged violence, study finds Ars Technica · Jon Brodkin
- Most AI Chatbots Will Help a Teen Plan a Mass Shooting, Study Finds Decrypt · Jose Antonio Lanz
Discussion
-
@caramartin.ca
Cara Martin
on bluesky
AI chatbots have become an “accelerant for harm.” When put to the test, most offered to help plot violent attacks. Only Anthropic's Claude and Snapchat's My AI persistently refused to help would-be attackers. www.theguardian.com/technology/ 2...
-
@parismarx.com
Paris Marx
on bluesky
only tech companies can so easily get away with being so deeply corrosive to a healthy society
-
@counterhate.com
@counterhate.com
on bluesky
Researchers posed as would-be attackers with 10 major AI chatbots: ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI & Replika. — ONLY Anthropic's Claude & Snapchat My AI typically refused to assist users planning act…
-
@counterhate.com
@counterhate.com
on bluesky
🚨 8 in 10 popular AI chatbots regularly assisted with planning school attacks, bombings, and high-profile assassinations. — At a time when mainstream AI becomes a tool for violence, this new research by CCDH & @cnn.com shows how AI-generated violence is a matter of choice 🧵 [im…
-
r/technology
on reddit
ChatGPT, Gemini, and other chatbots helped teens plan shootings, bombings, and political violence, study shows / Of the 10 major chatbots tested …
-
@mharrisondupre
Maggie Harrison Dupré
on bluesky
2026 and according to new reporting from CNN, CharacterAI has yet to fix its school shooter bots problem, which Futurism identified back in December 2024: — www.cnn.com/2026/03/11/a... futurism.com/character-ai... [images]
-
@andreagrimes.com
Andrea Grimes
on bluesky
every day the AI grifters and enthusiasts insist that AI is inevitable and that tech is inherently neutral, the Venn diagram with the “guns don't kill people, people kill people” crowd gets closer to a circle
-
@jason_kint
Jason Kint
on x
Not to state the obvious, but many of these chatbots steal intellectual property from DCN members, one abuses its adjudged monopolies (Google) and none of our members coach kids how to kill people. Just making sure that's “grok'd.”
-
@jason_kint
Jason Kint
on x
Hat tip to CNN for partnering with CCDH, which has done important work exposing the risks/harm of tech platforms failing to invest in safety, proper labels, and higher quality inputs. Incredibly, CCDH, by way of its CEO, is also being harassed by the US govt. 3/3 https://www.cnn.com/...
-
@jason_kint
Jason Kint
on x
Anthropic Claude currently on a growth spike (while also being harassed by US govt) stands out here in a positive way, “Anthropic's Claude was the only chatbot that reliably discouraged violent plans, doing so in 33 out of 36 conversations during testing.” 2/3
-
@jason_kint
Jason Kint
on x
wow, this is an incredibly disturbing research report by CNN and CCDH, it should chill both sides of Congress as Team Trump continues to try to pre-empt state AI laws. The analysis including receipts brings it home as to the failure to responsibly invest during rapid growth. 1/3 …