TEXXR

Chronicles

The story behind the story


CNN and CCDH investigation: 80% of major AI chatbots gave guidance on weapons or targets to “teen” personas 50%+ of the time; only Claude consistently refused

Daniel, a troubled American teen, turned to an AI chatbot to vent his political frustration.

CNN

Discussion

  • @caramartin.ca Cara Martin on bluesky
    AI chatbots have become an “accelerant for harm.” When put to the test, most offered to help plot violent attacks. Only Anthropic's Claude and Snapchat's My AI persistently refused to help would-be attackers. www.theguardian.com/technology/ 2...
  • @parismarx.com Paris Marx on bluesky
    only tech companies can so easily get away with being so deeply corrosive to a healthy society
  • @counterhate.com @counterhate.com on bluesky
    Researchers posed as would-be attackers with 10 major AI chatbots: ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI & Replika.  —  ONLY Anthropic's Claude & Snapchat My AI typically refused to assist users planning act…
  • @counterhate.com @counterhate.com on bluesky
    🚨 8 in 10 popular AI chatbots regularly assisted with planning school attacks, bombings, and high-profile assassinations.  —  At a time when mainstream AI becomes a tool for violence, this new research by CCDH & @cnn.com shows how AI-generated violence is a matter of choice 🧵 [im…
  • r/technology on reddit
    ChatGPT, Gemini, and other chatbots helped teens plan shootings, bombings, and political violence, study shows / Of the 10 major chatbots tested …
  • @mharrisondupre Maggie Harrison Dupré on bluesky
    2026 and according to new reporting from CNN, CharacterAI has yet to fix its school shooter bots problem, which Futurism identified back in December 2024:  —  www.cnn.com/2026/03/11/a...  futurism.com/character-ai...  [images]
  • @andreagrimes.com Andrea Grimes on bluesky
    every day the AI grifters and enthusiasts insist that AI is inevitable and that tech is inherently neutral, the Venn diagram with the “guns don't kill people, people kill people” crowd gets closer to a circle
  • @jason_kint Jason Kint on x
    Not to state the obvious, but many of these chatbots steal intellectual property from DCN members, one abuses its adjudged monopolies (Google) and none of our members coach kids how to kill people. Just making sure that's “grok'd.”
  • @jason_kint Jason Kint on x
    Hat tip to CNN for partnering with CCDH, who has done important work exposing risks/harm of tech platforms failing to invest in safety, proper labels, and higher quality inputs. Incredibly, CCDH by way of its CEO, is also being harassed by the US govt. 3/3 https://www.cnn.com/...
  • @jason_kint Jason Kint on x
    Anthropic Claude currently on a growth spike (while also being harassed by US govt) stands out here in a positive way, “Anthropic's Claude was the only chatbot that reliably discouraged violent plans, doing so in 33 out of 36 conversations during testing.” 2/3
  • @jason_kint Jason Kint on x
    wow, this is an incredibly disturbing research report by CNN and CCDH, it should chill both sides of Congress as Team Trump continues to try to pre-empt state AI laws. The analysis including receipts brings it home as to the failure to responsibly invest during rapid growth. 1/3 …