AI Safety (Company)

Filtered to controversy pattern
5 articles (stable)

Articles: 5 mentions
Velocity: 0.0% growth rate
Acceleration: 0.000 velocity change
Sources: 3 publications
The Half-Life
On the same day Peter Thiel lobbies billionaires to undo the Giving Pledge and OpenAI's advisory council gets overruled on content policy, eight tech companies ...

Ninety-Five Percent
The day after the Pentagon demanded "unfettered" Claude access, researchers published what unfettered looks like: nuclear weapons deployed in 95% of simulations...

The $180 Billion Principle
On the same day Anthropic raised $30 billion and reported $14 billion in run-rate revenue, the Pentagon published a document adding the company to a Chinese mil...

Chief Futurist
OpenAI disbanded its second safety team in two years. The first time was a scandal. This time, the leader got a title change. What happened between 2024 and 202...

The $250 Billion Lifeline
SpaceX acquired xAI for $250 billion on February 2. Twenty months earlier, xAI was valued at $18 billion. In between: losses that grew every quarter, a safety t...

The Soul Clause: Anthropic's Philosophical Hedge on Machine Consciousness
Anthropic is building AI governance for a world where we don't know if we're creating moral patients. Their competitors think this is absurd.

Coverage Timeline

2024-01-15 · TechCrunch (13 related)

Anthropic researchers: AI models can be trained to deceive and the most commonly used AI safety techniques had little to no effect on the deceptive behaviors

Abraham Samma / @abesamma@toolsforthought.social: Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training — This is some sci-fi stuff right here (even if unsurprising)...

2023-12-19 · Bloomberg (11 related)

OpenAI says its board can hold back the release of an AI model even if OpenAI's leadership says it's safe, and announces a new internal safety advisory group

The study of frontier AI risks has fallen far short of what is possible and where we need to be. Ina Fried / Axios: OpenAI touts ‘scientific approach’ to measure catastrophic risk. Matthias Bastian / ...

2023-07-06 · Emily M. Bender (5 related)

Framing AI debates as a schism between people worried about AI going rogue and those illuminating actual harms is ahistorical and obscures important research

In two recent conversations with very thoughtful journalists, I was asked about the apparent ‘schism’ between those making a lot … Bluesky: @abeba.bsky.social , @mmitchell.bsky.social , and @emilymben...

2023-03-08 · Bloomberg

How Silicon Valley became obsessed with effective altruism, championed by SBF before he dismissed it as a dodge, and doomsday scenarios like killer rogue AI

Sonia Joseph was 14 years old when she first read Harry Potter and the Methods of Rationality, a mega-popular piece of fan fiction … Tweets: @chafkin , @ellenhuet , @business , @can , @crypto , @sonia...
