VOICE ARCHIVE

Rohin Shah

@rohinmshah
4 posts
2025-12-11
My team will be working more closely on safety research with UK AISI — starting with a project on CoT monitoring, but hopefully expanding to much more! https://deepmind.google/...
Financial Times

Google DeepMind plans to open its first “automated science laboratory” in the UK in 2026, focused on using AI tools to develop new materials for chips and more

Big Tech group will work with Sir Keir Starmer's government to enhance AI use across public sector

2025-07-16
Chain of thought monitoring looks valuable enough that we've put it in our Frontier Safety Framework to address deceptive alignment. This paper is a good explanation of why we're optimistic - but also why it may be fragile, and what to do to preserve it. https://x.com/...
TechCrunch

In a paper, AI researchers from OpenAI, Google DeepMind, Anthropic, and others recommend “further research into chain-of-thought monitorability” for AI safety

AI researchers from OpenAI, Google DeepMind, Anthropic, and a broad coalition of companies and nonprofit groups …

2025-04-04
Just released GDM's 100+ page approach to AGI safety & security! (Don't worry, there's a 10 page summary.) AGI will be transformative. It enables massive benefits, but could also pose risks. Responsible development means proactively preparing for severe harms before they arise. [image]
The Decoder

Google DeepMind outlines its approach to AGI safety in four key risk areas: misuse, misalignment, mistakes, and structural risks, with a focus on the first two

Matthias Bastian / The Decoder

2024-05-18
I've really liked @METR_Evals approach to rigorously evaluate the plausibility of concrete AI threat models, and say in advance how to mitigate them. I'm excited that we at @GoogleDeepMind have now made our contribution! https://twitter.com/...
Semafor

Google DeepMind releases its Frontier Safety Framework, a set of protocols for analyzing and mitigating future risks posed by advanced AI models

The Scoop  —  Preparing for a time when artificial intelligence is so powerful that it can pose a serious, immediate threat to people …