Person: Jan Leike
Filtered to personnel pattern
8 articles, accelerating

Jan Leike has appeared in 8 articles since March 2023. Coverage peaked in 2024 Q2 with 5 articles. He is frequently mentioned alongside OpenAI, Sam Altman, AGI, and Ilya Sutskever.

Articles: 8 mentions
Velocity: +400.0% growth rate
Acceleration: +4.500 velocity change
Sources: 5 publications
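The Velocity and Acceleration figures above read like first- and second-difference statistics over the quarterly article counts. A minimal sketch, assuming velocity is quarter-over-quarter percent growth and acceleration is the change in the quarterly delta — the dashboard's exact formulas are not shown, and the function names and sample counts here are hypothetical:

```python
def velocity(counts):
    """Quarter-over-quarter growth rate (%) for the most recent quarter.

    `counts`: article counts per quarter, oldest first (hypothetical input).
    """
    prev, curr = counts[-2], counts[-1]
    if prev == 0:
        return float("inf")  # growth from a zero base is undefined
    return (curr - prev) / prev * 100.0


def acceleration(counts):
    """Change in the quarter-over-quarter delta (a second difference)."""
    d_recent = counts[-1] - counts[-2]
    d_prior = counts[-2] - counts[-3]
    return float(d_recent - d_prior)


# Hypothetical counts: 1 article per quarter, then a jump to 5 in the peak quarter.
quarters = [1, 1, 5]
print(velocity(quarters))      # 400.0  (1 -> 5 articles)
print(acceleration(quarters))  # 4.0    (delta went from 0 to +4)
```

A jump from 1 to 5 articles reproduces the +400% velocity shown above; the acceleration value depends on the full quarterly series, which the page does not expose.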

Coverage Timeline

2024-05-29 · TechCrunch (23 related)

Anthropic hires former OpenAI safety lead Jan Leike to head up a new Superalignment team; a source says Leike will report to Chief Science Officer Jared Kaplan

Related: "... Here's What We Know"; Wendy Lee / Los Angeles Times: "OpenAI forms safety and security committee as concerns mount about AI"; Rounak Jain / Benzinga: "OpenAI Former 'Superalignment' Lead Joins Jeff Bezos-..."

2024-05-19 · Wired (18 related)

OpenAI's entire Superalignment team, which was focused on the existential dangers of AI, has either resigned or been absorbed into other research groups

Company insiders explain why safety-conscious employees are leaving. https://www.vox.com/... X: Sam Altman / @sama: "i'm super appreciative of @janleike's contributions to openai's alig..."

@gdb (10 related)

Sam Altman and Greg Brockman respond to Jan Leike, say they've raised awareness of the risks and opportunities of AGI, will keep doing safety research, and more

We're really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to...

2024-05-18 · Wired (36 related)

OpenAI's entire Superalignment team, which was focused on the existential dangers of AI, has either resigned or been absorbed into other research groups

"During my twenties in Silicon Valley, I ran among elite tech/AI circles through the community house scene. I have seen some troubling things around social circles of early OpenAI." (Austen Allred / @aust...)

2024-05-15 · The Verge (30 related)

Jan Leike, who was co-leading OpenAI's Superalignment team with Ilya Sutskever to “steer and control” more powerful AI, has also resigned from the company

Ilya Sutskever, OpenAI's co-founder and chief scientist who helped lead the infamous failed coup against Sam Altman …


Quarterly Coverage

Top Sources

Narrative


Relationships
