VOICE ARCHIVE

@ai_risks

2 posts
2023-05-31
We've released a statement on the risk of extinction from AI. Signatories include: - Three Turing Award winners - Authors of the standard textbooks on AI/DL/RL - CEOs and Execs from OpenAI, Microsoft, Google, Google DeepMind, Anthropic - Many more https://safe.ai/...
New York Times: OpenAI and DeepMind executives, Geoffrey Hinton, and 350+ others sign a statement saying "mitigating the risk of extinction from AI should be a global priority"

Related coverage:
- Brian Fung / CNN: AI industry and researchers sign statement warning of 'extinction' risk
- Alka Jain / Livemint: Industry leaders...

2023-05-30
We've released a statement on the risk of extinction from AI. Signatories include: - Three Turing Award winners - Authors of the standard textbooks on AI/DL/RL - CEOs and Execs from OpenAI, Microsoft, Google, Google DeepMind, Anthropic - Many more https://safe.ai/...
New York Times: OpenAI and DeepMind execs, Geoffrey Hinton, and 350+ others release a statement saying "mitigating the risk of extinction from AI should be a global priority"

Leaders from OpenAI, Google DeepMind, Anthropic, and other A.I. labs warn that future systems could be as deadly as pandemics and nuclear weapons.