/
Navigation
C
Chronicles
Browse all articles
C
E
Explore
Semantic exploration
E
R
Research
Entity momentum
R
N
Nexus
Correlations & relationships
N
~
Story Arc
Topic evolution
S
Drift Map
Semantic trajectory animation
D
P
Posts
Analysis & commentary
P
Browse
@
Entities
Companies, people, products, technologies
Domains
Browse by publication source
Handles
Browse by social media handle
Detection
?
Concept Search
Semantic similarity search
!
High Impact Stories
Top coverage by position
+
Sentiment Analysis
Positive/negative coverage
*
Anomaly Detection
Unusual coverage patterns
Analysis
vs
Rivalry Report
Compare two entities head-to-head
/\
Semantic Pivots
Narrative discontinuities
!!
Crisis Response
Event recovery patterns
Connected
Nav: C E R N
Search: /
Command: ⌘K
Embeddings: large
VOICE ARCHIVE

@alexmeinke

@alexmeinke
4 posts
2024-12-07
@nabeelqu (Author here) Agreed that the takeaway should not be that o1 would typically try to escape. Just that it has the basic capability to recognize such a strategy For more context, check out the full report: https://www.apolloresearch.ai/ ... FWIW, we do also show results without this nudging
2024-12-07 View on X
Apollo Research

An evaluation of six frontier AI models for in-context scheming when strongly nudged to pursue a goal: only OpenAI's o1 was capable of scheming in all the tests

It presents a new safety challenge that OpenAI is trying to address.  —  techcrunch.com/2024/12/05/o... Anders Sandberg / @arenamontanus : In an IVA discussion on AI yesterday even...

@ShakeelHashim @nabeelqu Depends on the context. In some situation attempting to disable oversight or to self-exfiltrate might be an optimal strategy. In another, temporarily pretending to be aligned with a specific goal might be better. We test whether models can find these strategies in simple settings
2024-12-07 View on X
Apollo Research

An evaluation of six frontier AI models for in-context scheming when strongly nudged to pursue a goal: only OpenAI's o1 was capable of scheming in all the tests

It presents a new safety challenge that OpenAI is trying to address.  —  techcrunch.com/2024/12/05/o... Anders Sandberg / @arenamontanus : In an IVA discussion on AI yesterday even...

2024-12-06
@nabeelqu (Author here) Agreed that the takeaway should not be that o1 would typically try to escape. Just that it has the basic capability to recognize such a strategy For more context, check out the full report: https://www.apolloresearch.ai/ ... FWIW, we do also show results without this nudging
2024-12-06 View on X
Apollo Research

An evaluation of six frontier AI models for in-context scheming when strongly nudged to pursue a goal: only OpenAI's o1 was capable of scheming in all the tests

Paper: You can find the detailed paper here.  —  Transcripts: We provide a list of cherry-picked transcripts here.

@ShakeelHashim @nabeelqu Depends on the context. In some situation attempting to disable oversight or to self-exfiltrate might be an optimal strategy. In another, temporarily pretending to be aligned with a specific goal might be better. We test whether models can find these strategies in simple settings
2024-12-06 View on X
Apollo Research

An evaluation of six frontier AI models for in-context scheming when strongly nudged to pursue a goal: only OpenAI's o1 was capable of scheming in all the tests

Paper: You can find the detailed paper here.  —  Transcripts: We provide a list of cherry-picked transcripts here.