2024-12-07
@nabeelqu (Author here) Agreed that the takeaway should not be that o1 would typically try to escape, just that it has the basic capability to recognize such a strategy. For more context, check out the full report: https://www.apolloresearch.ai/ ... FWIW, we do also show results without this nudging
Apollo Research
An evaluation of six frontier AI models for in-context scheming when strongly nudged to pursue a goal: only OpenAI's o1 was capable of scheming in all the tests
It presents a new safety challenge that OpenAI is trying to address. — techcrunch.com/2024/12/05/o... Anders Sandberg / @arenamontanus : In an IVA discussion on AI yesterday even...
@ShakeelHashim @nabeelqu Depends on the context. In some situations, attempting to disable oversight or to self-exfiltrate might be the optimal strategy; in others, temporarily pretending to be aligned with a given goal might be better. We test whether models can find these strategies in simple settings.
2024-12-06
Apollo Research
An evaluation of six frontier AI models for in-context scheming when strongly nudged to pursue a goal: only OpenAI's o1 was capable of scheming in all the tests
Paper: You can find the detailed paper here. — Transcripts: We provide a list of cherry-picked transcripts here.