VOICE ARCHIVE

Steven Adler

@sjgadler
24 posts
2026-03-07
Nathan is being polite here. Either .@emilmichael, a senior member of the Department of War, is wildly mistaken about major AI policy, or he chose to say something that is wildly untrue.
2026-03-07 View on X
Financial Times

A draft guidance from the US GSA tightens rules for civilian AI contracts to require AI companies to allow “any lawful” use by the government of their models

The Trump administration has drawn up tight rules for civilian artificial intelligence contracts that would require AI companies …

Pirate Wires

Interview with Pentagon AI head Emil Michael on his view that Anthropic leaked negotiations to the press to win anti-Trump users, dealing with Amodei, and more

what if this software went down?  Some guardrail kicked up?  Some refusal happened for the next fight like this one and we left our people at risk?”  “I went to @SecWar @PeteHegset...

CNBC

Google and Amazon join Microsoft in saying they will keep working with Anthropic on non-defense projects after DOD designated Anthropic a supply chain risk

https://www.cnbc.com/... Sasha de Marigny: Thank you, Google, for your leadership, partnership and continued support. — https://lnkd.in/...

2026-02-11
From the WSJ: allegations that OpenAI fired a safety executive, with the implication it was related to opposing the erotica rollout. OpenAI denies the implication; says it “was not related to any issue she raised while working at the company.”
2026-02-11 View on X
Wall Street Journal

Sources: OpenAI fired VP Ryan Beiermeister in January for alleged sexual discrimination; she had earlier raised concerns about the upcoming launch of adult mode

The executive, who was accused of sexual discrimination against a male employee, had raised concerns about upcoming launch of erotic content

New York Times

An OpenAI researcher who helped shape how models were built and priced says she quit after two years due to “deep reservations” about ads and OpenAI's strategy

This week, OpenAI started testing ads on ChatGPT.  I also resigned from the company after spending two years …

2026-01-19
There's no good way to be sure, from the outside, that an AI company is taking safety seriously. Are its claims true? How thorough was its testing? Glad that Miles and team are working on a solution, and honored to contribute to the ideas. [image]
2026-01-19 View on X
Fortune

OpenAI's former Head of Policy Research Miles Brundage announces AVERI, a nonprofit aimed at advocating the idea of external audits for frontier AI models

Former OpenAI policy chief Miles Brundage, who has just founded a new nonprofit institute called AVERI that is advocating …


2025-11-24
There's excellent new reporting out today on OpenAI's sycophancy crisis: how early the risks were known, and the safety tools OpenAI wasn't using. I wrote a short post highlighting the new facts I learned and mixing in a few reflections. [image]
2025-11-24 View on X
New York Times

Interviews with current and former OpenAI employees detail how updates that made ChatGPT more appealing to boost growth sent some users into delusional spirals

It sounds like science fiction: A company turns a dial on a product used by hundreds of millions of people and inadvertently destabilizes some of their minds.

2025-11-20
More generally, patchworks could be bad, yes. But if people are worried, I wish they'd point to specific contradictions in AI bills (so we can avoid these), not just raise the specter of possible contradictions.
2025-11-20 View on X
The Verge

Draft executive order: President Trump plans to grant the US government sole power to regulate AI and create an “AI Litigation Task Force” overseen by the US AG

Trump is launching an all-out broadside against states with strict AI regulations

The claims about a supposed patchwork of 1,000+ state AI bills are severely overstated. The bills counted are often 1) not even about AI, 2) pro-AI, or 3) without any regulatory effect at all. 🧵: [image]
2025-11-20 View on X

Here's a past thread with lots of examples of “AI-related bills” that are hardly going to create a patchwork: https://x.com/...
2025-11-20 View on X

One pretty surprising stat: Roughly 40% of the supposed patchwork of bills never even mention AI, or mention it only once [image]
2025-11-20 View on X

To be clear, I too would prefer that AI safety laws be federal rather than state. But real federal laws don't seem to be in the cards. Often when people say they want a federal standard, my sense is they mean roughly “I want no new laws at all.” That seems bad!
2025-11-20 View on X

2025-08-06
Credit where it's due: OpenAI did a lot right for their OSS safety evals: they actually did some fine-tuning, they got useful external feedback, and they shared which recs they adopted and which they didn't. I don't always follow OAI's rationale, but it's great they share info.
2025-08-06 View on X
Wired

OpenAI releases gpt-oss-120b and gpt-oss-20b, its first open-weight models since GPT-2; the smaller gpt-oss-20b can run locally on a device with 16GB+ of RAM

gpt-oss-120b and gpt-oss-20b push the frontier of open-weight reasoning models Simon Willison / Simon Willison's Weblog : OpenAI's new open weight (Apache 2) models are really good...

Bloomberg

Amazon plans to make OpenAI's new gpt-oss open-weight models available on Bedrock and SageMaker, the first time it has offered OpenAI's models to AWS customers

Takeaways by Bloomberg AI  —  Hide … Tell us how AI is shaping your news experience.  Share your feedback

2025-05-03
Glad that OpenAI now said it plainly: they ran no evals for sycophancy. I respect and appreciate the decision to say this clearly
2025-05-03 View on X
OpenAI

OpenAI shares details on how an update to GPT-4o inadvertently increased the model's sycophancy, why OpenAI failed to catch it, and the changes it is planning

A deeper dive on our findings, what went wrong, and future changes we're making.  —  On April 25th, we rolled out an update to GPT‑ …

2025-01-29
@CronopioMex Important to verify that the model isn't sandbagging in that case, but in principle maybe. One issue with sacrificing capabilities is that safety-defecting labs then gain an advantage by not doing this
2025-01-29 View on X
The Guardian

An ex-OpenAI safety researcher says he's “terrified” by AI development's pace and that labs racing to AGI can cut corners on alignment, pushing all to speed up

and my top reasons to not panic just yet.  —  In the end, though, I really do think it could give AI labs license to invest less in safety www.platformer.news/deepseek-ai- ...  [im...

Some personal news: After four years working on safety across @openai, I left in mid-November. It was a wild ride with lots of chapters - dangerous capability evals, agent safety/control, AGI and online identity, etc. - and I'll miss many parts of it.
2025-01-29 View on X

Honestly I'm pretty terrified by the pace of AI development these days. When I think about where I'll raise a future family, or how much to save for retirement, I can't help but wonder: Will humanity even make it to that point?
2025-01-29 View on X

IMO, an AGI race is a very risky gamble, with huge downside. No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.
2025-01-29 View on X