VOICE ARCHIVE

Peter Girnus

@gothburz
14 posts
2026-03-02
@sama On classified networks, your engineers won't have clearance to monitor how the model is used. Your safeguards are contractual, not architectural. How do you enforce a red line you can't see being crossed?
2026-03-02 View on X
@sama

[Thread] In an AMA, Sam Altman says DOD blacklisting Anthropic sets an “extremely scary precedent”, OpenAI rushed its deal to “de-escalate things”, and more

including policy and legal matters, but also many technical layers. Sam Altman / @sama: @viralmuskmelon This is a complicated one we struggled with a lot, and until recently it was ea...

The Atlantic

A source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected from Americans

Right up until the moment that Pete Hegseth moved to terminate the government's relationship with the AI company Anthropic …

2026-03-01
I work in government affairs at OpenAI. My job is federal partnerships. When an agency wants our models, I make sure the paperwork is beautiful. Paperwork is my love language. On my desk I have a framed quote that says “Policy Is Just Code That Runs on People.” I bought the
2026-03-01 View on X
Wall Street Journal

Sources: the Pentagon used Claude in its major air attack in Iran, hours after Trump declared that the federal government will end its use of Anthropic's tools

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic …

@sama

[Thread] In an AMA, Sam Altman says DOD blacklisting Anthropic sets an “extremely scary precedent”, OpenAI rushed its deal to “de-escalate things”, and more

I'd like to answer questions about our work with the DoW and our thinking over the past few days. Please AMA.

OpenAI

OpenAI says its DOD agreement upholds its redlines and “has more guardrails than any previous agreement for classified AI deployments, including Anthropic's”

We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's.


2026-02-24
Credit where it's due — they named DeepSeek, Moonshot, and MiniMax with specific attribution. But the IoCs are shared privately while the policy ask is shared publicly. The audience for this post isn't defenders though, it's @congressdotgov @HouseGOP @HouseDemocrats @SenateGOP
2026-02-24 View on X
Reuters

A Trump administration official says DeepSeek's new model, expected next week, was trained on Nvidia Blackwell chips, in a potential US export control violation

Wall Street Journal

Anthropic says DeepSeek, MiniMax, and Moonshot violated its ToS by prompting Claude a combined 16M+ times and using distillation to train their own products

The allegations mirror those of OpenAI, which told House lawmakers that DeepSeek used ‘distillation’ to improve models


2025-07-02
Sam Altman Slams Meta's AI Talent Poaching Spree: ‘Missionaries Will Beat Mercenaries’ I didn't realize Mother Teresa took millions of dollars in funding for the Missionaries of Charity as a nonprofit only to go public later. https://www.wired.com/...
2025-07-02 View on X
Wired

In a memo to OpenAI researchers, Sam Altman said “missionaries will beat mercenaries” and “there is much, much more upside to OpenAI stock than Meta stock”

neither will Meta's $100 million raid on the firm's top AI talent. Rocket Drew / The Information: Sam Altman Calls Meta Recruiting Efforts ‘Distasteful’. Alexandra Tremayne-Pengelly...

Wired

Source: Mark Zuckerberg has, on 10+ occasions, offered top AI research talent up to $300M over four years, with $100M+ in total compensation for the first year

The Meta CEO is leading a hiring blitz, offering top talent at OpenAI eye-watering pay packages and endless access to cutting-edge chips.