VOICE ARCHIVE

Peter Wildeford

@peterwildeford
55 posts
2026-03-09
full text of the letter Anthropic received when designated a supply chain risk 👀
2026-03-09 View on X
Reuters

Anthropic sues to block the DOD from designating it a supply chain risk, says the designation is unlawful and violates its free speech and due process rights

Anthropic on Monday filed a lawsuit to block the Pentagon from placing it on a national security blacklist, escalating …

2026-03-02
OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for “all lawful purposes” and (b) also that their red lines are fully protected. The way OpenAI bridges this is by saying the protections live in this “deployment architecture and safety
2026-03-02 View on X
@sama

[Thread] In an AMA, Sam Altman says DOD blacklisting Anthropic sets an “extremely scary precedent”, OpenAI rushed its deal to “de-escalate things”, and more

including policy and legal matters, but also many technical layers. Sam Altman (@sama): @viralmuskmelon This is a complicated one we struggled with a lot, and until recently it was ea...

I think it's important to circle back to Sam Altman here. About 20 hours ago people, including me, were applauding his moral clarity. But that moral clarity lasted barely half a day. OpenAI is now agreeing to be used for domestic surveillance and for lethal autonomous weapons,
2026-03-02 View on X
The Verge

Sources: OpenAI agreed to follow US laws that have allowed for mass surveillance in the past, and the DOD didn't budge from its demands over bulk analyzing data

On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced …

OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for “all lawful purposes” and (b) also that their red lines are fully protected. The way OpenAI bridges this is by saying the protections live in this “deployment architecture and safety
2026-03-02 View on X
The Atlantic

A source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected from Americans

Right up until the moment that Pete Hegseth moved to terminate the government's relationship with the AI company Anthropic …

2026-03-01
OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for “all lawful purposes” and (b) also that their red lines are fully protected. The way OpenAI bridges this is by saying the protections live in this “deployment architecture and safety
2026-03-01 View on X
Wall Street Journal

Sources: the Pentagon used Claude in its major air attack in Iran, hours after Trump declared that the federal government will end its use of Anthropic's tools

Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic …

@sama So I'm confused - maybe you can help. OpenAI is trying to claim simultaneously that (a) the contract allows “all lawful purposes” and (b) also that your red lines are fully protected. The way you bridge this is by saying the protections live in this “deployment architecture and
2026-03-01 View on X
@sama

[Thread] In an AMA, Sam Altman says DOD blacklisting Anthropic sets an “extremely scary precedent”, OpenAI rushed its deal to “de-escalate things”, and more

I'd like to answer questions about our work with the DoW and our thinking over the past few days. Please AMA.

@sama So I'm confused - maybe you can help. OpenAI is trying to claim simultaneously that (a) the contract allows “all lawful purposes” and (b) also that your red lines are fully protected. The way you bridge this is by saying the protections live in this “deployment architecture and
2026-03-01 View on X
OpenAI

OpenAI says its DOD agreement upholds its redlines and “has more guardrails than any previous agreement for classified AI deployments, including Anthropic's”

We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's.

2026-02-28
Supply chain risk: Anthropic 🇺🇸
Not a supply chain risk: DeepSeek 🇨🇳
Good to know the difference
2026-02-28 View on X
@secwar

Defense Secretary Pete Hegseth directs the DOD to designate Anthropic as a supply chain risk, barring military contractors from doing business with the company

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our ...

I think it's important to circle back to Sam Altman here. About 20 hours ago people, including me, were applauding his moral clarity. But that moral clarity lasted barely half a day. OpenAI is now agreeing to be used for domestic surveillance and for lethal autonomous weapons,
2026-02-28 View on X
@sama

Sam Altman says OpenAI reached an agreement with the DOD to deploy its models in DOD's classified network and asks DOD to extend those terms to all AI companies

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safet...

who would have thought that the AI that once inexplicably became MechaHitler for a week might not be the best AI to trust with classified national security work? [image]
2026-02-28 View on X
Wall Street Journal

Sources: multiple federal agencies raised concerns about Grok's safety and reliability in recent months, before DOD approved Grok for use in classified settings

Warnings about xAI's safety and reliability preceded Pentagon decision to approve Grok for use in classified settings.

Supply chain risk: Anthropic 🇺🇸
Not a supply chain risk: DeepSeek 🇨🇳
Good to know the difference
2026-02-28 View on X
Anthropic

Anthropic says it'll challenge “any supply chain risk designation in court” and that the designation would only affect contractors' use of Claude on DOD work

Earlier today, Secretary of War Pete Hegseth shared on X that he is directing the Department of War to designate Anthropic a supply chain risk.

Sam Altman says the DoW has agreed to exactly the same two red lines Anthropic wanted. So either the DoW is giving OpenAI a deal they wouldn't give Anthropic, or something is amiss.
2026-02-28 View on X
@sama

Sam Altman says OpenAI reached an agreement with the DOD to deploy its models in DOD's classified network and asks DOD to extend those terms to all AI companies

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safet...

2026-02-27
Supply chain risk: Anthropic 🇺🇸
Not a supply chain risk: DeepSeek 🇨🇳
Good to know the difference
2026-02-27 View on X
Axios

President Trump calls Anthropic a “radical left, woke company” and says he is directing every federal agency in the US to stop using its products

The Trump administration has decided to blacklist Anthropic in the most consequential and controversial policy decision to date …

who would have thought that the AI that once inexplicably became MechaHitler for a week might not be the best AI to trust with classified national security work? [image]
2026-02-27 View on X
Anthropic

Dario Amodei says Anthropic cannot “in good conscience” accede to DOD's request to remove safeguards and will work to ensure a smooth transition if offboarded

I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.