VOICE ARCHIVE

Dean W. Ball

@deanwball
216 posts
2026-03-09
one thing that crossed my mind when I read Secretary Hegseth's initial tweet about the supply chain risk thing is that the way he hyphenated ‘supply-chain risk’ felt very openai reasoning model coded. [image]
2026-03-09 View on X
Reuters

Anthropic sues to block the DOD from designating it a supply chain risk, says the designation is unlawful and violates its free speech and due process rights

Anthropic on Monday filed a lawsuit to block the Pentagon from placing it on a national security blacklist, escalating …

I read this and wondered: how is it that the DoW/DoD achieves having an official headquarters in D.C. when the Pentagon is in Virginia? like, how mechanically does that work? what is the address DoW uses in DC for its administrative HQ? and learned that USPS simply reshapes [image]
2026-03-09 View on X

2026-03-08
The problem with this is that DoW is not taking Anthropic's calls for “oversight” seriously. Indeed, elsewhere in the administration, Anthropic's “calls for oversight” are dismissed as “regulatory capture” and actively fought. Rohit and Noah are dressing up political harassment.
2026-03-08 View on X
Bloomberg

A profile of Emil Michael, who made his name as an aggressive dealmaker for Uber, as he takes a leading role in the Pentagon's dispute with Anthropic

Emil Michael made his name in Silicon Valley a decade ago as an aggressive dealmaker for a startup — Uber Technologies Inc. …

We should be extremely clear that trump admin largely views Anthropic's claims about the future of AI as outlandish (in some ways I do too!), and so the above quoted material is not so much analysis of the relevant usg actors as it is analysis of what rohit himself thinks
2026-03-08 View on X
Noahpinion

The Pentagon is right in trying to coerce Anthropic as AI may become a superweapon and nation-states must have a monopoly on the use of force

They like to ignore the fact that the actual discussion is whether a technology with a 5% hallucination rate should make decisions about who to kill or not. …

Ok, so the actual argument is more like “Anthropic builds a useful technology whose utility is growing, therefore they should expect to have their property expropriated and to be harassed by the government.” The whole point of America is that isn't supposed to be true here.
2026-03-08 View on X

2026-03-07
CNBC

Google and Amazon join Microsoft in saying they will keep working with Anthropic on non-defense projects after DOD designated Anthropic a supply chain risk

https://www.cnbc.com/... Sasha de Marigny: Thank you, Google, for your leadership, partnership and continued support. — https://lnkd.in/...

Financial Times

A draft guidance from the US GSA tightens rules for civilian AI contracts to require AI companies to allow “any lawful” use by the government of their models

The Trump administration has drawn up tight rules for civilian artificial intelligence contracts that would require AI companies …

2026-03-06
Anthropic has confirmed what I'd have guessed: the DoW's supply chain risk designation is profoundly narrower than Secretary Hegseth threatened last week.  It applies only to DoW contractors in their direct fulfillment of the military contract, as opposed to requiring contractors cease “all commercial relations” with the company, as Hegseth had threatened.  This is still probably illegal for the government to do, given the relevant statute's history of being used only against foreign adversaries.  It is also absurd on its face, given the fact that DoW is using Claude in one of the largest military operations of the past 20 years.  How can something be both a normal and critical part of military operations and a supply chain risk?
2026-03-06 View on X
CNBC

Google and Amazon join Microsoft in saying they will keep working with Anthropic on non-defense projects after DOD designated Anthropic a supply chain risk

Google said it will continue offering Anthropic's artificial intelligence technology for clients, excluding for defense work …

Pause to reflect that the Trump Admin has officially taken the harshest regulatory action against a frontier AI company of any U.S. government entity (Colorado's SB 205 is harsher but not in effect), and that Claude is now more strictly regulated by USG than any Chinese AI.
2026-03-06 View on X

CNBC

Microsoft says it will keep Anthropic's AI tools embedded in its client products, after its lawyers concluded the DOD's designation is only for defense projects

Microsoft said Thursday that it will keep startup Anthropic's artificial intelligence technology embedded in its products for clients, excluding the U.S. Department of War.