Sources: amid negotiations with the DOD, Anthropic submitted a bid to compete in a $100M DOD contest to develop voice-controlled, autonomous drone swarming tech
Anthropic PBC was among the artificial intelligence companies that submitted a proposal earlier this year to compete …
Bloomberg · Katrina Manson
Related Coverage
- Users Ghost ChatGPT for Claude as OpenAI Strikes Deal with Pentagon The Daily Upside · Jamie Wilde
- Wars within wars Gazetteer SF · Cydney Hayes
- OpenAI's Pentagon deal raises new questions about AI and mass surveillance Fortune · Beatrice Nolan
- OpenAI Leadership Defends Deal With Pentagon as Employees Wait in Limbo Gizmodo · Ece Yildirim
- OpenAI says its US defense deal is safer than Anthropic's, but is it? CIO.com · Anirban Ghoshal
- Trump gives the Claude shoulder Tech Brew · Alex Carr
- OpenAI's Pentagon red lines are a mirage Transformer · Shakeel Hashim
- The Loopholes in OpenAI's Pentagon Deal The Information · Erin Woo
- Anthropic and the US military are feuding: What to know Information Age · Tom Williams
- Sources detail how the Anthropic and DOD talks fell apart and how officials at US intelligence agencies, including the CIA, still hope for a peace agreement New York Times
- Frontier AI labs' military usage policies for their AI tools are incoherent, vague, and often change, which allows company leadership to preserve “optionality” fishbowlification · Sarah Shoker
- Agencies begin to shed Anthropic contracts following Trump's directive FCW · Nextgov
- Anthropic labelled supply risk: Tech companies defend the label as State Department embraces OpenAI Business Today · Aishwarya Panda
- Claude Surge Triggers Widespread Outage Amid Washington Tensions Techstrong.ai · Jon Swartz
- US Treasury, federal housing agency ending use of Anthropic products iTnews
- HHS Tells Employees to Stop Using Anthropic's Claude NOTUS · Margaret Manto
- A source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected from Americans The Atlantic · Ross Andersen
- Tech workers urge DOD, Congress to withdraw Anthropic label as a supply-chain risk TechCrunch · Rebecca Bellan
- What the Pentagon Has Done to Anthropic Should Make Every Founder Nervous Inc · James Surowiecki
- When I was the HR Chief of Staff for the AGI team at Amazon, one of the most important parts of the role was keeping the team grounded in the external landscape of AI development and research. … Stephanie Burke Duke
- What Anthropic Stands to Gain From Pentagon Stance The Information · Martin Peers
- Anthropic has been at the heart of America's business narrative this year, introducing tools that could upend industries from cybersecurity to law and everything in between. … Dan Primack
- Trump's lethal presidency Axios · Zachary Basu
- ‘Anthropic and Alignment’ — Ben Thompson, writing at Stratechery: Daring Fireball · John Gruber
- Start Up No.2621: Anthropic's doomed military standoff, chatbots v PDFs, ChatGPT's bad health, 25 years after the iPod, and more The Overspill · Charles Arthur
- The Pointless War Between the Pentagon and Anthropic Wall Street Journal · Judd Rosenblatt
- His second major point is that Anthropic should do what the govt says because the govt has the power to destroy it. This point is purely a pragmatic one and can be assessed on its merits. The USG is all-powerful but its current leadership is not. … @eric_he_1998 · Eric He
- His third point is that Anthropic leadership seems to have a bad understanding on AI game theory and its advice on how to contain China / slow open source AI development is especially counterproductive. I fully agree with this one and to the extent this shapes the rest of Thompson's views that Anthropic can't be trusted with power, I think that's reasonable. … @eric_he_1998 · Eric He
- Ben Thompson usually hits but this piece on Anthropic is a miss. His first major point is that Anthropic is unelected and shouldn't tell the govt what to do and if we don't want the govt to mass surveil its citizens there should be a law against that. … @eric_he_1998 · Eric He
- I'm glad I'm not on X because the debate that is being waged there about Anthropic versus the Pentagon is appalling. The post below, like others on X argues that national security should override Anthropic not wanting their AI being used to kill people. … @carnage4life@mas.to · Dare Obasanjo
- The power struggle over AI red lines Politico · Aaron Mak
- The Pentagon's bombshell deal with OpenAI, explained Understanding AI · Timothy B. Lee
- OpenAI CEO Defends Taking Over Anthropic's Place in the Pentagon's AI Infrastructure Android Headlines · Jean Leon
- OpenAI Claims Safety ‘Red Lines’ in Pentagon Deal—But Users Aren't Buying It Decrypt · Jose Antonio Lanz
- A ‘Fight About Vibes’ Drove the Pentagon's Breakup with Anthropic Wall Street Journal
- Anthropic sees major Claude outage after ‘unprecedented demand’ Silicon Republic · Ann O'Dea
- 1.5 Million Users Leave ChatGPT. If You Cancel, Make Sure You Do This First Forbes · Barry Collins
- Anthropic Is Cashing In on OpenAI's Pentagon Deal Adweek · Kendra Barnett
- A look at the rights AI companies have in US government contracts, such as the “any lawful use” standard, amid the Anthropic-DOD dispute and the OpenAI-DOD deal Jessica Tillipman
- No one has a good plan for how AI companies should work with the government TechCrunch · Russell Brandom
- Anthropic's concerns are legitimate, but its position is intolerable and misaligned with a reality where US foes are developing autonomous fighting capabilities Stratechery · Ben Thompson
- AI Bros Wanted Trump. Now They Learn What Happens When You Tell Him No. Techdirt · Mike Masnick
- Mantic Monday: Groundhog Day Astral Codex Ten · Scott Alexander
- Anthropic, the Pentagon, and the AI Innovation Ecosystem R Street Institute
- Anthropic's Pentagon Sanctions Expose Enterprise AI's Emerging Vendor Risks PYMNTS.com
- Can Anthropic survive taking on Trump's Pentagon? The Economic Times
- The Pentagon vs. Anthropic The Dispatch
- OpenAI amends Pentagon deal as Sam Altman admits it looks ‘sloppy’ The Guardian
- Anthropic to sue Trump administration over security risk designation DatacenterDynamics · Sebastian Moss
- After leaks and massive criticism, OpenAI adds safeguard clauses to Pentagon contract The Decoder · Matthias Bastian
- ‘Whatever It Takes’ Rattles Global Markets New York Times
- The Anthropic-OpenAI fight could usher in a new era: chatbot monogamy Business Insider · Dan DeFrancesco
- Federal AI shakeup: State Department swaps Claude for aging GPT-4.1 The Decoder · Maximilian Schreiner
- Hundreds of AI Tech Workers Are Demanding the Pentagon Drop Its Anthropic “Supply-Chain Risk” Label Android Headlines · Tyler Lee
- The Anthropic-DOD skirmish is the first major public debate on control over frontier AI, and institutions behaved erratically, maliciously, and without clarity Hyperdimensional · Dean W. Ball
Discussion
-
@katrinamanson
Katrina Manson
on x
Anthropic didn't think developing the technology would cross its red line, according to one of the people. Although the effort could ultimately create lethal drone swarms, a human would still be able to monitor and stop the system if necessary, according to the person.
-
@katrinamanson
Katrina Manson
on x
Anthropic's submission focused on using its Claude AI tool to translate a commander's intent into digital instructions and to coordinate a fleet of drones, according to the person. It didn't use AI for autonomous targeting or weapons decisions, the person said.
-
@chatgpt21
Chris
on x
I kid you not this is Claude's unfiltered reaction 💀 [image]
-
@wesroth
Wes Roth
on x
A bombshell new report from Bloomberg just added a massive plot twist to the ongoing Anthropic vs. Pentagon saga. Despite their public refusal to build autonomous weapons, it turns out Anthropic actively pitched a military drone project! According to insiders, Anthropic
-
@gbrl_dick
Gabriel
on x
we're in The Culture timeline and we're getting the knife missiles.
-
@apples_jimmy
on x
Claude but for autonomous drone swarms. He's just a wittle peaceful Claude :3 [image]
-
@provisionalidea
James Rosen-Birch
on x
New scoop out of Bloomberg underscores how flexible Anthropic was willing to be about autonomous weapons — as long as there was a human somewhere in the loop, they were fine. Which may indicate just how extreme DoW is in their demands on this portfolio. [image]
-
@morqon
Morgan
on x
earlier this year, anthropic submitted a proposal to produce an autonomous drone swarm for the pentagon this was within their red line: “although the effort could ultimately create lethal drone swarms, a human would still be able to monitor and stop the system if necessary”
-
@katrinamanson
Katrina Manson
on x
SCOOP: Anthropic was among the AI companies that submitted a proposal earlier this year to compete in a $100 million Pentagon prize challenge to produce technology for voice-controlled, autonomous drone swarming, acc to people familiar w/ matter. https://www.bloomberg.com/...
-
r/ClaudeAI
on reddit
Anthropic was among the AI companies that submitted a proposal earlier this year to compete in a $100 million Pentagon prize challenge …
-
@aidan_mclau
Aidan McLaughlin
on x
i personally don't think this deal was worth it
-
@haydenfield
Hayden Field
on x
NEW: When OpenAI announced its Pentagon deal Friday night, people immediately challenged Sam Altman's claims. Why, they asked, would the DoD suddenly agree to red lines when it had said it would never do so? The answer, sources told me, is that it didn't. https://www.theverge.com…
-
@nathanpmyoung
Nathan
on x
My current read is that OpenAI have said they maintained Anthropic's red lines without having done so. Not consistently candid. Anthropic senior staff assured people that RSPs were binding. They weren't. Not exactly candid either. Choose for yourself how bad each is.
-
@shakeelhashim
Shakeel
on x
Very important piece that confirms what I've suspected the last couple days: “If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it's technically legal, then the US military can use OpenAI's technology to carry it out.” [image]
-
@thezvi
Zvi Mowshowitz
on x
This is good and fully consistent with my reporting and understanding. OAI is permitting all legal use. OpenAI is trusting DoW to determine legality and relying on its safety stack to catch if DoW breaks their trust, and the red lines are only in highly illegal territory.
-
@shakeelhashim
Shakeel
on x
Important context here is that OpenAI's team has DoW experience. And as @binarybits points out, they're likely well versed in playing word games. The statement OpenAI gave The Verge earlier today is a perfect example of this. [image]
-
@shakeelhashim
Shakeel
on x
OpenAI says a bunch of safeguards in its contracts prevent its models from being used for these purposes. But the “protections” are flimsy at best, and OpenAI is yet to provide evidence of a clause that specifically prevents it. [image]
-
@garymarcus
Gary Marcus
on x
“OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting they protect its red lines.” Translation? 1. OpenAI is full of shit 2. They may well turn over everything you ever typed into ChatGPT if the US government asks. Scoop from
-
@binarybits
Timothy B. Lee
on x
Recall that the Obama Administration's view circa 2013 was that most of what Snowden revealed wasn't illegal or improper. They played a lot of word games to downplay and justify what a lot of ordinary people considered intrusive mass surveillance programs.
-
@binarybits
Timothy B. Lee
on x
I don't understand why OpenAI thinks quoting this language would convince people concerned about autonomous weapon uses. “You can't do it in any case where it would be illegal” is another way of saying “you can do it if it's legal.” [image]
-
@binarybits
Timothy B. Lee
on x
I think it's significant that @natseckatrina, who @sama tapped to help answer questions about the DoD deal on Twitter, led the Obama administration's “media and public policy response” to the Snowden disclosures, according to her LinkedIn. Explains a lot about their approach.
-
@binarybits
Timothy B. Lee
on x
So of course when the government comes to OpenAI and says “don't worry we won't engage in mass surveillance,” they were inclined to believe them. Because one of their key decision-makers had been on the team that didn't think the Snowden revelations were problematic.
-
@tszzl
Roon
on x
there is no contractual redline obligation or safety guardrail on earth that will protect you from a counterparty that has its own secret courts, zero day retention, full secrecy on the provenance of its data etc. every deal you make here is a trust relationship
-
@unmarredreality
on x
Every deal you ever make is a trust relationship. That's why there are conditions you simply don't agree to - especially when you're developing something with unprecedented scope and influence. Anthropic wisely declined such conditions. OpenAI agreed to them anyway.
-
@tszzl
Roon
on x
@allTheYud thankfully if I quit my job no one will ever work on ai or weapons technology again. you would have advised oppenheimer himself to quit his job
-
@ciphergoth
Paul Crowley
on x
OpenAI employees are already at a desperate barrel scraping stage of justifying continuing to work for Altman.
-
@thedextriarchy
Adi Robertson
on bluesky
blinks in Edward Snowden [embedded post]
-
@seanokane
Sean O'Kane
on bluesky
it's almost like this guy sam is a little slippery with the truth sometimes [embedded post]
-
@haydenfield
Hayden Field
on bluesky
NEW: On Friday night when OpenAI announced its Pentagon deal, people immediately challenged Sam Altman's claims. Why, they asked, would the DoD suddenly agree to red lines when it had clearly said it would never budge? — The answer, sources told me, is that it didn't. — www.…
-
@reckless
Nilay Patel
on bluesky
Sam Altman got played and spun it like a win - @haydenfield.bsky.social has the scoop from a weekend's worth of reporting from inside the Pentagon AI negotiations. www.theverge.com/ai-artificia... [image]
-
@druce.ai
on bluesky
Negotiations over a roughly $200 million Pentagon AI contract collapsed after Secretary Pete Hegseth labeled Anthropic a supply chain risk; OpenAI secured a competing framework deal the same night and Anthropic said it would sue.
-
@nktpnd
Ankit Panda
on bluesky
“...the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data, people briefed on the negotiations said” www.nytimes.com/2026/03/01/t...
-
r/artificial
on reddit
How OpenAI caved to the Pentagon on AI surveillance
-
r/singularity
on reddit
How OpenAI caved to the Pentagon on AI surveillance | The law doesn't say what Sam Altman claims it does.
-
r/technology
on reddit
How OpenAI caved to the Pentagon on AI surveillance | The law doesn't say what Sam Altman claims it does
-
r/politics
on reddit
How OpenAI caved to the Pentagon on AI surveillance | The law doesn't say what Sam Altman claims it does.
-
r/TrueReddit
on reddit
How OpenAI caved to the Pentagon on AI surveillance
-
@secscottbessent
Treasury Secretary Scott Bessent
on x
At the direction of @POTUS, the @USTreasury is terminating all use of Anthropic products, including the use of its Claude platform, within our department. The American people deserve confidence that every tool in government serves the public interest, and under President Trump n…
-
@davidicke
David Icke
on x
Any government that has a problem with an AI company not allowing mass domestic surveillance or fully-AI deployed weapons is a grotesque tyranny. But then we knew that.
-
@brendan_duke
Brendan Duke
on x
The same Admin that said efficiency was so important they fired thousands of civil servants also wants to ban using a leading enterprise tool at the IRS and other Treasury components because the firm won't let DOD use it for domestic surveillance and autonomous weapons?
-
@mobav0
Mo Bavarian
on x
Anthropic SCR designation is unfair, unwise, and an extreme overreaction. Anthropic is filled with brilliant hard-working well-intentioned people who truly care about Western civilization & democratic nations success in frontier AI. They are real patriots. Designating an
-
@chamath
Chamath Palihapitiya
on x
This is an important moment for all companies: By picking only one model, you absorb that model maker’s institutional biases and idiosyncrasies. If those deviate from your POV, you are taking on massive risk as we saw with the DoW this weekend. No real business should take ki…
-
@idontexisttore
on x
AI you don't own shouldn't be running your wars. AI you don't KNOW from the first line of code should never be implemented as the foundation for robot wars.
-
@joshkale
Josh Kale
on x
OpenAI just won the biggest Gov AI contract in history but its employees aren't celebrating: One of their research scientists just publicly said “I personally don't think this deal was worth it.” And he's not alone 500+ employees from OpenAI and Google signed a letter opposing [i…
-
@unbranded63
on x
US government purges superior AI platform as whore vendors line up to abandon ethics and donate to Trump PACs in order to secure taxpayer funded contracts. Cesspool.
-
@steveguest
Steve Guest
on x
Big move from the Trump administration. The government shouldn't be beholden to woke tech oligarchs like Dario Amodei.
-
@mobav0
Mo Bavarian
on x
As an American working in frontier for the last 5 years (at Anthropic's biggest rival, OpenAI), it pains me to see the current unnecessary drama between Admin & Anthropic. I really hope the Admin realizes its mistake and reverses course. USA needs Anthropic and vice versa! 🇺🇸
-
@mobav0
Mo Bavarian
on x
I don't think there is an un-crossable gap between what Anthropic wants and DoW's demands. With cooler heads it should be possible to cross the divide. Even if divide is un-crossable, off-boarding from Anthropic models seems like the right solution for USG. The solution is not
-
@adxtyahq
Aditya
on x
“its over for anthropic” bro this is when the real game starts [video]
-
@briantycangco
Brian Tycangco
on x
All the more reason to use Claude!
-
@carnage4life
Dare Obasanjo
on bluesky
The government punishing Anthropic because they won't agree to Claude being used to kill people is like punishing Glock because they won't sell you a gun that shoots the person to your left 5% of the time.
-
@boazbaraktcs
Boaz Barak
on x
Extremely well put @deanwball ! A must read essay. My position is that: 1. Anthropic is a great company, people who work there care deeply about AI safety and the benefit of the U.S. Tagging it as a “supply chain risk” is a massive own-goal to American AI leadership. 2. The
-
@lessin
on x
This is the smartest overall thing i have read on claude / dow dynamic
-
@sammcallister
Sam Mcallister
on x
@aidan_mclau @scrollvoid This isn't true. Anthropic hasn't offered a “helpful-only” model without safeguards for NatSec use. Claude Gov is a custom model with extra training, including technical safeguards. (We've also had FDEs and researchers implementing it, and we run our own …
-
@danprimack
Dan Primack
on x
There is a valid argument for DoD not wanting to work w/ cos that used Claude in products being sold to DoD, given mission disagreement between the company and DoD. There is no good argument for banning Claude use at other, non-national security depts. Beyond spite.
-
@liv_boeree
Liv Boeree
on x
Fascinating piece from someone close to the DoW/Anthropic skirmish. Worth reading.
-
@matthew_meyers5
Matthew Meyers
on x
Many policy failures are downstream from this dynamic [image]
-
@_coenen
Andy Coenen
on x
This essay perfectly captures why the Anthropic fiasco matters
-
@albertwenger
Albert Wenger
on x
Eloquent essay on why the bully treatment of @AnthropicAI by the administration is profoundly bad. Everyone in tech should be speaking up.
-
@itsurboyevan
Evan Armstrong
on x
Excellent—people seem to have forgotten that what makes America great is fundamental rights of speech, private property, and enforcement of contracts. I disagree with Dean on many (most?) AI policies, but without contract law that debate is meaningless.
-
@afinetheorem
Kevin A. Bryan
on x
Re: DoD-Anthropic craziness & @deanwball's great essay today: let's try a steelman USG defense. You are Canada, or a future D admin in the US. Contract w/ Starlink to handle all govt comms or similar. You worry it is so integrated & important - what if Elon shuts off access? 1/8
-
@palmerluckey
Palmer Luckey
on x
@AlecStapp This would hit a lot harder if the government had not been doing this for at least a century. The gun industry during the Clinton years is a particularly relevant example.
-
@dkthomp
Derek Thompson
on x
I continue to think that a useful way to see this administration is a kind of systematic “Control-F: monarchy” search function to discover the tools of authoritarianism embedded in the legal code. The White House keeps finding dormant, esoteric, picayune statutes to justify
-
@justinbullock14
Justin Bullock
on x
This week in AI policy, everything is different, and everything is the same. Brilliantly laid out by @deanwball, who has further increased my respect for him the last 5 days. Kudos, sir.
-
@tszzl
Roon
on x
@QuasLacrimas you can't conflate “the USA gets to decide” with “the pentagon can unilaterally nuke your company”
-
@pmarca
Marc Andreessen
on x
Overheard in Silicon Valley: “Every single person who was in favor of government control of AI, is now opposed to government control of AI.”
-
@tszzl
Roon
on x
The machinery of our current republic seems to be in such disrepair that it is hard to see how it lasts. No one knows what comes next, but I strongly suspect that whatever it is will be deeply intertwined with, and enabled by, advanced AI. It is with this that we will rebuild our
-
@deanwball
Dean W. Ball
on x
@BearForce_Won as someone who has idolized ben since the days of “no, the iPhone is going to be resilient to commodification” (his beginning)—and obviously is operating in ben's shadow as a tech newsletter writer—I was disappointed with his piece today.
-
@joannejang
Joanne Jang
on x
the most thoughtful & truth-seeking take on this all
-
@bearforce_won
on x
This is excellent, and much less hastily reasoned than this morning's Stratechery piece, which as far as I can tell attempts to make the case that the government can unilaterally destroy private property on the basis of a counterparty's entirely theoretical future threat to its
-
@quaslacrimas
Tantum
on x
Long term, the most powerful artificial intelligence will also be the most powerful weapon in the world. If you think you're building the most powerful weapon in the United States of America and the USA doesn't get to decide how to use it, you're smoking crack
-
@xenoimpulse
on x
I suppose I was incorrect in saying that a walkback could alleviate the chilling effects; the chilling effects are here to stay no matter what due to institutional incoherence and disunity. [image]
-
@zeffmax
Max Zeff
on x
Powerful words from Dean Ball, former White House AI adviser. “That alone should make one thing clear: terms like this are not some ridiculous violation of the norms of defense contracting. Anyone attempting to convince you otherwise is misinformed or lying.”
-
@dkthomp
Derek Thompson
on x
A quite brilliant essay on AI, the law, and the future of the republic. An upshot: If the US govt can go to any company, demand any contract language, and reserve the right to destroy your company if you have qualms, there is no such thing as private property rights in America.
-
@mreflow
Matt Wolfe
on x
I know a ton of people have shared this already and you've probably already seen it. But it really is a great read. It's the most clear explanation of what's currently happening with Anthropic vs the DoW, with much more nuance than I'm able to share.
-
@gallabytes
on x
very high quality post, an accounting of the true cost of the moment. an interesting question for this time of incredible leverage that I haven't seen enough ink on: what comes after the republic? what should governance even look like at the dawn of superintelligence?
-
@minmodulation
on x
lmao well you get what you voted for [image]
-
@garymarcus
Gary Marcus
on x
“No matter what world we build, the limitations imposed in the law on what we know today as “the government's” use of AI will be of paramount importance. We really do want to ensure that mass surveillance and autonomous weapons/systems of control cannot be used to curtail our
-
@presidentlin
on x
Bars. Read to the end. My two favourite paragraphs [image]
-
@ericboehm87
Eric Boehm
on x
You really should read @deanwball's latest on the Trump administration's attempted corporate murder of Anthropic... [image]
-
@s_oheigeartaigh
on x
This is essential reading. It's powerful, emotive, but also has exceptional clarity. This in particular is nail on head - “Even if I am right that we live in the “rapid capabilities growth” world, it will still be the case that the adoption of U.S. AI will be seen as especially
-
@mdudas
Mike Dudas
on x
incredible piece on @AnthropicAI vs @DeptofWar via @deanwball https://www.hyperdimensional.co/ ... you simply can't pass laws anymore in america, which means regulators, courts and the president run the country [image]
-
@zdch
Zac Hill
on x
One reason I am a State Capacity Maximalist (and why the work of e.g. @pahlkadot et al at Recoding America is so important to me) is that we just can't function as a Republic when the idea of passing legislation is at best a punchline. GOAT-tier essay from @deanwball today. [imag…
-
@eggerdc
Andrew Egger
on x
Bracing stuff from @deanwball [image]
-
@deredleritt3r
Prinz
on x
Self-recommending, and a must-read. I agree with pretty much every word of this.
-
@ruark
on x
“I encourage you to avoid the assumption that “democratic” control—control “of the people, by the people, and for the people”—is synonymous with governmental control. The gap between these loci of control has always existed, but it is ever wider now.” https://www.hyperdimensional…
-
@thezvi
Zvi Mowshowitz
on x
Now in a Twitter article, so you have no excuse. Read it. My stuff can wait.
-
@deanwball
Dean W. Ball
on x
Clawed
-
@hamandcheese
Samuel Hammond
on x
“At some point during my lifetime—I am not sure when—the American republic as we know it began to die.”
-
@rcbregman
Rutger Bregman
on x
Wow, the lead author of Trump's AI Action Plan, Dean Ball, is calling out Pete Hegseth's mafia-style behavior toward Anthropic: “The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do [i…
-
@deanwball
Dean W. Ball
on x
I have, for lack of a better phrase, “action plan mode,” and that part of me wants to be like, “just add a fucking clause to dfars you fools” and then I also have, uh, “macrohistorical literary analysis mode,” and I think this piece probably captures the two wolves pretty well
-
@andrewcurran_
Andrew Curran
on x
The old world is ending; more of it burns away every day. Things will never return to the way they were, not in two years, not in five, not ever. We have long since passed the threshold. This is an era of transformative change.
-
@alecstapp
Alec Stapp
on x
This is not hyperbole, and every business leader in the country needs to recognize the stakes of what's happening: [image]
-
@alecstapp
Alec Stapp
on x
Really important point here: There were much, much less restrictive means available for the Department of War to achieve its stated ends. Instead, they are attempting to destroy one of our leading AI companies. [image]
-
@deanwball
Dean W. Ball
on x
I think this one needs no further explanation. [image]
-
@timkellogg.me
Tim Kellogg
on bluesky
Fascinating article. It argues that the republic is already dead, and the DoW incident is merely the signal — www.hyperdimensional.co/p/clawed [image]
-
@moskov.goodventures.org
Dustin Moskovitz
on bluesky
“If this event contributed anything, it simply made the ongoing death more obvious and less deniable for me personally. I consider the events of the last week a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.” — Don't sk…
-
@tcarmody
Tim Carmody
on bluesky
The means of production have been replaced by the terms of service. [embedded post]
-
@tcarmody
Tim Carmody
on bluesky
This makes it sound like Anthropic's funding might be revoked, which would be surprising — but that's not the case. Investors are just worried about their ROI depending on how this supply chain risk designation plays out. A nothing story. [embedded post]
-
@brianluidog
Brian Lui
on x
I remember this tech influencer having terrible judgement about wework, so I never followed him. But a lot of techies think he's an oracle of some sort. You can see why I think finance bros are better at parsing information.
-
@ronbodkin
Ron Bodkin
on x
The democratic way to govern powerful technology is to PASS LAWS. Not to use lawfare to destroy companies that refuse to bend the knee. We should be regulating AI as labs race to superintelligence not assuming that as long as they sign contracts to allow DoW to use them “for all
-
@arozenshtein
Alan Rozenshtein
on x
I think this deeply understates the lawlessness of how the government is going about trying to destroy Anthropic. But what it does well is situate this in what is ultimately the bigger-picture question: to what extent will/should America nationalize its AI industry (which is what
-
@kellylsims
Kelly Sims
on x
“What concerns me about Amodei and Anthropic in particular is the consistent pattern of being singularly focused on being the one winner with all of the power, with limited consideration of how everyone else may react to that situation.” This is a thoughtful piece on all this.
-
@smokepetrol
J. Huffer
on x
Jingoist bullshit conclusions and democratic fantasies aside this is a basic analysis of great power politics from a realist (in the Mearsheimer sense) PoV that takes Amodei's eval of Claude on its own terms and usefully contextualizes DoW aggression
-
@alasdairpr
Alasdair Phillips-Robins
on x
This post is confused on the Anthropic-DOD dispute and adopts a vision of society—"might makes right," so quit whining, Dario—that is at odds with American democracy. Anthropic's position is more limited than Thompson says, and we live in a country of laws, not brute force.
-
@meekaale
Mikael Brockman
on x
even stratechery fails to address the difference between Anthropic and OpenAI god I'm so fucking tired of the idiotic discourse fuck all of you fucking pundits
-
@benspringwater
Ben Springwater
on x
I love @benthompson . He is my favorite tech commentator. I listen to @stratechery every day. But his justification for the US Govt seeking to destroy Anthropic is incredibly glib and misguided. AI :: nuclear weapons is sometimes a useful analogy but it's obviously an imperf…
-
@ericlevitz
Eric Levitz
on x
It's really bizarre to see a bunch of ostensibly pro-market, right-leaning tech guys argue, “A private company asserting the right to decide what contracts it enters into is antithetical to democratic government” [image]
-
@themindscourge
@themindscourge
on x
Who will be the Oppenheimer or Sakharov of AI? Anthropic vs DoD discourse reminds me of Cold War debates between nuclear scientists and the governments who employed them to build their nuclear arsenals. The governments thought that they were buying technical skills, but the
-
@kkmaway
Krishna Memani
on x
there is nothing as rich as Tech bros...even if they are not billionaires...you can always aspire...is for them to become foreign policy, game theory, how-the-middle-empire-will-think-about-it expert. I have no clue. But for god's sake, you have no clue either. But you have a
-
@packym
Packy McCormick
on x
Ben Thompson with the best take on DOD v. Anthropic, which is basically: if you don't want the government to treat your technology like nuclear weapons, stop comparing your technology to nuclear weapons. Hype Tax. [image]
-
@irl_danb
Dan
on x
Ben Thompson, as always, lays out the reality more clearly than I could have, despite my attempts. By Dario's own words, he's building something akin to nukes. He's simultaneously challenging the US government's authority to decide how to wield said power. As much as I like [image]
-
@benthompson
Ben Thompson
on x
@EricLevitz I wasn't making a normative argument. Of course I think this is bad. I was pointing out what will inevitably happen with AI in reality
-
@arctotherium42
@arctotherium42
on x
I've been defending DoD's position on their contract with Anthropic, but the correct remedy there is cancelling the contract, not trying to obliterate the company with a supply-chain risk designation.
-
@uswremichael
@uswremichael
on x
Great article about the democratic process determining our nation's fate rather than a single tech founder overriding our leaders.
-
@billyez2
Billy Easley II
on x
Easily my least favorite piece I've read from stratechery. Dismissive of the law's power, Neo-Brandeisian in its analysis of public and corporate power dynamics. This is not the way
-
@justjoshinyou13
Josh You
on x
@stratechery This conflates multiple senses of control/power. By vetoing some government uses of Claude, Anthropic is not arrogating to itself the ability or right to use Claude for autonomous weapons or mass domestic surveillance.
-
@tbpn
@tbpn
on x
Stratechery's @benthompson: “I would like [Anthropic] to sell to the government, and I would like Congress to pass a law addressing these digital surveillance issues.” “A lot of people are like, 'That's unrealistic,' which I'm amenable to. But at the end of the day, if you [video…
-
@jeremiahdjohns
Jeremiah Johnson
on x
@stratechery This is one of the worst things I've read from you, and seems like obvious nonsense. “AI is as dangerous as nuclear weapons, which is why if a company expresses concerns about using AI for autonomous weapons, we will destroy them permanently”. What the hell?
-
@rabois
Keith Rabois
on x
Yes.
-
@quastora
Trey Causey
on x
@stratechery I believe this post fundamentally misunderstands the options that are / were actually available to the government and to Anthropic in a way that is undemocratic. I highly recommend reading @deanwball's piece on this today for a more accurate picture. https://www.hype…
-
@ramez
Ramez Naam
on x
Coming back to this. No AI company can stop DOD from misusing AI, because it's simply too easy to pick up or buy a different model. But by making the issue public, Dario has called the attention of voters, the press, and Congress to the potential misuse of AI. That's the win.
-
@ramez
Ramez Naam
on x
The most important thing Dario did is get this issue in the news. At the end of the day, xAI will build a good enough model. Or Palantir can build a frontier model for a few hundred million. There are no technical moats here. The important thing is that the public and Congress
-
@reckless
Nilay Patel
on bluesky
Ben Thompson making a full-throated case for fascism here stratechery.com/2026/anthrop... [image]
-
@romitmehta.com
Romit Mehta
on bluesky
This is the kind of unnecessary rationalization of tech by Ben that prompted me to not renew his newsletter last year. This is nuts, and this is not the first time he has written such a thing. [embedded post]
-
@rusty.todayintabs.com
Rusty Foster
on bluesky
Earlier in the piece, he says that international law is “fake.” It doesn't get much more cynical and amoral than this. I haven't checked in on Ben in a while but this is straightforward Nazi thinking. “Might makes right and only violent power is real.” [embedded post]
-
@lopatto
Elizabeth Lopatto
on bluesky
the contortions here are very funny if you're familiar with (a) ben's stance on other tech cos and (b) his objections to antitrust action. do we think he's aware that he's describing and endorsing fascism? stratechery.com/2026/anthrop...
-
r/WeTheFifth
r
on reddit
“No president in the modern era has ordered more military strikes against as many different countries as Donald Trump …
-
@undersecretaryf
@undersecretaryf
on x
For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safe…
-
@natseckatrina
@natseckatrina
on x
A lot of the concerns about the government's “all lawful use” language seem to stem from mistrust that government will follow the laws. At the same time, people believe that Anthropic took an important stand by insisting on contract language around their redlines. We cannot
-
@_nathancalvin
Nathan Calvin
on x
From reading this and Sam's tweet, it really seems like OpenAI *did* agree to the compromise that Anthropic rejected - “all lawful use” but with additional explanation of what the DOW means by all lawful use. The concerns Dario raised in his response would still apply here
-
@nabla_theta
Leo Gao
on x
the contract snippet from the openai dow blog post is so obviously just “all lawful use” followed by a bunch of stuff that is not really operative except as window dressing. the referenced DoD Directive 3000.09 basically says the DoD gets to decide when autonomous weapons systems
-
@shakeelhashim
Shakeel
on x
Lots of new, hard to follow details today about the OpenAI-Pentagon deal. Here's a roundup of the most important things about using commercially available data for surveillance on Americans. TL;DR: It seems the Pentagon wanted Anthropic to allow this, and Anthropic's refusal is
-
@thebasepoint
Joshua Batson
on x
For those wondering how mass domestic surveillance could be consistent with “all lawful use” of AI models, I recommend a declassified report from the ODNI on just how much can be done with commercially available data (CAI): “...to identify every person who attended a protest” [ima…
-
@justanotherlaw
Lawrence Chan
on x
OpenAI has released the language in their contract with the DoW, and it's exactly as Anthropic was claiming: “legalese that would allow those safeguards to be disregarded at will”. Note: the first paragraph doesn't say “no autonomous weapons”! It says “AI can't control [image]
-
@deredleritt3r
Prinz
on x
My thoughts on OpenAI's agreement with the DoD: On autonomous AI weapons: 1. “The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.” This says that OpenAI's models may not [image]
-
@shakeelhashim
Shakeel
on x
“We cannot say that the government cannot be trusted to interpret laws and contracts the right way, but also agree that Anthropic's policy redlines, in a contract, would have been effective.” This is a fair and good point.
-
@max_spero_
Max Spero
on x
Confirmation by the administration that the OpenAI contract contained the “all lawful use” wording that Anthropic rejected. Sam's wordsmithing aside, this opens the door for Trump or a future leader to authorize autonomous weapons or mass domestic surveillance with AI.
-
@emmyprobasco
Emmy Probasco
on x
There is a narrow but important gap between the “all lawful use” stipulation and “no autonomous weapons.” On the one hand, you could interpret these two positions as being essentially aligned. But it is more complicated than that. 🧵
-
@livgorton
Liv
on x
I feel like I am going insane and no one has read the articles. It appears that OpenAI has not brought about harmony and still has the “all lawful use” clause in their contract that was the issue in the first place? I think they've negotiated functionally the same contract they've
-
@shakeelhashim
Shakeel
on x
What we know about the OpenAI-DoW deal: OpenAI agreed to the terms Anthropic rejected. The terms include an “all lawful use” clause. The contract “references certain existing legal authorities” which the govt claims prove that domestic mass surveillance is already illegal.
-
@undersecretaryf
@undersecretaryf
on x
@tedlieu The axios article doesn't have much detail and this is DoW's decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective co…
-
@fortenforge
@fortenforge
on x
In fewer words: Anthropic doesn't trust the current administration's own interpretation of “all lawful use” and wanted consultation. OpenAI was more than happy to trust Hegseth and Trump with their technology.
-
@mattbgilliland
Matt Gilliland
on x
Anyone who thinks “all lawful use” + LLMs doesn't enable unprecedented mass surveillance is ignorant of the state of the law, the state of the technology, or both.
-
@gjmcgowan
George McGowan
on x
This is just “all lawful use” with extra words - no way the pentagon would have a huge hissy fit about these redlines and then immediately agree to a new contract with the same ones in it
-
@johnschulman2
John Schulman
on x
There's some discussion about whether contract terms ("all lawful use" vs more specific terms) vs safety stack (monitoring systems) are more effective as safeguards against AI misuse. It'd be useful for someone to game out how they'd hold up against historical incidents of
-
@arozenshtein
Alan Rozenshtein
on x
Very interesting procurement analysis.
-
@jtillipman
Jessica Tillipman
on x
Can AI companies restrict government use of their technology? They do it all the time. Whether and how depends on the acquisition pathway, contract type, and terms. My explainer: https://jessicatillipman.com/ ... #Anthropic #openai #pentagon #DoD #govcon
-
@codytfenwick
Cody Fenwick
on x
This is excellent — and this point is particularly interesting: [image]
-
@scaling01
@scaling01
on x
very good read on the Anthropic - OpenAI - DoW situation https://jessicatillipman.com/ ...
-
@jacquesthibs
Jacques
on x
Great article from someone who knows what they are talking about [image]
-
@bradrcarson
Brad Carson
on x
Signal-boosting an excellent explainer.
-
@andytseng
Andy Tseng
on bluesky
In case anyone's interested, @jtillipman.bsky.social has an excellent, detailed analysis of the current Anthropic-DoD-OpenAI contract debate - lots of nuances I wasn't aware of! — #USPol #AI #AIGovernance #Anthropic #DoD #OpenAI #GovernmentProcurement #GovCon #ProcurementPolicy…
-
@timkellogg.me
Tim Kellogg
on bluesky
A much more wholistic analysis of the OpenAI v Anthropic v DoW contract mess — OpenAI gives up contractual enforcement of redlines in exchange for architectural enforcement (supposedly) — the incident highlights severe problems with government procurement — jessicatillipman.c…
-
@ianbetteridge.com
Ian Betteridge
on bluesky
An actual expert on government contracts: “Contractors restrict the government's use of their products all the time.” — Ben Thompson: “this insistence on controlling the U.S. military, however, is fundamentally misaligned with reality” — I just don't know who to believe!
-
r/technology
r
on reddit
Senate's Wyden Pledges Battle Over Pentagon Ban on Anthropic
-
@arozenshtein
Alan Rozenshtein
on x
The Pentagon's legal position is so bad that they're either delusional or they never intended to win this lawsuit. [image]
-
@timfduffy
Tim Duffy
on x
New piece from @lawfare on the Anthropic supply chain risk designation, they argue DoW has no case. Here is their conclusion: https://www.lawfaremedia.org/ ... [image]
-
@charliebul58993
Charlie Bullock
on x
I agree with Alan's overall claim in this piece (Anthropic will very likely sue and win), but I disagree with his analysis on one important point. I think that Anthropic's case is actually even stronger than Alan's and Michael's analysis suggests, because the statutory “judicial
-
@bengoldhaber
Ben Goldhaber
on x
This was an excellent and in depth review of the relevant statutes and why the supply chain risk designation is unlikely to hold up
-
@chorzempamartin
Martin Chorzempa
on x
This @lawfare piece is worthwhile (and encouraging), suggesting DoW extreme attack on Anthropic is overreach that won't survive in court. We could end up with more narrow DoW policy that Anthropic can't be specifically a subcontractor on DoW contracts. https://www.lawfaremedia.or…
-
@deredleritt3r
Prinz
on x
This is a great article discussing Anthropic's likelihood of prevailing over the DoD's supply chain risk designation. TL;DR: Things are not looking too great for the DoD.
-
@arozenshtein
Alan Rozenshtein
on x
The short version: Section 3252 was built to address foreign adversaries infiltrating the IT supply chain. Congress designed it with minimal procedural protections precisely because it assumed the targets would be entities like Huawei and ZTE, not domestic companies in contract
-
@atabarrok
Alex Tabarrok
on x
“The specific actions Hegseth and Trump took have serious legal problems. The designation exceeds what the statute authorizes....required findings don't hold up...H's statements may have doomed the government's litigation posture before it even begins.” https://www.lawfaremedia.o…
-
@austinc3301
Agus
on x
Lawfare just released this detailed analysis by Endrias & Rozenshtein, finding that Hegseth's supply chain risk designation of Anthropic has serious legal problems on basically every level. Key takeaways 🧵
-
@petereharrell
Peter Harrell
on x
In fact, as of Mar. 2 @ 1pm ET, I can find no evidence that Hegseth has legally tried to designate Anthropic a supply chain risk, suggesting that maybe the government suspects its legal case is quite weak... (Government does seem to be terminating its own direct contracts).
-
@lawfare
@lawfare
on x
On Feb. 27, Defense Secretary Pete Hegseth designated Anthropic—the maker of the AI model Claude—a supply chain risk to national security. Michael Endrias and @ARozenshtein explain what this designation does and the legal challenges it will likely face. [image]
-
@christinayiotis
@christinayiotis
on x
“.. designation & the secondary boycott go beyond what Congress authorized. Section 3252 defines supply chain risk as .. risk that ‘an adversary’ may sabotage or subvert a covered system .. connotes an entity acting with hostile intent.” https://www.lawfaremedia.org/ ... @lawfa…
-
@arozenshtein
Alan Rozenshtein
on x
A deep dive in @lawfare on the many legal problems with the Pentagon's designation of Anthropic as a supply chain risk. [image]
-
r/supremecourt
r
on reddit
Pentagon's Anthropic Designation Won't Survive First Contact with Legal System
-
@deanwball
Dean W. Ball
on x
I don't understand why this is so hard for people. Of course for some it actually isn't and they are just defending “whatever my side does” for all the typical stupid reasons. I am a little disappointed to see who has now fallen into the idiot trap, however.
-
@artemisconsort
Hunter Ash
on x
People believe in process only and exactly to the extent they believe it will produce their desired outcomes.
-
@dan_jeffries1
Daniel Jeffries
on x
Limited government folks have always understood one thing better than everyone else: Your team is not always in charge. Fools imagine their team will be in charge forever and always. So whatever powers you give “the powers that be” get to be used by the other guys later. So
-
@beffjezos
@beffjezos
on x
EAs are for government control of AIs as long as it's their people in charge We have been calling them out for years and now the mask has come off Self-serving power-seeking disguised as virtue
-
@aaronscher
Aaron Scher
on x
It continues to be the case that nobody knows how to align a superintelligence. Therefore, no company should be allowed to create such an AI, no government should be allowed to create such an AI. The private sector cannot effectively create such prohibitions—governments could.
-
@antoniogm
Antonio García Martínez
on x
Yes, but the problem is that the reverse is also true.
-
@erikvoorhees
Erik Voorhees
on x
If your opinion on this topic depends on who the president is, you are actually the problem.
-
@alltheyud
Eliezer Yudkowsky
on x
Just to be real clear, I am and have been in favor of international treaties to shut down escalation toward superhuman AI. I am against government control of advanced AI. I am also against private control of advanced AI. It must not be allowed to exist.
-
@dkthomp
Derek Thompson
on x
Three things that can be true at the same time 1. That this WH has a commendable talent for turning public opinion against its actions. 2. That govt regulation of AI was always going to be a very tricky multi-stage muddle no matter who the president was in 2026. 3. That Pete
-
@morallawwithin
@morallawwithin
on x
Crazy how the “the government should ban torture” people are in favor of government control of torture, right until the government starts torturing people
-
@thezvi
Zvi Mowshowitz
on x
My biggest update on this was the willingness of DoW to make modifications in OAI's favor. Very positive and to me surprising.