TEXXR

Chronicles

The story behind the story


The Anthropic-DOD skirmish is the first major public debate on control over frontier AI, and institutions behaved erratically, maliciously, and without clarity

On Anthropic and the Department of War  —  I.  —  A little more than a decade ago, I sat with my father and watched him die.

Hyperdimensional · Dean W. Ball

Discussion

  • @zeffmax Max Zeff on x
    Powerful words from Dean Ball, former White House AI adviser. “That alone should make one thing clear: terms like this are not some ridiculous violation of the norms of defense contracting. Anyone attempting to convince you otherwise is misinformed or lying.”
  • @gallabytes @gallabytes on x
    very high quality post, an accounting of the true cost of the moment. an interesting question for this time of incredible leverage that I haven't seen enough ink on: what comes after the republic? what should governance even look like at the dawn of superintelligence?
  • @itsurboyevan Evan Armstrong on x
    Excellent—people seem to have forgotten that what makes America great is fundamental rights of speech, private property, and enforcement of contracts. I disagree with Dean on many (most?) AI policies, but without contract law that debate is meaningless.
  • @presidentlin @presidentlin on x
    Bars. Read to the end. My two favourite paragraphs [image]
  • @ericboehm87 Eric Boehm on x
    You really should read @deanwball's latest on the Trump administration's attempted corporate murder of Anthropic... [image]
  • @justinbullock14 Justin Bullock on x
    This week in AI policy, everything is different, and everything is the same. Brilliantly laid out by @deanwball, who has further increased my respect for him the last 5 days. Kudos, sir.
  • @mdudas Mike Dudas on x
    incredible piece on @AnthropicAI vs @DeptofWar via @deanwball https://www.hyperdimensional.co/ ... you simply can't pass laws anymore in america, which means regulators, courts and the president run the country [image]
  • @dkthomp Derek Thompson on x
    A quite brilliant essay on AI, the law, and the future of the republic. An upshot: If the US govt can go to any company, demand any contract language, and reserve the right to destroy your company if you have qualms, there is no such thing as private property rights in America.
  • @zdch Zac Hill on x
    One reason I am a State Capacity Maximalist (and why the work of e.g. @pahlkadot et al at Recoding America is so important to me) is that we just can't function as a Republic when the idea of passing legislation is at best a punchline. GOAT-tier essay from @deanwball today. [imag…
  • @eggerdc Andrew Egger on x
    Bracing stuff from @deanwball [image]
  • @boazbaraktcs Boaz Barak on x
    Extremely well put @deanwball ! A must read essay. My position is that: 1. Anthropic is a great company, people who work there care deeply about AI safety and the benefit of the U.S. Tagging it as a “supply chain risk” is a massive own-goal to American AI leadership. 2. The
  • @deredleritt3r Prinz on x
    Self-recommending, and a must-read. I agree with pretty much every word of this.
  • @ruark @ruark on x
    “I encourage you to avoid the assumption that “democratic” control—control “of the people, by the people, and for the people”—is synonymous with governmental control. The gap between these loci of control has always existed, but it is ever wider now.” https://www.hyperdimensional…
  • @thezvi Zvi Mowshowitz on x
    Now in a Twitter article, so you have no excuse. Read it. My stuff can wait.
  • @deanwball Dean W. Ball on x
    Clawed
  • @quastora Trey Causey on x
    @stratechery I believe this post fundamentally misunderstands the options that are / were actually available to the government and to Anthropic in a way that is undemocratic. I highly recommend reading @deanwball's piece on this today for a more accurate picture. https://www.hype…
  • @hamandcheese Samuel Hammond on x
    “At some point during my lifetime—I am not sure when—the American republic as we know it began to die.”
  • @rcbregman Rutger Bregman on x
    Wow, the lead author of Trump's AI Action Plan, Dean Ball, is calling out Pete Hegseth's mafia-style behavior toward Anthropic: “The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do [i…
  • @deanwball Dean W. Ball on x
    I have, for lack of a better phrase, “action plan mode,” and that part of me wants to be like, “just add a fucking clause to dfars you fools” and then I also have, uh, “macrohistorical literary analysis mode,” and I think this piece probably captures the two wolves pretty well
  • @andrewcurran_ Andrew Curran on x
    The old world is ending; more of it burns away every day. Things will never return to the way they were, not in two years, not in five, not ever. We have long since passed the threshold. This is an era of transformative change.
  • @alecstapp Alec Stapp on x
    This is not hyperbole, and every business leader in the country needs to recognize the stakes of what's happening: [image]
  • @deanwball Dean W. Ball on x
    I think this one needs no further explanation. [image]
  • @alecstapp Alec Stapp on x
    Really important point here: There were much, much less restrictive means available for the Department of War to achieve its stated ends. Instead, they are attempting to destroy one of our leading AI companies. [image]
  • @s_oheigeartaigh @s_oheigeartaigh on x
    This is essential reading. It's powerful, emotive, but also has exceptional clarity. This in particular is nail on head - “Even if I am right that we live in the “rapid capabilities growth” world, it will still be the case that the adoption of U.S. AI will be seen as especially
  • @tcarmody Tim Carmody on bluesky
    The means of production have been replaced by the terms of service.  [embedded post]
  • @undersecretaryf @undersecretaryf on x
    For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon
  • @natseckatrina @natseckatrina on x
    A lot of the concerns about the government's “all lawful use” language seem to stem from mistrust that government will follow the laws. At the same time, people believe that Anthropic took an important stand by insisting on contract language around their redlines. We cannot
  • @_nathancalvin Nathan Calvin on x
    From reading this and Sam's tweet, it really seems like OpenAI *did* agree to the compromise that Anthropic rejected - “all lawful use” but with additional explanation of what the DOW means by all lawful use. The concerns Dario raised in his response would still apply here
  • @shakeelhashim Shakeel on x
    Lots of new, hard to follow details today about the OpenAI-Pentagon deal. Here's a roundup of the most important things about using commercially available data for surveillance on Americans. TL;DR: It seems the Pentagon wanted Anthropic to allow this, and Anthropic's refusal is
  • @thebasepoint Joshua Batson on x
    For those wondering how mass domestic surveillance could be consistent with “all lawful use” of AI models, I recommend a declassified report from the ODNI on just how much can be done with commercially available data (CAI): “...to identify ever person who attended a protest” [ima…
  • @nabla_theta Leo Gao on x
    the contract snippet from the openai dow blog post is so obviously just “all lawful use” followed by a bunch of stuff that is not really operative except as window dressing. the referenced DoD Directive 3000.09 basically says the DoD gets to decide when autonomous weapons systems
  • @justanotherlaw Lawrence Chan on x
    OpenAI has released the language in their contract with the DoW, and it's exactly as Anthropic was claiming: “legalese that would allow those safeguards to be disregarded at will”. Note: the first paragraph doesn't say “no autonomous weapons”! It says “AI can't control [image]
  • @max_spero_ Max Spero on x
    Confirmation by the administration that the OpenAI contract contained the “all lawful use” wording that Anthropic rejected. Sam's wordsmithing aside, this opens the door for Trump or a future leader to authorize autonomous weapons or mass domestic surveillance with AI.
  • @emmyprobasco Emmy Probasco on x
    There is a narrow but important gap between the “all lawful use” stipulation and “no autonomous weapons.” On the one hand, you could interpret these two positions as being essentially aligned. But it is more complicated than that. 🧵
  • @livgorton Liv on x
    I feel like I am going insane and no one has read the articles. It appears that OpenAI has not brought about harmony and still has the “all lawful use” clause in their contract that was the issue in the first place? I think they've negotiated functionally the same contract they've
  • @shakeelhashim Shakeel on x
    What we know about the OpenAI-DoW deal: OpenAI agreed to the terms Anthropic rejected. The terms include an “all lawful use” clause. The contract “references certain existing legal authorities” which the govt claims prove that domestic mass surveillance is already illegal.
  • @undersecretaryf @undersecretaryf on x
    @tedlieu The axios article doesn't have much detail and this is DoW's decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective co…
  • @deredleritt3r Prinz on x
    My thoughts on OpenAI's agreement with the DoD: On autonomous AI weapons: 1. “The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.” This says that OpenAI's models may not [image]
  • @arozenshtein Alan Rozenshtein on x
    Very interesting procurement analysis.
  • @jtillipman Jessica Tillipman on x
    Can AI companies restrict government use of their technology? They do it all the time. Whether and how depends on the acquisition pathway, contract type, and terms. My explainer: https://jessicatillipman.com/ ... #Anthropic #openai #pentagon #DoD #govcon
  • @codytfenwick Cody Fenwick on x
    This is excellent — and this point is particularly interesting: [image]
  • @scaling01 @scaling01 on x
    very good read on the Anthropic - OpenAI - DoW situation https://jessicatillipman.com/ ...
  • @johnschulman2 John Schulman on x
    There's some discussion about whether contract terms ("all lawful use" vs more specific terms) vs safety stack (monitoring systems) are more effective as safeguards against AI misuse. It'd be useful for someone to game out how they'd hold up against historical incidents of
  • @fortenforge @fortenforge on x
    In fewer words: Anthropic doesn't trust the current administration's own interpretation of “all lawful use” and wanted consultation. OpenAI was more than happy to trust Hegseth and Trump with their technology.
  • @shakeelhashim Shakeel on x
    “We cannot say that the government cannot be trusted to interpret laws and contracts the right way, but also agree that Anthropic's policy redlines, in a contract, would have been effective.” This is a fair and good point.
  • @mattbgilliland Matt Gilliland on x
    Anyone who thinks “all lawful use” + LLMs doesn't enable unprecedented mass surveillance is ignorant of the state of the law, the state of the technology, or both.
  • @jacquesthibs Jacques on x
    Great article from someone who knows what they are talking about [image]
  • @gjmcgowan George McGowan on x
    This is just “all lawful use” with extra words - no way the pentagon would have a huge hissy fit about these redlines and then immediately agree to a new contract with the same ones in it
  • @bradrcarson Brad Carson on x
    Signal-boosting an excellent explainer.
  • @timkellogg.me Tim Kellogg on bluesky
    A much more holistic analysis of the OpenAI v Anthropic v DoW contract mess  — OpenAI gives up contractual enforcement of redlines in exchange for architectural enforcement (supposedly)  — the incident highlights severe problems with government procurement  —  jessicatillipman.c…
  • @andytseng Andy Tseng on bluesky
    In case anyone's interested, @jtillipman.bsky.social has an excellent, detailed analysis of the current Anthropic-DoD-OpenAI contract debate - lots of nuances I wasn't aware of!  —  #USPol #AI #AIGovernance #Anthropic #DoD #OpenAI #GovernmentProcurement #GovCon #ProcurementPolicy…
  • @jenzhuscott Jen Zhu on x
    Worth a read - •Ex-OpenAI geopolitics lead: frontier AI labs' military policies r deliberately vague & changeable to preserve “optionality” •Anthropic's DoD standoff isn't the ethical win as portrayed. Dario is hardly a white knight - he's open to fully autonomous weapons if
  • @zeffmax Max Zeff on x
    I think this is the clearest eyed take I've read about what's happened between the AI industry and the Pentagon in the last 72 hours, with a chilling warning at the end. “The biggest losers in all of this are everyday people and civilians in conflict zones.”
  • @foomagemindset Kass Popper on x
    OpenAI's models can't be used to control drone swarms. Except they already are, as detailed in this post on the military use policies of AI companies. [image]
  • @sarahshoker Sarah Shoker on x
    I used to lead the Geopolitics Team at OpenAI. Today I published a few observations on frontier AI companies and their military usage policies from my perspective as a former employee and researcher active in the int'l security space. (Link below.) [image]
  • @sarahshoker Sarah Shoker on x
    My ask is pretty simple: Don't exploit ambiguous language to appease the public and your employees. (If the reaction on X is anything to go by, it's not working anyway.) https://sarahshoker.substack.com/ ...
  • @tcarmody Tim Carmody on bluesky
    Good read from a former OpenAI geopolitics person [embedded post]
  • @danprimack Dan Primack on x
    There is a valid argument for DoD not wanting to work w/ cos that used Claude in products being sold to DoD, given mission disagreement between the company and DoD. There is no good argument for banning Claude use at other, non-national security depts. Beyond spite.
  • @moskov.goodventures.org Dustin Moskovitz on bluesky
    “If this event contributed anything, it simply made the ongoing death more obvious and less deniable for me personally.  I consider the events of the last week a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.”  —  Don't sk…
  • @ericlevitz Eric Levitz on x
    It's really bizarre to see a bunch of ostensibly pro-market, right-leaning tech guys argue, “A private company asserting the right to decide what contracts it enters into is antithetical to democratic government” [image]
  • @ramez Ramez Naam on x
    Coming back to this. No AI company can stop DOD from misusing AI, because it's simply too easy to pick up or buy a different model. But by making the issue public, Dario has called the attention of voters, the press, and Congress to the potential misuse of AI. That's the win.
  • @justjoshinyou13 Josh You on x
    @stratechery This conflates multiple senses of control/power. By vetoing some government uses of Claude, Anthropic is not arrogating to itself the ability or right to use Claude for autonomous weapons or mass domestic surveillance.
  • @rabois Keith Rabois on x
    Yes.
  • @kellylsims Kelly Sims on x
    “What concerns me about Amodei and Anthropic in particular is the consistent pattern of being singularly focused on being the one winner with all of the power, with limited consideration of how everyone else may react to that situation.” This is a thoughtful piece on all this.
  • @ramez Ramez Naam on x
    The most important thing Dario did is get this issue in the news. At the end of the day, xAI will build a good enough model. Or Palantir can build a frontier model for a few hundred million. There are no technical moats here. The important thing is that the public and Congress
  • @irl_danb Dan on x
    Ben Thompson, as always, lays out the reality more clearly than I could have despite my attempts. By Dario's own words, he's building something akin to nukes; he's simultaneously challenging the US government's authority to decide how to wield said power. As much as I like [image]
  • @uswremichael @uswremichael on x
    Great article about the democratic process determining our nation's fate rather than a single tech founder overriding our leaders.
  • @jeremiahdjohns Jeremiah Johnson on x
    @stratechery This is one of the worst things I've read from you, and seems like obvious nonsense. “AI is as dangerous as nuclear weapons, which is why if a company expresses concerns about using AI for autonomous weapons, we will destroy them permanently”. What the hell?
  • @secwar @secwar on x
    Defense Secretary Pete Hegseth directs the DOD to designate Anthropic as a supply chain risk, barring military contractors from doing business with the company
  • @reckless Nilay Patel on bluesky
    Ben Thompson making a full-throated case for fascism here stratechery.com/2026/anthrop...  [image]
  • r/WeTheFifth on reddit
    “No president in the modern era has ordered more military strikes against as many different countries as Donald Trump …
  • @tcarmody Tim Carmody on bluesky
    This makes it sound like Anthropic's funding might be revoked, which would be surprising — but that's not the case.  Investors are just worried about their ROI depending on how this supply chain risk designation plays out.  A nothing story.  [embedded post]
  • @rusty.todayintabs.com Rusty Foster on bluesky
    Earlier in the piece, he says that international law is “fake.”  It doesn't get much more cynical and amoral than this.  I haven't checked in on Ben in a while but this is straightforward Nazi thinking.  “Might makes right and only violent power is real.”  [embedded post]
  • @lopatto Elizabeth Lopatto on bluesky
    the contortions here are very funny if you're familiar with (a) ben's stance on other tech cos and (b) his objections to antitrust action.  do we think he's aware that he's describing and endorsing fascism?  stratechery.com/2026/anthrop...
  • @timkellogg.me Tim Kellogg on bluesky
    Fascinating article.  It argues that the republic is already dead, and the DoW incident is merely the signal  —  www.hyperdimensional.co/p/clawed [image]
  • @sammcallister Sam Mcallister on x
    @aidan_mclau @scrollvoid This isn't true. Anthropic hasn't offered a “helpful-only” model without safeguards for NatSec use. Claude Gov is a custom model with extra training, including technical safeguards. (We've also had FDEs and researchers implementing it, and we run our own …