Chronicles

The story behind the story


Sources: OpenAI agreed to follow US laws that have allowed for mass surveillance in the past, and the DOD didn't budge from its demands over analyzing bulk data

On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced …

The Verge · Hayden Field

Discussion

  • @haydenfield Hayden Field on x
    NEW: When OpenAI announced its Pentagon deal Friday night, people immediately challenged Sam Altman's claims. Why, they asked, would the DoD suddenly agree to red lines when it had said it would never do so? The answer, sources told me, is that it didn't. https://www.theverge.com…
  • @seanokane Sean O'Kane on bluesky
    it's almost like this guy sam is a little slippery with the truth sometimes [embedded post]
  • @reckless Nilay Patel on bluesky
    Sam Altman got played and spun it like a win - @haydenfield.bsky.social has the scoop from a weekend's worth of reporting from inside the Pentagon AI negotiations. www.theverge.com/ai-artificia...  [image]
  • @adamscochran Adam Cochran on x
Well, there is part of what the DoD-Anthropic dispute was about. The contract language around analyzing bulk commercial data and deanonymizing it matches with this data discussion: Since 2021 the Pentagon's DIA has been purchasing anonymized and harvested geolocation data that's
  • @tysonbrody Tyson Brody on x
    In 2021 the Pentagon's Defense Intelligence Agency told Senator Wyden it was purchasing geolocation data from commercial brokers harvested from cell phones and that it did not believe it needed a warrant to analyze American's data. This has to be part of what freaked Dario out. […
  • @deanwball Dean W. Ball on x
    This is quite a departure from the President's stated positions on AI and copyright: “there has been no bigger thief of American's public identity information en masse or creators' works than by Anthropic”
  • @uswremichael @uswremichael on x
As usual, more lies from @DarioAmodei. @AnthropicAI wanted language that would prevent all @DeptofWar employees from doing a LinkedIn search! Then, they wanted to stop DoW from using any *PUBLIC* database that would enable us to, e.g., recruit military service members or hire
  • @aaronbergman18 Aaron Bergman on x
    Our culture and expectations and norms around this stuff are hardly built for 2000 let alone 2026
  • @yashalevine Yasha Levine on x
    they have been doing this shit since day one — since that data was available for purchase.
  • @zeffmax Max Zeff on x
    Besides the fact that Emil deleted a version of this tweet criticizing Anthropic for training AI on copyrighted works, I am fascinated to learn the other ways the DoW uses AI to analyze social media posts. This also doesn't specifically refute the Atlantic's report?
  • @zyeine_art @zyeine_art on x
    This is the damn Under Secretary of War for the United States, why does it read like a man shouting at a pigeon in a carpark? (I'm in the UK, we don't have Wendy's but if we did, Sir.. this isn't one.) Perhaps it's unreasonable of me to expect the Under Secretary of WAR to
  • @dantalks1 @dantalks1 on x
    They're going to twist Dario's nipples at Anthropic until he squeals like a girl and surrenders to them.
  • @heidykhlaaf Dr Heidy Khlaaf on x
From our “Mind the Gap” paper (2024), a snippet I have come back to what seems like dozens of times at this point. [image]
  • @jdcmedlock James Medlock on x
    So we should probably pass some laws on this stuff...
  • @beffjezos @beffjezos on x
    The saga continues. Interesting rebuttal from DoW Chief
  • @tysonbrody Tyson Brody on x
    He's calling Dario a liar, but this reads like official confirmation by the Pentagon that they wanted to use Claude to analyze “publicly available information,” which is the legal term used for all bulk data purchase from brokers.
  • @tyler_m_john Tyler John on x
    One of the odd things about the whole Anthropic DoW thing is Anthropic getting framed as trying to control the military when the whole point of rejecting autonomous killings is that they do not want Claude to be in control of killing, they want military staff to stay in control
  • @andy_timm Andy Timm on x
My working assumption, though, is that the game here is dissembling around the definition of mass domestic surveillance (which I would consider this to be!) https://www.theatlantic.com/ ...
  • @tysonbrody Tyson Brody on x
    The Atlantic is reporting that Anthropic was told the Defense Department wanted to use clause to “analyze bulk data collected from Americans.” Did we know about an ongoing pentagon data harvesting program targeting citizens? Is this news???? [image]
  • @emeriticus Pedro L. Gonzalez on x
    This is what Palmer Luckey, Andreessen goons, Musk, and the rest were defending: strong arming Anthropic into allowing the government to use its tech to surveil and control Americans in a truly Orwellian fashion. These people are the genuine enemies of America and civilization.
  • @krustelkram Céline Keller on x
    Just remembered this little bit of history about “the good guys” from Anthropic [image]
  • @loomdoop Y Disassembler on x
    This is Anthropic. They also have whole team of lobbyists focused on taking over every aspect of civil government at local, state, and federal levels. [image]
  • @thezvi Zvi Mowshowitz on x
    You know you're on tilt when you have to take down your post because you went directly against the White House on copyright, and this is the SECOND, FIXED version. My lord. [image]
  • @isaiah_bb Isi Breen on x
    Anthropic also announced the Pentagon is using its products for things like “target selection” and it's like everyone already knew that was happening? What the hell!
  • @andy_timm Andy Timm on x
    If this Atlantic source on Anthropic/DoW negotiations is substantially correct, then OpenAI has a lot of evil/dishonesty to answer for. If there are actually binding, hard lines drawn to prevent this, folks like @boazbaraktcs and @natseckatrina should share them. [image]
  • @krishnanrohit Rohit on x
    As an exercise in critical thinking I wish anthropic and openai would swap their names for a week. A sort of forced post hoc intellectual turing test for every take.
  • @tysonbrody Tyson Brody on x
    Ok yeah this is a coordinated media push from Anthropic, the NYT is reporting the Pentagon specifically requested to use Claude to analyze “unclassified, commercial bulk data on Americans, such as geolocation and web browsing data” [image]
  • @mikeelgan Mike Elgan on x
    Word of the moment: “loopholey” https://www.theatlantic.com/ ...
  • @hshaban Hamza Shaban on x
    The reason the Pentagon ended talks with Anthropic, creating an opening for OpenAI, was Anthropic's refusal to allow its tech to be used on unclassified, commercial bulk data on Americans, like geolocation and web browsing data, the Atlantic reports: [image]
  • @zeffmax Max Zeff on x
The Atlantic reports that the Pentagon wanted to use Anthropic's AI for some type of surveillance of Americans. Given the ways some companies are already using AI today to surveil their own employees' emails, chats, etc., I find this kind of use to be particularly disturbing [im…
  • @krishnanrohit Rohit on x
    If the DoD buying commercially available data is obviously not okay we should ask what commercially available means, and the anti argument has to be there should be a law against collecting the data because of modern tech, otherwise this is just mood affiliated misdirection.
  • @ednewtonrex Ed Newton-Rex on x
    The US' ‘Under Secretary of War’ just admitted he thinks AI companies are stealing creators' work to build their models. He said: “There has been no bigger thief of [...] creators' works than by Anthropic (search for the lawsuits).” Shortly after he said this and AI Czar David [i…
  • @nxthompson @nxthompson on x
    The fight between Anthropic and Pentagon over mass surveillance matters more than most people think. 1) the data we share with AI models is insanely personal 2) the ability of AI models to de-anonymize, and find patterns across platforms, is profound. https://www.theatlantic.com/…
  • @alexlmiller Alex Miller on x
    There's a lot the government does that is “legal” but you may not want to be a part of
  • @theatlantic @theatlantic on x
    The deal between the Pentagon and Anthropic fractured in part over the proposed use of autonomous weapons. @andersen on the question OpenAI staff should now be asking Sam Altman about his company's new deal with the Pentagon: https://www.theatlantic.com/ ...
  • @joyoftech @joyoftech on bluesky
    Claude VS the Department of War www.geekculture.com/joyoftech/ jo...  [image]
  • @wikisteff @wikisteff on bluesky
    Yeah man.  They have too much data and no way to weaponize it.  This is exactly the same playbook as Bannon 2014 and his “incel army” of motherfuckers to take apart the US from the inside out.  —  Now they need to get you to vote Republican.  [embedded post]
  • @mikeriverso Mike Riverso on bluesky
    The thing that gets me is that you don't even need an LLM to do this.  You can in fact do it better with a database and actual statistical analysis.  [embedded post]
  • @jmberger.com J.M. Berger on bluesky
    No universe in which it's appropriate for the Pentagon to be collecting this information about Americans [embedded post]
  • @masnick.com Mike Masnick on bluesky
    Reading this, again, you get the sense that someone at Anthropic knows how the intel community misleads by using definitions of words that are different than everyone else believes.  And the people at OpenAI simply don't know or don't care about that.  [embedded post]
  • @fbajak Frank Bajak on bluesky
Best, most detailed technical explanation I've seen so far on the Anthropic-Hegseth dispute over military AI use - based on a source granted anonymity.  [embedded post]
  • @hlahmann Henning Lahmann on bluesky
    If this tired narrative of “this is bad mainly because it would affect american citizens” and the obvious implication of what would therefore *not* be objectionable doesn't make you want to burn the entire AI security industry to the ground then honestly idk what's the matter wit…
  • @davidryanmiller.com David Ryan Miller on bluesky
    Why does the Department of Defense want to analyze bulk data collected about Americans......................? [embedded post]
  • @joeuchill Joe Uchill on bluesky
    Something to think about while lawmakers complain they shouldn't be subject to subpoenas.  [embedded post]
  • @johnpanzer.com John Panzer on bluesky
    The PENTAGON wants to analyze bulk data about Americans?  —  Is there any way this is not wildly illegal? [embedded post]
  • @chathamharrison @chathamharrison on bluesky
    Sure is weird that Sam Altman thinks this is a great idea as long as he's the one doing it [embedded post]
  • @tonystark Tony Stark on bluesky
    Hooo boy.  There we go.  It was about domestic surveillance after all.  [embedded post]
  • @stahl @stahl on bluesky
It's so cool that no one is even bothering to be mad about the government analyzing "bulk data collected about Americans" they're just arguing about which tool they're gonna use to do it [embedded post]
  • @damonberes.com Damon Beres on bluesky
    New details on the dispute between the Pentagon and Anthropic; how the negotiations broke down, and a particular sticking point on AI in the cloud vs inside of edge systems. by @rossandersen.bsky.social / tip @techmeme.com
  • r/neoliberal r on reddit
    Inside Anthropic's Killer-Robot Dispute With the Pentagon
  • r/technology r on reddit
    Inside Anthropic's Killer-Robot Dispute With the Pentagon |  New details on precisely where the lines were drawn
  • @ramez Ramez Naam on x
    Coming back to this. No AI company can stop DOD from misusing AI, because it's simply too easy to pick up or buy a different model. But by making the issue public, Dario has called the attention of voters, the press, and Congress to the potential misuse of AI. That's the win.
  • @ramez Ramez Naam on x
    The most important thing Dario did is get this issue in the news. At the end of the day, xAI will build a good enough model. Or Palantir can build a frontier model for a few hundred million. There are no technical moats here. The important thing is that the public and Congress
  • @secwar @secwar on x
    Defense Secretary Pete Hegseth directs the DOD to designate Anthropic as a supply chain risk, barring military contractors from doing business with the company
  • r/WeTheFifth r on reddit
    “No president in the modern era has ordered more military strikes against as many different countries as Donald Trump …
  • @hausfath Zeke Hausfather on x
    Hmm... https://www.nytimes.com/... [image]
  • @editorialiste Andrew Nusca on x
    “In the end, the talks ... were undone by weeks of building frustration between men who had differing philosophies ... and who did not like one another.” https://www.nytimes.com/...
  • @sigalsamuel Sigal Samuel on x
    Important new reporting suggests Anthropic was actually super close to inking a deal with the Pentagon https://www.nytimes.com/... [image]
  • @danprimack Dan Primack on x
    “OpenAI's new deal with the Pentagon does not explicitly prohibit the collection of Americans' publicly available information — a sticking point that rival Anthropic says is crucial for ensuring domestic mass surveillance doesn't take place.” https://www.axios.com/...
  • @druce.ai @druce.ai on bluesky
    Negotiations over a roughly $200 million Pentagon AI contract collapsed after Secretary Pete Hegseth labeled Anthropic a supply chain risk; OpenAI secured a competing framework deal the same night and Anthropic said it would sue.
  • @benjaminjriley Benjamin Riley on bluesky
It's hard not to read this story and conclude that every single person and entity involved is moronic.  The US government is in the hands of fascist morons, and the Big Tech companies producing AI tools are led by delusional or self-serving morons.  —  We are in a very dangerous p…
  • @nktpnd Ankit Panda on bluesky
    “...the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data, people briefed on the negotiations said” www.nytimes.com/2026/03/01/t...
  • @sheeraf Sheera Frenkel on bluesky
    We have some new details on this in our story: www.nytimes.com/2026/03/01/t...
  • @jenzhuscott Jen Zhu on x
Worth a read - •Ex-OpenAI geopolitics lead: frontier AI labs' military policies are deliberately vague & changeable to preserve “optionality” •Anthropic's DoD standoff isn't the ethical win as portrayed. Dario is hardly a white knight - he's open to fully autonomous weapons if
  • @zeffmax Max Zeff on x
    I think this is the clearest eyed take I've read about what's happened between the AI industry and the Pentagon in the last 72 hours, with a chilling warning at the end. “The biggest losers in all of this are everyday people and civilians in conflict zones.”
  • @foomagemindset Kass Popper on x
    OpenAI's models can't be used to control drone swarms. Except they already are, as detailed in this post on the military use policies of AI companies. [image]
  • @sarahshoker Sarah Shoker on x
    I used to lead the Geopolitics Team at OpenAI. Today I published a few observations on frontier AI companies and their military usage policies from my perspective as a former employee and researcher active in the int'l security space. (Link below.) [image]
  • @sarahshoker Sarah Shoker on x
    My ask is pretty simple: Don't exploit ambiguous language to appease the public and your employees. (If the reaction on X is anything to go by, it's not working anyway.) https://sarahshoker.substack.com/ ...
  • @tcarmody Tim Carmody on bluesky
    Good read from a former OpenAI geopolitics person [embedded post]
  • @tszzl Roon on x
    there is no contractual redline obligation or safety guardrail on earth that will protect you from a counterparty that has its own secret courts, zero day retention, full secrecy on the provenance of its data etc. every deal you make here is a trust relationship
  • @tszzl Roon on x
    @allTheYud thankfully if I quit my job no one will ever work on ai or weapons technology again. you would have advised oppenheimer himself to quit his job
  • @unmarredreality @unmarredreality on x
    Every deal you ever make is a trust relationship. That's why there are conditions you simply don't agree to - especially when you're developing something with unprecedented scope and influence. Anthropic wisely declined such conditions. OpenAI agreed to them anyway.
  • @ciphergoth Paul Crowley on x
    OpenAI employees are already at a desperate barrel scraping stage of justifying continuing to work for Altman.
  • @miles_brundage Miles Brundage on x
    In light of what external lawyers and the Pentagon are saying, OpenAI employees' default assumption here should unfortunately be that OpenAI caved + framed it as not caving, and screwed Anthropic while framing it as helping them. Hope that is wrong + they get evidence otherwise
  • @peterwildeford Peter Wildeford on x
    I think it's important to circle back to Sam Altman here. About 20 hours ago people, including me, were applauding his moral clarity. But that moral clarity lasted barely half a day. OpenAI is now agreeing to be used for domestic surveillance and for lethal autonomous weapons,
  • @joshkale Josh Kale on x
    Everyone's saying OpenAI got the “same deal” Anthropic was banned for. Read the fine print. They're not the same: On weapons: Anthropic asked for “no fully autonomous weapons without human oversight” = a human involved in the decision. OpenAI's deal says “human responsibility
  • @thedextriarchy Adi Robertson on bluesky
    blinks in Edward Snowden [embedded post]
  • @haydenfield Hayden Field on bluesky
    NEW: On Friday night when OpenAI announced its Pentagon deal, people immediately challenged Sam Altman's claims.  Why, they asked, would the DoD suddenly agree to red lines when it had clearly said it would never budge?  —  The answer, sources told me, is that it didn't.  —  www.…
  • @zeffmax Max Zeff on x
    Powerful words from Dean Ball, former White House AI adviser. “That alone should make one thing clear: terms like this are not some ridiculous violation of the norms of defense contracting. Anyone attempting to convince you otherwise is misinformed or lying.”
  • @gallabytes @gallabytes on x
    very high quality post, an accounting of the true cost of the moment. an interesting question for this time of incredible leverage that I haven't seen enough ink on: what comes after the republic? what should governance even look like at the dawn of superintelligence?
  • @itsurboyevan Evan Armstrong on x
    Excellent—people seem to have forgotten that what makes America great is fundamental rights of speech, private property, and enforcement of contracts. I disagree with Dean on many (most?) AI policies, but without contract law that debate is meaningless.
  • @presidentlin @presidentlin on x
    Bars. Read to the end. My two favourite paragraphs [image]
  • @ericboehm87 Eric Boehm on x
    You really should read @deanwball's latest on the Trump administration's attempted corporate murder of Anthropic... [image]
  • @justinbullock14 Justin Bullock on x
    This week in AI policy, everything is different, and everything is the same. Brilliantly laid out by @deanwball, who has further increased my respect for him the last 5 days. Kudos, sir.
  • @mdudas Mike Dudas on x
    incredible piece on @AnthropicAI vs @DeptofWar via @deanwball https://www.hyperdimensional.co/ ... you simply can't pass laws anymore in america, which means regulators, courts and the president run the country [image]
  • @dkthomp Derek Thompson on x
    A quite brilliant essay on AI, the law, and the future of the republic. An upshot: If the US govt can go to any company, demand any contract language, and reserve the right to destroy your company if you have qualms, there is no such thing as private property rights in America.
  • @zdch Zac Hill on x
    One reason I am a State Capacity Maximalist (and why the work of e.g. @pahlkadot et al at Recoding America is so important to me) is that we just can't function as a Republic when the idea of passing legislation is at best a punchline. GOAT-tier essay from @deanwball today. [imag…
  • @eggerdc Andrew Egger on x
    Bracing stuff from @deanwball [image]
  • @boazbaraktcs Boaz Barak on x
    Extremely well put @deanwball ! A must read essay. My position is that: 1. Anthropic is a great company, people who work there care deeply about AI safety and the benefit of the U.S. Tagging it as a “supply chain risk” is a massive own-goal to American AI leadership. 2. The
  • @deredleritt3r Prinz on x
    Self-recommending, and a must-read. I agree with pretty much every word of this.
  • @ruark @ruark on x
    “I encourage you to avoid the assumption that “democratic” control—control “of the people, by the people, and for the people”—is synonymous with governmental control. The gap between these loci of control has always existed, but it is ever wider now.” https://www.hyperdimensional…
  • @thezvi Zvi Mowshowitz on x
    Now in a Twitter article, so you have no excuse. Read it. My stuff can wait.
  • @deanwball Dean W. Ball on x
    Clawed
  • @quastora Trey Causey on x
    @stratechery I believe this post fundamentally misunderstands the options that are / were actually available to the government and to Anthropic in a way that is undemocratic. I highly recommend reading @deanwball's piece on this today for a more accurate picture. https://www.hype…
  • @hamandcheese Samuel Hammond on x
    “At some point during my lifetime—I am not sure when—the American republic as we know it began to die.”
  • @rcbregman Rutger Bregman on x
    Wow, the lead author of Trump's AI Action Plan, Dean Ball, is calling out Pete Hegseth's mafia-style behavior toward Anthropic: “The fact that his shot is unlikely to be lethal (only very bloody) does not change the message sent to every investor and corporation in America: do [i…
  • @deanwball Dean W. Ball on x
    I have, for lack of a better phrase, “action plan mode,” and that part of me wants to be like, “just add a fucking clause to dfars you fools” and then I also have, uh, “macrohistorical literary analysis mode,” and I think this piece probably captures the two wolves pretty well
  • @andrewcurran_ Andrew Curran on x
    The old world is ending; more of it burns away every day. Things will never return to the way they were, not in two years, not in five, not ever. We have long since passed the threshold. This is an era of transformative change.
  • @alecstapp Alec Stapp on x
    This is not hyperbole, and every business leader in the country needs to recognize the stakes of what's happening: [image]
  • @deanwball Dean W. Ball on x
    I think this one needs no further explanation. [image]
  • @alecstapp Alec Stapp on x
    Really important point here: There were much, much less restrictive means available for the Department of War to achieve its stated ends. Instead, they are attempting to destroy one of our leading AI companies. [image]
  • @s_oheigeartaigh @s_oheigeartaigh on x
    This is essential reading. It's powerful, emotive, but also has exceptional clarity. This in particular is nail on head - “Even if I am right that we live in the “rapid capabilities growth” world, it will still be the case that the adoption of U.S. AI will be seen as especially
  • @tcarmody Tim Carmody on bluesky
    The means of production have been replaced by the terms of service.  [embedded post]
  • @arozenshtein Alan Rozenshtein on x
    Very interesting procurement analysis.
  • @jtillipman Jessica Tillipman on x
    Can AI companies restrict government use of their technology? They do it all the time. Whether and how depends on the acquisition pathway, contract type, and terms. My explainer: https://jessicatillipman.com/ ... #Anthropic #openai #pentagon #DoD #govcon
  • @codytfenwick Cody Fenwick on x
    This is excellent — and this point is particularly interesting: [image]
  • @scaling01 @scaling01 on x
    very good read on the Anthropic - OpenAI - DoW situation https://jessicatillipman.com/ ...
  • @jacquesthibs Jacques on x
    Great article from someone who knows what they are talking about [image]
  • @bradrcarson Brad Carson on x
    Signal-boosting an excellent explainer.
  • @timkellogg.me Tim Kellogg on bluesky
A much more holistic analysis of the OpenAI v Anthropic v DoW contract mess  — OpenAI gives up contractual enforcement of redlines in exchange for architectural enforcement (supposedly)  — the incident highlights severe problems with government procurement  —  jessicatillipman.c…
  • @andytseng Andy Tseng on bluesky
    In case anyone's interested, @jtillipman.bsky.social has an excellent, detailed analysis of the current Anthropic-DoD-OpenAI contract debate - lots of nuances I wasn't aware of!  —  #USPol #AI #AIGovernance #Anthropic #DoD #OpenAI #GovernmentProcurement #GovCon #ProcurementPolicy…
  • @aidan_mclau Aidan McLaughlin on x
    i personally don't think this deal was worth it
  • @shakeelhashim Shakeel on x
    Important context here is that OpenAI's team has DoW experience. And as @binarybits points out, they're likely well versed in playing word games. The statement OpenAI gave The Verge earlier today is a perfect example of this. [image]
  • @shakeelhashim Shakeel on x
    OpenAI says a bunch of safeguards in its contracts prevent its models from being used for these purposes. But the “protections” are flimsy at best, and OpenAI is yet to provide evidence of a clause that specifically prevents it. [image]
  • @shakeelhashim Shakeel on x
    In the last few days, OpenAI and its executives have claimed that its DoW deal prevents its models being used for mass domestic surveillance. As I write in a lengthy explainer for @ReadTransformer today, that appears to be misleading at best. [image]
  • @garymarcus Gary Marcus on x
    BREAKING: “OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting they protect its red lines.” Translation? 1. OpenAI is full of shit 2. They may well turn over everything you ever typed into ChatGPT if the US government asks.
  • @thezvi Zvi Mowshowitz on x
    This is good and fully consistent with my reporting and understanding. OAI is permitting all legal use. OpenAI is trusting DoW to determine legality and relying on its safety stack to catch if DoW breaks their trust, and the red lines are only in highly illegal territory.
  • @shakeelhashim Shakeel on x
    Very important piece that confirms what I've suspected the last couple days: “If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it's technically legal, then the US military can use OpenAI's technology to carry it out.” [image]
  • @binarybits Timothy B. Lee on x
    Recall that the Obama Administration's view circa 2013 was that most of what Snowden revealed wasn't illegal or improper. They played a lot of word games to downplay and justify what a lot of ordinary people considered intrusive mass surveillance programs.
  • @binarybits Timothy B. Lee on x
    I don't understand why OpenAI thinks quoting this language would convince people concerned about autonomous weapon uses. “You can't do it in any case where it would be illegal” is another way of saying “you can do it if it's legal.” [image]
  • @binarybits Timothy B. Lee on x
    I think it's significant that @natseckatrina, who @sama tapped to help answer questions about the DoD deal on Twitter, led the Obama administration's “media and public policy response” to the Snowden disclosures, according to her LinkedIn. Explains a lot about their approach.
  • @binarybits Timothy B. Lee on x
    So of course when the government comes to OpenAI and says “don't worry we won't engage in mass surveillance,” they were inclined to believe them. Because one of their key decision-makers had been on the team that didn't think the Snowden revelations were problematic.
  • r/TrueReddit r on reddit
    How OpenAI caved to the Pentagon on AI surveillance
  • @ericlevitz Eric Levitz on x
    It's really bizarre to see a bunch of ostensibly pro-market, right-leaning tech guys argue, “A private company asserting the right to decide what contracts it enters into is antithetical to democratic government” [image]
  • @justjoshinyou13 Josh You on x
    @stratechery This conflates multiple senses of control/power. By vetoing some government uses of Claude, Anthropic is not arrogating to itself the ability or right to use Claude for autonomous weapons or mass domestic surveillance.
  • @kellylsims Kelly Sims on x
    “What concerns me about Amodei and Anthropic in particular is the consistent pattern of being singularly focused on being the one winner with all of the power, with limited consideration of how everyone else may react to that situation.” This is a thoughtful piece on all this.
  • @jeremiahdjohns Jeremiah Johnson on x
    @stratechery This is one of the worst things I've read from you, and seems like obvious nonsense. “AI is as dangerous as nuclear weapons, which is why if a company expresses concerns about using AI for autonomous weapons, we will destroy them permanently”. What the hell?
  • @rabois Keith Rabois on x
    Yes.
  • @uswremichael @uswremichael on x
Great article about the democratic process determining our nation's fate rather than a single tech founder overriding our leaders.
  • @irl_danb Dan on x
Ben Thompson, as always, lays out the reality more clearly than I could have, despite my attempts. By Dario's own words, he's building something akin to nukes; he's simultaneously challenging the US government's authority to decide how to wield said power. As much as I like [image]
  • @taylorlorenz Taylor Lorenz on x
    And someone was just claiming this entire thing wasn't related to laws like KOSA yesterday. We need the left to wake tf up and start fighting these mass surveillance laws being pushed under the guise of child safety asap
  • r/politics r on reddit
    How OpenAI caved to the Pentagon on AI surveillance |  The law doesn't say what Sam Altman claims it does.
  • @aclu.org @aclu.org on bluesky
    The Department of Defense is buying up our data and seeking to use powerful AI systems to amass information about our private lives without a warrant.  —  That's the definition of Big Brother surveillance, and it's unconstitutional.
  • r/singularity r on reddit
    How OpenAI caved to the Pentagon on AI surveillance |  The law doesn't say what Sam Altman claims it does.
  • r/technology r on reddit
    How OpenAI caved to the Pentagon on AI surveillance |  The law doesn't say what Sam Altman claims it does
  • @nathanpmyoung Nathan on x
    My current read is that OpenAI have said they maintained Anthropic's red lines without having done so. Not consistently candid. Anthropic senior staff assured people that RSPs were binding. They weren't. Not exactly candid either. Choose for yourself how bad each is.
  • @danprimack Dan Primack on x
    There is a valid argument for DoD not wanting to work w/ cos that used Claude in products being sold to DoD, given mission disagreement between the company and DoD. There is no good argument for banning Claude use at other, non-national security depts. Beyond spite.
  • @moskov.goodventures.org Dustin Moskovitz on bluesky
    “If this event contributed anything, it simply made the ongoing death more obvious and less deniable for me personally.  I consider the events of the last week a kind of death rattle of the old republic, the outward expression of a body that has thrown in the towel.”  —  Don't sk…
  • @timkellogg.me Tim Kellogg on bluesky
    Fascinating article.  It argues that the republic is already dead, and the DoW incident is merely the signal  —  www.hyperdimensional.co/p/clawed [image]
  • @packym Packy McCormick on x
    Ben Thompson with the best take on DOD v. Anthropic, which is basically: if you don't want the government to treat your technology like nuclear weapons, stop comparing your technology to nuclear weapons. Hype Tax. [image]
  • @benthompson Ben Thompson on x
    @EricLevitz I wasn't making a normative argument. Of course I think this is bad. I was pointing out what will inevitably happen with AI in reality
  • @reckless Nilay Patel on bluesky
    Ben Thompson making a full-throated case for fascism here stratechery.com/2026/anthrop...  [image]
  • @lopatto Elizabeth Lopatto on bluesky
    the contortions here are very funny if you're familiar with (a) ben's stance on other tech cos and (b) his objections to antitrust action.  do we think he's aware that he's describing and endorsing fascism?  stratechery.com/2026/anthrop...
  • @rusty.todayintabs.com Rusty Foster on bluesky
    Earlier in the piece, he says that international law is “fake.”  It doesn't get much more cynical and amoral than this.  I haven't checked in on Ben in a while but this is straightforward Nazi thinking.  “Might makes right and only violent power is real.”  [embedded post]
  • @tcarmody Tim Carmody on bluesky
    This makes it sound like Anthropic's funding might be revoked, which would be surprising — but that's not the case.  Investors are just worried about their ROI depending on how this supply chain risk designation plays out.  A nothing story.  [embedded post]
  • @undersecretaryf @undersecretaryf on x
    For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon
  • @natseckatrina @natseckatrina on x
    A lot of the concerns about the government's “all lawful use” language seem to stem from mistrust that government will follow the laws. At the same time, people believe that Anthropic took an important stand by insisting on contract language around their redlines. We cannot
  • @_nathancalvin Nathan Calvin on x
    From reading this and Sam's tweet, it really seems like OpenAI *did* agree to the compromise that Anthropic rejected - “all lawful use” but with additional explanation of what the DOW means by all lawful use. The concerns Dario raised in his response would still apply here
  • @shakeelhashim Shakeel on x
    Lots of new, hard to follow details today about the OpenAI-Pentagon deal. Here's a roundup of the most important things about using commercially available data for surveillance on Americans. TL;DR: It seems the Pentagon wanted Anthropic to allow this, and Anthropic's refusal is
  • @thebasepoint Joshua Batson on x
    For those wondering how mass domestic surveillance could be consistent with “all lawful use” of AI models, I recommend a declassified report from the ODNI on just how much can be done with commercially available information (CAI): “...to identify every person who attended a protest” [ima…
  • @nabla_theta Leo Gao on x
    the contract snippet from the openai dow blog post is so obviously just “all lawful use” followed by a bunch of stuff that is not really operative except as window dressing. the referenced DoD Directive 3000.09 basically says the DoD gets to decide when autonomous weapons systems
  • @justanotherlaw Lawrence Chan on x
    OpenAI has released the language in their contract with the DoW, and it's exactly as Anthropic was claiming: “legalese that would allow those safeguards to be disregarded at will”. Note: the first paragraph doesn't say “no autonomous weapons”! It says “AI can't control [image]
  • @max_spero_ Max Spero on x
    Confirmation by the administration that the OpenAI contract contained the “all lawful use” wording that Anthropic rejected. Sam's wordsmithing aside, this opens the door for Trump or a future leader to authorize autonomous weapons or mass domestic surveillance with AI.
  • @emmyprobasco Emmy Probasco on x
    There is a narrow but important gap between the “all lawful use” stipulation and “no autonomous weapons.” On the one hand, you could interpret these two positions as being essentially aligned. But it is more complicated than that. 🧵
  • @livgorton Liv on x
    I feel like I am going insane and no one has read the articles. It appears that OpenAI has not brought about harmony and still has the “all lawful use” clause in their contract that was the issue in the first place? I think they've negotiated functionally the same contract they've
  • @shakeelhashim Shakeel on x
    What we know about the OpenAI-DoW deal: OpenAI agreed to the terms Anthropic rejected. The terms include an “all lawful use” clause. The contract “references certain existing legal authorities” which the govt claims prove that domestic mass surveillance is already illegal.
  • @undersecretaryf @undersecretaryf on x
    @tedlieu The axios article doesn't have much detail and this is DoW's decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective co…
  • @deredleritt3r Prinz on x
    My thoughts on OpenAI's agreement with the DoD: On autonomous AI weapons: 1. “The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.” This says that OpenAI's models may not [image]
  • @johnschulman2 John Schulman on x
    There's some discussion about whether contract terms ("all lawful use" vs more specific terms) vs safety stack (monitoring systems) are more effective as safeguards against AI misuse. It'd be useful for someone to game out how they'd hold up against historical incidents of
  • @fortenforge @fortenforge on x
    In fewer words: Anthropic doesn't trust the current administration's own interpretation of “all lawful use” and wanted consultation. OpenAI was more than happy to trust Hegseth and Trump with their technology.
  • @shakeelhashim Shakeel on x
    “We cannot say that the government cannot be trusted to interpret laws and contracts the right way, but also agree that Anthropic's policy redlines, in a contract, would have been effective.” This is a fair and good point.
  • @mattbgilliland Matt Gilliland on x
    Anyone who thinks “all lawful use” + LLMs doesn't enable unprecedented mass surveillance is ignorant of the state of the law, the state of the technology, or both.
  • @gjmcgowan George McGowan on x
    This is just “all lawful use” with extra words - no way the pentagon would have a huge hissy fit about these redlines and then immediately agree to a new contract with the same ones in it
  • @sammcallister Sam Mcallister on x
    @aidan_mclau @scrollvoid This isn't true. Anthropic hasn't offered a “helpful-only” model without safeguards for NatSec use. Claude Gov is a custom model with extra training, including technical safeguards. (We've also had FDEs and researchers implementing it, and we run our own …
  • @benspringwater Ben Springwater on x
    I love @benthompson. He is my favorite tech commentator. I listen to @stratechery every day. But his justification for the US Govt seeking to destroy Anthropic is incredibly glib and misguided. AI :: nuclear weapons is sometimes a useful analogy but it's obviously an imperfect [i…
  • @deanwball Dean W. Ball on x
    @BearForce_Won as someone who has idolized ben since the days of “no, the iPhone is going to be resilient to commodification” (his beginning)—and obviously is operating in ben's shadow as a tech newsletter writer—I was disappointed with his piece today.
  • @secscottbessent Treasury Secretary Scott Bessent on x
    At the direction of @POTUS, the @USTreasury is terminating all use of Anthropic products, including the use of its Claude platform, within our department. The American people deserve confidence that every tool in government serves the public interest, and under President Trump
  • @chamath Chamath Palihapitiya on x
    This is an important moment for all companies: By picking only one model, you absorb that model maker's institutional biases and idiosyncrasies. If those deviate from your POV, you are taking on massive risk as we saw with the DoW this weekend. No real business should take