Chronicles

The story behind the story


A look at the rights AI companies have in US government contracts, such as the “any lawful use” standard, amid the Anthropic-DOD dispute and the OpenAI-DOD deal


Jessica Tillipman

Discussion

  • @undersecretaryf @undersecretaryf on x
    For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to.  But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safe…
  • @natseckatrina @natseckatrina on x
    A lot of the concerns about the government's “all lawful use” language seem to stem from mistrust that government will follow the laws. At the same time, people believe that Anthropic took an important stand by insisting on contract language around their redlines. We cannot
  • @_nathancalvin Nathan Calvin on x
    From reading this and Sam's tweet, it really seems like OpenAI *did* agree to the compromise that Anthropic rejected - “all lawful use” but with additional explanation of what the DOW means by all lawful use. The concerns Dario raised in his response would still apply here
  • @nabla_theta Leo Gao on x
    the contract snippet from the openai dow blog post is so obviously just “all lawful use” followed by a bunch of stuff that is not really operative except as window dressing. the referenced DoD Directive 3000.09 basically says the DoD gets to decide when autonomous weapons systems
  • @shakeelhashim Shakeel on x
    Lots of new, hard to follow details today about the OpenAI-Pentagon deal. Here's a roundup of the most important things about using commercially available data for surveillance on Americans. TL;DR: It seems the Pentagon wanted Anthropic to allow this, and Anthropic's refusal is
  • @thebasepoint Joshua Batson on x
    For those wondering how mass domestic surveillance could be consistent with “all lawful use” of AI models, I recommend a declassified report from the ODNI on just how much can be done with commercially available data (CAI): “...to identify every person who attended a protest” [ima…
  • @justanotherlaw Lawrence Chan on x
    OpenAI has released the language in their contract with the DoW, and it's exactly as Anthropic was claiming: “legalese that would allow those safeguards to be disregarded at will”. Note: the first paragraph doesn't say “no autonomous weapons”! It says “AI can't control [image]
  • @deredleritt3r Prinz on x
    My thoughts on OpenAI's agreement with the DoD: On autonomous AI weapons: 1. “The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.” This says that OpenAI's models may not [image]
  • @shakeelhashim Shakeel on x
    “We cannot say that the government cannot be trusted to interpret laws and contracts the right way, but also agree that Anthropic's policy redlines, in a contract, would have been effective.” This is a fair and good point.
  • @max_spero_ Max Spero on x
    Confirmation by the administration that the OpenAI contract contained the “all lawful use” wording that Anthropic rejected. Sam's wordsmithing aside, this opens the door for Trump or a future leader to authorize autonomous weapons or mass domestic surveillance with AI.
  • @emmyprobasco Emmy Probasco on x
    There is a narrow but important gap between the “all lawful use” stipulation and “no autonomous weapons.” On the one hand, you could interpret these two positions as being essentially aligned. But it is more complicated than that. 🧵
  • @livgorton Liv on x
    I feel like I am going insane and no one has read the articles. It appears that OpenAI has not brought about harmony and still has the “all lawful use” clause in their contract that was the issue in the first place? I think they've negotiated functionally the same contract they've
  • @shakeelhashim Shakeel on x
    What we know about the OpenAI-DoW deal: OpenAI agreed to the terms Anthropic rejected. The terms include an “all lawful use” clause. The contract “references certain existing legal authorities” which the govt claims prove that domestic mass surveillance is already illegal.
  • @undersecretaryf @undersecretaryf on x
    @tedlieu The axios article doesn't have much detail and this is DoW's decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective co…
  • @fortenforge @fortenforge on x
    In fewer words: Anthropic doesn't trust the current administration's own interpretation of “all lawful use” and wanted consultation. OpenAI was more than happy to trust Hegseth and Trump with their technology.
  • @mattbgilliland Matt Gilliland on x
    Anyone who thinks “all lawful use” + LLMs doesn't enable unprecedented mass surveillance is ignorant of the state of the law, the state of the technology, or both.
  • @gjmcgowan George McGowan on x
    This is just “all lawful use” with extra words - no way the pentagon would have a huge hissy fit about these redlines and then immediately agree to a new contract with the same ones in it
  • @johnschulman2 John Schulman on x
    There's some discussion about whether contract terms ("all lawful use" vs more specific terms) vs safety stack (monitoring systems) are more effective as safeguards against AI misuse. It'd be useful for someone to game out how they'd hold up against historical incidents of
  • @arozenshtein Alan Rozenshtein on x
    Very interesting procurement analysis.
  • @jtillipman Jessica Tillipman on x
    Can AI companies restrict government use of their technology? They do it all the time. Whether and how depends on the acquisition pathway, contract type, and terms. My explainer: https://jessicatillipman.com/ ... #Anthropic #openai #pentagon #DoD #govcon
  • @codytfenwick Cody Fenwick on x
    This is excellent — and this point is particularly interesting: [image]
  • @scaling01 @scaling01 on x
    very good read on the Anthropic - OpenAI - DoW situation https://jessicatillipman.com/ ...
  • @jacquesthibs Jacques on x
    Great article from someone who knows what they are talking about [image]
  • @bradrcarson Brad Carson on x
    Signal-boosting an excellent explainer.
  • @andytseng Andy Tseng on bluesky
    In case anyone's interested, @jtillipman.bsky.social has an excellent, detailed analysis of the current Anthropic-DoD-OpenAI contract debate - lots of nuances I wasn't aware of!  —  #USPol #AI #AIGovernance #Anthropic #DoD #OpenAI #GovernmentProcurement #GovCon #ProcurementPolicy…
  • @timkellogg.me Tim Kellogg on bluesky
    A much more holistic analysis of the OpenAI v Anthropic v DoW contract mess  — OpenAI gives up contractual enforcement of redlines in exchange for architectural enforcement (supposedly)  — the incident highlights severe problems with government procurement  —  jessicatillipman.c…
  • @ianbetteridge.com Ian Betteridge on bluesky
    An actual expert on government contracts: “Contractors restrict the government's use of their products all the time.”  —  Ben Thompson: “this insistence on controlling the U.S. military, however, is fundamentally misaligned with reality”  —  I just don't know who to believe!