
Chronicles

The story behind the story


Sources: OpenAI agreed to follow US laws that have allowed for mass surveillance in the past, and the DoD didn't budge from its demands over bulk data analysis

On Friday evening, amidst fallout from a standoff between the Department of Defense and Anthropic, OpenAI CEO Sam Altman announced …

The Verge · Hayden Field

Discussion

  • @aidan_mclau Aidan McLaughlin on x
    i personally don't think this deal was worth it
  • @haydenfield Hayden Field on x
    NEW: When OpenAI announced its Pentagon deal Friday night, people immediately challenged Sam Altman's claims. Why, they asked, would the DoD suddenly agree to red lines when it had said it would never do so? The answer, sources told me, is that it didn't. https://www.theverge.com…
  • @nathanpmyoung Nathan on x
    My current read is that OpenAI have said they maintained Anthropic's red lines without having done so. Not consistently candid. Anthropic senior staff assured people that RSPs were binding. They weren't. Not exactly candid either. Choose for yourself how bad each is.
  • @shakeelhashim Shakeel on x
    Very important piece that confirms what I've suspected the last couple days: “If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it's technically legal, then the US military can use OpenAI's technology to carry it out.” [image]
  • @thezvi Zvi Mowshowitz on x
    This is good and fully consistent with my reporting and understanding. OAI is permitting all legal use. OpenAI is trusting DoW to determine legality and relying on its safety stack to catch if DoW breaks their trust, and the red lines are only in highly illegal territory.
  • @shakeelhashim Shakeel on x
    Important context here is that OpenAI's team has DoW experience. And as @binarybits points out, they're likely well versed in playing word games. The statement OpenAI gave The Verge earlier today is a perfect example of this. [image]
  • @shakeelhashim Shakeel on x
    OpenAI says a bunch of safeguards in its contracts prevent its models from being used for these purposes. But the “protections” are flimsy at best, and OpenAI is yet to provide evidence of a clause that specifically prevents it. [image]
  • @garymarcus Gary Marcus on x
    “OpenAI agreed to follow laws that have allowed for mass surveillance in the past, while insisting they protect its red lines.” Translation? 1. OpenAI is full of shit 2. They may well turn over everything you ever typed into ChatGPT if the US government asks. Scoop from
  • @binarybits Timothy B. Lee on x
    Recall that the Obama Administration's view circa 2013 was that most of what Snowden revealed wasn't illegal or improper. They played a lot of word games to downplay and justify what a lot of ordinary people considered intrusive mass surveillance programs.
  • @binarybits Timothy B. Lee on x
    I don't understand why OpenAI thinks quoting this language would convince people concerned about autonomous weapon uses. “You can't do it in any case where it would be illegal” is another way of saying “you can do it if it's legal.” [image]
  • @binarybits Timothy B. Lee on x
    I think it's significant that @natseckatrina, who @sama tapped to help answer questions about the DoD deal on Twitter, led the Obama administration's “media and public policy response” to the Snowden disclosures, according to her LinkedIn. Explains a lot about their approach.
  • @binarybits Timothy B. Lee on x
    So of course when the government comes to OpenAI and says “don't worry we won't engage in mass surveillance,” they were inclined to believe them. Because one of their key decision-makers had been on the team that didn't think the Snowden revelations were problematic.
  • @tszzl Roon on x
    there is no contractual redline obligation or safety guardrail on earth that will protect you from a counterparty that has its own secret courts, zero day retention, full secrecy on the provenance of its data etc. every deal you make here is a trust relationship
  • @unmarredreality on x
    Every deal you ever make is a trust relationship. That's why there are conditions you simply don't agree to - especially when you're developing something with unprecedented scope and influence. Anthropic wisely declined such conditions. OpenAI agreed to them anyway.
  • @tszzl Roon on x
    @allTheYud thankfully if I quit my job no one will ever work on ai or weapons technology again. you would have advised oppenheimer himself to quit his job
  • @ciphergoth Paul Crowley on x
    OpenAI employees are already at a desperate barrel scraping stage of justifying continuing to work for Altman.
  • @thedextriarchy Adi Robertson on bluesky
    blinks in Edward Snowden [embedded post]
  • @seanokane Sean O'Kane on bluesky
    it's almost like this guy sam is a little slippery with the truth sometimes [embedded post]
  • @haydenfield Hayden Field on bluesky
    NEW: On Friday night when OpenAI announced its Pentagon deal, people immediately challenged Sam Altman's claims.  Why, they asked, would the DoD suddenly agree to red lines when it had clearly said it would never budge?  —  The answer, sources told me, is that it didn't.  —  www.…
  • @reckless Nilay Patel on bluesky
    Sam Altman got played and spun it like a win - @haydenfield.bsky.social has the scoop from a weekend's worth of reporting from inside the Pentagon AI negotiations. www.theverge.com/ai-artificia...  [image]
  • @druce.ai on bluesky
    Negotiations over a roughly $200 million Pentagon AI contract collapsed after Secretary Pete Hegseth labeled Anthropic a supply chain risk; OpenAI secured a competing framework deal the same night and Anthropic said it would sue.
  • @nktpnd Ankit Panda on bluesky
    “...the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data, people briefed on the negotiations said” www.nytimes.com/2026/03/01/t...
  • r/artificial on reddit
    How OpenAI caved to the Pentagon on AI surveillance
  • r/singularity on reddit
    How OpenAI caved to the Pentagon on AI surveillance |  The law doesn't say what Sam Altman claims it does.
  • r/technology on reddit
    How OpenAI caved to the Pentagon on AI surveillance |  The law doesn't say what Sam Altman claims it does
  • r/politics on reddit
    How OpenAI caved to the Pentagon on AI surveillance |  The law doesn't say what Sam Altman claims it does.
  • r/TrueReddit on reddit
    How OpenAI caved to the Pentagon on AI surveillance
  • @danprimack Dan Primack on x
    There is a valid argument for DoD not wanting to work w/ cos that used Claude in products being sold to DoD, given mission disagreement between the company and DoD. There is no good argument for banning Claude use at other, non-national security depts. Beyond spite.