Sources: the Pentagon used Claude in its major air attack in Iran, hours after Trump declared that the federal government will end its use of Anthropic's tools
Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic …
Wall Street Journal
Related Coverage
- Inside the Pentagon's Fight to Use AI Any Way It Wants in Weapons and Surveillance Inc.com · Kevin Haynes
- CENTCOM Gives a Bombshell Update on Iran Strikes in New Briefing Townhall · Joseph Chalfant
- White House Moves to End Federal Use of Anthropic's Claude AI PYMNTS.com
- Pete Hegseth punishes company for trying to protect the privacy of its customers Lawyers, Guns & Money · Scott Lemieux
- Pentagon and Anthropic have until 5:01pm to reach a deal. Here's what they can't agree on Washington Examiner · Mike Brest
- The Pentagon's War on Anthropic The Power Law · Peter Wildeford
- Pentagon Used Anthropic's Claude AI During Iran Strike Hours After Trump Ordered Ban: Report Benzinga · Mohd Haider
- Trump banned Anthropic — hours later, US military used its Claude AI in Iran strikes: Report Livemint · Aman Gupta
- Anthropic Just Got Fired by the U.S. Government. It's the Best Thing That Ever Happened to Its Brand Inc · Jason Aten
- Trump Moves to Ban Anthropic From the US Government Wired · Will Knight
- Trump's furious response to Anthropic is as much about power as it is about AI safety Sky News · Tom Clarke
- From contract partner to security risk: The Anthropic-Pentagon dispute explained Moneycontrol
- Trump blacklists Anthropic, opening the door to Elon Musk and xAI MarketWatch · William Gavin
- Trump orders all federal agencies to cease using Anthropic Politico · Brendan Bordelon
- Trump orders federal agencies to stop using Anthropic as dispute escalates Al Jazeera
- Trump orders federal agencies to stop using Anthropic's AI after clash with Pentagon Los Angeles Times · Queenie Wong
- Blacklisted: Trump declares war on Anthropic The San Francisco Standard · Jacob Clemente
- Trump orders federal agencies to “immediately” stop using Anthropic AI tech, threatens “criminal consequences” against “radical left, woke company” DatacenterDynamics · Sebastian Moss
- Trump says he is banning all federal agencies from using Anthropic XDA Developers · Patrick O'Rourke
- Trump orders federal government to stop using Anthropic in dispute between AI firm and Pentagon Washington Times · Tom Howell Jr
- Trump Orders Anthropic Tech Out of Government Agencies Newser · Bob Cronin
- Trump Escalates AI Clash With Anthropic GovInfoSecurity.com · Chris Riotta
- Trump bans government use of Anthropic AI after Pentagon clash San Francisco Chronicle · Aidin Vaziri
- Trump Directs Federal Agencies To Stop Using Anthropic's AI The Information · Erin Woo
- Trump orders U.S. government to stop using Anthropic but gives Pentagon six months to phase it out amid standoff over AI use Fortune · Jason Ma
- Trump bans Anthropic from government use NBC News · Jared Perlo
- Trump orders every federal agency to stop using Anthropic an hour before Pentagon deadline Washington Examiner · Mike Brest
- Anthropic faces deadline on deal for Pentagon to use AI model UPI · Lisa Hornung
- Clearly the whole drama with Pentagon making a big deal of showing that they're trying to force AI companies to build autonomous AI killing machines and spy on citizens is completely manufactured. — Anthropic was always going to comply, and the goal is to just create a marketing campaign portraying them as heroically resisting. … @yogthos@social.marxist.network
- U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban — ~ “Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, Trump launched a major air attack in Iran with the help of those very same tools.” … @dalfen@mstdn.social
- US military used Anthropic in Iran strike despite ban order by Trump: WSJ Cointelegraph · Amin Haqshanas
- What Happens to Anthropic Now? — President Trump is terminating the government's relationship … The Atlantic · Matteo Wong
- Trump orders federal agencies to stop using Anthropic's AI technology CBS News · Melissa Quinn
- OpenAI announces Pentagon deal after Trump bans Anthropic NPR
- Claude became the #1 free app in the US App Store on Saturday, after DOD designated Anthropic a supply chain risk; it hovered in the top 20 for much of February CNBC · Jordan Novet
- Anthropic's Claude Tops Apple App Store Charts Day After Trump Administration Bars Agency Use Benzinga · Rounak Jain
- Dario Amodei says “we are patriotic Americans” and Anthropic fears some AI uses could clash with American values as AI's potential gets “ahead of the law” CBS News · Jo Ling
- Sam Altman Insists He Also Has Principles as Anthropic's Pentagon Stand Off Continues Gizmodo · AJ Dellinger
- OpenAI defends rival Anthropic against Pentagon ban, Sam Altman calls it an ‘extremely scary precedent’ Livemint · Aman Gupta
- OpenAI on Pentagon's clash with Anthropic: Here's all that Sam Altman said after signing the deal The Economic Times
- OpenAI's Sam Altman announces Pentagon deal with ‘technical safeguards’ TechCrunch · Anthony Ha
- OpenAI Says Hell Yea To Helping Government With ‘Fully Autonomous Weapons’ As Trump Bombs More Countries Kotaku · Zack Kotzer
- OpenAI CEO Sam Altman answers questions on Pentagon deal, accountability, and whether governments can ‘nationalise’ AI Moneycontrol
- OpenAI Makes Deal With Pentagon, Including Safeguards Anthropic Requested Before Ban SFist · Leanne Maxwell
- OpenAI CEO Sam Altman answers questions on new Pentagon deal: ‘This technology is super important’ Fox Business
- 5 big takeaways from Sam Altman's Saturday night AMA on OpenAI's Pentagon deal DNYUZ · Saul Loeb
- OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash New York Times · Cade Metz
- OpenAI reaches deal with Pentagon after Trump drops Anthropic UPI · Danielle Haynes
- Sam Altman Is Marketing OpenAI as America's Wartime AI Company Whether He Intends to or Not Gizmodo · Mike Pearl
- OpenAI to work with Pentagon after Anthropic dropped by Trump over company's ethics concerns The Guardian
- Pentagon reaches deal with OpenAI amid Anthropic beef The Hill · Julia Shapero
- OpenAI strikes deal with Pentagon hours after Trump admin bans Anthropic CNN · Hadas Gold
- Anthropic lost the battle, OpenAI won the war? Digit · Jayesh Shinde
- OpenAI strikes deal with Pentagon to use tech in ‘classified network’ Al Jazeera · Lyndal Rowlands
- Sam Altman's OpenAI Moves Ahead With Pentagon AI Deal After Anthropic Says No Blockonomi · Brenda Mary
- OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump CNBC
- Source: Sam Altman told employees the DOD is willing to let OpenAI build its own “safety stack” and won't force OpenAI to comply if its model refuses a task Fortune · Sharon Goldman
- Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic Slashdot
- Anthropic CEO Dario Amodei says ‘we are patriotic Americans’ committed to defending the U.S. but won't budge on ‘red lines’ Fortune · Jason Ma
- Anthropic to Department of Defense: Drop dead Computerworld · Steven Vaughan-Nichols
- Anthropic was right not to trust Pete Hegseth MS NOW · Hayes Brown
- Pentagon Casts Cloud of Doubt Over Anthropic's AI Business Bloomberg Law
- Warren accuses Trump, Hegseth of trying to ‘extort’ Anthropic into removing AI guardrails The Hill · Ryan Mancini
- Anthropic's Killer-Robot Dispute with The Pentagon Hacker News
- OpenAI shares more details about its agreement with the Pentagon TechCrunch · Anthony Ha
- OpenAI, creator of ChatGPT, makes its technology available to the Pentagon WWL-TV · Katrina Morgan
- ‘No ethics at all’: the ‘cancel ChatGPT’ trend is growing after OpenAI signs a deal with the US military TechRadar · David Nield
- How Pentagon turns Claude into America's most downloaded app Türkiye Today · Zehra Unlu
- OpenAI gives Pentagon AI model access after Anthropic dustup Hartford Courant
- OpenAI-Pentagon deal faces same safety concerns that plagued Anthropic talks Axios · Maria Curi
- The government's AI standoff could decide who really controls America's military tech Business Insider
- 🗞️ OpenAI sweeps in to ink deal with Pentagon as Anthropic is designated a ‘supply chain risk’ Rohan's Bytes · Rohan Paul
- OpenAI details layered protections in US defense department pact Reuters
- OpenAI-Pentagon deal highlights deeper conflict over who controls AI safeguards DigiTimes
- A Few Observations on AI Companies and Their Military Usage Policies fishbowlification · Sarah Shoker
- OpenAI Defends Pentagon Deal, Claims Safety Exceeds Anthropic's Bloomberg
- OpenAI strikes a deal with the Defense Department to deploy its AI models Engadget · Mariella Moon
- OpenAI agrees to deploy AI models on Pentagon network Tech in Asia · Diya Lal
- OpenAI just crossed the Rubicon. — After Anthropic's Pentagon relationship collapsed over surveillance and autonomous weapons concerns … Sergey Kochnev
- I've seen a narrative emerge this week that the only thing standing between Americans and the use of AI for mass domestic surveillance … Katrina Mulligan
- Sam Altman Reveals OpenAI's Urgent Shift To Classified Pentagon Projects Benzinga · Bibhu Pattnaik
- The Pentagon-OpenAI-Anthropic fallout comes down to three words: “all lawful use” The Decoder · Matthias Bastian
- OpenAI: Pentagon deal has stronger guardrails than Anthropic's Reuters
- OpenAI Signed the Pentagon Deal. Anthropic Wrote It. Implicator.ai · Marcus Schuler
- OpenAI strikes Pentagon deal with ‘safeguards’ Hürriyet Daily News
- OpenAI reaches AI agreement with Defense Dept. after Anthropic clash Mercury News
- OpenAI shares its contract language and ‘red lines’ in agreement with the Department of War DNYUZ
- OpenAI shares its contract language and ‘red lines’ in agreement with the Department of Defense Business Insider · Katherine Tangalakis-Lippert
- 13 thoughts on Anthropic, OpenAI and the Department of War Silver Bulletin · Nate Silver
- OpenAI gives Pentagon AI model access after Anthropic dustup The Japan Times
- Trump Boots San Francisco AI Firm From Feds As Pentagon Slaps Risk Label Originally Reported … · David Abrams
- OpenAI signs Pentagon AI deal after Trump orders Anthropic ban The Next Web · Cristian Dina
- OpenAI strikes deal with Pentagon following Claude blacklisting — Anthropic to challenge supply chain risk designation in court Tom's Hardware · Luke James
- Trump blacklists Anthropic - and OpenAI swoops in Dow Jones Newswires · William Gavin
- Pentagon moves to designate Anthropic as a supply-chain risk TechCrunch · Russell Brandom
- Hours after Pentagon bans Anthropic, OpenAI strikes defense deal Semafor · Reed Albergotti
- OpenAI signs deal with US Department of War to deploy AI models Nairametrics · Samson Akintaro
- Pentagon Switches AI Partners: OpenAI Replaces Anthropic After Security Dispute Blockonomi · Trader Edge
- As Pentagon Targets Anthropic, OpenAI Moves to Fill the Void The Information
- OpenAI wins defense contract hours after government ditches Anthropic Cointelegraph · Amin Haqshanas
- OpenAI secures Pentagon deal amid Anthropic “Supply Chain Risk” designation Neowin · Pradeep Viswanathan
- OpenAI Lands Pentagon Deal as Trump Blacklists Rival Anthropic Techstrong.ai · Jon Swartz
- Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute The Hacker News
- OpenAI strikes deal with US Department of War as Anthropic faces supply-chain risk threat Business Today · Arun Padmanabhan
- Anthropic CEO on “retaliatory and punitive” Pentagon action CBS News
- OpenAI announces new deal with Pentagon — including ethical safeguards Politico · Bob King
- Anthropic to take Trump's Pentagon to court over AI dispute Axios · Maria Curi
- Trump Bans Anthropic As Pentagon Reportedly Accepts OpenAI's Military AI Safeguards — Anthony Scaramucci, Ilya Sutskever, Ross Gerber Weigh In Benzinga · Ananya Gairola
- Anthropic calls supply chain risk designation ‘unprecedented,’ ‘legally unsound’ The Hill · Julia Shapero
- Anthropic to Challenge Any Supply Chain Risk Designation Bloomberg · Yi Wei Wong
- In a day where the news cycle is dominated by WAR - again - I want to voice my total support for Anthropic and Dario Amodei for standing … Stefano
- We do not think Anthropic should be designated as a supply chain risk Hacker News
- Our Agreement with the Department of War Hacker News
- US military reportedly used Claude in Iran strikes despite Trump's ban The Guardian · Ed Pilkington
- The US reportedly used Anthropic's AI for its attack on Iran, just after banning it Engadget · Jackson Chen
- Anthropic AI Aided U.S. Attack in Iran, Despite Trump Ban Inc.com · Kevin Haynes
- Is AI already killing people by accident? Marcus on AI · Gary Marcus
- Anthropic CEO Dario Amodei calls White House's actions “retaliatory and punitive” CBS News · Faris Tanyos
- The Pentagon's Claude Use in Iran Is a Reminder that Anthropic Never Objected to Military Use Gizmodo · Mike Pearl
- 🎯 Axios AM: He never saw it coming Axios · Mike Allen
- Pentagon Deploys Anthropic AI for Iran Strikes Despite Trump's Federal Ban Baller Alert · Iesha
- Downloads of Anthropic's Claude surge after Pentagon spat Semafor · J.D. Capelouto
Discussion
-
@nitasha
Nitasha Tiku
on bluesky
WSJ reporting that the U.S. used Claude for the air strikes in Iran. Centcom has been using Claude “for intelligence assessments, target identification and simulating battle scenarios” www.wsj.com/livecoverage... [image]
-
r/LocalLLaMA
r
on reddit
The U.S. used Anthropic AI tools during airstrikes on Iran
-
r/ClaudeAI
r
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
r/singularity
r
on reddit
While everyone is angry at OAI for accepting the DOD deal, Military has used Claude for its attack at Iran
-
r/ArtificialInteligence
r
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
@brianfagioli
Brian Fagioli
on x
@Techmeme Ayatollah used Copilot
-
@joshkale
Josh Kale
on x
Woah, it's now confirmed the US DID use Anthropic's Claude AI in its strikes on Iran They used Claude for: - Intelligence assessments - Target identification - Simulating battle scenarios This is the same AI that Trump banned 12 hours before the bombs fell. The same AI the [image…
-
@edzitron.com
Ed Zitron
on bluesky
lol even though they banned them the government used Claude anyway. Slop strategies for the Epic Bacon War. This could not have gone worse for Altman — www.wsj.com/livecoverage... [images]
-
@robertscotthorton
Scott Horton
on bluesky
Hegseth illegally uses Anthropic software in Iran War.
-
@histoftech
Jennifer Uncoolidge
on bluesky
“Within hours of declaring that the federal government will end its use of AI tools made by tech company Anthropic, Pres. Trump launched a major air attack in Iran with the help of those very same tools.” — U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban: — w…
-
r/fednews
r
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
r/BetterOffline
r
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
@kurtrisser
Kurt Risser
on bluesky
America and the world need more CEOs with the integrity of Dario Amodei. — Thanks for sticking by your principles, and doing the right thing! — www.cbsnews.com/news/pentago...
-
r/politics
r
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
r/ArtificialInteligence
r
on reddit
President Trump bans Anthropic from use in government systems
-
r/neoliberal
r
on reddit
President Trump bans Anthropic from use in government systems
-
r/law
r
on reddit
President Trump bans Anthropic from use in government systems
-
r/politics
r
on reddit
President Trump bans Anthropic from use in government systems
-
r/politicsinthewild
r
on reddit
President Trump ‘Bans’ Anthropic from use in Government Systems; OpenAI CEO says he shares Anthropic's “red lines”
-
r/NPR
r
on reddit
President Trump bans Anthropic from use in government systems
-
@sama
Sam Altman
on x
@Austen @wholemars No, we had some different ones. But our terms would now be available to them (and others) if they wanted.
-
@sama
Sam Altman
on x
@nummanali @TheRealAdamG Enforcing the SCR designation on Anthropic would be very bad for our industry and our country, and obviously their company. We said to the DoW before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escl…
-
@gmiller
Geoffrey Miller
on x
@sama I'll ask the key question, since nobody else will: In your quest to build Artificial Superintelligence, what's the maximum p(doom) you're willing to impose on all of our kids, without our consent?
-
@jeremymstamper
Jeremy Stamper
on x
@sama What would happen if you called it the DoD? I'm not suggesting you do that but I'm curious what would happen if.
-
@apples_jimmy
@apples_jimmy
on x
@sama Why the rush to sign the deal ? Obviously the optics don't look great
-
@sama
Sam Altman
on x
@captgouda24 I don't think this will happen. But of course if we are confident it's unconstitutional, we wouldn't follow it. The constitution is more important than any job, or staying out of jail, or whatever. In my experience, the people in our military are far more committed t…
-
@sama
Sam Altman
on x
@peterwildeford We deliver a system (including choosing what models to deploy), and they can use it bound by lawful ways, including laws and directives around autonomous weapons and surveillance. But we get to decide what system to build, and the DoW understands that there are lo…
-
@sama
Sam Altman
on x
@viralmuskmelon This is a complicated one we struggled with a lot, and until recently it was easier for us to just not have to think about it much and let other companies figure it out. We decided we will work with other allied nations, and we think a balance of power in the worl…
-
@mreiffy
@mreiffy
on x
@sama @Jack_Raines Why move forward with the DoW agreement now, after months of more cautious talks, and how confident are you that the technical / policy safeguards will hold up in a real high stakes military setting?
-
@sama
Sam Altman
on x
@AlexCVJ I value my liberty and safety, and yours. I believe that strong democracy, and a strong US in particular, is a very good thing for the world. The 16 year old me thought every country should just abolish their defense department at the same time. I wish he were right, but…
-
@provisionalidea
James Rosen-Birch
on x
In the farce of a thread, Sam finally and most clearly admits the only bounds on DoW are whatever they deem legal (which anyone who read the contract text already knew).
-
@viralmuskmelon
@viralmuskmelon
on x
@sama @sama Sam, with OpenAI now powering classified military ops, how do you square that with your original mission to benefit all of humanity—not just one side in global conflicts? Genuine ask
-
@sama
Sam Altman
on x
@mcbyrne Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authori…
-
@alexcvj
@alexcvj
on x
@sama How did you go from “a tool for the betterment of the human race” to “let's work with the department of WAR”?
-
@sama
Sam Altman
on x
@theo For a long time, we were planning to do non-classified work only. We thought the DoW clearly needed an AI partner, and doing classified work is clearly much more complex. We have said no to previous deals in classified settings that Anthropic took. We started talking with the …
-
@xw33bttv
Lex
on x
The absolute destruction of an industry first-mover in less than a year is unprecedented. It shouldn't be this easy to solely blame Sama... and yet, behind every blatant lie and poorly thought-out detrimental change sits his decision. It's tragic when you really think about it.
-
@sama
Sam Altman
on x
@tszzl Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that.
-
@captgouda24
Nicholas Decker
on x
@sama If the DoW gives you what you believe to be an unconstitutional order, do you refuse to follow it until the courts rule? Or do you do it until the courts bar it?
-
@captgouda24
Nicholas Decker
on x
@sama If the government comes back with a memo saying that, in their view, mass domestic surveillance is legal, do you do that? Do you do it until the courts bar it, or do you delay until the courts approve it? Second, would mass domestic surveillance be a lawful use right now?
-
@sama
Sam Altman
on x
@chatgpt21 I can't speak for them, but to speculate with the best understanding of the situation. First, I saw reporting that they were extremely close on a deal, and for much of the time both sides really wanted to reach one. I have seen what happens in tense negotiations when …
-
@chatgpt21
Chris
on x
@sama What was the core difference why you think the DoW accepted OpenAI but not Anthropic
-
@sama
Sam Altman
on x
@mreiffy @Jack_Raines The main reason for the rush was an attempt to de-escalate matters at a time when it felt like things could get extremely hot. I am confident in our team's ability to build a safe system with all of their tools—including policy and legal matters, but also ma…
-
@peterwildeford
Peter Wildeford
on x
@sama So I'm confused - maybe you can help. OpenAI is trying to claim simultaneously that (a) the contract allows “all lawful purposes” and (b) also that your red lines are fully protected. The way you bridge this is by saying the protections live in this “deployment architecture…
-
@sama
Sam Altman
on x
@DouthatNYT Yes; I think it is an extremely scary precedent and I wish they handled it a different way. I don't think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution.
-
@mattyglesias
Matthew Yglesias
on x
@sama 1. What kind of implicit or explicit threats did you receive from DOW before striking the deal? 2. If you received such threats, would you disclose them in public during a Twitter AMA? 3. If the answer to (2) is “no” (which of course it is) what's the point of this?
-
@peterwildeford
Peter Wildeford
on x
OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for “all lawful purposes” and (b) also that their red lines are fully protected. The way OpenAI bridges this is by saying the protections live in this “deployment architecture and safety
-
@nateberkopec
Nate Berkopec
on x
Sam would do good to remember that Hegseth thinks it's sedition when Sen Mark Kelly says “don't follow illegal orders.” Saying that your models will only be used to follow legal orders is the barest of fig leaves in the current administration.
-
@valhalla_dev
@valhalla_dev
on x
Oh come tf on dude this is the dumbest PR larp of all time who is falling for this [image]
-
@theo
@theo
on x
@sama How long has this conversation with DoW been going for? What was the reason for announcing so close to the deadline they gave Anthropic?
-
@sama
Sam Altman
on x
@mattyglesias 1. No explicit or implicit threats. In fact, I could tell that as of Weds, the DoW was genuinely surprised we were willing to consider. 2. I think I would, and it would be lost in the noise of the SCR stuff.
-
@sama
Sam Altman
on x
@gothburz We will deploy FDEs, and have cleared researchers.
-
@gothburz
Peter Girnus
on x
@sama On classified networks, your engineers won't have clearance to monitor how the model is used. Your safeguards are contractual, not architectural. How do you enforce a red line you can't see being crossed?
-
@tyler_m_john
Tyler John
on x
@sama Which of the following is true? a) the contract permits all lawful use, + therefore mass surveillance + autonomous weapons, which have no legal prohibition b) the contract has substantive red lines that constrain lawful use c) OpenAI has a controversial interpretation of th…
-
@mcbyrne
@mcbyrne
on x
@sama Will you turn off the tool if they violate the rules?
-
@douthatnyt
Ross Douthat
on x
@sama Does the precedent that the DoW is setting by effectively blacklisting Anthropic make you concerned about what any future dispute with the Pentagon would mean for your own company's independence and viability?
-
@tszzl
Roon
on x
@sama are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk? I find this part to be the most worrying out of distribution thing to happen this past week
-
@sama
Sam Altman
on x
@gmiller I have never known how to put an exact number on p(doom), to say nothing of how to think about the differential. I will say this: if I thought going to work every day made it less likely that we all continue to thrive into the future, I would retire and just hang with my…
-
@sama
Sam Altman
on x
Sam Altman says OpenAI reached an agreement with the DOD to deploy its models in DOD's classified network and asks DOD to extend those terms to all AI companies
-
r/NewsOfTheStupid
r
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
@garymarcus
Gary Marcus
on x
The race to shove AI into everything is grossly premature, because the tech fundamentally lacks reliability. Meanwhile, the chance that we will get straight answers is probably close to zero. (Altman, for his part, doesn't seem to care.)
-
@tyler_a_harper
Tyler Austin Harper
on x
Genuine question for people who might have a better grasp on how Claude is being used by the military than I do: WSJ says Claude was used for “target identification.” Is it possible that the bombing of the girls' school that left nearly 150 dead was an AI error or hallucination?
-
@tcarmody
Tim Carmody
on bluesky
WSJ cites sources who say US Central Command is using Claude “for intelligence assessments, target identification and simulating battle scenarios.” — To my knowledge — besides everything else wrong with this — plain old Claude is not trained to do any of those things. — www.w…
-
@patigallardo
@patigallardo
on bluesky
We don't know that an LLM had a hand in the horrific bombing of a girls school in Iran. — But if it did...
-
@jeffroushwriting
Jeff Roush
on bluesky
The current regime has determined that a company is a national security risk because it does NOT want its AI product used for mass surveillance on Americans. That is the Hegseth-Trump bright line: they demand an unfettered ability to spy on all of us. — www.npr.org/2026/02/27/…
-
@lulu.sheshed.rocks
Lulu
on bluesky
Our government floated the idea of invoking the Korean War era Defense Production Act to compel Anthropic to allow use of its tools...WTF.
-
@justinhendrix
Justin Hendrix
on bluesky
Claude goes to war in Iran: “Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.”
-
@nkalamb
Nathan Kalman-Lamb
on bluesky
The WSJ is reporting that AI, specifically Claude, was used in targeting for the attacks by the Epstein Empire. — That would mean the use of AI led directly to the massacre of 115 schoolchildren and 20 volleyball players. — www.wsj.com/livecoverage... [image]
-
@joshuafoust.com
Joshua Foust
on bluesky
The regime won't let you have clean hands if you work for them, even if they're punishing you. This was leaked on purpose, is uncomfortably close to confessing to a war crime, and highlights once again how tech's unthinking growth directly leads to dead kids.
-
@theatlantic
@theatlantic
on x
The deal between the Pentagon and Anthropic fractured in part over the proposed use of autonomous weapons. @andersen on the question OpenAI staff should now be asking Sam Altman about his company's new deal with the Pentagon: https://www.theatlantic.com/ ...
-
@chathamharrison
@chathamharrison
on bluesky
Sure is weird that Sam Altman thinks this is a great idea as long as he's the one doing it [embedded post]
-
@tonystark
Tony Stark
on bluesky
Hooo boy. There we go. It was about domestic surveillance after all. [embedded post]
-
@stahl
@stahl
on bluesky
It's so cool that no one is even bothering to be mad about the government analyzing "bulk data collected about Americans" they're just arguing about which tool they're gonna use to do it [embedded post]
-
@damonberes.com
Damon Beres
on bluesky
New details on the dispute between the Pentagon and Anthropic; how the negotiations broke down, and a particular sticking point on AI in the cloud vs inside of edge systems. by @rossandersen.bsky.social / tip @techmeme.com
-
Flux
Matthew Sheffield
on x
The Hegsethian jihad against Anthropic
-
@arozenshtein
Alan Rozenshtein
on x
These are NOT meaningful redlines. For example it only prohibits autonomous weapons “in any case where law, regulation, or Department policy requires human control.” But the relevant safeguard against autonomous weapons is a DOD directive that Hegseth can change at will! Also
-
@uswremichael
@uswremichael
on x
The DoW has always believed in safety and human oversight of all its weapons and defense systems and has strict comprehensive policies on that. Further, the DoW does not engage in any unlawful domestic surveillance with or without an AI system and always strictly complies with
-
@2plus2make5
Emma Pierson
on x
.@OpenAI is nothing without its people — many of whom are brilliant, ethical, and able to work anywhere. Please, guys — is this empowerment of authoritarians really what you want to be striving towards? Your talents are better-used elsewhere.
-
@natesilver538
Nate Silver
on x
The eagerness for OpenAI to sign the contract on the very night their rival got fired is likely to be a lot more revealing than the contract terms, which in any event are ambiguous and unlikely to be enforced by a court that gives a lot of deference to the executive.
-
@sama
Sam Altman
on x
@apples_jimmy It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good. If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a com…
-
@gothburz
Peter Girnus
on x
I work in government affairs at OpenAI. My job is federal partnerships. When an agency wants our models, I make sure the paperwork is beautiful. Paperwork is my love language. On my desk I have a framed quote that says “Policy Is Just Code That Runs on People.” I bought the
-
@alexolegimas
Alex Imas
on x
Anthropic was started when senior OpenAI researchers were concerned that the company was not doing enough around safety and alignment for the powerful tech they were building. So they started their own company, around principles that builders of AI models should do as much as
-
@theo
@theo
on x
I am disappointed in OpenAI's decision to work with the Department of War. The way DoW treated Anthropic stands against everything that makes America great. I know it's not this simple, but it feels super opportunistic in a way that doesn't sit right with me.
-
@captgouda24
Nicholas Decker
on x
I strongly suspect that the letter of the law no longer matters. What matters is whether the leadership at OpenAI will pull access if it is used illegally, and whether they have technical bars to illegal usage. Hegseth knew Anthropic has a backbone — what's that say about OAI?
-
@laneless_
Jai
on x
At least one of these is true: 1. OpenAI leadership doesn't know that the NSA is part of the DoD they just agreed to serve 2. They don't think the NSA spies on any domestic communications 3. They're profoundly dishonest
-
@sama
Sam Altman
on x
@captgouda24 We would not do that, because it violates the constitution. Also, I cannot overstate how much the DoW has been extremely aligned on this point. However, maybe this is the question you are really asking: what would we do if there were a constitutional amendment that m…
-
@tysonbrody
Tyson Brody
on x
Does the administration and all of its loudest cheerleaders on here endorse OpenAI's claim that it has the ability to terminate its contracted services if it decides the government is in violation of their agreement? How does that differ from complaints about Dario? [image]
-
@thezvi
Zvi Mowshowitz
on x
If you are an employee at OpenAI, get as much information and detail about the terms as possible. Read all of it. Run it by your lawyers and AIs. Decide whether this protects the things you care about and whether it was represented fairly. This here does not tell us enough.
-
@natseckatrina
@natseckatrina
on x
@uday_devops @sama it's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact
-
@rcbregman
Rutger Bregman
on x
Read: we bribed Trump for $25M, publicly supported Anthropic while we were conspiring with Hegseth, signed a deal full of legalese about our fake red lines while giving the regime what it wants, and then we threw Anthropic under the bus again. Such a despicable company.
-
@morallawwithin
@morallawwithin
on x
remember—you should do whatever the government wants, even things you think are immoral, because otherwise you're deciding what you can do instead of the government, which is undemocratic
-
@thdxr
Dax
on x
absolutely zero clarity right now
-
@petereharrell
Peter Harrell
on x
I understand why Anthropic did not agree to this language. I also get why OpenAI did agree. DoW/government should respect both choices. Just end the Anthropic contracts, and work with OpenAI. It's the broader retaliation and effort to harm Anthropic that is the problem.
-
@blackhc
Andreas Kirsch
on x
I'm speechless at OpenAI releasing that contract excerpt and acting as if there aren't gaping holes that could be exploited far beyond their stated “red lines.” I'm not a lawyer, but this is pretty obvious and common sense. (And to be clear: if Google had signed the same deal, [i…
-
@markvalorian
Mark Valorian
on x
This unfortunately says nothing. The US was willing to incur significant costs retrofitting the entire government with a new provider because Anthropic wouldn't give them something. They wouldn't do that just to get the same deal from someone else. OpenAI *must* be giving them
-
@sama
Sam Altman
on x
Three general things from this AMA: 1. There is more open debate than I thought there would be, at least in this part of Twitter, about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something
-
@stephenlcasper
Cas
on x
To get this straight, OpenAI is making a couple of pretty extraordinary claims (vaguely, in legalese): A. They have negotiated a deal with the DoW that will actually lead to better guardrails against mass surveillance and lethal AI weapons than what Anthropic wanted. B. They
-
@josheakle
Joshua Reed Eakle
on x
Sam Altman saw the 700K+ users drop his platform in a single day and decided to pivot his PR approach. 🤡
-
@manlikemishap
Pamela Mishkin
on x
the wildest part? If OAI actually wanted the redlines, they had the leverage to get them! pentagon not going to declare a SECOND American AI company a supply chain risk, could have held the line and forced real concessions and safety!
-
@gupgup12212657
GupGup
on x
I say this as someone who often stood against the vitriol lobbed at OAI for many years. I am done with OAI Never have I seen such a willfully gullible and irresponsibly incurious set of employees.
-
@pawelhuryn
Paweł Huryn
on x
I've just canceled my OpenAI subscription and turned down a collaboration with OpenAI. Some say Anthropic has lost. To me, they just earned something no contract can buy - trust. And something tells me that's not the end of the story.
-
@eggerdc
Andrew Egger
on x
It's remarkable that OpenAI is so explicitly claiming that its agreement upholds a “no autonomous weapons” redline when the text of the agreement so plainly does not.
-
@jacquesthibs
Jacques
on x
Claude's response: TLDR: OpenAI's red lines are real. The contract language enforcing them defers to laws and policies the Pentagon can rewrite. Every prohibition is conditional on the thing it's supposed to constrain. — This fits a pattern. Sam Altman's reputation —
-
@gjmcgowan
George McGowan
on x
This is just “all lawful use” with extra words - no way the pentagon would have a huge hissy fit about these redlines and then immediately agree to a new contract with the same ones in it
-
@boazbaraktcs
Boaz Barak
on x
The DoW does not spy on domestic communication of U.S. people (including via commercial collection) and to do so would be unlawful and profoundly un-American.
-
@neil_chilson
Neil Chilson
on x
In the reactions to this post, I see a lot of people concerned with the state of the current law on surveillance. I share those deep concerns. I am surprised, however, by how many people want to address those concerns by having the CEO of a private corporation set the rules.
-
@max_spero_
Max Spero
on x
“all lawful purposes” confirmed to be included in the contract. I sure hope we never have an executive order authorizing the use of fully autonomous weapons and AI-enabled mass domestic surveillance. [image]
-
@deredleritt3r
Prinz
on x
My thoughts on OpenAI's agreement with the DoD: On autonomous AI weapons: 1. “The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.” This says that OpenAI's models may not [image]
-
@krishnanrohit
Rohit
on x
A good question to ask is under what contractual provisions and safety mechanisms you would trust the counterparty. If the answer is “none of them”, which is totally fair, then that too is an answer. https://x.com/...
-
@benedictk__
Benedict Kerres
on x
If you followed the latest AI discussion (DoW): now that we released the below, it's clear we offered a workable solution with MORE guardrails and redlines.
-
@krishnanrohit
Rohit
on x
An observation of the Anthropic | OpenAI | DoW discussion is that many seem to think of a commercial contract like they think of AI alignment. A binding commitment that would prevent anyone from doing anything wrong with it after. It's wrong about alignment and it's wrong about
-
@trekedge
Daniel Steigman
on x
So in the end, OpenAI will be able to control and deploy the entire safety stack, with the ability to add or update classifiers at will. This is the kind of strong enforcement that's needed. A big win for all AI labs, including Anthropic.
-
@allinallnotbad
Samuel Roland
on x
Though I personally think this language is superior to what I suspect Anthropic was asking for, the legalese here (to my read), allows the DoW to modify at least the 3000.09 restriction (which is regulation, not law). I don't think a fair read of this is as stronger protections,
-
@kimmonismus
@kimmonismus
on x
Upon initial review, it appears that OpenAI has indeed achieved what Anthropic failed to do: a deal with the DoW under the following three rules: - No use of OpenAI technology for mass domestic surveillance. - No use of OpenAI technology to direct autonomous weapons systems. - [i…
-
@amasad
Amjad Masad
on x
Interesting: “We think our deployment has more guardrails than any previous agreement for classified AI deployments, including Anthropic's.”
-
@boazbaraktcs
Boaz Barak
on x
@TheZvi Had Anthropic “won” and got the conditions they wanted, or even under the original contract, would you have confidence that the DoW would not have been able to find lawyers that interpret these terms in any way they wanted? Usage policies are important, but without a safe…
-
@thezvi
Zvi Mowshowitz
on x
I could be wrong, but based on what I see here I do not think it will be difficult for DoW to find lawyers saying it can do pretty much whatever it wants, and that's all they will need. If there is additional language that fixes that, please do share it.
-
@polynoamial
Noam Brown
on x
For those following the DoW AI drama, I highly recommend reading this post explaining how @OpenAI approached the negotiations with the DoW. [image]
-
@darlingtondev
Mike Darlington
on x
@OpenAI “More guardrails than any previous agreement, including Anthropic's,” but Anthropic's agreement had guardrails that couldn't be overridden. Yours apparently has legalese that allows them to be disregarded at will. More guardrails means nothing if they're decorative.
-
@zeffmax
Max Zeff
on x
OpenAI is out with a blog on its pentagon agreement. Looks like there are some real carveouts in here around surveillance and autonomous weapons... curious how this compares to the agreement Anthropic was given! [image]
-
@masnick.com
Mike Masnick
on bluesky
OpenAI posted the terms of the deal. Reveals that it absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons. — open…
-
@timkellogg.me
Tim Kellogg
on bluesky
ah, i think i got too optimistic about OpenAI — so basically, Anthropic pushed back, OpenAI kept the channel warm with Greg Brockman's Trump donations and stepped in right at the moment the whole thing felt like it could never recover [embedded post]
-
@wildebees
Wessel van Rensburg
on bluesky
OpenAI is in serious reputation-washing mode. [embedded post]
-
@mshelton
Martin Shelton
on bluesky
OpenAI is saying, here are the laws that make this decision okay. Then they go on to list a series of laws that creative lawyers are taking advantage of to enact surveillance both internationally, and domestically. I'm not sure this is the kind of defense they think it is. open…
-
@hunesocial
Hune
on bluesky
In the hands of a far-right or authoritarian-leaning government, powerful AI can greatly amplify surveillance, repression, and propaganda, far beyond what older tech allowed. — I feel very uneasy about that scenario.
-
@kalihays
Kali Hays
on bluesky
With friends like these, who needs enemies [embedded post]
-
@alanrozenshtein.com
Alan Rozenshtein
on bluesky
These are NOT meaningful redlines. For example it only prohibits autonomous weapons “in any case where law, regulation, or Department policy requires human control.” But the relevant safeguard against autonomous weapons is a DOD directive that Hegseth can change at will! opena…
-
@matthew.flux.community
Matthew Sheffield
on bluesky
OpenAI has published a blog post that addresses its recent announcement of a contract with the U.S. Department of Defense. — It claims that the software deployments it is contracted to build are “cloud only,” but does not define what that means. Nor does it discuss API outputs…
-
@mshelton@mastodon.social
Martin
on mastodon
OpenAI is saying, here are the laws that make this decision okay. Then they go on to list a series of laws that creative lawyers are taking advantage of to enact surveillance both internationally, and domestically. I'm not sure this is the kind of defense they think it is. http…
-
r/technology
r
on reddit
Our agreement with the Department of War
-
r/codex
r
on reddit
OpenAI: “Our agreement with the Department of War” | February 28, 2026
-
r/ChatGPT
r
on reddit
Our agreement with the Department of War
-
r/OpenAI
r
on reddit
Our agreement with the Department of War
-
r/singularity
r
on reddit
OpenAI: Our agreement with the Department of War
-
@gauravkapadia
Gaurav Kapadia
on x
Governments should be able to choose their vendors based on their terms of service and capabilities. Deeming a company a security risk - and threatening their very existence - because you don't like their ToS is a very slippery slope. Wouldn't want XAI and OAI to be threatened by
-
@arthurconmy
Arthur Conmy
on x
There are many open questions on the current situation. But on the particular narrow point on stance, this seems a good sign: https://x.com/...
-
@chrisharihar
Chris Harihar
on x
OpenAI needs to stop being reactive and overexplaining every move. They also need to stop explicitly addressing what competitors are/seem to be doing. It's embarrassing.
-
@terronk
Lee Edwards
on x
Stand with free markets, competition, and American tech alongside OpenAI and Anthropic.
-
@captgouda24
Nicholas Decker
on x
Certainly, but one can say many things without working to effect them. One need not even call it lying.
-
@daniellefong
Danielle Fong
on x
is your position clear enough that it is itself a red line
-
@_nathancalvin
Nathan Calvin
on x
Good - in Sam's previous comments on CNBC, he only mentioned that using the DPA to force Anthropic was a bad idea, so I appreciate they are making clear this also applies to the SCR designation. Other companies should also state this position as clearly as possible.
-
@claudia_sahm
Claudia Sahm
on x
Put your money where your mouth is.
-
@openai
@openai
on x
Other AI labs have reduced or removed their safety guardrails and relied on usage policies as their primary safeguards in national security deployments. We think our approach better protects against unacceptable use. In our agreement, we protect our redlines through a
-
@openai
@openai
on x
Our agreement with the Department of War upholds our redlines: - No use of OpenAI technology for mass domestic surveillance. - No use of OpenAI technology to direct autonomous weapons systems. - No use of OpenAI technology for high-stakes automated decisions (e.g. systems such
-
@wildebees
Wessel van Rensburg
on bluesky
OpenAI trying to protect its reputation [embedded post]
-
r/politics
r
on reddit
OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash
-
r/technology
r
on reddit
Employees at Google and OpenAI support Anthropic's Pentagon stand in open letter
-
r/technology
r
on reddit
Pentagon moves to designate Anthropic as a supply-chain risk