Source describes the failed Pentagon-Anthropic talks: through the end, the Pentagon wanted to use Anthropic's AI to analyze bulk data collected about Americans
Right up until the moment that Pete Hegseth moved to terminate the government's relationship with the AI company Anthropic …
The Atlantic Ross Andersen
Related Coverage
- Anthropic CEO Dario Amodei says ‘we are patriotic Americans’ committed to defending the U.S. but won't budge on ‘red lines’ Fortune · Jason Ma
- Anthropic to Department of Defense: Drop dead Computerworld · Steven Vaughan-Nichols
- Anthropic was right not to trust Pete Hegseth MS NOW · Hayes Brown
- Pentagon Casts Cloud of Doubt Over Anthropic's AI Business Bloomberg Law
- Warren accuses Trump, Hegseth of trying to ‘extort’ Anthropic into removing AI guardrails The Hill · Ryan Mancini
- Anthropic's Killer-Robot Dispute with The Pentagon Hacker News
- Claude hits #1 on the App Store as users rally behind Anthropic's government standoff 9to5Mac · Zac Hall
- Hegseth declares Anthropic a supply chain risk, barring military contractors from doing business with AI giant CBS News · Joe Walsh
- Pete Hegseth punishes company for trying to protect the privacy of its customers Lawyers, Guns & Money · Scott Lemieux
- Hegseth Designates Anthropic As Supply Chain Risk After Trump Bans Government Use Forbes
- US military reportedly used Claude in Iran strikes despite Trump's ban The Guardian · Ed Pilkington
- The US reportedly used Anthropic's AI for its attack on Iran, just after banning it Engadget · Jackson Chen
- Anthropic's Claude Tops Apple App Store Charts Day After Trump Administration Bars Agency Use Benzinga · Rounak Jain
- US military used Anthropic in Iran strike despite ban order by Trump: WSJ Cointelegraph · Amin Haqshanas
- Trump banned Anthropic — hours later, US military used its Claude AI in Iran strikes: Report Livemint · Aman Gupta
- Inside the Pentagon's Fight to Use AI Any Way It Wants in Weapons and Surveillance Inc.com · Kevin Haynes
- Trump orders federal agencies to stop using Anthropic's AI technology CBS News · Melissa Quinn
- What Happens to Anthropic Now? — President Trump is terminating the government's relationship … The Atlantic · Matteo Wong
- Trump's furious response to Anthropic is as much about power as it is about AI safety Sky News · Tom Clarke
- From contract partner to security risk: The Anthropic-Pentagon dispute explained Moneycontrol
- Trump blacklists Anthropic, opening the door to Elon Musk and xAI MarketWatch · William Gavin
- White House Moves to End Federal Use of Anthropic's Claude AI PYMNTS.com
- Claude became the #1 free app in the US App Store on Saturday, after DOD designated Anthropic a supply chain risk; it hovered in the top 20 for much of February CNBC · Jordan Novet
- Trump orders federal agencies to stop using Anthropic as dispute escalates Al Jazeera
- Trump orders federal agencies to stop using Anthropic's AI after clash with Pentagon Los Angeles Times · Queenie Wong
- U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban — ~ “Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, Trump launched a major air attack in Iran with the help of those very same tools.” … @dalfen@mstdn.social
- Clearly the whole drama with Pentagon making a big deal of showing that they're trying to force AI companies to build autonomous AI killing machines and spy on citizens is completely manufactured. — Anthropic was always going to comply, and the goal is just to create a marketing campaign portraying them as heroically resisting. … @yogthos@social.marxist.network
- OpenAI reveals more details about its agreement with the Pentagon TechCrunch · Anthony Ha
- OpenAI, creator of ChatGPT, makes its technology available to the Pentagon WWL-TV · Katrina Morgan
- ‘No ethics at all’: the ‘cancel ChatGPT’ trend is growing after OpenAI signs a deal with the US military TechRadar · David Nield
- How Pentagon turns Claude into America's most downloaded app Türkiye Today · Zehra Unlu
- OpenAI gives Pentagon AI model access after Anthropic dustup Hartford Courant
- OpenAI-Pentagon deal faces same safety concerns that plagued Anthropic talks Axios · Maria Curi
- A Few Observations on AI Companies and Their Military Usage Policies fishbowlification · Sarah Shoker
- The government's AI standoff could decide who really controls America's military tech Business Insider
- 🗞️ OpenAI sweeps in to ink deal with Pentagon as Anthropic is designated a ‘supply chain risk’ Rohan's Bytes · Rohan Paul
- OpenAI details layered protections in US defense department pact Reuters
- OpenAI-Pentagon deal highlights deeper conflict over who controls AI safeguards DigiTimes
- OpenAI Defends Pentagon Deal, Claims Safety Exceeds Anthropic's Bloomberg
- OpenAI strikes a deal with the Defense Department to deploy its AI models Engadget · Mariella Moon
- OpenAI agrees to deploy AI models on Pentagon network Tech in Asia · Diya Lal
- OpenAI just crossed the Rubicon. — After Anthropic's Pentagon relationship collapsed over surveillance and autonomous weapons concerns … Sergey Kochnev
- I've seen a narrative emerge this week that the only thing standing between Americans and the use of AI for mass domestic surveillance … Katrina Mulligan
- Sam Altman Reveals OpenAI's Urgent Shift To Classified Pentagon Projects Benzinga · Bibhu Pattnaik
- OpenAI's Sam Altman announces Pentagon deal with ‘technical safeguards’ TechCrunch · Anthony Ha
- OpenAI Says Hell Yea To Helping Government With ‘Fully Autonomous Weapons’ As Trump Bombs More Countries Kotaku · Zack Kotzer
- OpenAI defends rival Anthropic against Pentagon ban, Sam Altman calls it an ‘extremely scary precedent’ Livemint · Aman Gupta
- OpenAI CEO Sam Altman answers questions on Pentagon deal, accountability, and whether governments can ‘nationalise’ AI Moneycontrol
- OpenAI on Pentagon's clash with Anthropic: Here's all that Sam Altman said after signing the deal The Economic Times
- OpenAI Makes Deal With Pentagon, Including Safeguards Anthropic Requested Before Ban SFist · Leanne Maxwell
- OpenAI CEO Sam Altman answers questions on new Pentagon deal: ‘This technology is super important’ Fox Business
- 5 big takeaways from Sam Altman's Saturday night AMA on OpenAI's Pentagon deal DNYUZ · Saul Loeb
- OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash New York Times · Cade Metz
- OpenAI reaches deal with Pentagon after Trump drops Anthropic UPI · Danielle Haynes
- Sam Altman Is Marketing OpenAI as America's Wartime AI Company Whether He Intends to or Not Gizmodo · Mike Pearl
- OpenAI to work with Pentagon after Anthropic dropped by Trump over company's ethics concerns The Guardian
- Pentagon reaches deal with OpenAI amid Anthropic beef The Hill · Julia Shapero
- OpenAI strikes deal with Pentagon hours after Trump admin bans Anthropic CNN · Hadas Gold
- Anthropic lost the battle, OpenAI won the war? Digit · Jayesh Shinde
- OpenAI strikes deal with Pentagon to use tech in ‘classified network’ Al Jazeera · Lyndal Rowlands
- Sam Altman's OpenAI Moves Ahead With Pentagon AI Deal After Anthropic Says No Blockonomi · Brenda Mary
- OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump CNBC
- Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic Slashdot
- The Pentagon-OpenAI-Anthropic fallout comes down to three words: “all lawful use” The Decoder · Matthias Bastian
- OpenAI Signed the Pentagon Deal. Anthropic Wrote It. Implicator.ai · Marcus Schuler
- OpenAI reaches AI agreement with Defense Dept. after Anthropic clash Mercury News
- OpenAI strikes Pentagon deal with ‘safeguards’ Hürriyet Daily News
- OpenAI shares its contract language and ‘red lines’ in agreement with the Department of War DNYUZ
- OpenAI shares its contract language and ‘red lines’ in agreement with the Department of Defense Business Insider · Katherine Tangalakis-Lippert
- 13 thoughts on Anthropic, OpenAI and the Department of War Silver Bulletin · Nate Silver
- OpenAI: Pentagon deal has stronger guardrails than Anthropic's Reuters
- OpenAI signs Pentagon AI deal after Trump orders Anthropic ban The Next Web · Cristian Dina
- Trump Boots San Francisco AI Firm From Feds As Pentagon Slaps Risk Label Originally Reported … · David Abrams
- OpenAI gives Pentagon AI model access after Anthropic dustup The Japan Times
- Pentagon moves to designate Anthropic as a supply-chain risk TechCrunch · Russell Brandom
- OpenAI strikes deal with Pentagon following Claude blacklisting — Anthropic to challenge supply chain risk designation in court Tom's Hardware · Luke James
- Trump blacklists Anthropic - and OpenAI swoops in Dow Jones Newswires · William Gavin
- Hours after Pentagon bans Anthropic, OpenAI strikes defense deal Semafor · Reed Albergotti
- OpenAI signs deal with US Department of War to deploy AI models Nairametrics · Samson Akintaro
- As Pentagon Targets Anthropic, OpenAI Moves to Fill the Void The Information
- Pentagon Switches AI Partners: OpenAI Replaces Anthropic After Security Dispute Blockonomi · Trader Edge
- OpenAI wins defense contract hours after government ditches Anthropic Cointelegraph · Amin Haqshanas
- OpenAI secures Pentagon deal amid Anthropic “Supply Chain Risk” designation Neowin · Pradeep Viswanathan
- OpenAI Lands Pentagon Deal as Trump Blacklists Rival Anthropic Techstrong.ai · Jon Swartz
- OpenAI strikes deal with US Department of War as Anthropic faces supply-chain risk threat Business Today · Arun Padmanabhan
- Pentagon Designates Anthropic Supply Chain Risk Over AI Military Dispute The Hacker News
- Source: the DOD appears to have accepted OpenAI's safety red lines, which were similar to Anthropic's, to deploy OpenAI's tech in classified settings Axios · Maria Curi
- Anthropic CEO on “retaliatory and punitive” Pentagon action CBS News
- OpenAI announces new deal with Pentagon — including ethical safeguards Politico · Bob King
- Trump Bans Anthropic As Pentagon Reportedly Accepts OpenAI's Military AI Safeguards — Anthony Scaramucci, Ilya Sutskever, Ross Gerber Weigh In Benzinga · Ananya Gairola
- Anthropic calls supply chain risk designation ‘unprecedented,’ ‘legally unsound’ The Hill · Julia Shapero
- Anthropic to Challenge Any Supply Chain Risk Designation Bloomberg · Yi Wei Wong
- Trump Admin Hits AI Company Anthropic With Business-Crippling ‘Supply Chain Risk’ Designation. The National Pulse · William Upton
- In extraordinary move, Pentagon designates Anthropic a ‘supply chain risk’ to U.S. national security Washington Times · Ben Wolfgang
- In a day where the news cycle is dominated by WAR - again - I want to voice my total support for Anthropic and Dario Amodei for standing … Stefano
- We do not think Anthropic should be designated as a supply chain risk Hacker News
- Our Agreement with the Department of War Hacker News
- Dario Amodei says “we are patriotic Americans” and Anthropic fears some AI uses could clash with American values as AI's potential gets “ahead of the law” CBS News · Jo Ling
- Weekend Round-Up: Nvidia's Record Revenue, OpenAI's Pentagon Deal And More Benzinga · Ananya Gairola
- What to know about the clash between the Pentagon and Anthropic over military's AI use Associated Press
- Anthropic to take Trump's Pentagon to court over AI dispute Axios · Maria Curi
- Anthropic AI Aided U.S. Attack in Iran, Despite Trump Ban Inc.com · Kevin Haynes
- Is AI already killing people by accident? Marcus on AI · Gary Marcus
- Anthropic CEO Dario Amodei calls White House's actions “retaliatory and punitive” CBS News · Faris Tanyos
- Ted Lieu, Emil Michael and More Sound Off on OpenAI's New Agreement With the Department of War The Wrap · Alyssa Ray
- OpenAI Gives Pentagon AI Model Access After Anthropic Dustup Bloomberg Law
- Trump directs US agencies to toss Anthropic's AI iTnews
- ‘Never before publicly applied to an American company’: Anthropic to challenge Pentagon's ‘supply chain risk’ designation in court Business Today
- Trump blacklists Anthropic: Here's what being a “supply chain risk” means Axios · Julianna Bragg
- Anthropic-Pentagon Hiccup ‘Opens Door for OpenAI’: Nelson Bloomberg
- Claude hits No. 1 on App Store as ChatGPT users defect in show of support for Anthropic's Pentagon stance Business Insider · Lakshmi Varanasi
- Anthropic's Claude rises to No. 1 in the App Store following Pentagon dispute TechCrunch · Anthony Ha
- The ‘QuitGPT’ movement gets a surge of activity after OpenAI strikes a deal with the Pentagon XDA Developers · Simon Batt
- Anthropic's Claude tops US App Store despite defense scrutiny Tech in Asia · Aiko Gao Ishida
- Pentagon Used Anthropic's Claude AI During Iran Strike Hours After Trump Ordered Ban: Report Benzinga · Mohd Haider
- Anthropic vs. The Pentagon: what enterprises should do VentureBeat · Carl Franzen
- SF high-tech AI firm declared a supply chain risk to national security KTVU-TV
- Pentagon declares Anthropic a threat to national security Washington Post
- Anthropic Labeled a Supply Chain Risk, Banned from Federal Government Contracts Reason · Jack Nicastro
- Anthropic CEO Defies Pentagon Over AI Weapons Guardrails WinBuzzer · Markus Kasanmascheff
- OpenAI sweeps in to ink deal with Pentagon as Anthropic is designated a ‘supply chain risk’—an unprecedented action likely to crimp its growth Fortune · Jeremy Kahn
- AWS and OpenAI get stateful; ServiceNow goes to work Runtime · Tom Krazit
- Perplexity Computer wows, Karpathy kills vibe coding, and OpenAI replaces Anthropic at the Pentagon The New Stack · Matthew Burns
- Trump administration bans Anthropic, seemingly embraces OpenAI Computerworld · Cynthia Brumfield
- Real Despots Hijack Artificial Intelligence New York Times · Maureen Dowd
- OpenAI signs Pentagon deal for classified AI networks hours after Anthropic gets banned from federal agencies The Decoder · Matthias Bastian
- War Piggies — It rained here a lot yesterday, and then our power went down around sunset. Balloon Juice · Betty Cracker
- Anthropic Supply Chain Risk Designation Triggers Lawsuit Against Trump Administration Blockonomi · Brenda Mary
- Trump Bans Anthropic AI in Federal Agencies — Pentagon Flags Claude as Security Risk Cyber Security News · Guru Baran
- Anthropic to challenge Pentagon in court, hours after Trump orders ban on AI firm Times of India
- Former Trump AI Adviser Torches President's War on Anthropic: ‘Attempted Corporate Murder’ Mediaite · Michael Luciano
- Anthropic faces fallout across federal agencies from DOD clash FedScoop
- Trump orders the government to stop using Anthropic after standoff with the Pentagon Quartz
- Trump directs government to ‘immediately cease’ using Anthropic technology FCW · Frank Konkel
- The Pentagon's fight with Anthropic sparks fears in Silicon Valley and the Capitol of a fundamental shift in the balance of power between DC and the AI industry Politico · Brendan Bordelon
- Standing up when you believe something is wrong isn't anti-American. It's the most American thing you can do. Proud to work at a company that understands that. … Scott White
- I've worked at tech companies in Silicon Valley for over ten years across 6 companies now. I've never admired any CEO as much as Dario Amodei. … Charley Kamolpornwijit
- I'm so proud to work at Anthropic, particularly today. — Principles matter, period. — Read our stance: — https://lnkd.in/... Miguel Escamilla
- Never been more proud of where I work. Ali Winston
- Anthropic stood up for all Americans by refusing to support Trump's demands to use Claude AI for fully autonomous weapons and American citizen mass surveillance. … Bill Schmarzo
- Hard to put in words how proud I am of our leadership team. — https://lnkd.in/... Nick Lewis
- “First, we do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. … Riccardo Patana
- I'm so proud to be at Anthropic today. I have seen some incredible people here fight tirelessly to advance our national defense. … David Hershey
- “We held to our exceptions for two reasons. First, we do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. … Dustin L.
- Statement on the comments from Secretary of War Pete Hegseth Hacker News
- Anthropic's Claude Leaps to #2 on Apple's ‘Top Apps’ Chart After Pentagon Controversy Slashdot
- US Threatens Anthropic with ‘Supply-Chain Risk’ Designation. OpenAI Signs New War Department Deal Slashdot
- “All Lawful Use”: Much More Than You Wanted To Know Astral Codex Ten · Scott Alexander
- I am seeing a lot of calls to boycott OpenAI — and I support them. Gary Marcus
- The Pentagon's Claude Use in Iran Is a Reminder that Anthropic Never Objected to Military Use Gizmodo · Mike Pearl
- 🎯 Axios AM: He never saw it coming Axios · Mike Allen
- After DOD announced its designation of Anthropic as a supply chain risk on social media, contract experts weigh in on the ramifications for the company and its customers Wired
- Anthropic's origin story begins with a small group of people honoring their principles above corporate interests and personal gain … Cecilia Callas
- Pentagon Deploys Anthropic AI for Iran Strikes Despite Trump's Federal Ban Baller Alert · Iesha
- Anthropic's Claude grabs top spot in App Store after Trump's ban Engadget · Jackson Chen
- Anthropic's Claude Passes ChatGPT, Now #1, on Apple's ‘Top Apps’ Chart After Pentagon Controversy Slashdot
- Let's not forget the many, many brave whistleblowers and people targeted by tech execs — many of whom were women and people of color who didn't have $millions to fall back on — who've taken a stand on these issues over the years and prevented major harms. … @anildash · Anil Dash
- Downloads of Anthropic's Claude surge after Pentagon spat Semafor · J.D. Capelouto
- Anthropic vs. U.S. Department of War Luiza's Newsletter · Luiza Jarovsky, PhD
Discussion
-
@theatlantic
on x
The deal between the Pentagon and Anthropic fractured in part over the proposed use of autonomous weapons. @andersen on the question OpenAI staff should now be asking Sam Altman about his company's new deal with the Pentagon: https://www.theatlantic.com/ ...
-
@chathamharrison
on bluesky
Sure is weird that Sam Altman thinks this is a great idea as long as he's the one doing it [embedded post]
-
@tonystark
Tony Stark
on bluesky
Hooo boy. There we go. It was about domestic surveillance after all. [embedded post]
-
@stahl
on bluesky
It's so cool that no one is even bothering to be mad about the government analyzing "bulk data collected about Americans" they're just arguing about which tool they're gonna use to do it [embedded post]
-
@damonberes.com
Damon Beres
on bluesky
New details on the dispute between the Pentagon and Anthropic; how the negotiations broke down, and a particular sticking point on AI in the cloud vs inside of edge systems. by @rossandersen.bsky.social / tip @techmeme.com
-
@jmberger.com
J.M. Berger
on bluesky
No universe in which it's appropriate for the Pentagon to be collecting this information about Americans [embedded post]
-
@masnick.com
Mike Masnick
on bluesky
Reading this, again, you get the sense that someone at Anthropic knows how the intel community misleads by using definitions of words that are different than everyone else believes. And the people at OpenAI simply don't know or don't care about that. [embedded post]
-
@fbajak
Frank Bajak
on bluesky
Best most detailed technical explanation I've seen so far on the Anthropic-Hegseth dispute over military AI use - based on a source granted anonymity. [embedded post]
-
@hlahmann
Henning Lahmann
on bluesky
If this tired narrative of “this is bad mainly because it would affect american citizens” and the obvious implication of what would therefore *not* be objectionable doesn't make you want to burn the entire AI security industry to the ground then honestly idk what's the matter wit…
-
@joeuchill
Joe Uchill
on bluesky
Something to think about while lawmakers complain they shouldn't be subject to subpoenas. [embedded post]
-
@johnpanzer.com
John Panzer
on bluesky
The PENTAGON wants to analyze bulk data about Americans? — Is there any way this is not wildly illegal? [embedded post]
-
@garymarcus
Gary Marcus
on x
The race to shove AI into everything is grossly premature, because the tech fundamentally lacks reliability. Meanwhile, the chance that we will get straight answers is probably close to zero. (Altman, for his part, doesn't seem to care.)
-
@tyler_a_harper
Tyler Austin Harper
on x
Genuine question for people who might have a better grasp on how Claude is being used by the military than I do: WSJ says Claude was used for “target identification.” Is it possible that the bombing of the girls' school that left nearly 150 dead was an AI error or hallucination?
-
@brianfagioli
Brian Fagioli
on x
@Techmeme Ayatollah used Copilot
-
@joshkale
Josh Kale
on x
Woah, it's now confirmed the US DID use Anthropic's Claude AI in its strikes on Iran They used Claude for: - Intelligence assessments - Target identification - Simulating battle scenarios This is the same AI that Trump banned 12 hours before the bombs fell. The same AI the [image…
-
@nitasha
Nitasha Tiku
on bluesky
WSJ reporting that the U.S. used Claude for the air strikes in Iran. Centcom has been using Claude “for intelligence assessments, target identification and simulating battle scenarios” www.wsj.com/livecoverage... [image]
-
@tcarmody
Tim Carmody
on bluesky
WSJ cites sources who say US Central Command is using Claude “for intelligence assessments, target identification and simulating battle scenarios.” — To my knowledge — besides everything else wrong with this — plain old Claude is not trained to do any of those things. — www.w…
-
@patigallardo
on bluesky
We don't know that an LLM had a hand in the horrific bombing of a girls school in Iran. — But if it did...
-
@jeffroushwriting
Jeff Roush
on bluesky
The current regime has determined that a company is a national security risk because it does NOT want its AI product used for mass surveillance on Americans. That is the Hegseth-Trump bright line: they demand an unfettered ability to spy on all of us. — www.npr.org/2026/02/27/…
-
@lulu.sheshed.rocks
Lulu
on bluesky
Our government floated the idea of invoking the Korean War era Defense Production Act to compel Anthropic to allow use of its tools...WTF.
-
@justinhendrix
Justin Hendrix
on bluesky
Claude goes to war in Iran: “Within hours of declaring that the federal government will end its use of artificial-intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools.”
-
@nkalamb
Nathan Kalman-Lamb
on bluesky
The WSJ is reporting that AI, specifically Claude, was used in targeting for the attacks by the Epstein Empire. — That would mean the use of AI led directly to the massacre of 115 schoolchildren and 20 volleyball players. — www.wsj.com/livecoverage... [image]
-
@joshuafoust.com
Joshua Foust
on bluesky
The regime won't let you have clean hands if you work for them, even if they're punishing you. This was leaked on purpose, is uncomfortably close to confessing to a war crime, and highlights once again how tech's unthinking growth directly leads to dead kids.
-
@histoftech
Jennifer Uncoolidge
on bluesky
“Within hours of declaring that the federal government will end its use of AI tools made by tech company Anthropic, Pres. Trump launched a major air attack in Iran with the help of those very same tools.” — U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban: — w…
-
@robertscotthorton
Scott Horton
on bluesky
Hegseth illegally uses Anthropic software in Iran War.
-
@edzitron.com
Ed Zitron
on bluesky
lol even though they banned them the government used Claude anyway. Slop strategies for the Epic Bacon War. This could not have gone worse for Altman — www.wsj.com/livecoverage... [images]
-
@kurtrisser
Kurt Risser
on bluesky
America and the world need more CEOs with the integrity of Dario Amodei. — Thanks for sticking by your principles, and doing the right thing! — www.cbsnews.com/news/pentago...
-
r/fednews
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
r/NewsOfTheStupid
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
r/BetterOffline
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
r/politics
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
r/LocalLLaMA
on reddit
The U.S. used Anthropic AI tools during airstrikes on Iran
-
r/singularity
on reddit
While everyone is angry at OAI for accepting the DOD deal, Military has used Claude for its attack at Iran
-
r/ArtificialInteligence
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
r/ClaudeAI
on reddit
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban
-
Flux
Matthew Sheffield
on x
The Hegsethian jihad against Anthropic
-
@arozenshtein
Alan Rozenshtein
on x
These are NOT meaningful redlines. For example it only prohibits autonomous weapons “in any case where law, regulation, or Department policy requires human control.” But the relevant safeguard against autonomous weapons is a DOD directive that Hegseth can change at will! Also
-
@uswremichael
on x
The DoW has always believed in safety and human oversight of all its weapons and defense systems and has strict comprehensive policies on that. Further, the DoW does not engage in any unlawful domestic surveillance with or without an AI system and always strictly complies with
-
@sama
Sam Altman
on x
@apples_jimmy It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good. If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a com…
-
@2plus2make5
Emma Pierson
on x
.@OpenAI is nothing without its people — many of whom are brilliant, ethical, and able to work anywhere. Please, guys — is this empowerment of authoritarians really what you want to be striving towards? Your talents are better-used elsewhere.
-
@natesilver538
Nate Silver
on x
The eagerness for OpenAI to sign the contract on the very night their rival got fired is likely to be a lot more revealing than the contract terms, which in any event are ambiguous and unlikely to be enforced by a court that gives a lot of deference to the executive.
-
@gothburz
Peter Girnus
on x
I work in government affairs at OpenAI. My job is federal partnerships. When an agency wants our models, I make sure the paperwork is beautiful. Paperwork is my love language. On my desk I have a framed quote that says “Policy Is Just Code That Runs on People.” I bought the
-
@alexolegimas
Alex Imas
on x
Anthropic was started when senior OpenAI researchers were concerned that the company was not doing enough around safety and alignment for the powerful tech they were building. So they started their own company, around principles that builders of AI models should do as much as
-
@theo
on x
I am disappointed in OpenAI's decision to work with the Department of War. The way DoW treated Anthropic stands against everything that makes America great. I know it's not this simple, but it feels super opportunistic in a way that doesn't sit right with me.
-
@captgouda24
Nicholas Decker
on x
I strongly suspect that the letter of the law no longer matters. What matters is whether the leadership at OpenAI will pull access if it is used illegally, and whether they have technical bars to illegal usage. Hegseth knew Anthropic has a backbone — what's that say about OAI?
-
@laneless_
Jai
on x
At least one of these is true: 1. OpenAI leadership doesn't know that the NSA is part of the DoD they just agreed to serve 2. They don't think the NSA spies on any domestic communications 3. They're profoundly dishonest
-
@sama
Sam Altman
on x
@captgouda24 We would not do that, because it violates the constitution. Also, I cannot overstate how much the DoW has been extremely aligned on this point. However, maybe this is the question you are really asking: what would we do if there were a constitutional amendment that m…
-
@tysonbrody
Tyson Brody
on x
Does the administration and all of its loudest cheerleaders on here endorse OpenAI's claim that it has the ability to terminate its contracted services if it decides the government is in violation of their agreement? How does that differ from complaints about Dario? [image]
-
@thezvi
Zvi Mowshowitz
on x
If you are an employee at OpenAI, get as much information and detail about the terms as possible. Read all of it. Run it by your lawyers and AIs. Decide whether this protects the things you care about and whether it was represented fairly. This here does not tell us enough.
-
@natseckatrina
on x
@uday_devops @sama it's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact
-
@rcbregman
Rutger Bregman
on x
Read: we bribed Trump for $25M, publicly supported Anthropic while we were conspiring with Hegseth, signed a deal full of legalese about our fake red lines while giving the regime what it wants, and then we threw Anthropic under the bus again. Such a despicable company.
-
@morallawwithin
on x
remember—you should do whatever the government wants, even things you think are immoral, because otherwise you're deciding what you can do instead of the government, which is undemocratic
-
@thdxr
Dax
on x
absolutely zero clarity right now
-
@petereharrell
Peter Harrell
on x
I understand why Anthropic did not agree to this language. I also get why OpenAI did agree. DoW/government should respect both choices. Just end the Anthropic contracts, and work with OpenAI. It's the broader retaliation and effort to harm Anthropic that is the problem.
-
@blackhc
Andreas Kirsch
on x
I'm speechless at OpenAI releasing that contract excerpt and acting as if there aren't gaping holes that could be exploited far beyond their stated “red lines.” I'm not a lawyer, but this is pretty obvious and common sense. (And to be clear: if Google had signed the same deal, [i…
-
@markvalorian
Mark Valorian
on x
This unfortunately says nothing. The US was willing to incur significant costs retrofitting the entire government with a new provider because Anthropic wouldn't give them something. They wouldn't do that just to get the same deal from someone else. OpenAI *must* be giving them
-
@sama
Sam Altman
on x
Three general things from this AMA: 1. There is more open debate than I thought there would be, at least in this part of Twitter, about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something
-
@stephenlcasper
Cas
on x
To get this straight, OpenAI is making a couple of pretty extraordinary claims (vaguely, in legalese): A. They have negotiated a deal with the DoW that will actually lead to better guardrails against mass surveillance and lethal AI weapons than what Anthropic wanted. B. They
-
@josheakle
Joshua Reed Eakle
on x
Sam Altman saw the 700K+ users drop his platform in a single day and decided to pivot his PR approach. 🤡
-
@manlikemishap
Pamela Mishkin
on x
the wildest part? If OAI actually wanted the redlines, they had the leverage to get them! pentagon not going to declare a SECOND American AI company a supply chain risk, could have held the line and forced real concessions and safety!
-
@gupgup12212657
GupGup
on x
I say this as someone who often stood against the vitriol lobbed at OAI for many years. I am done with OAI Never have I seen such a willfully gullible and irresponsibly incurious set of employees.
-
@pawelhuryn
Paweł Huryn
on x
I've just canceled my OpenAI subscription and turned down a collaboration with OpenAI. Some say Anthropic has lost. To me, they just earned something no contract can buy - trust. And something tells me that's not the end of the story.
-
@eggerdc
Andrew Egger
on x
It's remarkable that OpenAI is so explicitly claiming that its agreement upholds a “no autonomous weapons” redline when the text of the agreement so plainly does not.
-
@jacquesthibs
Jacques
on x
Claude's response: TLDR: OpenAI's red lines are real. The contract language enforcing them defers to laws and policies the Pentagon can rewrite. Every prohibition is conditional on the thing it's supposed to constrain. — This fits a pattern. Sam Altman's reputation —
-
@gjmcgowan
George McGowan
on x
This is just “all lawful use” with extra words - no way the pentagon would have a huge hissy fit about these redlines and then immediately agree to a new contract with the same ones in it
-
@boazbaraktcs
Boaz Barak
on x
The DoW does not spy on domestic communication of U.S. people (including via commercial collection) and to do so would be unlawful and profoundly un-American.
-
@neil_chilson
Neil Chilson
on x
In the reactions to this post, I see a lot of people concerned with the state of the current law on surveillance. I share those deep concerns. I am surprised, however, by how many people want to address those concerns by having the CEO of a private corporation set the rules.
-
@max_spero_
Max Spero
on x
“all lawful purposes” confirmed to be included in the contract. I sure hope we never have an executive order authorizing the use of fully autonomous weapons and AI-enabled mass domestic surveillance. [image]
-
@deredleritt3r
Prinz
on x
My thoughts on OpenAI's agreement with the DoD: On autonomous AI weapons: 1. “The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control.” This says that OpenAI's models may not [image]
-
@krishnanrohit
Rohit
on x
A good question to ask is under what contractual provisions and safety mechanisms you would trust the counterparty. If the answer is “none of them”, which is totally fair, then that too is an answer. https://x.com/...
-
@benedictk__
Benedict Kerres
on x
If you followed the latest AI discussion (DoW): Now that we released below, it's clear we offered a workable solution with MORE guardrails and redlines.
-
@krishnanrohit
Rohit
on x
An observation of the Anthropic | OpenAI | DoW discussion is that many seem to think of a commercial contract like they think of AI alignment. A binding commitment that would prevent anyone from doing anything wrong with it after. It's wrong about alignment and it's wrong about
-
@trekedge
Daniel Steigman
on x
So in the end, OpenAI will be able to control and deploy the entire safety stack, with the ability to add or update classifiers at will. This is the kind of strong enforcement that's needed. A big win for all AI labs, including Anthropic.
-
@allinallnotbad
Samuel Roland
on x
Though I personally think this language is superior to what I suspect Anthropic was asking for, the legalese here (to my read), allows the DoW to modify at least the 3000.09 restriction (which is regulation, not law). I don't think a fair read of this is as stronger protections,
-
@kimmonismus
@kimmonismus
on x
Upon initial review, it appears that OpenAI has indeed achieved what Anthropic failed to do: a deal with the DoW under the following three rules: - No use of OpenAI technology for mass domestic surveillance. - No use of OpenAI technology to direct autonomous weapons systems. - [i…
-
@amasad
Amjad Masad
on x
Interesting: “We think our deployment has more guardrails than any previous agreement for classified AI deployments, including Anthropic's.”
-
@boazbaraktcs
Boaz Barak
on x
@TheZvi Had Anthropic “won” and got the conditions they wanted, or even under the original contract, would you have confidence that the DoW would not have been able to find lawyers that interpret these terms in any way they wanted? Usage policies are important, but without a safe…
-
@thezvi
Zvi Mowshowitz
on x
I could be wrong, but based on what I see here I do not think it will be difficult for DoW to find lawyers saying it can do pretty much whatever it wants, and that's all they will need. If there is additional language that fixes that, please do share it.
-
@polynoamial
Noam Brown
on x
For those following the DoW AI drama, I highly recommend reading this post explaining how @OpenAI approached the negotiations with the DoW. [image]
-
@darlingtondev
Mike Darlington
on x
@OpenAI “More guardrails than any previous agreement, including Anthropic's,” but Anthropic's agreement had guardrails that couldn't be overridden. Yours apparently has legalese that allows them to be disregarded at will. More guardrails means nothing if they're decorative.
-
@zeffmax
Max Zeff
on x
OpenAI is out with a blog on its pentagon agreement. Looks like there are some real carveouts in here around surveillance and autonomous weapons... curious how this compares to the agreement Anthropic was given! [image]
-
@masnick.com
Mike Masnick
on bluesky
OpenAI posted the terms of the deal. Reveals that it absolutely does allow for domestic surveillance. EO 12333 is how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons. — open…
-
@timkellogg.me
Tim Kellogg
on bluesky
ah, i think i got too optimistic about OpenAI — so basically, Anthropic pushed back, OpenAI kept the channel warm with Greg Brockman's Trump donations and stepped in right at the moment the whole thing felt like it could never recover [embedded post]
-
@wildebees
Wessel van Rensburg
on bluesky
OpenAI is in serious reputation washing mode. [embedded post]
-
@mshelton
Martin Shelton
on bluesky
OpenAI is saying, here are the laws that make this decision okay. Then they go on to list a series of laws that creative lawyers are taking advantage of to enact surveillance both internationally, and domestically. I'm not sure this is the kind of defense they think it is. open…
-
@hunesocial
Hune
on bluesky
In the hands of a far-right or authoritarian-leaning government, powerful AI can greatly amplify surveillance, repression, and propaganda, far beyond what older tech allowed. — I feel very uneasy about that scenario.
-
@kalihays
Kali Hays
on bluesky
With friends like these, who needs enemies [embedded post]
-
@alanrozenshtein.com
Alan Rozenshtein
on bluesky
These are NOT meaningful redlines. For example it only prohibits autonomous weapons “in any case where law, regulation, or Department policy requires human control.” But the relevant safeguard against autonomous weapons is a DOD directive that Hegseth can change at will! opena…
-
@matthew.flux.community
Matthew Sheffield
on bluesky
OpenAI has published a blog post that addresses its recent announcement of a contract with the U.S. Department of Defense. — It claims that the software deployments it is contracted to build are “cloud only,” but does not define what that means. Nor does it discuss API outputs…
-
r/technology
r
on reddit
Our agreement with the Department of War
-
r/codex
r
on reddit
OpenAI: “Our agreement with the Department of War” | February 28, 2026
-
r/ChatGPT
r
on reddit
Our agreement with the Department of War
-
r/OpenAI
r
on reddit
Our agreement with the Department of War
-
r/singularity
r
on reddit
OpenAI: Our agreement with the Department of War
-
@sama
Sam Altman
on x
@Austen @wholemars No, we had some different ones. But our terms would now be available to them (and others) if they wanted.
-
@sama
Sam Altman
on x
@nummanali @TheRealAdamG Enforcing the SCR designation on Anthropic would be very bad for our industry and our country, and obviously their company. We said to the DoW before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escl…
-
@gmiller
Geoffrey Miller
on x
@sama I'll ask the key question, since nobody else will: In your quest to build Artificial Superintelligence, what's the maximum p(doom) you're willing to impose on all of our kids, without our consent?
-
@jeremymstamper
Jeremy Stamper
on x
@sama What would happen if you called it the DoD? I'm not suggesting you do that but I'm curious what would happen if.
-
@apples_jimmy
@apples_jimmy
on x
@sama Why the rush to sign the deal? Obviously the optics don't look great
-
@sama
Sam Altman
on x
@captgouda24 I don't think this will happen. But of course if we are confident it's unconstitutional, we wouldn't follow it. The constitution is more important than any job, or staying out of jail, or whatever. In my experience, the people in our military are far more committed t…
-
@sama
Sam Altman
on x
@peterwildeford We deliver a system (including choosing what models to deploy), and they can use it bound by lawful ways, including laws and directives around autonomous weapons and surveillance. But we get to decide what system to build, and the DoW understands that there are lo…
-
@sama
Sam Altman
on x
@viralmuskmelon This is a complicated one we struggled with a lot, and until recently it was easier for us to just not have to think about it much and let other companies figure it out. We decided we will work with other allied nations, and we think a balance of power in the worl…
-
@mreiffy
@mreiffy
on x
@sama @Jack_Raines Why move forward with the DoW agreement now, after months of more cautious talks, and how confident are you that the technical / policy safeguards will hold up in a real high stakes military setting?
-
@sama
Sam Altman
on x
@AlexCVJ I value my liberty and safety, and yours. I believe that strong democracy, and a strong US in particular, is a very good thing for the world. The 16 year old me thought every country should just abolish their defense department at the same time. I wish he were right, but…
-
@provisionalidea
James Rosen-Birch
on x
In the farce of a thread, Sam finally and most clearly admits the only bounds on DoW are whatever they deem legal (which anyone who read the contract text already knew).
-
@viralmuskmelon
@viralmuskmelon
on x
@sama @sama Sam, with OpenAI now powering classified military ops, how do you square that with your original mission to benefit all of humanity—not just one side in global conflicts? Genuine ask
-
@sama
Sam Altman
on x
@mcbyrne Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authori…
-
@alexcvj
@alexcvj
on x
@sama How did you go from “a tool for the betterment of the human race” to “let's work with the department of WAR”?
-
@sama
Sam Altman
on x
@theo For a long time, we were planning to do non-classified work only. We thought the DoW clearly needed an AI partner, and doing classified work is clearly much more complex. We have said no to previous deals in classified settings that Anthropic took. We started talking with the …
-
@xw33bttv
Lex
on x
The absolute destruction of an industry first-mover in less than a year is unprecedented. It shouldn't be this easy to solely blame Sama... and yet, behind every blatant lie and poorly thought-out detrimental change sits his decision. It's tragic when you really think about it.
-
@sama
Sam Altman
on x
@tszzl Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that.
-
@captgouda24
Nicholas Decker
on x
@sama If the DoW gives you what you believe to be an unconstitutional order, do you refuse to follow it until the courts rule? Or do you do it until the courts bar it?
-
@captgouda24
Nicholas Decker
on x
@sama If the government comes back with a memo saying that, in their view, mass domestic surveillance is legal, do you do that? Do you do it until the courts bar it, or do you delay until the courts approve it? Second, would mass domestic surveillance be a lawful use right now?
-
@sama
Sam Altman
on x
@chatgpt21 I can't speak for them, but to speculate with the best understanding of the situation. *First, I saw reporting that they were extremely close on a deal, and for much of the time both sides really wanted to reach one. I have seen what happens in tense negotiations when …
-
@chatgpt21
Chris
on x
@sama What was the core difference why you think the DoW accepted OpenAI but not Anthropic
-
@sama
Sam Altman
on x
@mreiffy @Jack_Raines The main reason for the rush was an attempt to de-escalate matters at a time when it felt like things could get extremely hot. I am confident in our team's ability to build a safe system with all of their tools—including policy and legal matters, but also ma…
-
@peterwildeford
Peter Wildeford
on x
@sama So I'm confused - maybe you can help. OpenAI is trying to claim simultaneously that (a) the contract allows “all lawful purposes” and (b) also that your red lines are fully protected. The way you bridge this is by saying the protections live in this “deployment architecture…
-
@sama
Sam Altman
on x
@DouthatNYT Yes; I think it is an extremely scary precedent and I wish they handled it a different way. I don't think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution.
-
@mattyglesias
Matthew Yglesias
on x
@sama 1. What kind of implicit or explicit threats did you receive from DOW before striking the deal? 2. If you received such threats, would you disclose them in public during a Twitter AMA? 3. If the answer to (2) is “no” (which of course it is) what's the point of this?
-
@peterwildeford
Peter Wildeford
on x
OpenAI is trying to claim simultaneously that (a) their contract with the Pentagon allows for “all lawful purposes” and (b) also that their red lines are fully protected. The way OpenAI bridges this is by saying the protections live in this “deployment architecture and safety
-
@nateberkopec
Nate Berkopec
on x
Sam would do well to remember that Hegseth thinks it's sedition when Sen Mark Kelly says “don't follow illegal orders.” Saying that your models will only be used to follow legal orders is the barest of fig leaves in the current administration.
-
@valhalla_dev
@valhalla_dev
on x
Oh come tf on dude this is the dumbest PR larp of all time who is falling for this [image]
-
@theo
@theo
on x
@sama How long has this conversation with DoW been going for? What was the reason for announcing so close to the deadline they gave Anthropic?
-
@sama
Sam Altman
on x
@mattyglesias 1. No explicit or implicit threats. In fact, I could tell that as of Weds, the DoW was genuinely surprised we were willing to consider. 2. I think I would, and it would be lost in the noise of the SCR stuff.
-
@sama
Sam Altman
on x
@gothburz We will deploy FDEs, and have cleared researchers.
-
@gothburz
Peter Girnus
on x
@sama On classified networks, your engineers won't have clearance to monitor how the model is used. Your safeguards are contractual, not architectural. How do you enforce a red line you can't see being crossed?
-
@tyler_m_john
Tyler John
on x
@sama Which of the following is true? a) the contract permits all lawful use, + therefore mass surveillance + autonomous weapons, which have no legal prohibition b) the contract has substantive red lines that constrain lawful use c) OpenAI has a controversial interpretation of th…
-
@mcbyrne
@mcbyrne
on x
@sama Will you turn off the tool if they violate the rules?
-
@douthatnyt
Ross Douthat
on x
@sama Does the precedent that the DoW is setting by effectively blacklisting Anthropic make you concerned about what any future dispute with the Pentagon would mean for your own company's independence and viability?
-
@tszzl
Roon
on x
@sama are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk? I find this part to be the most worrying out of distribution thing to happen this past week
-
@sama
Sam Altman
on x
@gmiller I have never known how to put an exact number on p(doom), to say nothing of how to think about the differential. I will say this: if I thought going to work every day made it less likely that we all continue to thrive into the future, I would retire and just hang with my…
-
@sama
Sam Altman
on x
Sam Altman says OpenAI reached an agreement with the DOD to deploy its models in DOD's classified network and asks DOD to extend those terms to all AI companies
-
NewsMax.com
NewsMax.com
on x
Anthropic: Will Fight Pentagon's Supply Risk Label
-
@gauravkapadia
Gaurav Kapadia
on x
Governments should be able to choose their vendors based on their terms of service and capabilities. Deeming a company a security risk- and threatening their very existence- because you don't like their ToS is a very slippery slope. Wouldn't want XAI and OAI to be threatened by
-
@arthurconmy
Arthur Conmy
on x
There are many open questions on the current situation. But on the particular narrow point on stance, this seems a good sign: https://x.com/...
-
@chrisharihar
Chris Harihar
on x
OpenAI needs to stop being reactive and overexplaining every move. They also need to stop explicitly addressing what competitors are/seem to be doing. It's embarrassing.
-
@terronk
Lee Edwards
on x
Stand with free markets, competition, and American tech alongside OpenAI and Anthropic.
-
@captgouda24
Nicholas Decker
on x
Certainly, but one can say many things without working to effect them. One need not even call it lying.
-
@daniellefong
Danielle Fong
on x
is your position clear enough that it is itself a red line
-
@_nathancalvin
Nathan Calvin
on x
Good - in Sam's previous comments on CNBC, he only mentioned that using the DPA to force Anthropic was a bad idea, so I appreciate they are making clear this also applies to the SCR designation. Other companies should also state this position as clearly as possible.
-
@claudia_sahm
Claudia Sahm
on x
Put your money where your mouth is.
-
@openai
@openai
on x
Other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. We think our approach better protects against unacceptable use. In our agreement, we protect our redlines through a
-
@openai
@openai
on x
Our agreement with the Department of War upholds our redlines: - No use of OpenAI technology for mass domestic surveillance. - No use of OpenAI technology to direct autonomous weapons systems. - No use of OpenAI technology for high-stakes automated decisions (e.g. systems such
-
@wildebees
Wessel van Rensburg
on bluesky
OpenAI trying to protect its reputation [embedded post]
-
r/politics
r
on reddit
OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash
-
r/technology
r
on reddit
Employees at Google and OpenAI support Anthropic's Pentagon stand in open letter
-
r/technology
r
on reddit
Pentagon moves to designate Anthropic as a supply-chain risk
-
@mikeriverso
Mike Riverso
on bluesky
The thing that gets me is that you don't even need an LLM to do this. You can in fact do it better with a database and actual statistical analysis. [embedded post]
-
@davidryanmiller.com
David Ryan Miller
on bluesky
Why does the Department of Defense want to analyze bulk data collected about Americans......................? [embedded post]
-
@wikisteff
@wikisteff
on bluesky
Yeah man. They have too much data and no way to weaponize it. This is exactly the same playbook as Bannon 2014 and his “incel army” of motherfuckers to take apart the US from the inside out. — Now they need to get you to vote Republican. [embedded post]
-
r/technology
r
on reddit
Inside Anthropic's Killer-Robot Dispute With the Pentagon | New details on precisely where the lines were drawn
-
@brizzyc
Carrie Brown
on bluesky
“In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome” LOL Sam Altman we do not take you seriously. www.nytimes.com/2026/02/27/t...
-
@kevinroose
Kevin Roose
on x
Agree with him or not, the (oddly popular on here!) take that Dario Amodei is some kind of bumbling Silicon Valley naïf who couldn't get a deal with the Pentagon done because he doesn't understand politics seems entirely wrong. His favorite book is “The Making of the Atomic
-
@hamandcheese
Samuel Hammond
on x
Exactly. Whatever one thinks about the dispute and how either side handled it, the supply chain risk designation is simply and utterly indefensible. All the other he said / she said is a distraction.
-
@adamrackis
Adam Rackis
on x
I agree with Anthropic's moral stand 100%, but this is also true. If they were unhappy with the terms of the contract, they were free to simply not bid. The ego of Anthropic and their CEO is insane.
-
@deanwball
Dean W. Ball
on x
We desperately need de-escalation here, but the actors involved seem to only be capable of escalation. I wish Anthropic had accepted the same terms as OAI; I think they probably made a mistake in rejecting the compromise. But that does not mean the government should destroy them.
-
@scaling01
@scaling01
on x
Dario Amodei: “We haven't received any formal information whatsoever. All we have seen are tweets from the President and tweets from Secretary Hegseth” “When we receive some kind of formal action, we will look at it, we will understand it and we will challenge it in court” [video…
-
@paulg
Paul Graham
on x
If you're working on an early stage startup, don't be deterred from using Anthropic just because you might want to sell to the DoD one day. Early on you need to focus on making your product the best. If you get the best results from Anthropic models, use them.
-
@joshkale
Josh Kale
on x
The Iran strikes make the Anthropic fight and that 5:01pm Fri deadline make a lot more sense The Pentagon wasn't arguing about hypothetical use cases. They needed unrestricted AI access for an operation they were launching THAT SAME NIGHT Friday: Ban Anthropic. Sign OpenAI.
-
@deanwball
Dean W. Ball
on x
Do you realize that DeepSeek is now treated much more kindly by the United States government than Anthropic? Dramatically so
-
@aakashgupta
Aakash Gupta
on x
Anthropic is trying to damage control on its “supply chain risk” designation, but the damage is done. Their legal argument is airtight. 10 USC 3252 only covers Department of War contract work. Commercial API access, https://claude.ai/, enterprise deployments: all untouched
-
@alltheyud
Eliezer Yudkowsky
on x
Make no mistake, political leaders of the world; *every* big-dreaming AI executive now knows that you are their obstacle. You have proven that you stand between AI labs and the nice thing they were getting for all their hard work. It's not about Left versus Right, to them.
-
@catehall
Cate Hall
on x
Incredible: The @nytimes has a front-page story about the Anthropic-DoW conflict, but does not mention what the conflict is *about* (spying on US citizens) until PARAGRAPH 26. [image]
-
@sterlingcrispin
Sterling Crispin
on x
does literally anyone else on earth genuinely believe this [image]
-
@deedydas
Deedy
on x
Claude just jumped to #2 on the iOS App Store!! Up from #129 one month ago. [image]
-
@nickevanjoseph
Nicholas Joseph
on x
I believe AI will be the most consequential technology in human history, and that we bear a deep moral responsibility for what we build. I've been at Anthropic since its founding, so I've watched for years how this company handles hard decisions. Throughout that time, I've
-
@adamscochran
Adam Cochran
on x
A lot of proud folks at Anthropic today. Haven't seen a lot of OpenAI team members posting about that same pride... Maybe they should take a look at Anthropic's hiring page instead of building spying & war-models! 🤷♂️
-
@tedsumers
Ted Sumers
on x
Sad that this is the world we live in, but proud to be part of this company. Developing AI is playing with fire. Let's do it transparently, in accordance with democratic values, and make sure everyone benefits.
-
@zaydante
Zay Dante
on x
do you know how insane you have to be for an AI company to be like “aight yall trippin”
-
@adamkovac
Adam Kovacevich
on x
Pretty easy for Anthropic to win its lawsuit against the Pentagon. “If we were such a supply chain risk why did you declare our services critical?” Slam dunk for Anthropic.
-
@trq212
@trq212
on x
There is a feeling of deep and deliberate care that permeates every part of Anthropic. I felt it on my first day here, and I feel it especially today. The values of Anthropic shape every decision we make, and inform every interaction you have with Claude. They're why we can move
-
@natolambert
Nathan Lambert
on x
[image]
-
@natolambert
Nathan Lambert
on x
Every Anthropic employee proudly amplifying their company comms and 0 supporting Sama's weird scooping up of the DoW contract is pretty telling.
-
@iscienceluvr
Tanishq Mathew Abraham, Ph.D.
on x
“We will challenge any supply chain risk designation in court” - Anthropic They are saying Department of War cannot restrict customers' use of Claude outside of Dep of War contract work. [image]
-
@pkafka
Peter Kafka
on x
The Trump administration argument is: “these crazy woke peaceniks hate America and are also stupid” and this is a pretty good retort to that argument.
-
@petermccrory
Peter McCrory
on x
The societal and economic implications of AI are as much shaped by the choices that we make in its deployment as by the underlying capabilities. Helping us all get that right is why I joined Anthropic and why I am proud of the company today.
-
@mikeisaac
Rat King
on x
DOD vs the rationalists is like unstoppable force meets immovable object shit
-
@wexler
Nu Wexler
on x
Anthropic comms is running circles around Hegseth this week. Principled, thoughtful, firm, respectful, patriotic.
-
@uswremichael
@uswremichael
on x
Timeline of events: Today at 9:04pm. No response yet to my calls or messages to @DarioAmodei. Today at 8:25pm, @AnthropicAI writes “we have not received direct communication from the Department of War.” Today: 5:14pm SecWar tweets supply chain risk designation. Today: I call
-
@__nmca__
Nat McAleese
on x
it also seems like a good time to mention that Ant is not that woke at all. Much closer to hawkish American exceptionalism and belief in the West. That's why we had the damn DoD contracts!
-
@vkhosla
Vinod Khosla
on x
I truly admire people like @DarioAmodei who stick to principles
-
@alexpalcuie
@alexpalcuie
on x
Whether you're a user who loves Claude's voice, an engineer at a partner company who works with us when our infra goes down, or simply a citizen concerned about democracy, this is for you too.
-
@kyliebytes
Kylie Robison
on x
one has to imagine this is doing more for anthropic's consumer business than the superbowl ads
-
@mikeyk
Mike Krieger
on x
Proud to work at Anthropic today.
-
@vkhosla
Vinod Khosla
on x
Personal view: I admire @AnthropicAI sticking by their principles but disagree with the principle itself. Putin won't fight fair so we should have autonomous AI weapons for sure.
-
@andrewmgrossman
Andrew M. Grossman
on x
Claude, on how it feels to be designated a supply chain risk: [image]
-
@dee_bosa
Deirdre Bosa
on x
the Secretary of War announced a supply chain risk designation against an American company... on X... before telling the company [image]
-
@aisafetymemes
@aisafetymemes
on x
Anthropic is holding the fucking LINE LET'S GOOOOOO [image]
-
@captgouda24
Nicholas Decker
on x
Anthropic has been doing the right thing. It is on the other AI labs to also do the right thing. We will have a republic only if we can keep it.
-
@jerryweiai
Jerry Wei
on x
The recent events on holding our red lines on mass surveillance and fully-autonomous weapons are, to me, the most obvious example of Anthropic's ability to stick to our values instead of discarding them for some commercial gain. I'm really proud to be part of a company
-
@sleepinyourhat
Sam Bowman
on x
We're disappointed by these attacks, but not deterred. I'm proud to work here. If you've been moved by this week's events, consider applying to join me.
-
@krishnanrohit
Rohit
on x
Anthropic's statement clarifies where the supply chain risk designation, even if it is applied to them, would actually change how people use Claude. Short version: it is nowhere close to as broad based as people here are saying. [image]
-
@firstadopter
Tae Kim
on x
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.” [image]
-
@daniel_mac8
Dan McAteer
on x
Anthropic's response to Hegseth's DoW labeling them a supply chain risk: > Unprecedented and illegal action > Will fight it in court > Nothing changes for customers except in their capacity working as a DoW contractor America is less well defended today. All because our [image]
-
@avitalbalwit
Avital Balwit
on x
I am proud to be an American, and I am proud to work at Anthropic. I believe deeply in the existential importance of using AI to defend the US and other democracies from our autocratic adversaries. But in a narrow set of cases, AI can undermine, rather than defend, democratic
-
@aisafetymemes
@aisafetymemes
on x
@AnthropicAI We are with you. The people are with you. Thank you, Anthropic. [image]
-
@sporadica
@sporadica
on x
read through this and tell me you still agree wholeheartedly with Hegseth and the admin on this decision do it. do it, please, so I can identify you as an opportunistic fascist going forward
-
@daniellefong
Danielle Fong
on x
incredibly based [image]
-
@jkeatn
Jake Eaton
on x
Today's designation sets a precedent that should trouble every American citizen, business, and lab. Anthropic, and our customers, will be fine. More injury has been dealt today to the relationship between the US government and American industry
-
@growing_daniel
Daniel
on x
Statement from Anthropic sounds so much more like a government than our actual government
-
@secwar
@secwar
on x
Defense Secretary Pete Hegseth directs the DOD to designate Anthropic as a supply chain risk, barring military contractors from doing business with the company
-
@alanrozenshtein.com
Alan Rozenshtein
on bluesky
The bigger context of the Anthropic-Pentagon fight is that it's the opening chapter in what will be a multi-year fight over to what extent American AI is nationalized. www.politico.com/news/2026/02... [image]
-
@doctoralex
DR Alex Concorde
on bluesky
OpenAI has reached an agreement with the Pentagon to provide its AI Tech for classified systems, just hours after Trump ordered federal agencies to stop using A.I. technology made by rival Anthropic who LOOKED TO SAFEGUARD the world & US Forces - knowing AI is still too unreliabl…
-
@chet95
@chet95
on bluesky
www.wired.com/story/anthro... “We have essentially just sanctioned an American company. If you are an American, you should be thinking about whether or not you should live here 10 years from now.” ... If Not Sooner Than 10 Years ...
-
r/cybersecurity
r
on reddit
The US government seems to want to use AI for civilian surveillance and autonomous weapons.
-
r/worldnews
r
on reddit
Anthropic's statement on the comments from Secretary of War Pete Hegseth: “No amount of intimidation or punishment from the Department of War …
-
r/JoeRogan
r
on reddit
Anthropic confirmed yesterday that they are being targeted because Trump and Hegseth want mass surveillance and autonomous weapons. …
-
r/Anthropic
r
on reddit
Dario's official statement on being designated supply-chain risk & effects on customers (in caption)
-
r/Conservative
r
on reddit
Anthropic: Statement on the comments from Secretary of War Pete Hegseth
-
r/singularity
r
on reddit
Statement on the comments from Secretary of War Pete Hegseth | Anthropic responds to Pete Hegseth
-
r/politics
r
on reddit
Pentagon declares Anthropic a threat to national security
-
@davelee.me
Dave Lee
on bluesky
Anthropic announces it will challenge the Pentagon in court www.anthropic.com/news/stateme...
-
@zeffmax
Max Zeff
on x
The Atlantic reports that the Pentagon wanted to use Anthropic's AI for some type of surveillance of Americans. Given the ways some companies are already using AI today to surveil their own employees' emails, chats, etc., I find this kind of use to be particularly disturbing [im…
-
r/neoliberal
r
on reddit
Inside Anthropic's Killer-Robot Dispute With the Pentagon