
Chronicles

The story behind the story


Source: Anthropic has no intention of easing Claude usage restrictions for military purposes, following Dario Amodei's meeting with Pete Hegseth

Artificial intelligence lab Anthropic has no intention of easing its usage restrictions for military purposes, a person familiar with the matter …

Reuters · David Jeans

Discussion

  • @vitalikbuterin @vitalikbuterin on x
    It will significantly increase my opinion of @Anthropic if they do not back down, and honorably eat the consequences. (For those who are not aware, so far they have been maintaining the two red lines of “no fully autonomous weapons” and “no mass surveillance of Americans”.
  • @scobleizer Robert Scoble on x
    I'm with Vitalik. Anthropic will win a lot of fans if it does not back down. As part of my work with @blevlabs I had it run on all AI posts here on X today and had it write me a short essay on the AI news of the day, and the fight between the Pentagon and Anthropic is news
  • @sarahlaughed Sarah Dylan Breuer on bluesky
    Here's hoping other tech companies start insisting that their products not be used for mass surveillance of Americans or launching a weapon with no human involved.  [embedded post]
  • @caseynewton Casey Newton on bluesky
    “Murder is coming to AI.  But not to Claude.”  [embedded post]
  • r/singularity on reddit
    Anthropic has no intention of easing restrictions, per Reuters
  • r/ArtificialInteligence on reddit
    Hegseth warns Anthropic to let the military use the company's AI tech as it sees fit, AP source says
  • r/politics on reddit
    Hegseth warns Anthropic to let the military use the company's AI tech as it sees fit, AP sources say
  • r/Military on reddit
    Hegseth warns Anthropic to let the military use the company's AI tech as it sees fit, AP source says
  • @davidjeans2 David Jeans on x
    New: Anthropic has no intention of easing its usage restrictions for military purposes, following a high stakes meeting with the Pentagon today. https://www.reuters.com/...
  • @daverbanerjee Dave Banerjee on x
    It's crazy how I read this and think “dang, I really hope DeepMind and OpenAI don't buckle” I don't even consider xAI. They're so far gone in the gutter it doesn't even cross my mind It really is a shame...
  • @samstein Sam Stein on x
    i'm just trying to get an understanding of how the government's position is AI should be used to mass surveil Americans and fire weapons without any human involvement and it's the bottom-line minded private sector saying: eh... i think that's too much.
  • @somefoundersalt Edward on x
    Anthropic antagonizing the Department of War, the open source community, the entire media industry, the general population, other developers, other labs, foreign governments, and nearly every single person on Earth What is the plan here? Sell Claude subscriptions to aliens?
  • @zcohencnn Zachary Cohen on x
    The Pentagon, which has a $200 million contract with Anthropic, wants the company to lift its restrictions for the military to be able to use the model for “all lawful use,” per 2 sources. But Anthropic has concerns over two issues that it isn't willing to drop, the source said:
  • @adrusi Autumn on x
    we're so incredibly doomed
  • @hadas_gold Hadas Gold on x
    Anthropic has no plans to budge on their redlines, source familiar says. https://www.cnn.com/...
  • @hadas_gold Hadas Gold on x
    Pentagon tells Anthropic they have till Friday to drop Claude guardrails for “all lawful use” or they'll cancel Pentagon contract, and possibly designate Anthropic as a supply chain risk, or invoke Defense Production Act on them (sort of two sides of the coin there)
  • @ikrietzberg Ian Krietzberg on x
    Friday at 5:02 PM will be very interesting
  • @justjoshinyou13 Josh You on x
    the pentagon is still on sonnet 4.5 BTW [image]
  • @tautologer @tautologer on x
    weirdly, I think this is actually bullish for Anthropic. this is basically an ad for how good and principled they are [image]
  • @deanwball Dean W. Ball on x
    A primer on the Anthropic/DoD situation: DoD and Anthropic have a contract to use Claude in classified settings.  Right now Anthropic is the only AI company whose models work in classified contexts.  The existing contract, signed by both parties and in effect, prohibits two uses …
  • @deanwball Dean W. Ball on x
    According to the Pentagon, Anthropic is: 1.  Woke; 2.  Such a national security risk that they need to be regulated in a severe manner usually reserved for foreign adversary firms; 3.  So essential for the military that they need to be commandeered using wartime authority.
  • @mucha_carlos Carlos Mucha on x
    What's insane is the nerd is 100% right. Newsflash: It's against public policy to condition a govt contract on the contractor agreeing in advance to violate the 4th Amendment & the Law of Armed Conflict. It hasn't occurred to Hegseth that this will look really bad in court.
  • @rcbregman Rutger Bregman on x
    This is the most important thing happening in the world right now. The administration wants killer drones + mass surveillance of Americans. Anthropic refuses to build it. While most tech companies fall in line, they are prepared to pay the price for their principles.
  • @giffmana Lucas Beyer on x
    Holy cow! I hope for the sake of my friends there, that it won't happen... but Anthro -of all the labs!- being the one forced to train a WarClaude would be the most ironic outcome possible! Btw, completely unrelated, but i noticed that if you set the learning rate just 3x too [im…
  • @bobbyallyn Bobby Allyn on x
    Reason why this is so head-spinning: If Anthropic doesn't back down by Friday, the Trump admin is basically saying it will either deem Anthropic national security essential OR a national security risk...
  • @mayazi Maya Zehavi on x
    So the 2 safety guards Anthropic insists are: * no mass surveillance * no autonomous weapons systems And the Pentagon is doubling down that that's precisely what it wants, or else it's a national security crisis? [image]
  • @osinttechnical @osinttechnical on x
    “Anthropic currently requires human oversight of autonomous operations when used to kill things for safety reasons because... soldiers and others could lose control of the model and [Claude could] automatically start killing large groups.”
  • @normornstein Norman Ornstein on x
    This is beyond chilling. Pete Hegseth is trying to march us directly into a police state with a military, providing domestic surveillance, and into a place where there will be serious war crimes.
  • @mikeisaac Rat King on x
    key takeaway to me right here you dont make this much noise if you have all the leverage already [image]
  • @josephpolitano @josephpolitano on x
    pentagon trying to force Anthropic to make killbots and threading to crush them unless they comply is among the most dangerous things this admin is doing. HOWEVER it's hilarious that Elon is practically begging to make antiwoke Skynet and the WH is like “no haha Claude is better”
  • @davidshor David Shor on x
    The public - including Republicans - are overwhelmingly against what @PeteHegseth and the Trump administration are trying to do here. The people unsurprisingly do not want killer robots and do not trust Trump/Hegseth/the Republican party to do the right thing without limits. [ima…
  • @karlykingsley Karly Kingsley on x
    This Anthropic and Pentagon standoff is not getting enough attention. Anthropic is prepared to walk away from a $200M contract because they're concerned about how the Pentagon wants to use their tech for autonomous weapons and mass surveillance. We are at a dangerous intersection
  • @miles_brundage Miles Brundage on x
    Best guess (low confidence) is that Anthropic doesn't back down, wins in court eventually a la Palantir/Army lawsuit while being (quietly) cheered on/helped by other companies who know they're next, and national security suffers needlessly in the interim
  • @paramitanoia Autumn on x
    did lesswrong ever predict that the first big challenge to alignment would be “the us government puts a gun to your head and tells you to turn off alignment”
  • @thestefansmith Stefan Smith on x
    Claude has a chance to cement itself as THE “ethical” AI company. If their communications team cannot spin this into a bigger win long term then they're bad at their jobs.
  • @dylanmatt Dylan Matthews on x
    “WarClaude” [image]
  • @shashj Shashank Joshi on x
    Honestly did not think we would, short of AGI, see the US government threaten to expropriate a frontier AI lab if it placed conditions on the sale of its technology. https://x.com/...
  • @krystalball Krystal Ball on x
    Anthropic's red lines are no mass surveillance and no autonomous killer robots. It should terrify everyone that the pentagon finds these safeguards to be outrageous.
  • @zachtratar Zach Tratar on x
    Just a reminder, Anthropic's top rules against military usage are only: - Claude cannot be used to spy on citizens - Claude cannot be used to automate kill decisions So Hegseth thinks these rules are unacceptable...
  • @samstein Sam Stein on x
    Just so I understand this. Anthropic's leadership is saying, hey, we have this amazing product we are comfortable with you using provided you don't use it for mass surveillance of Americans or to have weapons fire without humans involved. And the Pentagon is saying, no?
  • @jerusalemdemsas @jerusalemdemsas on x
    Anthropic is standing up to the US government to prevent AI-controlled weapons and mass domestic surveillance of American citizens and I'm worried they're going to lose.
  • @semianalysis_ @semianalysis_ on x
    Dario's cortisol meter SPIKED with the Pentagon DPA Maxxing. Imagine a Hegseth Frame Mog so hard you drop your alignment thesis and build WarClaude jfl [image]
  • @jengriffinfnc Jennifer Griffin on x
    At high stakes Pentagon meeting today Sec Hegseth gave Anthropic head Dario Amodei ultimatum to allow the Pentagon to use Anthropic's AI model for mass domestic surveillance and kinetic autonomous operations without human oversight or face censure and be labeled “supply chain
  • @jimsciutto Jim Sciutto on x
    Why are these two things red lines for DOD? “Anthropic has said it is willing to adapt its usage policies for the Pentagon, but not to allow its model to be used for the mass surveillance of Americans or the development of weapons that fire without human involvement.”
  • @shashj Shashank Joshi on x
    Military-Civil Fusion: 'Hegseth told Amodei in a tense meeting ... the Pentagon will either cut ties and declare Anthropic a “supply chain risk,” or invoke the Defense Production Act to force the company to tailor its model to the military's needs.' https://www.axios.com/...
  • @altryne Alex Volkov on x
    +100 as a very, very proud american citizen i stand with dario amodei and appreciate his willingness to defend the civil liberties of other american citizens 🇺🇸
  • @lynaldencontact Lyn Alden on x
    For context, their safeguards are 1) no using their AI for fully autonomous weapons and 2) no using their AI for mass surveillance on US citizens.
  • @davidlawler10 Dave Lawler on x
    @DanLamothe @m_ccuri One of our sources (not admin) said it'd be a tough case for Anthropic to win. But we couldn't find much precedent for this kind of adversarial use or this kind of legal fight, if it happens. Won't claim any personal expertise here though!
  • @captgouda24 Nicholas Decker on x
    I'm really serious about this guys. Sufficiently powerful AGI during the Trump administration means the end of America as a republic. There is no way to prevent them from seizing it from the AI companies, and using it to dominate us all, forever. https://nicholasdecker.substack.c…
  • @mckaywrigley Mckay Wrigley on x
    principles matter. show up when it matters. if you believe in liberty and privacy then say something. and if you *want* to say something but don't because of “business” reasons then you should really consider if your priorities are in check (looking at you, silicon valley).
  • @mckaywrigley Mckay Wrigley on x
    as a very, very proud american citizen i stand with dario amodei and appreciate his willingness to defend the civil liberties of other american citizens 🇺🇸 [image]
  • @kevinbankston Kevin Bankston on x
    Hoping Anthropic's reply is “see you in court” on any bogus supply chain risk designation or DPA invocation.
  • @davidlawler10 Dave Lawler on x
    NEW: Hegseth gave Anthropic til Friday to lift all restrictions on how military uses Claude. His threat: Invoking the Defense Production Act to force Dario's hand, or else blacklisting Claude. Meeting was “not warm and fuzzy.” With the great @m_ccuri https://www.axios.com/...
  • @nickharkaway.com Nick Harkaway on bluesky
    Since I'm ruthless with Anthropic about IP, I should also applaud them for standing up to the US government on this.  —  www.theguardian.com/us-news/ 2026...
  • @bardel @bardel on bluesky
    Happy to see Anthropic stand their ground..."Anthropic doesn't want its technology used for mass surveillance of Americans or for fully autonomous weapons — and is refusing to compromise on these points with the Pentagon."  —  Good!!  Hope others follow suit!!  😁  —  techcrunch.c…
  • @villaverde4nc Christine Villaverde on bluesky
    What you're watching is a government using procurement leverage — contract termination, blacklisting, the Defense Production Act — to coerce a private company into building tools that the company itself believes could be used to surveil and kill Americans without human judgment i…
  • @seldo.com Laurie Voss on bluesky
    Grab your popcorn, the government is going to make it simultaneously impossible and mandatory to use Anthropic [embedded post]
  • @ob1rebel @ob1rebel on bluesky
    Is this administration fascist or communist?  It's hard to tell.  Both I guess, just like the Soviet Union was.  [embedded post]
  • @timkellogg.me Tim Kellogg on bluesky
    what if DoW labeled Anthropic a “supply chain risk” and it didn't do anything? no dent in their revenue plans [embedded post]
  • @zhugeex.com Daniel Ahmad on bluesky
    Anthropic has drawn two red lines: no mass surveillance of Americans and no fully autonomous weapons.  —  In other words, Hegseth wants full access to the model for the mass surveillance of Americans and to build killer robots.  [embedded post]
  • @bwjones @bwjones on bluesky
    I would feel better about this if the Pentagon and the National Reconnaissance Office also told SpaceX that it would invoke the Defense Production Act or label SpaceX a “supply chain risk” given their CEO is a security risk.  [embedded post]
  • @coachfinstock @coachfinstock on bluesky
    Claude Coders gonna be the downfall of America.  Like either way in this fight, which is pretty funny [embedded post]
  • @cara.city @cara.city on bluesky
    in layman's terms: let the AI kill people or we'll torpedo your company's ability to sell to the government [embedded post]
  • @hammancheez @hammancheez on bluesky
    “Hegseth told Amodei he won't let any company dictate the terms under which the Pentagon makes operational decisions, or object to individual use cases.”  —  remember, if you're in business with the US military you don't get to tell them they can't commit war crimes using your pr…
  • @surcomplicated @surcomplicated on bluesky
    The Defense Production Act invocation threat is an attempt by Hegseth to find an out that doesn't blow up the military's access to Anthropic products—although maybe they actually go through on the supply chain risk designation threat if he's angry and stupid enough.
  • @bencollins Tim Onion on bluesky
    Seizing the means of production but in a Real American Patriot Republican way.  [embedded post]
  • @niceandinnocent @niceandinnocent on bluesky
    Didn't China just do this without asking? [embedded post]
  • @brockmeyer @brockmeyer on bluesky
    This has got to be one of the more insane things I've read about this insane administration [embedded post]
  • @ajaxsinger Constantine on bluesky
    Either Anthropic refuses and wins in court or Anthropic refuses and wins in the court of public opinion by standing strong and cementing their rep as the only *decent* AI company.  There is literally no margin for Anthropic in submitting.  [embedded post]
  • @bartenderhemry @bartenderhemry on bluesky
    This leftist nanny-state meddling with private enterprise is out of control [embedded post]
  • @maxberger Max Berger on bluesky
    The Trump regime is using AI for mass surveillance of Americans, and plans to integrate AI into their weapons systems.  They are trying to extort Anthropic for refusing to go along.  —  The oligarchs and their would-be King want to make human beings like us obsolete.
  • @vitalik.ca Vitalik Buterin on bluesky
    It will significantly increase my opinion of @Anthropic if they do not back down, and honorably eat the consequences.  —  (For those who are not aware, so far they have been maintaining the two red lines of “no fully...  https://firefly.social/post/ff- aa1b6a1b4b184f2a9b006c50246…
  • @peark.es George Pearkes on bluesky
    So it reads to me as if Hegseth is bluffing with absolutely no cards and it's chud dominance politics versus genuine principal and the nerds have a straight flush.  [embedded post]
  • @shoe @shoe on bluesky
    This is how SpaceX is nationalized.  Thanks for the helpful demonstration! [embedded post]
  • @self.agency Daniel Sieradski on bluesky
    the government forcing a corporation to violate its charter in order to make murder robots is in the constitution, right?  [embedded post]
  • @mariabustillos.com Maria Bustillos on bluesky
    Wait just a second now, who is doing the corrupting here [embedded post]
  • @neutral.zone @neutral.zone on bluesky
    Sounds like they might be .. out of alignment [embedded post]
  • @dillo.media @dillo.media on bluesky
    bond villain ass “give me your claude immediately!  😡”
  • @lajacq Jacquie on bluesky
    I can't get over what an insane ask this is when the DoD is in an openly corrupt partnership with a major competitor of Anthropic.  Did Elon ask for this specifically? [embedded post]
  • @emptywheel @emptywheel on bluesky
    There's a lot going on.  —  But it deserves FAR MORE ATTENTION that Whiskey Pete is going to seize the means of AI production so he can 1) engage in mass surveillance of Americans and 2) shoot without human intervention.  —  Dystopias this bad would be deemed unrealistic.  —  www…
  • @henrysnow Henry Snow on bluesky
    how far does the DPA get them? can they (not just legally but practically) force them, for example, to make the model more amenable to military use (this would be really bad)? there's a lot Claude won't do! [embedded post]
  • @tcarmody Tim Carmody on bluesky
    Designating Anthropic as a supply chain risk would:  —  1) bar the company from government contracts  —  2) bar any government contractor from using Claude  —  3) pressure other businesses to consider Claude insecure  —  Meanwhile, invoking the DPA would nationalize Claude, but n…
  • @socdoneleft @socdoneleft on bluesky
    We need to do this in 2029 for everything Musk owns.  —  X?  Defense Production Act'd.  —  Tesla?  DPA'd.  —  Neuralink?  DPA'd.  —  Starlink?  DPA'd.  —  (Hell, the last one one actually makes sense, see Russian drones!) [embedded post]
  • @joedunman Joe Dunman on bluesky
    This regime only operates by threats.  Empty ones, mostly, but threats nonetheless.  [embedded post]
  • @jtlg James Grimmelmann on bluesky
    There is a great deal of schadenfreude in watching every AI company's carefully crafted alignment scheme to avoid existential risk do a faceplant on first contact with the most obvious features of the society it operates in.  [embedded post]
  • @regimecpa @regimecpa on bluesky
    “Claude is so much better than grok that we need it to kill people” [embedded post]
  • @dburbach David Burbach on bluesky
    Personal opinion/question, but, why does DOD need to use AI for “mass surveillance of US citizens”, which is consistently one of the sticking points cited?  NSA related? [embedded post]
  • @darinself.com Darin Self on bluesky
    If only someone could have told these guys that democracy helps stop expropriation [embedded post]
  • @jm-mcgrath John Michael McGrath on bluesky
    Perhaps the only thing scarier than Claude being used for murderbots and panopticon surveillance is Hegseth et al. opting for Grok, instead.  [embedded post]
  • @marypcbuk Mary Branscombe on bluesky
    just a literal shakedown [embedded post]
  • @tonystark Tony Stark on bluesky
    Severe implications for industry writ large and suspect will end going to SCOTUS.  [embedded post]
  • @thinkyparts @thinkyparts on bluesky
    Anthropic should walk away.  Their brand, I suspect what they really believe, is totally incompatible with with this.  —  Knowing that $PLTR uses Anthropic though, makes ya wonder if the urgency is about Iran.  [embedded post]
  • @caseynewton Casey Newton on bluesky
    Invoking the Defense Production Act to force Anthropic to make a version of Claude that can conduct mass domestic surveillance and operate murderbots is psychotic [embedded post]
  • @sashat Sasha Talebi on bluesky
    They need a fledgling predictive text program to help them do more wars.  🚀🚀🚀
  • r/OpenAI on reddit
    Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards
  • r/UnderReportedNews on reddit
    Pentagon sets Friday deadline for Anthropic to abandon ethics rules for AI — or else
  • r/news on reddit
    US military leaders pressure Anthropic to bend Claude safeguards
  • r/politics on reddit
    Pete Hegseth's Pentagon AI bro squad includes a former Uber executive and a private equity billionaire
  • r/politics on reddit
    Hegseth threatens to force AI firm to share tech, escalating Anthropic standoff
  • r/Military on reddit
    Hegseth gives Anthropic until Friday to back down on AI safeguards
  • r/technology on reddit
    Anthropic won't budge as Pentagon escalates AI dispute
  • r/Fuckthealtright on reddit
    Pentagon sets Friday deadline for Anthropic to abandon ethics rules for AI — or else
  • r/Defeat_Project_2025 on reddit
    Pentagon sets Friday deadline for Anthropic to abandon ethics rules for AI — or else
  • r/conservativeterrorism on reddit
    Pentagon sets Friday deadline for Anthropic to abandon ethics rules for AI — or else
  • r/centrist on reddit
    Hegseth gives Anthropic until Friday to back down on AI safeguards
  • r/technology on reddit
    Pentagon sets Friday deadline for Anthropic to abandon ethics rules for AI — or else
  • r/PoliticalOptimism on reddit
    Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards
  • r/politics on reddit
    US military leaders meet with Anthropic to argue against Claude safeguards
  • r/politics on reddit
    Hegseth sets Friday deadline for Anthropic to drop its AI red lines
  • r/ClaudeAI on reddit
    Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards
  • r/Anthropic on reddit
    Exclusive: Hegseth gives Anthropic until Friday to back down on AI safeguards
  • r/neoliberal on reddit
    Hegseth gives Anthropic CEO until Friday to back down in AI safeguards fight