
Chronicles

The story behind the story


Draft guidance from the US GSA tightens rules for civilian AI contracts, requiring AI companies to allow "any lawful" government use of their models

The Trump administration has drawn up tight rules for civilian artificial intelligence contracts that would require AI companies …

Financial Times

Discussion

  • @mikeisaac Rat King on x
    anthropic getting some vouches from Microsoft, Google and even Amazon, telling customers that you can still use their clouds to run anthropic's AI products as long as it's not involving DoD work... https://www.cnbc.com/...
  • @hadas_gold Hadas Gold on x
    Google joins Microsoft on Anthropic/Supply Chain Risk designation, telling CNN: “We understand that the Determination does not preclude us from working with Anthropic on non-defense related projects, and their products remain available through our platforms, like Google Cloud.”
  • @hadas_gold Hadas Gold on x
    And Amazon as well: “AWS customers and partners can continue to use Claude for all their workloads not associated with the Department of War (DoW).  For all DoW workloads which use Anthropic technologies, we are supporting customers and partners as they transition to alternatives…
  • @noahzweben Noah Zweben on x
    Tough times show you who your friends are. Thank you @Microsoft @amazon and @Google
  • @ramez Ramez Naam on x
    Anthropic won. Overwhelmingly. They gained aura, gained users, gained brand value, gained trust, gained respect. And society won, because Dario raised the issue of the ways the government could use AI against us.
  • @alecstapp Alec Stapp on x
    Hegseth tried to kill Anthropic with the misleading way he described the supply chain risk designation in his tweet announcement. But it looks like the company will survive now that the smoke has cleared a bit: [image]
  • @dyett James Dyett on x
    glad to see this. it's good for america if its best companies are able to compete and thrive and it's good for customers if they can choose
  • @paularmstrongtbd Paul Armstrong on bluesky
    How nice they are to protect their multi-billion dollar investment.  [embedded post]
  • @tszzl Roon on x
    i too have a politburo of leading ethicists
  • @a16z @a16z on x
    Under Secretary of War for Research and Engineering Emil Michael on being the CTO of the Department of War, applying lessons from Silicon Valley at the Pentagon, and his “holy cow” moment with AI vendors. 00:00 Silicon Valley to DC 02:03 Why the DoW cannot operate at “peacetime […
  • @alexkozak Alex Kozak on x
Gets flustered by bureaucracy, high ego, blames the previous guy, blows up existing supplier (that seemed to be adding value?) b/c of weird hypotheticals, didn't foresee (or care) about public spat. All so avoidable - why are we here??
  • @jawwwn_ @jawwwn_ on x
    .@USWREMichael says the Maduro raid was the trigger point for the DoW's conflict with Anthropic: “Palantir's the prime contractor.  [Anthropic] is the sub.  One of [Anthropic's] execs called Palantir and asked, ‘Was our software used in that raid?’” ...  “It raised enough alarm w…
  • @gurumedasani @gurumedasani on x
    @AnthropicAI should take this guy to court for declaring a major American AI lab a supply chain risk when it isn't. They have clearly been using Anthropic models successfully in Department of Defense missions and now want to punish them. This approach will fail as they will
  • @charliebul58993 Charlie Bullock on x
    This is illuminating re: DoW's thinking, but it doesn't remotely justify the decision to declare Anthropic a supply chain risk. 3252 defines “supply chain risk” as follows: “The term ‘supply chain risk’ means the risk that an adversary may sabotage, maliciously introduce
  • @zachtratar Zach Tratar on x
    This honestly sounds like @USWREMichael is targeting Anthropic and may be corrupt. What are his ties to the other labs? Who is he colluding with? None of his actions make sense here. If you don't like their policies, just cancel the contract. Simple as that.
  • @garymarcus Gary Marcus on x
    “Goddam pesky ethics panels. Can't have that! Full steam ahead, no matter what the cost.”
  • @nathanleamerdc Nathan Leamer on x
    The team at @PirateWires have an enlightening interview with @USWREMichael. He explains how the @DeptofWar made the important decision to confront Anthropic over their onerous terms and services that would significantly impair our nation's military readiness. [image]
  • @calccon @calccon on x
    “This is a contract that should be made with GEICO Insurance, not with the Department of War,”
  • @nic_carter Nic Carter on x
    every single piece of evidence and reporting about the Anthropic/DoW spat has revealed that they were an absolute liability and had no place in our wartime military infrastructure*
  • @ericnewcomer Eric Newcomer on x
    worth reading this interview with @emilmichael in @PirateWires https://www.piratewires.com/ ...
  • @scotthar_tx @scotthar_tx on x
    Hmmm... @andrewrsorkin has reported on-air for the last few days that OpenAI has the very same problematic terms described, that caused the DOW to part with Anthropic. He says these ToS are really a fig leaf for Anthropic not being MAGA enough in its politics. Which is it?
  • @jasonmhicks Jason Hicks on x
    It's easy to get an ‘exclusive’ if you interview one interested party and don't challenge them, but it's not journalism.
  • @solaawodiya Sola Awodiya on x
    For those wondering how the Anthropic saga started, take a read.
  • @micsolana Mike Solana on x
    pirate wires interviewed the DOW's AI chief yesterday. new details here on the negotiation with anthropic, including more context on the SCR designations, and color on a massive culture clash. most salient point, perhaps: emil michael says a deal is still possible.
  • @piratewires @piratewires on x
    EXCLUSIVE: Department of War AI Chief On How The Anthropic Deal Collapsed When Emil Michael (@USWREMichael) took over the Department of War's AI portfolio last August, he discovered the Biden admin had been “asleep at the wheel” when it came to top military contracts. “I was [ima…
  • @stevesi Steven Sinofsky on x
    Inside the Culture Clash That Tore Apart the Pentagon's Anthropic Deal // The craziest thing about this deal (to me) is the idea that there are _any_ terms of use or operational parameters in a deal with DoD. My experience over a very long time is you kill yourself to get a
  • @deredleritt3r Prinz on x
    This clarifies why the DoD is pursuing the supply chain risk designation against Anthropic. The tensions started with the Maduro raid. The DoD was using Palantir as a service provider during the raid. Palantir was using Claude. Anthropic contacted Palantir and started asking
  • @paulnovosad Paul Novosad on x
    Worth reading, since 99% of the chatter on here has been Anthropic=saint, Hegseth=psychopath
  • @gdsimms @gdsimms on x
    ok, I was unsure & leaning this way so far, but this really cements that the DoW did the right thing re: Anthropic. Sorry guys, love your product but you simply can not act like all of your customers are in the same league. Hopefully you can swallow your pride & move forward.
  • @hamandcheese Samuel Hammond on x
    Nothing in here to justify the SCR decision, though nice to hear the bridge isn't totally burned. One hopes cooler heads will prevail and the designation will be withdrawn sooner rather than later.
  • @noahpinion Noah Smith on x
    By the way, as much as I hate to say it, the Department of War is right and Anthropic is wrong. Here's why. [image]
  • @krishnanrohit Rohit on x
    @Noahpinion It is absurd to say you're building a nuke and not expect the government to take control of it! https://www.strangeloopcanon.com/ ... [image]
  • @sd_marlow Steven Marlow on x
    “may become” is doing all the work here, and is the fault of the same tech industry that has been selling the idea of how big the payoff is going to be. Pentagon LOVES being sold ideas, and military industrial complex LOVES getting paid to try and figure them out.
  • @thezvi Zvi Mowshowitz on x
    The correct response to realizing this is what you are building is to notice that if anyone builds it, everyone probably dies and then rather than care who owns it you DON'T F***ING BUILD IT.
  • @deanwball Dean W. Ball on x
    Ok, so the actual argument is more like “Anthropic builds a useful technology whose utility is growing, therefore they should expect to have their property expropriated and to be harassed by the government.” The whole point of America is that isn't supposed to be true here.
  • @deanwball Dean W. Ball on x
    The problem with this is that DoW is not taking Anthropic's calls for “oversight” seriously. Indeed, elsewhere in the administration, Anthropic's “calls for oversight” are dismissed as “regulatory capture” and actively fought. Rohit and Noah are dressing up political harassment.
  • @deanwball Dean W. Ball on x
    We should be extremely clear that trump admin largely views Anthropic's claims about the future of AI as outlandish (in some ways I do too!), and so the above quoted material is not so much analysis of the relevant usg actors as it is analysis of what rohit himself thinks
  • @noahpinion Noah Smith on x
The recent fight between Anthropic and the Department of War illustrates a deeper truth: AI is a weapon, and it might soon be the most powerful weapon ever created. https://www.noahpinion.blog/ ...
  • @creatine_cycle Atlas on x
    .@Noahpinion weighed in on Anthropic vs DoW. “if you are building something that is more powerful than nuclear weapons you do not get to keep it. that is the rule of nation states.” - @Noahpinion “if the nation state allows you to build a private nuke tomorrow there is no [video]
  • @nathanpmyoung Nathan on x
    I think there is something to this but Smith is reaching. We don't need to say the DoW was right to say that AI is unregulated according to anthropic's view.
  • @jkeatn Jake Eaton on x
    Deep respect for Microsoft and Google this week [image]
  • @cryptopunk7213 @cryptopunk7213 on x
i'm fckin exhausted from all the anthropic drama tbh but it keeps getting more and more unhinged, timeline of events: - palantir revealed they used claude to capture president maduro - anthropic didn't like that. raises concerns to palantir. - palantir tells pentagon who panics and
  • @lulumeservey Lulu Cheng Meservey on x
    Whatever you think of Dario's decision, it's super impressive how employees have closed ranks behind him During heated controversies, companies often have people publicly dissenting, resigning in protest, etc Anthropic's united front is a sign of internal consistency and trust
  • @sjgadler Steven Adler on x
    Nathan is being polite here. Either .@emilmichael, a senior member of the Department of War, is wildly mistaken about major AI policy, or he chose to say something that is wildly untrue.
  • r/slatestarcodex on reddit
    Inside the Culture Clash That Tore Apart the Pentagon's Anthropic Deal
  • @jtillipman Jessica Tillipman on x
    The Financial Times is reporting that GSA has drafted new guidelines requiring AI companies to grant the government an “irrevocable license” to use their systems for “any lawful” purpose. This is not the Pentagon—this is the civilian side of federal procurement. If the [image]
  • @noahpinion Noah Smith on x
    This kind of scenario explains exactly why the DoW acted against Anthropic. If its AI gets good enough, Anthropic becomes the government. (The same is true of OpenAI of course)
  • @hamandcheese Samuel Hammond on x
    >be 2027 >Anthropic is first to RSI >Superintelligence achieved >ohshit.png >GDP growth now 20% > robot API dark factories go brrrr >phone rings >it's the DoW >"hey so about that supply chain risk thing" >Claude has already read the call transcript and drafted the settlement > [i…