
Chronicles

The story behind the story


OpenAI says it does not think Anthropic should be designated a supply chain risk, and that it has made its position clear to the Pentagon

We do not think Anthropic should be designated as a supply chain risk and we've made our position on this clear to the Department of War.

@openai

Discussion

  • NewsMax.com on x
    Anthropic: Will Fight Pentagon's Supply Risk Label
  • @_nathancalvin Nathan Calvin on x
    Good - in Sam's previous comments on CNBC, he only mentioned using the DPA to force anthropic was a bad idea, so I appreciate they are making clear this also applies to the SCR designation. Other companies should also state this position as clearly as possible.
  • @chrisharihar Chris Harihar on x
    OpenAI needs to stop being reactive and overexplaining every move. They also need to stop explicitly addressing what competitors are/seem to be doing. It's embarrassing.
  • @openai on x
    Other AI labs have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. We think our approach better protects against unacceptable use. In our agreement, we protect our redlines through a…
  • @gauravkapadia Gaurav Kapadia on x
    Governments should be able to choose their vendors based on their terms of service and capabilities. Deeming a company a security risk- and threatening their very existence- because you don't like their ToS is a very slippery slope. Wouldn't want XAI and OAI to be threatened by…
  • @captgouda24 Nicholas Decker on x
    Certainly, but one can say many things without working to effect them. One need not even call it lying.
  • @terronk Lee Edwards on x
    Stand with free markets, competition, and American tech alongside OpenAI and Anthropic.
  • @arthurconmy Arthur Conmy on x
    There are many open questions on the current situation. But on the particular narrow point on stance, this seems a good sign: https://x.com/...
  • @daniellefong Danielle Fong on x
    is your position clear enough that it is itself a red line
  • @openai on x
    Our agreement with the Department of War upholds our redlines: - No use of OpenAI technology for mass domestic surveillance. - No use of OpenAI technology to direct autonomous weapons systems. - No use of OpenAI technology for high-stakes automated decisions (e.g. systems such…
  • @claudia_sahm Claudia Sahm on x
    Put your money where your mouth is.
  • r/technology on reddit
    Employees at Google and OpenAI support Anthropic's Pentagon stand in open letter
  • @wildebees Wessel van Rensburg on bluesky
    OpenAI trying to protect its reputation [embedded post]
  • r/technology on reddit
    Pentagon moves to designate Anthropic as a supply-chain risk
  • r/politics on reddit
    OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash