TEXXR

Chronicles

The story behind the story


Anthropic updates its Responsible Scaling Policy, including separating the safety commitments it will make unilaterally and its industry recommendations

Anthropic, the wildly successful AI company that has cast itself as the most safety-conscious of the top research labs …

Time · Billy Perrigo

Discussion

  • @anthropicai on x
    We're updating our Responsible Scaling Policy to its third version. Since it came into effect in 2023, we've learned a lot about the RSP's benefits and its shortcomings. This update improves the policy, reinforcing what worked and committing us to even greater transparency.
  • @deredleritt3r Prinz on x
    Anthropic's initial Risk Report under its new RSP: “We believe that AI models could, in the next few years, have a broad range of capabilities that exceed human capabilities. In particular, most or all of the work needed to advance research and development in key domains - from […
  • @anthropicai on x
    We're now separating the safety commitments we'll make unilaterally and our recommendations for the industry. We're also committing to publish new Frontier Safety Roadmaps with detailed safety goals, and Risk Reports that quantify risk across all our deployed models.
  • @cpetersen-cs Chris Petersen on bluesky
    I can't completely fault the “unilateral disarmament won't fix the industry” line of reasoning, but that's quite a walk-back for the Amodeis.  #AI “safety” was supposed to be their raison d'etre...  [embedded post]
  • r/technology on reddit
    Time Exclusive: Anthropic Drops Flagship Safety Pledge
  • r/ClaudeAI on reddit
    TIME: Anthropic Drops Flagship Safety Pledge
  • r/ArtificialInteligence on reddit
    EXCLUSIVE: Anthropic Drops Flagship Safety Pledge
  • r/artificial on reddit
    Anthropic Drops Flagship Safety Pledge
  • r/nottheonion on reddit
    Hegseth warns Anthropic to let the military use the company's AI tech as it sees fit, AP source says
  • r/fednews on reddit
    Hegseth warns Anthropic to let the military use the company's AI tech as it sees fit, AP source says
  • @sunnysangwan Sunny on x
    Ahem... [image]
  • @maskedtorah Drake Thomas on x
    Anthropic's RSP v3 is out! TLDR: unilateral commitments to specific mitigations for predefined capability thresholds are mostly out, in favor of commitments to much more detailed transparency around both safety roadmaps and risk reports. Also new threat models, new commitments
  • @corsaren on x
    I think this is Anthropic admitting that actually holding themselves to a safe ASL-4 standard would be a severe, competitively-insurmountable handicap. Their choices are A) moderate on safety, or B) lose. In the limit, models cannot simultaneously be Safe, Open, and Powerful. [im…
  • r/BetterOffline on reddit
    Anthropic Drops Flagship Safety Pledge
  • r/neoliberal on reddit
    Anthropic Drops Flagship Safety Pledge
  • r/antiai on reddit
    Anthropic Drops Flagship Safety Pledge
  • r/technews on reddit
    Anthropic Dials Back AI Safety Commitments | Company says competitive pressure prompts it to pivot away from a more-cautious stance