Chronicles

The story behind the story

Anthropic says it has fixed three recent Claude Code quality issues: reduced default reasoning, a caching bug, and a system prompt change to reduce verbosity

Anthropic

Discussion

  • @claudedevs @claudedevs on x
    Over the past month, some of you reported Claude Code's quality had slipped. We investigated, and published a post-mortem on the three issues we found. All are fixed in v2.1.116+ and we've reset usage limits for all subscribers.
  • @firstadopter Tae Kim on x
    The most ridiculous part of this is that Anthropic employees were repeatedly in people's X mentions, gaslighting customers and telling them they were wrong.
  • @claudedevs @claudedevs on x
    We're making changes to catch these types of issues earlier, including more internal dogfooding with configs that exactly match those of our users and creating a broader set of evals and running them against isolated system prompt changes
  • @theo @theo on x
    Confirmed that Claude Code got dumber, not Claude. They shipped slop and it made the models worse. [image]
  • @claudedevs @claudedevs on x
    The issues stemmed from Claude Code and the Agent SDK harness, which also impacted Cowork since it runs on the SDK. The models themselves didn't regress, and the Claude API was not affected.
  • @bcherny Boris Cherny on x
    We've been looking into recent reports around Claude Code quality issues, and just published a post-mortem on what we found.
  • @edzitron Ed Zitron on x
    it's wild that they did not acknowledge this the entire time - Boris even went around saying nothing was wrong! Laughable company, be very curious to see if this fixes stability issues
  • @carnage4life Dare Obasanjo on bluesky
    Anthropic has tracked down and fixed why Claude seems to have gotten dumber in the past month.  —  1. Reduced default reasoning effort (from High to Medium).  —  2. A caching bug that accidentally deleted Claude's memory mid-session.  —  3. A system prompt change for verbosity th…
  • r/Anthropic on reddit
    Official: An update on recent Claude Code quality reports
  • r/ItalyInformatica on reddit
    It wasn't paranoia: Claude really had gotten worse, and Anthropic confirms it
  • r/ClaudeAI on reddit
    Claude Code has big problems and the Post-Mortem is not enough
  • r/ClaudeCode on reddit
    An update on recent Claude Code quality reports
  • r/LocalLLaMA on reddit
    Anthropic admits to have made hosted models more stupid, proving the importance of open weight, local models
  • @ns123abc Nik on x
    we don't degrade our mod— [image]