The Financial Times reported on February 20 that Amazon's AI coding tools had caused at least two AWS outages, including a 13-hour disruption in December. Amazon's response was precise: "user error, not AI error." The tool worked correctly. The humans misused it. This framing might be more convincing if, eight days earlier, Business Insider hadn't reported that roughly 1,500 Amazon engineers had pushed internally for the right to use Anthropic's Claude Code instead of Amazon's in-house tool, Kiro.

The Outage

A 13-hour AWS disruption is not a minor incident. AWS runs a significant portion of the internet's infrastructure — streaming services, financial platforms, enterprise applications, government systems. When AWS goes down for 13 hours, the blast radius extends to millions of end users who have never heard of Amazon's internal coding tools.

The disruption was caused by AI-generated code — specifically, code produced by Kiro, Amazon's in-house AI coding assistant. Amazon's position is that the tool generated correct code but the engineers applied it incorrectly. "User error, not AI error." The AI wrote what it was asked to write. The human asked for the wrong thing.

This is a familiar defense. It's the same logic gun manufacturers use ("guns don't kill people"), the same logic social media platforms use ("algorithms just show you what you engage with"), and the same logic every tool-maker has used since tools were invented: the tool is neutral, the user is responsible. But when the tool is writing production code for the world's largest cloud provider, the distinction between "tool error" and "user error" becomes less meaningful. The code broke AWS either way.

The Revolt

February 2026, Business Insider: internal messages show Amazon pushing its in-house AI coding assistant Kiro for production code, prompting criticism and leading roughly 1,500 staff to push for Claude Code.

Eight days before the outage story broke, Business Insider published internal messages showing that Amazon had been pushing Kiro for production code — and that its own engineers were pushing back. Roughly 1,500 staff advocated internally for access to Anthropic's Claude Code instead. The criticism wasn't abstract. Engineers who write code for AWS every day had formed a judgment about which tool they trusted with production systems. Amazon overruled them.

The 1,500-engineer revolt is the detail that transforms this from a product failure into an institutional failure. This wasn't a surprise. The people closest to the infrastructure — the people who would have to debug the outage at 3 AM — told management they didn't trust the tool. Management mandated the tool anyway, because using a competitor's product meant Amazon's code, its infrastructure patterns, and its proprietary knowledge would flow through Anthropic's models.

The logic is understandable. Amazon competes with Anthropic (it's also Anthropic's largest investor, a relationship that creates its own tensions). Mandating Kiro keeps institutional knowledge in-house, trains Amazon's models on Amazon's codebase, and avoids strategic dependency on a company it both funds and competes with. These are rational decisions from a corporate strategy perspective. They are also decisions that resulted in a 13-hour outage of the world's most critical cloud infrastructure.

The Counterpoint

On the same day the outage story appeared, Anthropic launched Claude Code Security — a feature that scans codebases for security vulnerabilities before code ships to production. The timing may be coincidental. The positioning is not. Claude Code Security is designed to prevent exactly the kind of failure that Kiro produced: AI-generated code that introduces vulnerabilities or breaks systems.

The contrast writes itself. One company's AI coding tool broke the world's largest cloud provider. The other company launched a tool to prevent AI coding tools from doing exactly that. And 1,500 engineers at the first company had already voted with their keyboards for the second company's product.

The Question

The deeper question isn't whether Kiro or Claude Code is better. It's what happens when AI coding tools are deployed at infrastructure scale — writing code that runs services used by hundreds of millions of people — without the kind of safety guarantees that scale demands.

When a human engineer writes code that causes a 13-hour AWS outage, there's a post-mortem. Root causes are identified. Processes are changed. The engineer learns. When an AI tool writes code that causes the same outage, the company says "user error" and the tool continues generating code with the same capabilities and the same blind spots. The feedback loop that makes human engineering teams improve after failures doesn't apply in the same way to AI-generated code. The tool doesn't learn from the outage. It doesn't feel the 3 AM page.

"User error, not AI error" isn't just a PR response. It's a framework for accountability that assigns all responsibility to the human and none to the system. And when 1,500 engineers have already told you they don't trust the system, that framework starts to look less like accuracy and more like deflection.