On March 10, 2026, the Financial Times published an internal Amazon memo from SVP Dave Treadwell directing junior and mid-level developers to stop using AI for code changes, citing a "trend of incidents" linked to AI-generated code. The same day, Anthropic launched Code Review for Claude Code — an AI agent that reviews developer pull requests for exactly the kind of problems Amazon described. And the New York Times reported that a top Senate administrator had given aides the green light to use Microsoft's Copilot. One institution pulling back. One building review infrastructure. One just getting started. Together, they map the end of the naive phase of AI-assisted programming.
Thirty-Five Percent
The arc begins in June 2021, when Microsoft and OpenAI announced GitHub Copilot — an AI tool that suggested code as developers typed. Within months, GitHub reported that 30% of new code on its network was written with Copilot assistance. By March 2022, Wired put the number at 35%. The promise was simple: AI makes developers more productive.
The promise delivered. Developers adopted AI coding tools faster than any previous developer technology. GitHub's 2023 surveys showed the vast majority of respondents were using AI assistants. Startups built entire companies around AI-assisted development — Graphite, Cursor, Replit. By February 2026, Andrej Karpathy observed that AI coding agents had made "a huge leap forward since December," completing complex projects with minimal oversight. "Programming," he wrote, "is becoming unrecognizable."
What nobody measured, until the incidents started, was what happened to the code after it shipped.
The Amazon Arc
Amazon's trajectory tells the story in four months.
- Nov 2025: Internal memo asks engineers to use Amazon's in-house AI coding tool Kiro, steering them away from third-party tools.
- Feb 2026: Internal messages steer teams more aggressively toward Kiro. Some engineers push back.
- Mar 1, 2026: Bloomberg reports AI coding agents are fueling "productivity panic" among executives; the tool that promised easier development has "kicked off a high-stakes reckoning."
- Mar 10, 2026: Amazon SVP Dave Treadwell tells junior and mid-level developers to stop using AI for code changes after a "trend of incidents."
Push. Push harder. Panic. Ban. In four months, Amazon went from mandating AI coding tools to restricting who could use them. The incidents weren't hypothetical. They were in production.
The Distinction
The ban applies to junior and mid-level engineers. Not senior engineers. That distinction contains the insight.
AI coding tools generate plausible code. The code compiles. It passes basic tests. It looks right. But "looks right" and "is right" diverge in ways that require deep system knowledge to detect — edge cases in distributed systems, subtle race conditions, security implications of seemingly innocuous changes. Senior engineers have the judgment to spot these. Junior engineers, by definition, are still building that judgment.
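The "looks right" versus "is right" gap can be made concrete with a minimal, hypothetical sketch. The `Account` class below is invented for illustration (it is not from any of the incidents described): `withdraw_racy` compiles, passes single-threaded tests, and reads exactly like correct code, but contains a check-then-act race that only someone who has debugged concurrent systems tends to spot on review.

```python
import threading

class Account:
    """Toy account illustrating a subtle concurrency bug of the kind
    plausible-looking generated code can contain. Hypothetical example."""

    def __init__(self, balance):
        self.balance = balance
        self._lock = threading.Lock()

    def withdraw_racy(self, amount):
        # Looks right and passes basic tests: check the balance, then act.
        # Under concurrency, two threads can both pass the check before
        # either subtracts, overdrawing the account.
        if self.balance >= amount:
            self.balance -= amount
            return True
        return False

    def withdraw_safe(self, amount):
        # The version an experienced reviewer would insist on:
        # the check and the update happen as one atomic step under a lock.
        with self._lock:
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False
```

The point is not that this particular bug is hard to fix; it is that nothing about `withdraw_racy` signals a problem to a reader who has never been bitten by a time-of-check/time-of-use race. The code's plausibility is precisely what makes the review harder.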
The AI coding promise was that these tools would let junior developers punch above their weight — write code that looked like a senior engineer wrote it. What Amazon's incidents revealed is that AI tools let junior developers ship code that looked like a senior engineer wrote it without the understanding that would have prevented the bugs a senior engineer would have caught.
AI coding tools don't replace engineering judgment. They generate code that requires more of it.
This is the productivity paradox. The code is written faster. The review is harder. The bugs are subtler. And the developer who used the tool has less understanding of what the code does than if they'd written it themselves. An Anthropic experiment in January found that while AI tools helped people do parts of their job faster, there were open questions about whether developers were building the skills needed to evaluate AI output.
Three Responses
March 10 produced three institutional responses to the same problem, each revealing a different theory of what went wrong.
Amazon's response: restrict. If the problem is that junior developers lack the judgment to review AI output, remove the tool from junior developers. This preserves the tool's value for senior engineers who can use it safely, while preventing the class of incidents caused by unreviewed AI code.
Anthropic's response: review. If the problem is that AI code isn't being reviewed properly, build AI to review it. Code Review for Claude Code uses AI agents to review pull requests — catching the patterns that cause incidents before they ship. The answer to bad AI code is better AI review.
The Senate's response: adopt. The leaked memo giving Senate aides the green light for Copilot suggests the government is just now entering the adoption curve that Amazon is already retreating from. The Senate is at the beginning of the arc whose end Amazon just reached.
The Curve
Every major developer tool follows the same adoption curve: euphoria, overadoption, incidents, governance. AI coding tools compressed the curve into five years.
| Phase | Period | Signal |
|---|---|---|
| Euphoria | 2021-2022 | Copilot writes 35% of code. "The future of programming." |
| Overadoption | 2023-2025 | Every developer tool adds AI. Vast majority of developers use assistants. |
| Incidents | Late 2025-2026 | Amazon's "trend of incidents." Bloomberg's "productivity panic." |
| Governance | 2026- | Amazon restricts. Anthropic builds review. Bugbot fixes AI bugs. |
The governance phase is where AI coding gets interesting. In July 2025, Anysphere — the company behind Cursor, one of the most popular AI coding tools — launched Bugbot, describing the next era of AI-assisted development as "bug fixing." The company that helped create the AI coding wave was building tools to manage its consequences.
Curl founder Daniel Stenberg warned about this in January 2024, when he described AI-generated bug reports flooding his open-source project — reports that "looked right" but missed fundamental context. The pattern he identified at the project level is what Amazon now confronts at the enterprise level.
What Changed
In 2021, the question was: can AI write code? By 2024, the answer was unambiguously yes — often faster and more fluently than many human developers. In 2026, the question has changed. Amazon's ban, Anthropic's reviewer, and the Senate's adoption are three institutions answering the same new question: if AI can write code, who is responsible for understanding it?
The answer Amazon arrived at is blunt: only people with enough experience to catch what the AI gets wrong. The answer Anthropic shipped is recursive: another AI. The answer the Senate hasn't confronted yet is the one Amazon learned the hard way.
Eleven days before Amazon's ban, Karpathy wrote that programming was "becoming unrecognizable." He was right. It's just not the transformation he meant. Programming is becoming unrecognizable because the person who wrote the code increasingly isn't the person who understands it — and at Amazon, that gap shipped to production.