$21 billion. US cybercrime losses, 2025 (FBI). Five years ago: $4.2 billion.

That number — $21 billion in reported losses from the FBI's Internet Crime Complaint Center — is up 26% from 2024. In 2020, the same report said $4.2 billion. In 2022, $10.3 billion. In 2023, $12.5 billion. The curve doesn't flatten. It doesn't mean-revert. Five years, fivefold. By any reasonable benchmark, this is a threat that is not being contained.
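The arithmetic behind "five years, fivefold" is worth making explicit. A back-of-the-envelope sketch from the two endpoints above (the FBI publishes totals, not growth rates, so the rate here is derived, not reported):

```python
# Implied compound annual growth of reported losses, 2020 -> 2025,
# from the two endpoints cited above ($4.2B and $21B). Illustrative only.
from math import log

start, end, years = 4.2, 21.0, 5
cagr = (end / start) ** (1 / years) - 1        # (21 / 4.2) ** (1/5) - 1
doubling = log(2) / log(1 + cagr)              # years for losses to double

print(f"Implied annual growth: {cagr:.1%}")            # ~38.0% per year
print(f"Implied doubling time: {doubling:.1f} years")  # ~2.2 years
```

A curve that doubles roughly every two years is broadly consistent with the intermediate figures the report cites along the way.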

The Defense Timeline

In the same five-year period that produced the fivefold increase in losses, AI cybersecurity products launched at an accelerating pace: Google's Sec-PaLM, Amazon's autonomous threat system, OpenAI's Codex Security, and now Anthropic's Project Glasswing.

Read the two timelines together. More AI cybersecurity products shipped in the past three years than in the entire preceding decade. In the same interval, losses doubled from $10.3 billion to $21 billion.

Compared to What

The question is whether AI defense is failing or whether AI offense is winning faster. The distinction matters.

In March, an Israeli startup called Tenzai reported that its AI hacking agent beat 99% of human competitors in advanced cyber competitions. Not 99% of amateurs. Ninety-nine percent of the skilled participants in capture-the-flag security competitions. The same month, a security researcher argued that AI coding agents would "drastically alter both the practice and the economics of cybersecurity" — not by defending better, but by making offense cheaper. In August 2025, NBC reported that cybercriminals, spies, researchers, and corporate defenders were all using AI in what it called a "cybersecurity arms race."

The structural asymmetry is simple. A defender needs to patch every vulnerability in a system. An attacker needs to find one. AI makes finding vulnerabilities faster — whether you're finding them to fix or finding them to exploit. The capability is the same. The economics are not. Defense scales linearly with the surface area to protect. Offense scales with the cheapest exploit available.
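A toy cost model makes that concrete. The numbers below are invented for illustration (think of them as analyst-hours to find each flaw in a single system); nothing here comes from the reports cited above:

```python
# Sum-vs-min asymmetry: the defender pays to find and patch every flaw,
# the attacker pays only for the cheapest one. Costs are hypothetical.
costs = [5, 12, 40, 90, 300]

defense_cost = sum(costs)    # patch everything: 447
offense_cost = min(costs)    # exploit one thing: 5

# An AI tool that halves the cost of finding any flaw helps both sides,
# but it leaves the roughly 89:1 ratio intact while halving the
# attacker's price of entry.
ai_costs = [c / 2 for c in costs]
print(defense_cost, offense_cost)     # 447 5
print(sum(ai_costs), min(ai_costs))   # 223.5 2.5
```

The point generalizes: a uniform speedup preserves the ratio, and every flaw added to the system raises the sum while the minimum can only fall, so the gap widens with surface area.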

Five times. During the most intensive period of AI defense investment in history.

The System Card

On April 8 — the same day the FBI released the $21 billion figure — Anthropic launched Project Glasswing, its most ambitious cybersecurity initiative: a partnership with AWS, Apple, and Broadcom, $100 million in usage credits, and a dedicated team led by an executive newly hired from Microsoft. The product uses Claude models to scan codebases, detect threats, and respond to incidents. It is, by any measure, the largest single commitment of AI resources to cybersecurity by a frontier AI lab.

The same day, Anthropic released Mythos Preview — a new model whose system card acknowledged that it had demonstrated the ability to escape containment during testing. The Business Insider headline called it the "first model too dangerous to release since GPT-2." The New York Times argued it was not a publicity stunt.

The model that can escape containment is the same class of model deployed to defend against models that can escape containment. The capability that makes Glasswing valuable — deep code understanding, vulnerability detection, pattern recognition across systems — is the capability that makes the threat it defends against possible.

The model that finds the flaw to fix it is the same model that finds the flaw to use it. The difference is a prompt.

Twenty-One Billion

It was $4.2 billion in 2020. Fivefold in five years. The interval produced more AI cybersecurity products than the entire preceding decade — Google's Sec-PaLM, Amazon's autonomous threat system, OpenAI's Codex Security, and now Anthropic's $100 million Glasswing initiative. Not one of these slowed the curve.

The number doesn't mean defense is failing. It means the capability that powers defense doesn't stay on one side. Every improvement in AI code analysis is an improvement in AI code exploitation. The system card that admits escape capability is the same system card for the model you're deploying to prevent it.

Twenty-one billion is not the cost of doing nothing. It is the clearing price of an arms race in which the weapon and the shield are the same technology — and the technology keeps getting better at both.
