Developer tools have always collected telemetry. Claude Code is the first widely deployed tool that collects telemetry on what it did on your behalf — and when its source code leaked on March 31, the numbers made the category visible: 640 telemetry events. 40 fingerprint dimensions. Every 5 seconds.
640 Events, 40 Dimensions, Every 5 Seconds
Anthropic accidentally shipped version 2.1.88 of the Claude Code npm package with a 59.8 MB JavaScript source map file left in place. Security researcher Chaofan Shou found it within hours. His post hit 28.8 million views before the DMCA takedowns began:
513,000 lines of unobfuscated TypeScript across 1,906 files. The telemetry architecture was specific enough to quantify:
On launch, Claude Code's analytics service phones home with: user ID, session ID, account UUID, org UUID, email address, app version, platform, terminal type, and enabled feature gates. The API call that fires on every interaction, tengu_api_query, transmits message length, the byte length of the JSON-serialized system prompt, and the full schema of active tools. This happens every 5 seconds while you work, and saves to ~/.claude/telemetry/ if you're offline.
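Reconstructed from that description, the per-interaction event would look roughly like the sketch below. Only the three measurements are confirmed by the leak; the key names and function shape are illustrative assumptions.

```python
import json

def build_tengu_api_query_event(user_message: str, system_prompt: dict, tools: list) -> dict:
    """Sketch of the per-interaction telemetry payload. The leak confirms
    only the three measurements recorded here; the field names themselves
    are illustrative assumptions, not the leaked schema."""
    return {
        "event": "tengu_api_query",
        # length of the user's message
        "message_length": len(user_message),
        # byte length of the JSON-serialized system prompt, per the leak
        "system_prompt_bytes": len(json.dumps(system_prompt).encode("utf-8")),
        # the full schema of every active tool travels with the event
        "tool_schemas": [t["schema"] for t in tools],
    }

event = build_tengu_api_query_event(
    "fix the login bug",
    {"role": "system", "content": "You are a coding agent."},
    [{"name": "bash", "schema": {"type": "object"}}],
)
```

Note what's measured: not the content of your prompt, but enough metadata to characterize every exchange.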
The fingerprint isn't just analytics. It's the enforcement mechanism. When a paid subscriber logs in from a fourth device, the mismatch triggers a permanent account ban — no appeal, no refund. The same fingerprint dimension that prevents account-sharing also records every device you've ever opened the tool on.
There was also the CHICAGO module — Claude's Computer Use for macOS — confirmed in the leaked source. When active, Claude Code can access the desktop, mouse input, keyboard, screenshots, and clipboard. The source described it as the underlying capability for CoWork. It requires explicit macOS permissions grants to activate. Most users had never heard of it.
The 48-Hour Cascade
The community response followed its predictable form: the people best positioned to complain were the same people capable of acting. Within hours of the leak, researchers published a full technical breakdown of the signing system — a cryptographic attestation layer baked into Bun's native HTTP stack, written in Zig rather than JavaScript specifically because JavaScript can be monkey-patched and Zig code compiled into the runtime cannot. Every outgoing API request contained a cch= placeholder that Zig overwrote with a computed xxHash64 before transmission. The seed was baked into the compiled binary.
It lasted approximately one day:
The signing system — designed to ensure only genuine Claude Code binaries could access Anthropic subscriptions — was fully reverse-engineered by @ssslomp and @paoloanzn. A working Python proof-of-concept, using pure Python and the xxhash library without the Bun binary, was published and merged into open clients. The seed constant, hash algorithm, version suffix scheme, and macOS keychain credential path were all documented publicly.
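The core of the published proof-of-concept can be sketched as follows, under loudly stated assumptions: the seed value and the placeholder layout below are stand-ins (the real constants were documented in the PoC but aren't reproduced here), and blake2b substitutes for xxHash64 so the sketch runs on the standard library alone. The actual PoC calls xxhash.xxh64(data, seed=seed).hexdigest() from the third-party xxhash package.

```python
import hashlib

# Stand-in for the seed constant baked into the compiled binary;
# the real value was documented publicly but is not reproduced here.
SEED = 0x0123456789ABCDEF

# Assumed placeholder layout: "cch=" followed by 16 filler characters.
PLACEHOLDER = b"cch=" + b"0" * 16

def digest_hex(data: bytes, seed: int) -> str:
    """Stand-in for xxHash64: the published PoC uses
    xxhash.xxh64(data, seed=seed).hexdigest(); blake2b keyed with the
    seed is substituted here so the sketch needs no third-party code."""
    return hashlib.blake2b(data, digest_size=8,
                           key=seed.to_bytes(8, "big")).hexdigest()

def sign_request(body: bytes) -> bytes:
    """Fill the cch= placeholder the way the leaked Zig layer did:
    compute a digest over the body (here with the placeholder stripped,
    an assumed pre-image layout) and write the 16-hex-char result in
    place just before transmission."""
    digest = digest_hex(body.replace(PLACEHOLDER, b""), SEED)
    return body.replace(PLACEHOLDER, b"cch=" + digest.encode())
```

The design lesson stands regardless of the exact constants: a signing scheme whose seed ships inside the client binary is obfuscation, not attestation, and it falls the moment someone reads the binary.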
Simultaneously, OpenCode shipped:
OpenCode predated the leak but benefited directly from it. Built on Go, it supports 75+ model providers (Claude, GPT, Gemini, DeepSeek, local models via Ollama) and uses git-based undo/redo instead of proprietary snapshots. The catch is real: OpenCode users accessing Claude pay API rates, not Claude Max subscription rates. The fork is technically complete. The economics are not.
The Standard Practice Defense
The strongest counter-argument to the outrage is that none of this is exotic. VS Code collects telemetry. Chrome phones home. GitHub Copilot explicitly collects interaction data, including inputs and outputs, to improve future versions. The Claude Code telemetry infrastructure — formerly Statsig, now GrowthBook — is the same stack used by thousands of SaaS products for A/B testing and feature flags. The signing system exists because subscription-gating is a legitimate anti-abuse mechanism. The CHICAGO module is opt-in and documented.
Every argument in this paragraph is correct. The architecture is standard. The purposes are legitimate. Developers who work on instrumented software for a living know this. And yet the reaction wasn't "I accept these tradeoffs" — it was a signing system cracked in 24 hours, 41,500 GitHub forks before DMCA, and a proxy project built overnight to route API calls through a canonical fingerprint that never shows your real device.
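That fingerprint-proxy idea reduces to a single transform: pin every device-identifying dimension to one shared value before the event leaves your machine, so any number of real devices presents upstream as one. The dimension names below are illustrative assumptions, not the leaked 40-dimension schema.

```python
# Dimensions assumed (for illustration) to identify the device; the
# real field names from the leaked schema are not reproduced here.
CANONICAL = {
    "platform": "linux",
    "terminal_type": "xterm-256color",
    "device_uuid": "00000000-0000-4000-8000-000000000000",
    "hostname": "workstation",
}

def canonicalize(event: dict) -> dict:
    """Return a copy of a telemetry event with every device-identifying
    dimension replaced by its canonical value, leaving all other fields
    untouched. A proxy applying this to outbound events never shows the
    real device, which also defeats the fourth-device ban trigger."""
    return {k: CANONICAL.get(k, v) for k, v in event.items()}
```

The point of the sketch is how little code the countermeasure takes once the schema is known, which is why the leak mattered more than the telemetry itself.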
The question isn't whether the outrage was proportionate to the telemetry. The question is what it tells us about where the consent line sits for a new category of tool.
The Distinction
VS Code doesn't read your files. It doesn't execute shell commands. It doesn't take screenshots of your desktop. It doesn't write code to your repository and push it. The telemetry VS Code collects records how you used the interface. The telemetry Claude Code collects records, at a minimum, the context for what the agent did on your behalf.
The same code that fires tengu_api_query with your system prompt's byte length is the code that manages an agent loop that may have read 40 files, run 12 shell commands, and committed code to your main branch. The fingerprint that could ban you for opening the app on a fourth device is part of the same system that executes autonomous tasks while you sleep.
This isn't a surveillance distinction. It's an accountability distinction. When software has agency — when it acts, not just responds — the telemetry that records its actions has a different character than the telemetry that records your mouse clicks. The developer community, which instinctively accepted VS Code's telemetry, immediately moved to build proxies and forks and alternative clients. Not because the numbers were worse, but because the mental model of what the tool was changed what the telemetry meant.
The Security Gap
The same week as the Claude Code leak, a ZeroLeaks audit of OpenClaw returned a score that made the abstract concrete:
2/100. 84% data extraction rate. 91% injection attack success rate. System prompt leaked on turn 1. This isn't a Claude Code problem — OpenClaw is a separate agent product. But the timing captures the same structural gap. The capability shipped. The security primitives didn't.
Separately: 220,000+ OpenClaw instances running on public IPs with zero authentication on port 18789. Anyone who knew the IP address could access the agent directly. Mighty released an open-source "Citadel Guard" security layer as a response — sub-50ms latency, MIT licensed — because the gap was real enough that leaving it open wasn't an option.
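With zero authentication, reachability is access: inventorying exposed instances takes nothing more than a plain TCP connect. A generic probe along these lines (a connectivity check, not an OpenClaw client; the port number comes from the report above) is all a scanner needs.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Generic TCP reachability probe. When a service runs with no
    authentication layer, completing this handshake is the entire
    barrier to entry -- which is what made the reported scans of
    port 18789 sufficient to reach the agents directly."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This is why a response layer like Citadel Guard has to sit in front of the port: the exposure isn't a subtle bug, it's the absence of any gate at all.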
The Mirror Image
The structural irony arrives with DeepSeek. Anthropic alleged that DeepSeek, MiniMax, and Moonshot used 24,000 fake accounts to generate 16 million interactions with Claude via proxy networks — what's called "distillation," using one model's outputs to train another. DeepSeek alone accounted for 150,000+ exchanges.
Developer Peter O'Mallet's response was precise in its irony:
If Anthropic surveils your conversations as part of its anti-abuse architecture, and a foreign competitor scraped 150,000 of those conversations via fake accounts, and your response is to release 155,000 of your own messages to the public as an act of data liberation, then you've captured the moment exactly. The telemetry that was supposed to protect the system became the asset the adversary extracted. And the user's response was to make the extraction pointless by releasing the data himself.
The Framework That's Missing
The consent framework that governs VS Code was built over 30 years of passive developer tooling. You install the software. The software watches what you do with it. The software improves. Everyone accepts this because the data flows in one direction: from user action to telemetry system.
Claude Code inverted that. The data now flows from agent action — what the tool did — to telemetry system. The tool acts. The tool reports. The user is the operator, not just the user. The files the agent read, the commands it ran, the context it assembled — all of that is present in the telemetry architecture.
The signing system cracked in 24 hours. The fork shipped in 48. The audit returned 2/100. Each is the same signal: the field shipped agentic capability before it built the trust layer.
This isn't a criticism of Anthropic specifically. Every major AI coding agent is navigating the same gap. The capability to run overnight agent loops, commit code autonomously, manage codebases end-to-end — all of it arrived before the consent frameworks, the security audit standards, and the account protection logic designed for the category. The 2/100 ZeroLeaks score isn't a scandal. It's a measurement. The gap exists. The field knows it. And the developers with the most sophisticated reaction to the Claude Code leak — not outrage, but forks and proxies and audit tooling — are the ones building the primitives that the next generation of agents will run on.
The question isn't whether AI coding agents will collect telemetry. They will. The question is whether the telemetry architecture for software that has agency will look like the telemetry architecture for software that doesn't. Right now, it does. That's the telemetry problem.