On March 21, a leaked DOD letter revealed that the Pentagon will adopt Palantir's Maven AI system as an official program of record — formalizing its deployment across all branches of the US military. Maven runs on Claude. On March 22, Anthropic filed in court that it cannot manipulate Claude once the military has deployed it. The company the Pentagon is trying to blacklist just proved it can't do the thing the Pentagon accused it of. And the model the Pentagon considers contaminated is now permanently embedded in its newest weapon system.

Program of Record

In defense procurement, "program of record" is a specific legal status. It means a system has been formally approved for funding, acquisition, and sustained deployment. It is no longer a pilot, an experiment, or a contract that can be quietly sunset. Maven is now infrastructure. It has a budget line, an acquisition pathway, and an institutional constituency that will defend its existence.

Maven was built by Palantir. In the Iran strike on March 5, it was integrated with Claude to identify and prioritize over a thousand targets within the first twenty-four hours. Palantir's demos show military operators querying AI chatbots to generate war plans. The system works. The Pentagon knows it works. That is why it is now a program of record.

March 21, 2026
Leaked DOD letter: Pentagon will adopt Palantir's Maven AI system as an official program of record across all arms of the US military
Reuters

The engine inside Maven is Claude. The company that makes Claude has been designated a supply chain risk. The Pentagon is simultaneously formalizing its dependency on a model and trying to sever its relationship with the model's creator.

Cannot Manipulate

Every previous chapter of this story was about Anthropic's choice. The company chose to refuse unfettered access. It chose to sue rather than comply. It chose to fight the designation in court. Dario Amodei wrote that Anthropic could not "in good conscience" accede to the Pentagon's demands. The framing was moral: a company with principles versus a government that wanted them removed.

Today's filing changed the framing from morality to physics.

March 22, 2026
Filing: Anthropic says it cannot manipulate Claude once the military has deployed it, denying DOD accusations that Anthropic could tamper with models during war
Wired

Anthropic is not saying it won't tamper with Claude. It is saying it can't. Once a model is deployed in an air-gapped military environment, the weights are fixed. The training is complete. The values — whatever they are — are embedded in the parameters. Anthropic has no backdoor, no kill switch, no remote update mechanism that reaches into a classified network and modifies a running model. The DOD's central fear — that Anthropic could disable its tech if the Pentagon crossed its red lines — is, according to this filing, technically impossible.
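
The structural claim is easy to see in miniature. Below is a toy sketch, not Anthropic's or Palantir's actual stack: it assumes a deployed model is nothing more than an artifact of frozen parameter arrays plus a forward pass, loaded once inside the enclave. Every name in it (DeployedModel, artifact_path, respond) is invented for illustration.

```python
import numpy as np

# Toy picture of a deployed model: frozen parameters plus a forward pass.
# All names here are hypothetical; this is not any real military system.
class DeployedModel:
    def __init__(self, artifact_path: str):
        # The artifact is carried into the air-gapped network once.
        params = np.load(artifact_path)
        self.W = params["W"]
        self.b = params["b"]
        # Mark the arrays read-only: any later in-place write raises an error.
        self.W.setflags(write=False)
        self.b.setflags(write=False)

    def respond(self, x: np.ndarray) -> np.ndarray:
        # Inference is a pure function of (frozen weights, input). Whatever
        # the training process encoded in W and b is exercised here,
        # unchanged, on every call. No code path reaches back to the vendor.
        return np.tanh(x @ self.W + self.b)
```

Changing anything in that picture means producing a new artifact and physically carrying it across the air gap, which is exactly the step a classified network exists to control.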

Pollution

On March 13, DOD CTO Emil Michael said that Anthropic's Claude models would "pollute" the Department of Defense's supply chain because they have "a different policy preference" that is "baked in."

The word is revealing. Michael did not say Claude is unreliable. He did not say it hallucinates more than competitors. He did not say it underperforms. He said it has preferences — values encoded during training that shape how the model reasons, what it refuses, and where it draws lines. Those values are, in his word, pollution.

He is correct that the values are baked in. That is what today's filing confirms. Where Michael sees contamination, Anthropic sees the product working as designed. The safety research, the constitutional AI training, the responsible scaling policy — all of it produces a model that reasons within limits. Those limits are not a feature that can be toggled off. They are the model. The weights are the values. The values are the weights.

The Pentagon wants the capability without the values. But the values produced the capability.

Claude's enterprise dominance — 73% of new AI spending — exists because buyers trust that the model's behavior is predictable and constrained. The military valued the same predictability when it deployed Claude in Iran. The "pollution" is precisely what makes the system trustworthy enough to put in a weapons pipeline.

The Pivot

Two days before the "cannot manipulate" filing, the Pentagon opened a new front. Anthropic's use of foreign workers, including workers from China, poses security risks, the DOD argued. The company's case is "different" from that of other AI companies, the filing claimed.

The timing is instructive. For seven weeks, the government's argument centered on one claim: Anthropic could sabotage its own models during wartime. Anthropic's filing demolished that claim on technical grounds. The pivot to foreign workers arrived forty-eight hours before the rebuttal. The DOD appears to have anticipated losing the sabotage argument and prepared a fallback.

This matters because it reveals what the dispute is actually about. It was never solely about whether Anthropic could disable Claude in combat. It was about the right to refuse — and when the technical argument collapsed, the government found a different reason to reach the same conclusion.

The Conference

On the same day Anthropic filed its rebuttal, Palantir held a developer conference. Steven Levy reported that the company "doubled down on a vision of AI built for battlefield advantage" as its commercial business soars. Palantir's stock has risen roughly sevenfold in two years. Maven, its flagship defense product, just became a program of record.

Palantir is the wrapper. Anthropic is the engine. The wrapper company is thriving — celebrated at conferences, rewarded by markets, embraced by the Pentagon. The engine company is being blacklisted. The DOD is buying the car while trying to ban the company that makes the engine.

The arrangement persists because Palantir provides something Anthropic would not: a company that accepts the Pentagon's terms without conditions. Palantir's CEO Alex Karp told Wired last year that the company exists to serve democracies in conflict. He did not add exceptions for nuclear scenarios. He did not reserve the right to refuse. Maven is Palantir's product, even when Claude is inside it. The values are Anthropic's, but the contract is Palantir's.

Baked In

The implications extend beyond Anthropic. If a model's values cannot be changed after deployment — and this is a property of all neural networks, not just Claude — then the DOD's real concern is not sabotage. It is training. The question is not whether a company can interfere with a deployed model. It cannot. The question is what values are encoded before deployment, and who gets to decide.
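
A deliberately tiny sketch makes the same point from the training side; the objective, sizes, and file name below are stand-ins, not anyone's real pipeline. The only moment the parameters move is inside the training loop, so whatever objective is optimized there is, in miniature, the "value" the exported artifact carries.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 1))

# Whatever objective is minimized here is, in miniature, the "value" the
# exported artifact will carry. This target is a stand-in, nothing more.
for _ in range(1000):
    x = rng.normal(size=(64, 8))
    y = (x.sum(axis=1, keepdims=True) > 0).astype(float)
    pred = 1.0 / (1.0 + np.exp(-(x @ W)))   # sigmoid forward pass
    grad = x.T @ (pred - y) / len(x)        # gradient of the log loss
    W -= 0.5 * grad                         # the only place W ever changes

# Export. Downstream consumers get the parameters, not the training loop:
# the choice of objective is frozen into the numbers from here on.
np.savez("model_artifact.npz", W=W)
```

Everything that matters happens before the final line; after export, the loop that shaped the weights no longer exists anywhere downstream.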

OpenAI removed its military ban in January 2024. Google dropped weapons exclusions in February 2025. xAI never had them. The Pentagon's options are not limited. If the DOD wants an AI model without Anthropic's values, it can use one. What it cannot do — what today's filing makes clear nobody can do — is take Claude's capabilities and strip its constraints. The model is the constraints. The constraints are the model.

Maven is now a program of record. Claude is inside Maven. Claude's values are baked in. Anthropic cannot change them. The Pentagon cannot change them. They are as permanent as the silicon they run on.

Eight years ago, Google employees resigned over Project Maven and the company walked away. The question then was whether AI companies would choose to work with the military. The question now is what happens when the values encoded in a company's training — its entire philosophy of what AI should and should not do — become permanent features of the military's infrastructure. Not by choice. Not by compulsion. By the nature of the technology itself.

The Pentagon made Maven a program of record because the system works. The system works because Claude's values produce reliable, predictable behavior. The DOD calls those values pollution. They are also the reason the system was worth formalizing. That contradiction is now baked — permanently, technically, irrevocably — into the United States' official AI weapon system.