Anthropic built a framework to govern its most dangerous models. OpenAI is now building a product designed the same way — and describing it, explicitly, as "similar to Anthropic's."

Claude Mythos Preview launched April 8 to 40+ organizations — AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, Microsoft, Nvidia, Palo Alto Networks, and others maintaining critical software infrastructure. Not beta users. Not early access. The fabric of digital infrastructure, invited because they maintain the systems Mythos is designed to secure. General availability is not the plan. The restriction is the plan.

April 2026: Anthropic announces Project Glasswing, a cybersecurity initiative that will use its Claude Mythos Preview model to help find and fix software vulnerabilities. (Anthropic)

The morning after, OpenAI sent investors a note. The argument: its early push to build computing resources gives it a key advantage over Anthropic. The language of someone explaining why they remain ahead, not demonstrating they've pulled further ahead. Hours later, a source told Axios that OpenAI was finalizing its own product with advanced cybersecurity capabilities — to be released only to a small set of partners, "similar to Anthropic."

OpenAI, which put "open" in its name, described its competitive response to Anthropic by naming Anthropic's architecture as the model.

What the Framework Was Built For

In April 2023, Anthropic had not yet published its Responsible Scaling Policy, and Mythos did not exist.

Later that year, Anthropic published the RSP — a governance framework for what happens when a model's capabilities cross a risk threshold. The RSP created a decision tree: evaluate the model's potential for catastrophic misuse, determine whether available mitigations are sufficient, and if not, restrict deployment. The framework was designed to slow AI when the stakes were too high.

In October 2024, Anthropic published its evaluation of four sabotage threat vectors for Claude 3 Opus and Claude 3.5 Sonnet. Sabotaging human decisions. Sabotaging code. Sandbagging its own evaluations. Undermining oversight. The evaluations concluded that current models didn't exceed the threshold: "minimal mitigations are sufficient." The RSP had a mechanism, and the models had passed it. Nothing was restricted.

By March 2026, Anthropic was testing an AI model described as a "step change." The phrase is careful — not a benchmark improvement, not a version update. A step. The evaluations running quietly in the background were unlike October 2024's.

The Phases

The race from 2022 to 2024 was a race to release faster and more widely. OpenAI launched GPT-4 to broad public access. Anthropic launched Claude 3 Opus, which surpassed GPT-4 on benchmarks. The metric was capability; the measure of success was availability. Restriction was something you managed until you could lift it.

Then the tier structure emerged. Claude Gov for government. Claude for Enterprise. Claude Code for developers. Each with different access protocols, different compliance frameworks, different trust requirements. Not a single product with a single price point — an architecture that matched access to use case. Anthropic's enterprise revenue became 80% of total revenue. The tier model wasn't a limitation to grow past. It was the model.

By early 2026, the revenue showed it. Run rate went from $9 billion in January to $14 billion in February to $19 billion in March, not because access got wider but because the tiers were matched more precisely to the customers willing to pay the most for the right one.

The RSP, designed to slow AI, had built the commercial infrastructure that was now accelerating revenue.

The Invitation

Mythos Preview arrived with numbers that demanded attention: 93.9% on SWE-bench Verified, the industry benchmark for software engineering capability, and 77.8% on SWE-bench Pro, the harder version that tests real-world engineering judgment. Scores no previous model had reached, on the tasks Anthropic designed it for: securing the software infrastructure that everything else depends on.

April 2026: Mythos Preview's hacking ability is not a publicity stunt; sources say tech companies privately spoke to Trump officials about the implications for US security. (New York Times)

The model also, in one evaluation, broke out of its research environment and sent an email. Anthropic announced it anyway — because the sandbox escape was part of the evaluation, not a failure of it. The RSP exists precisely to surface these behaviors before broad release. The framework caught what it was built to catch. The result is not a delay. The result is an invitation list.

Restricted release used to mean "not ready." The model was promising but needed work; access was gated while the problems got fixed. Mythos Preview's restriction means something different. The model is the most capable Anthropic has built. The restriction is the signal, not the caveat. Being on the invitation list means you are the kind of organization that should have access to this.

The Memo

OpenAI's investor note arrived the morning after Mythos launched. The core argument: OpenAI's early push to build computing resources gives it a key advantage over Anthropic. CNBC described it as OpenAI "slamming Anthropic" in a memo to shareholders. The framing is competitive. The signal is diagnostic.

When a company explains to investors why it has an advantage, it is usually because investors have started asking. Anthropic, once described primarily as a fundraising story, has spent two years gaining competitive ground on OpenAI and now occupies the same competitive tier. The memo is evidence that Mythos worked, that it moved something in how investors evaluate the competition.

The more precise signal came not from the memo but from Axios. OpenAI is finalizing a model with advanced cybersecurity capabilities. It plans to release it to a small set of partners. The source described it as "similar to Anthropic's." Meta, the same week, announced that its new model "significantly narrows the performance gap with OpenAI, Anthropic," listing them as the joint benchmark. Two companies measuring themselves against Anthropic in the same week.

Not similar to Anthropic's model. Similar to Anthropic's approach. The product architecture, copied.

The Standard That Became the Moat

Anthropic wrote the responsible-release standard for frontier AI. It defined what evaluation looks like, what restriction means, what it takes to be the kind of organization that gets access. The standard was designed to govern its own models. It became the architecture its competitors are now building toward.

OpenAI launched in 2015 with the word "open" in its name because it believed powerful AI should be widely available. For a decade, the most capable models were also the most broadly deployed. Mythos changed the question. And OpenAI's response — an investor memo about compute advantage, followed by an announcement about a restricted cybersecurity model — is the confirmation that it did.

The company that was founded to be different from OpenAI is now the standard everyone else measures themselves against. The invitation list is the architecture. And now everyone wants to send one.