[Image: An empty stadium at dawn, runners' starting blocks in the foreground, figures sprinting away into amber mist]

OpenAI's 2015 founding charter argued that a single benevolent lab was safer than a concentrated race. On April 18, 2026, the benevolent lab lost three senior executives in a single day — and a four-month-old startup founded by ex-DeepMind and OpenAI engineers announced it had raised $500 million at a $4 billion valuation.

The charter has spent a decade producing the outcome it forbade.

What the Charter Was

In December 2015, a group of backers led by Sam Altman, Elon Musk, Reid Hoffman, and Peter Thiel announced OpenAI, a new nonprofit AI research lab with an initial $1 billion funding commitment. The founding logic was specific, and it was aimed at a specific fear. In 2015, most serious researchers believed that artificial general intelligence, if it arrived, would arrive first at Google DeepMind, which Google had acquired the previous year. The concentration of talent and compute at a single commercial lab was, to the people who founded OpenAI, the worst version of what might happen.

The solution was counter-concentration. One more lab, nonprofit, committed to publishing its research, staffed by senior researchers who would otherwise have gone to DeepMind or stayed at academic institutions. The premise was that two labs, both committed to safety, would be safer than one lab regardless of its safety posture. The charter didn't anticipate that OpenAI would become a commercial entity, sign a $13 billion Microsoft deal, launch ChatGPT, reach a $500 billion valuation, or announce ad-supported consumer products. None of those outcomes would have been acceptable to the 2015 signatories. All of them happened anyway.

What the charter did correctly predict was that a single concentrated lab would be dangerous. What it did not predict was that OpenAI itself would become the source of the concentration, and then — through its own internal centrifugal force — the source of the proliferation.

Phase One: The Ideological Splits

The first departure was ideological. In early 2021, Dario and Daniela Amodei and several senior safety researchers left OpenAI to found Anthropic. The split was consistently framed as a safety-versus-speed disagreement: Anthropic's founders believed OpenAI's commercial trajectory was accelerating deployment faster than safety work could keep up. Anthropic was explicitly the answer to that concern. The reason was the product.

The pattern repeated, each time with a stated reason. In 2023, Musk founded xAI after a public break with Altman over safety posture and governance. In 2024, Ilya Sutskever left to found SSI — Safe Superintelligence — with a mission statement that was, in its concision, a rebuke of OpenAI's commercial direction. Mira Murati left in 2024 and unveiled Thinking Machines early the following year. Andrej Karpathy left in 2024 and founded Eureka Labs. John Schulman, an OpenAI co-founder, joined Anthropic in 2024.

In each case, there was a reason. The reason was in the pitch deck. Safety. Alignment. Governance. Speed. Architecture. Something ideological, something the founder could say to an investor to differentiate from OpenAI, and that the investor could relay to LPs as a coherent thesis. The first three years of the split pattern looked like what the 2015 charter had anticipated: internal disagreement produced external competition. It was the mechanism the founders thought they had designed around, now running in reverse.

Phase Two: The Reason Drops Off

On April 18, 2026, three OpenAI executives exited in a single day.

Kevin Weil, OpenAI's former chief product officer and then VP of OpenAI for Science, announced he was leaving. The science product he had built, Prism, was shuttered and folded into Codex. Bill Peebles, who led Sora, left along with Srinivas Narayanan, OpenAI's CTO of enterprise applications. None of the three departures came with a safety narrative. Weil ran a product unit that was being wound down. Peebles was the Sora lead at a company that had just killed Sora's standalone product and was no longer investing in video-first consumer applications. Narayanan ran enterprise infrastructure.

"Bill Peebles, the researcher behind Sora, is leaving OpenAI, along with Srinivas Narayanan, OpenAI's CTO of enterprise applications." (TechCrunch, April 2026)

The reason was operational, not ideological. OpenAI was consolidating around ChatGPT, Codex, and a narrower set of bets. The side quests were being killed. The people who ran the side quests were leaving because the side quests were what they did.

That's a normal corporate pattern. What's new is what happens to them next.

Phase Three: The Capital Is Already There

The same day OpenAI lost three executives, the Financial Times reported that Recursive Superintelligence — a lab founded by ex-DeepMind and OpenAI engineers, four months old — had raised more than $500 million at a $4 billion valuation. The lead investors were GV (Google's venture arm) and Nvidia. The startup's stated direction was "self-teaching AI."

Read that pitch carefully. Not safety. Not alignment. Not governance. Not a rebuttal to OpenAI's commercial posture. A technical direction — "self-teaching AI" — that could have been a project inside any of the existing frontier labs. Four months from founding to a half-billion-dollar round at a $4 billion valuation. No differentiated safety thesis. A founding team defined by where its members used to work, and a capital stack provided by Google's venture arm and the company that sells every frontier lab its GPUs.

This is not the pattern the 2015 charter was written to address. In 2015, the danger was one lab with too much concentration. In 2021, the countervailing force was principled defection over safety disagreements. In 2026, the pattern is different: senior researchers and product leaders leave OpenAI and walk into capital markets that have already priced the bet. The reputation transfers. The founding story doesn't need to be written yet. The investors are not underwriting an ideology. They are underwriting the career arc.

Recursive is not an outlier. It's the fastest example of a now-standard path. Thinking Machines raised at a $10 billion valuation on a similar timeline. Mira Murati's fundraising pace through 2024 and 2025 was widely reported as unprecedented. The speed isn't the point. The speed is the consequence of the structural change. The change is that any senior frontier-lab exit now has a nine-figure round waiting.

The Starting Line

The 2015 charter was written to prevent a proliferating race to AGI. It proposed one more lab as the solution. The structural assumption was that one more lab would stabilize the field by providing a safety-aligned counterweight to DeepMind. That assumption only holds if labs are hard to start.

Labs are no longer hard to start. The talent is distributed, the capital is abundant, the infrastructure is rentable, and the story a founder has to tell an investor has collapsed from "here is why this lab exists" to "here is where I used to work."

What OpenAI was designed to be — the one lab that would ensure AGI benefits humanity — has reversed into the mechanism by which frontier capability is distributed across a market. Researchers who want to build the next system don't try to displace OpenAI from outside. They join OpenAI, stay long enough to accumulate credibility, and leave to raise a round. The org is the signal. The signal is portable. The portability is the race.

Shareholders are starting to notice. On the same day as the three departures, The Wall Street Journal reported that some OpenAI shareholders were questioning whether Altman was the right leader to take the company through an IPO and had floated Bret Taylor as a potential successor. The piece read as trial-balloon reporting. It also read as a symptom — of a company whose most valuable senior people have discovered that their market price outside is higher than their equity value inside.

The 2015 charter said: one lab, benevolent, counter-concentrated, as a safer alternative to the race. The 2026 reality is: one lab, concentrated, then decentralized through its own exits, as the origin of the race.

The charter forbade a concentrated race. The org became the starting line.