
In March 2026, the Linux Foundation raised $12.5 million from Anthropic, Amazon, Google, Microsoft, and OpenAI to help open-source maintainers defend against low-quality AI-generated code. Three weeks later, the Linux kernel — the project those maintainers maintain, the codebase those companies depend on — accepted AI-generated contributions.

Both things are true. The defense money and the open door arrived in the same month, from the same ecosystem, for the same codebase.

The Machine

In June 2021, Microsoft and OpenAI announced GitHub Copilot. The pitch was "AI pair programming." The Register's headline, the same day, was more precise: "GitHub Copilot is AI pair programming where you, the human, still have to do the work." By October, GitHub reported that 30% of new code on its platform was written with Copilot's assistance. The number was startling in 2021. Nobody knew what it meant.

What it meant: the barrier to generating code had dropped to nearly zero. Not the barrier to writing good code — the barrier to producing syntactically valid, plausibly functional code that could be submitted as a pull request. The distinction between "generated" and "good" would take five years to resolve. The kernel's April 12 policy is the resolution.

The Pushback

In January 2024, Daniel Stenberg — founder of curl, one of the most-used pieces of software on earth — published a blog post about what AI-generated contributions were doing to his project. The title: "The I in LLM Stands for Intelligence." The complaint was specific: LLMs had made it trivially easy to generate bug reports that looked plausible but were wrong. Each one required a maintainer's time to evaluate, reproduce, and reject. The cost of rejection was now higher than the cost of submission. The asymmetry was unsustainable.


That April, Linus Torvalds spoke at the Open Source Summit. He called AI "hype" — but added that he was an "AI optimist" who expected it would eventually produce better tools. A PC Gamer headline summarized his position: "Today's AI may just be autocorrect." The framing mattered. Torvalds didn't say AI code was unacceptable. He said the current generation wasn't good enough yet. The word "yet" carried the future in it.

The Flood

By early 2026, the flood had arrived. Projects like VLC and Blender reported a measurable decline in the average quality of contributions — likely, TechCrunch reported, because AI coding tools had lowered the barrier to entry so far that contributors who previously couldn't write a patch were now generating ones that looked right but weren't. The ecosystem was adapting, project by project, to a volume of submissions it hadn't been designed for.

In April, the New York Times reported that companies were scrambling to review and secure the massive volume of AI-generated code their own developers were producing. The problem had moved from open-source projects to enterprise codebases. The flood wasn't external anymore. It was coming from inside the building.

And then, on March 18, the defense money arrived. The Linux Foundation announced that five AI companies — Anthropic, Amazon, Google, Microsoft, and OpenAI — had contributed $12.5 million in grants to help FOSS maintainers handle AI-generated security issues. The companies that made the tools were now funding the infrastructure to manage what the tools produced. The framing, per The Register: "AI slop defense."


The Policy

Three weeks later, the kernel said yes.


The Linux Kernel Organization announced on April 12 that developers could submit AI-generated code — as long as it complied with the project's existing guidelines, licensing requirements, and attribution standards. The community, the policy stated, would treat AI-generated code as the submitter's own contribution. Not the AI's contribution. Not the AI company's. The human's.

The policy doesn't mention trust. It doesn't say AI code is good enough. It doesn't comment on quality at all. What it says is simpler: if you submit code, you own it. If it breaks something, you broke it. If it violates a license, you violated it. The AI is a tool. You are the author.

The Standard

This is the structural move that matters. The kernel's quality standard — the review process, the testing requirements, the "Signed-off-by" attestation that every contributor makes — did not change on April 12. The bar is exactly where it was on April 11. What changed is who is allowed to approach the bar.

For 35 years, the implicit assumption was that "contributor" meant "human who wrote this code." The new definition: "human who is responsible for this code." The gap between those two definitions is the entire AI coding revolution. A developer who uses Claude Code to generate a kernel patch and submits it with their Signed-off-by is, in the kernel's governance model, the author. The tool doesn't exist in the governance layer. The human absorbs it entirely.

The kernel absorbed AI by making it invisible to the governance model — not by trusting the tool, but by extending the accountability of the person who uses it.

This is the same structural adaptation that every institution is making right now. The US Treasury didn't regulate Claude Mythos directly — it summoned bank CEOs and told them to test it themselves. France isn't regulating AI in government — it's switching to Linux so French humans control the stack. UK regulators aren't banning Mythos — they're warning banks to prepare. In each case, the institution governs AI by governing the humans who deploy it. The AI disappears from the accountability model. The human remains.

The kernel is the purest case because its governance model is the most explicit. Every contribution since 2004 has carried a "Signed-off-by" line — a legal certification under the Developer Certificate of Origin that the contributor has the right to submit the code and agrees to its licensing terms. When that contributor used Copilot or Claude Code, the Signed-off-by still bears their name. The governance model already handled this. It just hadn't said so out loud.
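The mechanics are simple enough to sketch. The trailer format itself — "Signed-off-by: Name <email>", appended to a commit message (git adds it with `git commit -s`) — is the real kernel convention; the standalone checker below is a hypothetical illustration of the kind of validation a CI bot might run, not any actual kernel tooling.

```python
import re

# Matches the DCO sign-off trailer on its own line of a commit message.
# The "Signed-off-by: Name <email>" format is the real convention;
# this checker function is illustrative, not kernel infrastructure.
SIGNOFF_RE = re.compile(r"^Signed-off-by: .+ <.+@.+>$", re.MULTILINE)

def has_signoff(commit_message: str) -> bool:
    """Return True if the message carries a well-formed sign-off trailer."""
    return bool(SIGNOFF_RE.search(commit_message))

msg = (
    "mm: fix off-by-one in page accounting\n"
    "\n"
    "Signed-off-by: Jane Developer <jane@example.org>\n"
)
print(has_signoff(msg))            # True: the human attests, whatever tool wrote the diff
print(has_signoff("mm: fix bug"))  # False: no attestation, no merge
```

Note what the check inspects: the human's certification, not the code's provenance. The trailer is the same whether the diff came from an editor or from a model.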

What Changed

In 2021, when 30% of new GitHub code involved AI assistance, the question was: "Is AI-generated code good enough?" In 2024, when curl's maintainer was drowning in junk AI bug reports, the question was: "Can open-source projects survive the flood?" In early 2026, when the defense money arrived, the question was: "Who pays for the infrastructure to manage this?"

On April 12, the kernel answered a different question entirely. Not "is AI code good enough?" — the review process handles that. Not "can we survive the flood?" — the $12.5M handles that. The question the kernel answered is: "Who is the author?"

The answer: you are. If you submitted it, you wrote it. The tool you used is your business. The code you produce is the project's. The gap between the two — between the tool and the product, between the process and the output — is where the human stands, and has always stood, and now stands for AI output too.

The standard didn't move. The word "author" got larger. And the most conservative codebase on earth is the one that said so first.
