Samsung is one of the three companies that manufacture most of the world's memory chips. This week, it raised the price of the Galaxy S25 Edge, the Z Flip 7, the Tab S11 Ultra, and nearly a dozen other devices, in some cases by as much as $280. The reason is the same component Samsung's own semiconductor division sells at record margins: memory.
On the same day, Microsoft raised Surface laptop and tablet prices by up to $500 over their 2024 launch prices, citing "higher memory and component costs." The Verge called it "RAMageddon." Bloomberg called it a memory crunch. Windows Central described it as a crisis. All three were describing the same structural event: AI data centers have consumed so much of the world's memory supply that there isn't enough left for the devices people actually carry.
The Appetite
Every AI model runs on memory. Not the metaphorical kind — the physical kind. The chips that hold data while a GPU processes it. The chips that determine how large a model can be loaded, how many queries it can serve, how fast the answers arrive. The type that matters most for AI is called HBM — High Bandwidth Memory — and the competition for it started in 2024, when Nvidia's demand for its H100 and Blackwell GPUs set off a race among Samsung, SK Hynix, and Micron to supply it.
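How much memory a model needs is simple arithmetic, and the arithmetic is unforgiving. A back-of-the-envelope sketch in Python (the parameter counts and precisions are illustrative assumptions, not figures from any of the reports cited here):

```python
# Back-of-the-envelope: memory needed just to hold a model's weights.
# Parameter counts and precisions are illustrative assumptions, not figures
# from Samsung, Nvidia, or any report cited in this piece.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """GB of memory required to store the weights at a given precision."""
    return params_billions * bytes_per_param  # billions of params x bytes each = GB

examples = [
    (7,   "FP16 (2 bytes)", 2),
    (70,  "FP16 (2 bytes)", 2),
    (70,  "INT8 (1 byte)",  1),
    (405, "FP16 (2 bytes)", 2),
]

for params, precision, bytes_per in examples:
    gb = weight_memory_gb(params, bytes_per)
    print(f"{params}B parameters at {precision}: ~{gb:.0f} GB for weights alone")

# For reference, a single Nvidia H100 carries 80 GB of HBM, so the larger
# configurations above have to be split across many GPUs, each carrying its
# own stack of high-bandwidth memory.
```

Serving adds more on top of that: the caches that hold context for each active query grow with prompt length and concurrency, which is why memory capacity and bandwidth, not just raw GPU compute, set the ceiling on what a deployment can do.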
HBM is a specialized product. It isn't the same chip that goes into a laptop or a phone. But it's made in the same fabrication facilities, on the same production lines, by the same three companies. When Samsung's memory division shifts capacity toward HBM — higher margins, guaranteed customers, multiyear contracts — it shifts capacity away from the DDR5 RAM that goes into a Galaxy Tab and the NAND flash that goes into an iPhone. The factory is the same. The allocation is the choice.
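To see why the allocation is the choice, a toy model helps. The one assumption doing the work is the ratio: industry analysts have put the wafer cost of HBM at roughly three times that of conventional DRAM per bit, because the dies are larger and the stacked packages yield worse. Treat that number, and everything below, as a sketch rather than fab data.

```python
# Toy model of the fab allocation trade-off. Not actual fab data.
# Key assumption (a rough ratio analysts have cited, not a figure from this
# article): producing one unit of HBM consumes about 3x the wafer capacity
# of one unit of conventional DDR5.

WAFER_BUDGET = 100.0     # a fab's monthly output, in arbitrary DRAM-equivalent units
HBM_CAPACITY_COST = 3.0  # wafer capacity consumed per unit of HBM, relative to DDR5

def split_output(share_to_hbm: float) -> tuple[float, float]:
    """Divide a fixed wafer budget between HBM and consumer DDR5."""
    hbm_units = (WAFER_BUDGET * share_to_hbm) / HBM_CAPACITY_COST
    ddr5_units = WAFER_BUDGET * (1.0 - share_to_hbm)
    return hbm_units, ddr5_units

for share in (0.10, 0.30, 0.50):
    hbm, ddr5 = split_output(share)
    print(f"{share:.0%} of wafers to HBM -> {hbm:5.1f} units of HBM, {ddr5:5.1f} units of consumer DRAM")
```

Under that assumed ratio, every unit of HBM that ships takes roughly three units of potential consumer DRAM off the market, which is why a shift that looks modest on the HBM side opens a much larger hole on the consumer side.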
TrendForce estimated in January 2026 that data centers will consume more than 70% of all high-end memory chips produced globally this year. Not 70% of AI-specific memory. Seventy percent of high-end memory, period — the same category that includes the chips in laptops, phones, tablets, and gaming consoles. The remaining 30% serves everyone else.
The Spill
The shortage announced itself in the price. Over May and June 2025, DRAM prices doubled. Not over a year; over two months. The spike coincided with reports that CXMT, a major Chinese memory manufacturer, had shifted production from DDR4 to DDR5 and HBM, removing a significant source of consumer-grade supply from the global market at the exact moment AI demand was accelerating.
By October 2025, Tom's Hardware reported that the memory shortage "may last a decade." By November, Dell, HP, and other major OEMs were publicly warning of shortages in the coming year. In December, Framework — the modular laptop company that sells RAM directly to customers — raised DDR5 prices to $10 per gigabyte, the second increase in a month, citing "substantially higher costs" driven by the AI boom.
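That per-gigabyte figure translates directly into configuration prices. A quick sketch, where the capacities are common laptop options and the pre-spike baseline is a hypothetical chosen only for scale, not a number from Framework or from any report cited here:

```python
# Simple arithmetic on the quoted $10/GB DDR5 price. The capacities are common
# laptop configurations chosen for illustration; the "before" price is a
# hypothetical baseline for comparison, not a figure from Framework.

CURRENT_PRICE_PER_GB = 10.0      # USD, as quoted in December 2025
HYPOTHETICAL_PRIOR_PER_GB = 4.0  # assumed pre-spike baseline, for scale only

for capacity_gb in (16, 32, 64, 96):
    now = capacity_gb * CURRENT_PRICE_PER_GB
    before = capacity_gb * HYPOTHETICAL_PRIOR_PER_GB
    print(f"{capacity_gb:3d} GB DDR5: ${now:,.0f} at $10/GB vs ${before:,.0f} at a hypothetical $4/GB")
```

On those assumed numbers, a 64 GB kit goes from roughly $256 to $640. Whatever the true baseline, the direction is the point: the increase lands on the buyer.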
The shortage wasn't abstract. It had a supply chain and a chain of consequences. Ars Technica reported in January 2026 that the memory crunch had expanded beyond RAM into GPUs, high-capacity SSDs, and even hard drives. GPU manufacturers were prioritizing more profitable models. Large SSDs were becoming difficult to source.
The Ceiling
Micron, which has committed to a $200 billion US expansion of its manufacturing capacity, told the Wall Street Journal in February 2026 that it can currently meet only about 50% to 66% of demand for some key customers. Half, at the low end. One of three global memory manufacturers, with a $200 billion expansion underway, unable to fill even two-thirds of what some of its customers need.
IDC estimated that PC shipments could shrink by up to 9% in 2026 because of the shortage. Not because people don't want PCs. Because the PCs cost too much to build at the old price. The gaming industry was absorbing the same hit — the Steam Deck OLED went out of stock, PlayStation 6 timelines were reportedly under review, and console manufacturers were evaluating delays. The memory that was designed to be cheap and abundant — the commodity that made personal computing affordable — had been repriced by a customer that valued it more.
That customer was the data center. And the data center's appetite was not decreasing.
The Pass-Through
The Microsoft Surface Laptop 7 launched in 2024. This week, it costs $500 more. The Surface Pro 11, the same. Microsoft cited memory and component costs. The midrange devices now start above $1,000. Flagships start at $1,500. Bloomberg's headline was precise: "Microsoft Raises Surface Prices Sharply in Face of Memory Crunch."
Samsung's increases were quieter. PhoneArena reported them as a list: the Galaxy S25 Edge, the S25 FE, the Z Flip 7, the Tab S11, the Tab S11 Ultra. The 1TB Tab S11 Ultra jumped by $280. Samsung didn't issue a press release explaining why. It didn't need to. Samsung's semiconductor division reported record HBM revenue in the same quarter its consumer electronics division raised prices. The company is on both sides of the ledger — profiting from the shortage in one division, paying for it in another.
The Precedent
In January 2018, Ars Technica reported that the rise of cryptocurrency mining had created a global shortage of high-end graphics cards. Gamers couldn't buy GPUs. Nvidia asked retailers to prioritize gamers over miners. Prices tripled. The structural dynamic was simple: a new use case for a shared component outbid the existing users, and the existing users absorbed the cost.
The crypto shortage resolved in months. Bitcoin crashed, miners sold their GPUs, and prices normalized. The AI memory shortage is not following the same path. Cryptocurrency mining was speculative — the demand evaporated when the price dropped. AI infrastructure is contractual — the $1.1 trillion in cloud backlog, the multiyear capacity commitments, the federal agencies requesting model access. The demand is not speculative. It is pre-sold.
Micron's $200 billion expansion won't produce new capacity until 2027 at the earliest. Samsung's next-generation HBM facilities are under construction. SK Hynix is expanding in Indiana. The supply response is underway, but fabrication facilities take years to build and years more to reach full output. Between now and then, the allocation is fixed: 70% to data centers, 30% to everyone else.
The AI boom made software smarter. It made hardware more expensive. The tax doesn't appear on any invoice. It appears on every price tag.
The Weight
The cloud was supposed to be weightless. The AI that runs on it was supposed to be invisible — something that happened on someone else's server, in someone else's building, powered by someone else's electricity. The consumer-facing product was the interface: the chat window, the search result, the autocomplete. The infrastructure was abstracted away.
But the memory chips that power a data center in Virginia are the same ones that go into a Surface laptop in a Best Buy. The abstraction holds until the supply runs out. Then the invisible infrastructure becomes very visible — in the $500 you didn't pay last year for the same device, in the $280 Samsung added to a tablet, in the 9% of PCs that won't ship because they cost too much to build. The cloud has no weight. The shortage does.