Microsoft unveils the Maia 200, its second-generation AI accelerator built on TSMC's 3nm process, deploying today in its Azure US Central data center region
The Maia 200 chip is starting to roll out to Microsoft's data centers today. … Microsoft is announcing a successor to its first in-house AI chip today, the Maia 200.
The Verge Tom Warren
Related Coverage
- Maia 200: The AI accelerator built for inference Microsoft · Scott Guthrie
- Maia 200 Architecture Overview Microsoft Tech Community · Sdighe
- Microsoft launches its second generation AI inference chip, Maia 200 Computerworld · Taryn Plumb
- Microsoft unveils Maia 200 AI chip, claiming performance edge over Amazon and Google GeekWire · Todd Bishop
- Microsoft Takes On AWS, Google And Nvidia With Maia 200 AI Chip Launch CRN · Dylan Martin
- Microsoft Unveils Maia 200 AI Accelerators To Boost Cloud AI Independence HotHardware · Zak Killian
- Microsoft Raises the AI Inference Bar with Maia 200 HPCwire · Alex Woodie
- Microsoft's New AI Chip: MSFT to Move Ahead of NVDA, GOOGL? Watcher Guru · Jaxon Gaines
- Does This New Chip Threaten Nvidia? Motley Fool · Daniel Sparks
- Microsoft Unveils Maia 200, A New Inference Accelerator To Cut AI Token Costs Pulse 2.0 · Amit Chowdhry
- Microsoft's Maia 200 AI chip launch leaves Nvidia stock little changed Investing.com
- Microsoft's new AI silicon is here — is this what will stop OpenAI's $14 billion bonfire? Windows Central · Sean Endicott
- Microsoft's New Maia 200 Chip Steps Up to Make AI Responses Cheaper and Faster TechEBlog · Jackson Chung
- Microsoft launches Maia 200 as custom AI silicon accelerates Constellation Research · Larry Dignan
- Microsoft Maia 200 The AI Inference Game Changer TwoBitDaVinci on YouTube · TwoBitDaVinci
- Microsoft says its newest AI chip Maia 200 is 3 times more powerful than Google's TPU and Amazon's Trainium processor LiveScience · Roland Moore-Colyer
- Microsoft reveals second generation of its AI chip in effort to bolster cloud business CNBC · Jordan Novet
- Microsoft announces powerful new chip for AI inference TechCrunch · Lucas Ropek
- Microsoft's Latest AI Chip to Reduce Reliance on Nvidia Bloomberg · Matt Day
- Microsoft's Maia 200 targets cheaper AI inference The Deep View · Nat Rubio-Licht
- Microsoft unveils Maia 200, its ‘powerhouse’ accelerator looking to unlock the power of large-scale AI TechRadar · Mike Moore
- Microsoft introduces AI accelerator for US Azure customers ComputerWeekly.com · Cliff Saran
- Microsoft's Maia 200 i … Pradeep Sanyal
- The infrastructure behind AI matters most when it disappears from view. — That's what makes progress like Microsoft's Maia 200 so important. … Yusuf Mehdi
- Microsoft just announced the Maia 200 AI chip, the first purpose-built chip powering Microsoft's AI future. Stay tuned to hear more about how we're advancing AI infrastructure at scale. Microsoft Azure
- Great momentum in advancing Azure AI infrastructure. — We built Maia 200 to expand our heterogeneous AI infrastructure … Arun Ulag
- Meet Maia 200, Microsoft's next‑generation AI accelerator that helps you run AI workloads 3× faster and get ~30% better performance‑per‑dollar compared to today's systems. … Jessica Hawk
- Microsoft's Latest AI Chip Claims Performance Edge Over Amazon and Google Slashdot · BeauHD
- Maia 200 explained: Microsoft's custom chip for AI acceleration Digit · Vyom Ramani
- Microsoft Begins Deploying Next-Gen AI Chip Silicon UK · Matthew Broersma
- Microsoft (MSFT) Stock: New AI Chip Targets Nvidia's Cloud Computing Empire Blockonomi · Trader Edge
- Nvidia Stock Gains. What Microsoft's New AI Processor Means for the Chip Maker. Barron's Online · Adam Clark
Discussion
-
@satyanadella
Satya Nadella
on x
Our newest AI accelerator Maia 200 is now online in Azure. Designed for industry-leading inference efficiency, it delivers 30% better performance per dollar than current systems. And with 10+ PFLOPS FP4 throughput, ~5 PFLOPS FP8, and 216GB HBM3e with 7TB/s of memory bandwidth i…
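A rough roofline sketch using only the figures quoted in the post above (10+ PFLOPS FP4, ~5 PFLOPS FP8, 216GB HBM3e, 7TB/s). The crossover arithmetic intensity and the HBM-sweep time are illustrative back-of-the-envelope calculations, not Microsoft figures.

```python
# Back-of-the-envelope roofline check using only the figures quoted above.
# The "crossover" arithmetic intensity is illustrative, not an official number.

fp4_flops = 10e15          # peak FP4 throughput, FLOP/s (quoted "10+ PFLOPS")
fp8_flops = 5e15           # peak FP8 throughput, FLOP/s (quoted "~5 PFLOPS")
hbm_bandwidth = 7e12       # memory bandwidth, bytes/s (quoted "7TB/s")
hbm_capacity = 216e9       # HBM3e capacity, bytes (quoted "216GB")

# Arithmetic intensity (FLOPs per byte moved) at which the chip stops being
# bandwidth-bound and becomes compute-bound, per the classic roofline model.
crossover_fp4 = fp4_flops / hbm_bandwidth   # ~1,430 FLOPs/byte
crossover_fp8 = fp8_flops / hbm_bandwidth   # ~715 FLOPs/byte

# Time to stream the full HBM once, a floor for one decode step of a model
# that fills the memory.
full_sweep_s = hbm_capacity / hbm_bandwidth  # ~31 ms

print(f"FP4 roofline crossover: {crossover_fp4:,.0f} FLOPs/byte")
print(f"FP8 roofline crossover: {crossover_fp8:,.0f} FLOPs/byte")
print(f"One full HBM sweep:     {full_sweep_s * 1e3:.1f} ms")
```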
-
@mustafasuleyman
Mustafa Suleyman
on x
Our Maia 200 inference chip, announced today, is the most performant first-party silicon of any hyperscaler. 3x the FP4 performance of the Amazon Trainium v3, and FP8 performance above Google's TPUv7. [image]
-
@mustafasuleyman
Mustafa Suleyman
on x
It's a big day. Our Superintelligence team will be the first to use Maia 200 as we develop our frontier AI models.
-
@patrickmoorhead
Patrick Moorhead
on x
Maia 200 solved as much of a systems problem as it did a chip problem. Ethernet scale-up inference. Two-tier topology. 2nd gen cooling sidecars. Scale across. Maia SDK with Triton compiler and Pytorch integration.
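The post mentions a Maia SDK with a Triton compiler and PyTorch integration. As an illustration of what that programming model looks like, here is a standard Triton kernel launched from PyTorch; this is generic Triton code, not Maia-specific, and it assumes a backend (such as the Maia SDK the post describes) that can lower Triton to the target hardware.

```python
# Minimal sketch of the Triton + PyTorch pattern: a kernel written in Triton,
# launched on PyTorch tensors. Generic Triton code, not Maia-specific.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements             # guard the tail of the array
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)          # one program per 1024 elements
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```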
-
@benbajarin
Ben Bajarin
on x
Strategically optimized for tokens per dollar per watt on specific workloads. Inference-focused, as the trend will be in custom silicon variants going forward. Interestingly, they point out that Maia 200 is already powering GPT 5.2. https://blogs.microsoft.com/ ...
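To make the "tokens per dollar per watt" framing concrete, here is a minimal sketch of how such an efficiency metric is typically computed. Every input below is a hypothetical placeholder, not a published Maia 200 figure.

```python
# Illustrative only: how a "tokens per dollar per watt" style metric is
# usually decomposed. All inputs are hypothetical placeholders, NOT Maia 200
# numbers.

tokens_per_second = 20_000      # hypothetical sustained decode throughput
board_power_watts = 750         # hypothetical accelerator power draw
hourly_cost_usd = 2.00          # hypothetical price per accelerator-hour

tokens_per_hour = tokens_per_second * 3600
tokens_per_dollar = tokens_per_hour / hourly_cost_usd   # cost efficiency
tokens_per_joule = tokens_per_second / board_power_watts  # energy efficiency

print(f"tokens per dollar: {tokens_per_dollar:,.0f}")
print(f"tokens per joule:  {tokens_per_joule:,.1f}")
```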
-
@gnukeith
Keith
on x
Thank you Satya for doing the biggest advertisement for Linux in years, thank you.
-
@stocksavvyshay
Shay Boloor
on x
$MSFT unveiled its Maia 200 AI inference chip built on $TSM 3nm process with deployments starting this week in its U.S. Central data center region. This marks another step toward Microsoft owning more of the AI inference stack end-to-end. [image]
-
@ryanshrout
Ryan Shrout
on x
Microsoft just announced the deployment of Maia 200, its next generation custom silicon for AI. Inference economics are still clearly a silicon feature, not just a software problem. @Signal_65 has been working with Microsoft on Maia 200, and we will have more to share in the …
-
@danielnewmanuv
Daniel Newman
on x
The right take on $MSFT MAIA 200 is a solid step function in its homegrown silicon, which will augment its compute and is already being used for inference on ChatGPT 5.2 The wrong take is this is going to replace $NVDA or $AMD. It continues to be AND with compute. Not or. 💪🏻👏🏻
-
@unusual_whales
@unusual_whales
on x
BREAKING: Microsoft, $MSFT unveils its second generation AI chip, the MAIA 200 AI inference chip, built on TSMC's 3nm process, per Bloomberg.
-
@patrickmoorhead
Patrick Moorhead
on x
Years ago, I said Microsoft needed its own silicon to be cost competitive not only for IaaS but for PaaS and SaaS as well. Microsoft's latest inference chip, Maia 200, looks like a step-function improvement over its predecessor.
-
@kobeissiletter
@kobeissiletter
on x
BREAKING: Microsoft, $MSFT, announces the launch of its Maia 200 AI chip to “reduce reliance on Nvidia.” The chip is being produced by Taiwan Semiconductor Manufacturing Co., $TSM, and is being launched in Microsoft data centers in Iowa.
-
@tomwarren
Tom Warren
on x
Microsoft is announcing its own Maia 200 AI chip today. It goes head-to-head in performance against Google and Amazon's AI chips, and Microsoft is using Maia 200 to host GPT-5.2 and others for Microsoft Foundry and Microsoft 365 Copilot. Details here 👇 https://www.theverge.com/..…
-
@patrickmoorhead
Patrick Moorhead
on x
“FP4 throughput is now Blackwell-class territory. Microsoft quotes 10+ petaFLOPS FP4 per chip, which puts it in the same conversation as NVIDIA B200 generation inference compute.” $MSFT
-
@arunulag
Arun Ulag
on x
Great momentum in advancing Azure AI infrastructure. Maia 200 expands our heterogeneous AI infrastructure, working alongside NVIDIA and AMD so customers have the right compute for every model and workload.
-
@azure
@azure
on x
Introducing Maia 200: our next-generation AI accelerator delivering 30% better performance per dollar. Purpose-built for inference at the silicon level.
-
@stocksavvyshay
Shay Boloor
on x
@VanquishTrader Really important for people to realize $MSFT Maia is arriving roughly a decade after $GOOGL's custom silicon push and about seven years behind $AMZN's Trainium.
-
@theaustinlyons
Austin Lyons
on x
“Right systems for right workloads.” Maia 200 is a neat example of thoughtful design decisions for inference at scale. I chatted with the Maia team, and they talked about working backward from customer workloads to arrive at an inference chip that deliberately isn't a GPU.
-
@benitoz
Ben Pouladian
on x
Microsoft's Maia 200 is real: 10 PF FP4, 216GB HBM3e, 7 TB/s. But NVIDIA's Vera Rubin (H2 2026): 50 PF FP4, 288GB HBM4, 13 TB/s. Hyperscalers build for today's inference costs. NVIDIA builds for tomorrow's ceiling. $NVDA $msft [image]
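A quick side-by-side of the two spec sets quoted in the post above. The Vera Rubin numbers are the post author's H2 2026 projections, not confirmed specifications; the ratios are simple arithmetic over those quoted figures.

```python
# Side-by-side of the spec sets quoted in the post. Vera Rubin figures are the
# post author's projections for H2 2026, not confirmed specs.

chips = {
    "Maia 200":   {"fp4_pflops": 10, "hbm_gb": 216, "bw_tbps": 7},
    "Vera Rubin": {"fp4_pflops": 50, "hbm_gb": 288, "bw_tbps": 13},
}

maia, rubin = chips["Maia 200"], chips["Vera Rubin"]
for key, label in [("fp4_pflops", "FP4 PFLOPS"),
                   ("hbm_gb", "HBM capacity (GB)"),
                   ("bw_tbps", "Bandwidth (TB/s)")]:
    ratio = rubin[key] / maia[key]
    print(f"{label:<18} Maia {maia[key]:>4}  Rubin {rubin[key]:>4}  ({ratio:.1f}x)")

# Compute-to-bandwidth ratio for each quoted spec set.
for name, c in chips.items():
    print(f"{name}: {c['fp4_pflops'] / c['bw_tbps']:.2f} PFLOPS per TB/s")
```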
-
@scottgu
Scott Guthrie
on x
Maia 200 is an AI inference powerhouse. The most performant first‑party silicon from any hyperscaler, delivering 30% better performance per dollar than the latest hardware in our fleet. Built for efficient large‑scale inference and integrated into Azure.
-
@msft365insider
@msft365insider
on x
Huge step forward for the future of AI infrastructure.
-
@highyieldyt
@highyieldyt
on x
Maia 200 is a massive chip with an ~825mm² die size; that's close to the reticle limit! N3P is a guess, but I doubt Microsoft is using N3E in 2026. My initial analysis👇 [image]
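For context on the reticle-limit remark: the standard single-exposure lithography field is 26 mm × 33 mm, roughly 858 mm². The ~825 mm² figure is the post author's estimate, not an official die size; the utilization below is simple arithmetic over those two numbers.

```python
# Context for the "close to the reticle limit" remark. The ~825 mm² die size
# is the post author's estimate, not an official figure.
reticle_limit_mm2 = 26 * 33          # standard single-exposure field: 858 mm²
estimated_die_mm2 = 825

print(f"Reticle field: {reticle_limit_mm2} mm²")
print(f"Estimated die: {estimated_die_mm2} mm²")
print(f"Utilization:   {estimated_die_mm2 / reticle_limit_mm2:.0%}")
```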
-
@jamesaltonsanders.com
James Sanders
on bluesky
The press photos of this chip appear to be a mockup, not a genuine sample of the Maia 200. — It could be very close to the real thing, but it's not typical to use physical mockups for press. — Renders? Yes. Real chips? Yes. — Mockups? Not often. [embedded post]
-
@tomwarren.co.uk
Tom Warren
on bluesky
Microsoft is announcing its own Maia 200 AI chip today. It goes head-to-head in performance against Google and Amazon's AI chips, and Microsoft is using Maia 200 to host GPT-5.2 and others for Microsoft Foundry and Microsoft 365 Copilot. Details here 👇 www.theverge.com/news/867…
-
r/microsoft
r
on reddit
Maia 200: The AI accelerator built for inference - The Official Microsoft Blog
-
r/hardware
r
on reddit
Maia 200: The AI accelerator built for inference - The Official Microsoft Blog