Chronicles

The story behind the story


Microsoft unveils the Maia 200, its second-generation AI accelerator built on TSMC's 3nm process, deploying today in its Azure US Central data center region

The Maia 200 chip is starting to roll out to Microsoft's data centers today. … Microsoft is announcing a successor to its first in-house AI chip today, the Maia 200.

The Verge · Tom Warren
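The headline figures quoted in the coverage (10+ PFLOPS FP4, ~5 PFLOPS FP8, 216 GB HBM3e at 7 TB/s) imply a rough compute-to-bandwidth balance point. As an illustrative sketch only, a simple roofline model puts the arithmetic intensity at which a kernel stops being memory-bound at peak FLOPs divided by memory bandwidth; the numbers below are the vendors' quoted peaks, not measured figures:

```python
# Roofline-style back-of-the-envelope using the specs quoted above.
# These are headline numbers, so treat the results as illustrative only.

FP4_FLOPS = 10e15   # quoted peak FP4 throughput, FLOP/s
FP8_FLOPS = 5e15    # quoted peak FP8 throughput, FLOP/s
MEM_BW    = 7e12    # quoted HBM3e bandwidth, bytes/s

def ridge_point(peak_flops: float, bandwidth: float) -> float:
    """Arithmetic intensity (FLOPs/byte) at which a kernel shifts from
    memory-bound to compute-bound under a simple roofline model."""
    return peak_flops / bandwidth

print(f"FP4 ridge point: {ridge_point(FP4_FLOPS, MEM_BW):.0f} FLOPs/byte")
print(f"FP8 ridge point: {ridge_point(FP8_FLOPS, MEM_BW):.0f} FLOPs/byte")
```

On these quoted numbers the FP4 ridge point lands around 1,400 FLOPs/byte, which is why low-batch LLM decoding (arithmetic intensity well below that) tends to be bandwidth-bound on chips in this class.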

Discussion

  • @satyanadella Satya Nadella on x
    Our newest AI accelerator Maia 200 is now online in Azure.  Designed for industry-leading inference efficiency, it delivers 30% better performance per dollar than current systems.  And with 10+ PFLOPS FP4 throughput, ~5 PFLOPS FP8, and 216GB HBM3e with 7TB/s of memory bandwidth i…
  • @mustafasuleyman Mustafa Suleyman on x
    Our Maia 200 inference chip, announced today, is most performant first party silicon of any hyperscaler. 3x the FP4 performance of the Amazon Trainium v3, and FP8 performance above Google's TPUv7. [image]
  • @mustafasuleyman Mustafa Suleyman on x
    It's a big day. Our Superintelligence team will be the first to use Maia 200 as we develop our frontier AI models.
  • @patrickmoorhead Patrick Moorhead on x
    Maia 200 solved as much of a systems problem as it did a chip problem. Ethernet scale-up inference. Two-tier topology. 2nd gen cooling sidecars. Scale across. Maia SDK with Triton compiler and Pytorch integration.
  • @benbajarin Ben Bajarin on x
    Strategically optimized for token per dollar per watt of specific workloads. Inference-focused, as the trend will be in custom silicon variants going forward. Interesting, they point out it MAIA 200 is already powering GPT 5.2. https://blogs.microsoft.com/ ...
  • @gnukeith Keith on x
    Thank you Satya for doing the biggest advertisement for Linux in years, thank you.
  • @stocksavvyshay Shay Boloor on x
    $MSFT unveiled its Maia 200 AI inference chip built on $TSM 3nm process with deployments starting this week in its U.S. Central data center region. This marks another step toward Microsoft owning more of the AI inference stack end-to-end. [image]
  • @ryanshrout Ryan Shrout on x
    Microsoft just announced the deployment of Maia 200, its next generation custom silicon for AI. Inference economics are still clearly a silicon feature, not just a software problem. @Signal_65 has been working with Microsoft on Maia 200, and we will have more to share in the
  • @danielnewmanuv Daniel Newman on x
    The right take on $MSFT MAIA 200 is a solid step function in its homegrown silicon, which will augment its compute and is already being used for inference on ChatGPT 5.2 The wrong take is this is going to replace $NVDA or $AMD. It continues to be AND with compute. Not or. 💪🏻👏🏻
  • @unusual_whales @unusual_whales on x
    BREAKING: Microsoft, $MSFT unveils its second generation AI chip, the MAIA 200 AI inference chip, built on TSMC's 3nm process, per Bloomberg.
  • @patrickmoorhead Patrick Moorhead on x
    Years ago, I said Microsoft needed its own silicon to be cost competitive not only for IaaS but for PaaS and SaaS as well. Microsoft's latest inference chip, Maia 200, looks like a step-function improvement over its predecessor. Years ago, I said Microsoft needed its own silicon …
  • @kobeissiletter @kobeissiletter on x
    BREAKING: Microsoft, $MSFT, announces the launch of its Maia 200 AI chip to “reduce reliance on Nvidia.” The chip is being produced by Taiwan Semiconductor Manufacturing Co., $TSM, and is being launched in Microsoft data centers in Iowa.
  • @tomwarren Tom Warren on x
    Microsoft is announcing its own Maia 200 AI chip today. It goes head-to-head in performance against Google and Amazon's AI chips, and Microsoft is using Maia 200 to host GPT-5.2 and others for Microsoft Foundry and Microsoft 365 Copilot. Details here 👇 https://www.theverge.com/..…
  • @patrickmoorhead Patrick Moorhead on x
    “FP4 throughput is now Blackwell-class territory. Microsoft quotes 10+ petaFLOPS FP4 per chip, which puts it in the same conversation as NVIDIA B200 generation inference compute.” $MSFT
  • @arunulag Arun Ulag on x
    Great momentum in advancing Azure AI infrastructure. Maia 200 expands our heterogeneous AI infrastructure, working alongside NVIDIA and AMD so customers have the right compute for every model and workload.
  • @azure @azure on x
    Introducing Maia 200: our next-generation AI accelerator delivering 30% better performance per dollar. Purpose-built for inference at the silicon level.
  • @stocksavvyshay Shay Boloor on x
    @VanquishTrader Really important for people to realize $MSFT Maia is arriving roughly a decade after $GOOGL custom silicon push and about seven years behind $AMZN Trainium.
  • @theaustinlyons Austin Lyons on x
    “Right systems for right workloads.” Maia 200 is a neat example of thoughtful design decisions for inference at scale. I chatted with the Maia team, and they talked about working backward from customer workloads to arrive at an inference chip that deliberately isn't a GPU.
  • @benitoz Ben Pouladian on x
    Microsoft's Maia 200 is real: 10 PF FP4, 216GB HBM3e, 7 TB/s. But NVIDIA's Vera Rubin (H2 2026): 50 PF FP4, 288GB HBM4, 13 TB/s. Hyperscalers build for today's inference costs. NVIDIA builds for tomorrow's ceiling. $NVDA $msft [image]
  • @scottgu Scott Guthrie on x
    Maia 200 is an AI inference powerhouse. Our most performant first‑party silicon from any hyperscaler, delivering 30% better performance per dollar than the latest hardware in our fleet. Built for efficient large‑scale inference and integrated into Azure.
  • @msft365insider @msft365insider on x
    Huge step forward for the future of AI infrastructure.
  • @highyieldyt @highyieldyt on x
    Maia 200 is a massive chip with ~825mm² die size, that's close to the reticle limit! N3P is a guess, but I doubt Microsoft is using N3E in 2026. My initial analysis👇 [image]
  • @jamesaltonsanders.com James Sanders on bluesky
    The press photos of this chip appear to be a mockup, not a genuine sample of the Maia 200.  —  It could be very close to the real thing, but it's not typical to use physical mockups for press.  —  Renders?  Yes.  Real chips?  Yes.  —  Mockups?  Not often.  [embedded post]
  • @tomwarren.co.uk Tom Warren on bluesky
    Microsoft is announcing its own Maia 200 AI chip today.  It goes head-to-head in performance against Google and Amazon's AI chips, and Microsoft is using Maia 200 to host GPT-5.2 and others for Microsoft Foundry and Microsoft 365 Copilot.  Details here 👇 www.theverge.com/news/867…
  • r/microsoft on reddit
    Maia 200: The AI accelerator built for inference - The Official Microsoft Blog
  • r/hardware on reddit
    Maia 200: The AI accelerator built for inference - The Official Microsoft Blog