Chronicles

The story behind the story


Microsoft unveils the Maia 200, its 2nd-generation AI accelerator built on TSMC's 3nm process, deploying today in its Azure US Central data center region

The Maia 200 chip is starting to roll out to Microsoft's data centers today. … Microsoft is announcing a successor to its first in-house AI chip today, the Maia 200.

Tom Warren, The Verge

Discussion

  • @satyanadella Satya Nadella on x
    Our newest AI accelerator Maia 200 is now online in Azure. Designed for industry-leading inference efficiency, it delivers 30% better performance per dollar than current systems. And with 10+ PFLOPS FP4 throughput, ~5 PFLOPS FP8, and 216GB HBM3e with 7TB/s of memory bandwidth [vi…
  • @mustafasuleyman Mustafa Suleyman on x
    Our Maia 200 inference chip, announced today, is the most performant first-party silicon of any hyperscaler. 3x the FP4 performance of the Amazon Trainium v3, and FP8 performance above Google's TPUv7. [image]
  • @azure @azure on x
    Introducing Maia 200: our next-generation AI accelerator delivering 30% better performance per dollar. Purpose-built for inference at the silicon level.
  • @mustafasuleyman Mustafa Suleyman on x
    It's a big day. Our Superintelligence team will be the first to use Maia 200 as we develop our frontier AI models.
  • @scottgu Scott Guthrie on x
    Maia 200 is an AI inference powerhouse. Our most performant first‑party silicon from any hyperscaler, delivering 30% better performance per dollar than the latest hardware in our fleet. Built for efficient large‑scale inference and integrated into Azure.
  • @patrickmoorhead Patrick Moorhead on x
    “FP4 throughput is now Blackwell-class territory. Microsoft quotes 10+ petaFLOPS FP4 per chip, which puts it in the same conversation as NVIDIA B200 generation inference compute.” $MSFT
  • @ryanshrout Ryan Shrout on x
    Microsoft just announced the deployment of Maia 200, its next generation custom silicon for AI. Inference economics are still clearly a silicon feature, not just a software problem. @Signal_65 has been working with Microsoft on Maia 200, and we will have more to share in the
  • @stocksavvyshay Shay Boloor on x
    $MSFT unveiled its Maia 200 AI inference chip built on $TSM 3nm process with deployments starting this week in its U.S. Central data center region. This marks another step toward Microsoft owning more of the AI inference stack end-to-end. [image]
  • @tomwarren.co.uk Tom Warren on bluesky
    Microsoft is announcing its own Maia 200 AI chip today.  It goes head-to-head in performance against Google and Amazon's AI chips, and Microsoft is using Maia 200 to host GPT-5.2 and others for Microsoft Foundry and Microsoft 365 Copilot.  Details here 👇 www.theverge.com/news/867…
  • @benbajarin Ben Bajarin on x
    Strategically optimized for tokens per dollar per watt on specific workloads. Inference-focused, as the trend will be in custom silicon variants going forward. Interesting, they point out Maia 200 is already powering GPT 5.2. https://blogs.microsoft.com/ ...
  • @benitoz Ben Pouladian on x
    Microsoft's Maia 200 is real: 10 PF FP4, 216GB HBM3e, 7 TB/s. But NVIDIA's Vera Rubin (H2 2026): 50 PF FP4, 288GB HBM4, 13 TB/s. Hyperscalers build for today's inference costs. NVIDIA builds for tomorrow's ceiling. $NVDA $msft [image]
  • @jamesaltonsanders.com James Sanders on bluesky
    The press photos of this chip appear to be a mockup, not a genuine sample of the Maia 200.  —  It could be very close to the real thing, but it's not typical to use physical mockups for press.  —  Renders?  Yes.  Real chips?  Yes.  —  Mockups?  Not often.  [embedded post]
  • r/microsoft on reddit
    Maia 200: The AI accelerator built for inference - The Official Microsoft Blog
  • @theaustinlyons Austin Lyons on x
    “Right systems for right workloads.” Maia 200 is a neat example of thoughtful design decisions for inference at scale. I chatted with the Maia team, and they talked about working backward from customer workloads to arrive at an inference chip that deliberately isn't a GPU.
  • @unusual_whales @unusual_whales on x
    BREAKING: Microsoft, $MSFT unveils its second generation AI chip, the MAIA 200 AI inference chip, built on TSMC's 3nm process, per Bloomberg.
  • @kobeissiletter @kobeissiletter on x
    BREAKING: Microsoft, $MSFT, announces the launch of its Maia 200 AI chip to “reduce reliance on Nvidia.” The chip is being produced by Taiwan Semiconductor Manufacturing Co., $TSM, and is being launched in Microsoft data centers in Iowa.