A UK trial by the National Grid, Nvidia, and others finds AI data centers can operate without using peak power continuously, adjusting consumption when asked
Analysis: Claude Code currently authors 4% of all public GitHub commits and is on track to cross 20% of all daily commits by the end of 2026
Analysis: Colossus 2, one of the world's largest AI datacenters, uses roughly the same water resources required to supply 2.5 typical In-N-Out Burger locations
It turns out to be about twice that of a burger restaurant. …
The US EPA rules that xAI acted illegally by using dozens of methane gas turbines to power its Colossus 1 and Colossus 2 data centers in the Memphis area
Win for Memphis activists who say ‘Colossus’ facilities add extra pollution to already overburdened communities
A different perspective on the datacenter water debate: forget tokens/watt or tokens/dollar, it's about tokens/burger …
Highlights from IEDM 2025: 3D NAND is suddenly relevant again, interconnect metals beyond copper are emerging, 2D materials that could replace silicon, and more
IEDM 2025 Round-Up — It's an odd time in the chipmaking industry. On one hand, we are ramping into the biggest supercycle ever seen.
A detailed look at the Apple-TSMC relationship: Apple's annual spend at TSMC rose 12x, from $2B in 2014 to $24B in 2025, and once made up 25% of TSMC's revenue
Wafer Demand Model, Node Economics, and the shifting power dynamics as AI reshapes the foundry landscape
A deep dive into co-packaged optics, long promised to transform data center connectivity, covering benefits, challenges, architecture, key companies, and more
SemiAnalysis (Igor Elkanovich on LinkedIn): See below an excellent paper on Co-Packaged Optics (CPO). A conclusion is that the scale-up network is a killer application and TSMC'...
Scale-out and Scale-up CPO, CPO TCO and Power Budgets, DSP Transceivers vs LPO vs NPO vs CPO, TSMC COUPE, MZM vs MRM vs EAM Modulator Deep Dive …
How AI labs are deploying on-site gas generators for power as the US electric grid struggles to keep pace with the growing demands of AI infrastructure
A technical deep dive into Amazon's Trainium3 accelerator, including its server SKUs' specifications, silicon design, power budget, and bill of materials
Step-Function Software & System Improvements, “Amazon Basics” GB200 NVL36x2, NL72x2/NL32x2 Scale Up Rack Architecture, Optimized Perf per TCO, Trainium4
An in-depth look at TPUv7 Ironwood, and how the latest Google TPU generation positions Google as the most threatening challenger to Nvidia's AI chip dominance
Fascinating article. They argue that the reason for NVIDIA's circular investment deals is to intertwine their own fate with that of the big labs, to keep themselves on top — OpenAI saved 30% on the...
An analysis of Google TPU v6e vs AMD MI300X vs Nvidia H100/B200: Nvidia achieves a ~5x tokens-per-dollar advantage over TPU v6e and 2x advantage over MI300X
Google TPU v6e vs AMD MI300X vs NVIDIA H100/B200: Artificial Analysis' Hardware Benchmarking shows NVIDIA achieving a ~5x tokens-per-dollar advantage over TPU v6e (Trillium), and a ~2x advantage over ...
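For readers unfamiliar with the metric, tokens-per-dollar normalizes inference throughput by the hourly rental cost of the accelerator. A minimal sketch of the arithmetic, using hypothetical throughput and price figures rather than the benchmark's actual measurements:

```python
# Tokens-per-dollar: throughput (tokens/s) scaled to an hour, divided by
# the hourly rental price. All figures below are hypothetical placeholders,
# not Artificial Analysis' measured numbers.

def tokens_per_dollar(tokens_per_second: float, hourly_cost_usd: float) -> float:
    """Tokens generated per dollar of accelerator rental time."""
    return tokens_per_second * 3600 / hourly_cost_usd

# Hypothetical per-accelerator figures: (tokens/s, $/hour)
accelerators = {
    "Chip A": (1000.0, 2.00),
    "Chip B": (800.0, 4.00),
}

for name, (tps, cost) in accelerators.items():
    print(f"{name}: {tokens_per_dollar(tps, cost):,.0f} tokens/$")
```

Under these made-up numbers, Chip A delivers 2.25x the tokens-per-dollar of Chip B even though its raw throughput is only 1.25x higher, which is why price-normalized comparisons can diverge sharply from raw speed rankings.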
A deep dive into Microsoft's AI strategy: its OpenAI deal, data center investments, neocloud renting, GitHub Copilot, its MAI models, its Maia chip, and more
SemiAnalysis launches InferenceMAX, an open-source benchmark that automatically tracks LLM inference performance across AI models and frameworks every night
The vendor-neutral suite runs nightly and tracks performance changes over time
Tae Kim / Barron's Online: Nvidia Touts Software Advantage in Beating Rivals Like AMD
Dion Harris / NVIDIA: NVIDIA Blackwel...