An interview with SemiAnalysis CEO Dylan Patel on logic, memory, and power bottlenecks in scaling AI compute, Nvidia securing TSMC N3 allocation early, and more
Plus, why an H100 is worth more today than 3 years ago — Dylan Patel, founder of SemiAnalysis, provides a deep dive …
Dwarkesh Podcast Dwarkesh Patel
Related Coverage
- Dylan Patel — The Single Biggest Bottleneck to Scaling AI Compute Dwarkesh Patel on YouTube · Dwarkesh Patel
- TSMC's N3 logic wafer capacity has become one of the AI industry's biggest constraints, which could push customers to explore greater foundry diversification SemiAnalysis
- TSMC Widens Foundry Lead Over Samsung Electronics The Chosun Daily · Rora Oh
- 3 Top Tech Stocks That Could Make You a Millionaire Motley Fool · James Brumley
- TSMC widens lead over Samsung in foundry market: report Tech in Asia · Minh Le
- TSMC nets nearly 70% of 2025 foundry market Taipei Times
- AI chips are pushing everything else off TSMC's most advanced production lines The Decoder · Maximilian Schreiner
- Taiwan Semiconductor Now Commands 70% Of Global Foundry Market Benzinga · Anusuya Lahiri
- TSMC rides AI wave to net nearly 70% of global foundry market in 2025 The Economic Times
- SemiAnalysis put out one of the better breakdowns I've seen of what's actually happening in the silicon supply chain. … Bill Conrades
Discussion
- @dwarkesh_sp (Dwarkesh Patel) on X: .@dylan522p gives a deep dive on the 3 big bottlenecks to scaling AI compute: logic, memory, and power. And walks through the economics of labs, hyperscalers, foundries, and fab equipment manufacturers. Learned a ton about every single level of the stack. 0:00:00 - Why an H100 …
- @sarahdingwang (Sarah Wang) on X: Narrative violation from Dylan on Dwarkesh: H100s are worth *more* today than they were 3 years ago. There's a sentiment that data center buildouts are priced into the risk of rapidly depreciating GPUs. But the models want to learn. Token prices are falling so fast that you can …
- @dwarkesh_sp (Dwarkesh Patel) on X: .@dylan522p lays out how we know the hard upper bound on how much compute can be produced annually by 2030: around 200 GW/year. That's a crazy number (there's about 20 GW of AI deployed in the world right now), but it's nowhere near enough to satisfy Sam/Elon/Dario/Demis's …
- @dwarkesh_sp (Dwarkesh Patel) on X: The AI supply chain has the craziest value cascade of any industry in the world. Dylan thinks that over the next five years, the biggest bottleneck to deploying AI will be EUV machines. ASML sells EUV machines for $300-400 million. You need about three and a half machines, so $1.2 …
- @southernvalue95 on X: This podcast with @dylan522p is a terrific rebuttal to the Citrini doomer scenario by playing through the real-world constraints of a fast-ish takeoff (I know it wasn't intended as such). The constraints on producing enough AI tokens to be disruptive to society will slow it down.
- @rdominguezibar (Ruben) on X: Every AI bubble argument assumes the compute requirements keep going up forever. The actual trend is going the other way. GPT-4 required cutting-edge H100s to run at scale. Newer models at the same or better quality level run on hardware that is two to three generations older.
- @poof_eth on X: If you've been wanting to understand AI inference and hardware economics... this is the best single place for a current take. It's long, but highly recommended for those interested!
- @pythiar on X: We've gone from "$ASML is doomed, peak litho" to "2030 litho bottleneck, can't get enough litho."
- @parmita (Parmita Mishra) on X: Ok @dylan522p this is amazing! Everyone watch!
- @kellangrenier (Kellan Grenier) on X: Excellent listen. Key takeaways: 1) $ASML caps at 200 GW by 2030; 2) memory eating 30% of Big 7 CapEx; 3) H100s appreciate vs. depreciate; 4) neoclouds (and their agents such as $GLXY) control the bottleneck; 5) early contracts ($CRWV 98% locked) print vs. 50% spot markup. @dylan522p 🐐
- @peteskomoroch (Pete Skomoroch) on X: This matches my current world model. An H100 GPU is worth more today than 3 years ago, not less. People viewing GPUs as rapidly depreciating assets are missing that an older GPU can and will do economically valuable work. The total economic value of work that GPU can do for you …
- @alex_intel_ (Alex) on X: The logic squeeze that is happening (Dylan thinks it's going to get worse) ensures that Intel, after its N2P orders for Nova Lake are filled, will be forced back to internal manufacturing. The idea of Intel flexing ~20% to TSMC doesn't work in this world. That's good news.
- @cryptopunk7213 on X: this completely fucking breaks the AI Bubble narrative. a 3-year-old gpu is MORE valuable today because it serves higher-quality ai tokens FOR CHEAPER. translation: gpt 5.4 runs BETTER on an OLD GPU than gpt-fucking-FOUR. read that again. a newer, better model runs more …
- @iruletheworldmo on X: if you're interested in the race to agi you have to watch this. much more in depth on the super cluster build out. how much compute / gw's can you get online quickly. sam (the dealmaker) altman's conviction and acceleration is paying off again. whilst anthropics sbf …
- @aniketapanjwani (Aniket Panjwani) on X: The people I find most insightful/interesting on the economics of AI are non-economists. On hardware/macro topics, it is very hard to beat Dwarkesh and Dylan - I'm always enlightened by listening to them and humbled by their breadth and depth of knowledge. Definitely going to …