Sam Altman says OpenAI was forced to stagger GPT-4.5's rollout because it is “out of GPUs”; the model is wildly expensive, costing $75 per million input tokens
Hopefully the output is worth it? 🤔 — Oh... 😥 — www.theverge.com/news/620021/ ...

Ed Zitron / @edzitron : Also, $1.30 per hour per GPU is the Microsoft discount rate for OpenAI. It's safe to assume there are other costs, but the raw compute for GPT-4.5 is massive, and committing such resources at this time is truly fatalistic. It suggests Altman has no other cards to play. https://www.theinformation.com/ ...

Robert Scoble / @scobleizer : OpenAI basically said the same thing too. It doesn't have enough NVIDIA hardware. Of course its stock went down 8.5% today in reaction.

Ed Zitron / @edzitron : Sam Altman is talking about bringing online “tens of thousands” and then “hundreds of thousands” of GPUs. 10,000 GPUs cost them $113 million a year, 100,000 cost $1.13bn, so this is Sam Altman committing to billions of dollars to an expensive model that lacks any real new use cases. [image]

Jeremy Howard / @jeremyphoward : The *actual* LLM scaling law: adding more compute and data to LLMs makes them linearly more expensive and logarithmically more useful. Therefore, scaling becomes less useful the more you do it (once you reach a point where cost is non-trivial).

Casper Hansen / @casper_hansen_ : GPT-4.5 pricing is unhinged. If this doesn't have enormous-model smell, I will be disappointed. [image]

Bindu Reddy / @bindureddy : TBH, we should thank OpenAI for dropping the API even when they are GPU-constrained. THANKS, OAI! 🙏🙏 Still don't have Grok 3 🤷

Farzad / @farzyness : Why didn't OpenAI wait to get 4.5 down to reasonable pricing before showing incremental improvement? My guess is that they need to stay relevant in public discourse and with investors, because competing models are just as good, if not better.
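Zitron's annualized figures can be checked with back-of-envelope arithmetic. A minimal sketch: the $1.30/GPU-hour rate comes from the thread above, while continuous utilization (24 hours a day, 365 days a year) is an assumption made here for illustration.

```python
# Back-of-envelope annual GPU cost at the reported $1.30/hour
# Microsoft discount rate for OpenAI.
# Assumes continuous utilization (24 h/day, 365 days/year) -- an
# illustrative assumption, not a figure from the thread.
HOURLY_RATE = 1.30          # USD per GPU-hour (from the thread)
HOURS_PER_YEAR = 24 * 365   # 8,760 hours

def annual_cost(gpu_count: int) -> float:
    """Annual raw-compute cost in USD for `gpu_count` GPUs."""
    return gpu_count * HOURLY_RATE * HOURS_PER_YEAR

print(f"${annual_cost(10_000):,.0f}")   # $113,880,000 (~$113M, matching Zitron)
print(f"${annual_cost(100_000):,.0f}")  # $1,138,800,000 (~$1.14bn)
```

At these assumptions, 10,000 GPUs come out at roughly $113M per year and 100,000 at roughly $1.14bn, in line with the thread's rounded figures.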
Ina Fried / @inafried : Some details in this post from @sama, including a big GPU influx needed (and coming) to serve GPT-4.5.

Forums:
r/NvidiaStock : OpenAI CEO Sam Altman says the company is ‘out of GPUs’ | TechCrunch
r/singularity : OpenAI CEO Sam Altman says the company is ‘out of GPUs’
r/nvidia : OpenAI CEO Sam Altman says the company is ‘out of GPUs’ | TechCrunch
r/technology : OpenAI CEO Sam Altman says the company is ‘out of GPUs’ | TechCrunch

BeauHD / Slashdot : OpenAI Sam Altman Says the Company Is ‘Out of GPUs’
Msmash / Slashdot : OpenAI Rolls Out GPT-4.5
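Howard's "linearly more expensive, logarithmically more useful" claim from the thread above can be illustrated numerically. A minimal sketch with hypothetical constants: the cost coefficient and the log base are chosen only for illustration, not measured.

```python
import math

# Illustrative model of Howard's scaling claim: cost grows linearly
# with compute, while usefulness grows only logarithmically.
# COST_PER_FLOP is a hypothetical constant chosen for illustration.
COST_PER_FLOP = 1e-12  # USD per unit of compute (hypothetical)

def cost(compute: float) -> float:
    return COST_PER_FLOP * compute  # linear in compute

def utility(compute: float) -> float:
    return math.log10(compute)      # logarithmic in compute

# Utility gained per dollar spent falls as compute scales up.
for c in (1e18, 1e20, 1e22):
    print(f"compute={c:.0e}  utility/cost={utility(c) / cost(c):.1e}")
```

Each 100x increase in compute multiplies cost by 100 but adds only a constant 2 to log-utility, which is exactly the diminishing return the post describes.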