
Chronicles

The story behind the story


Sam Altman says OpenAI was forced to stagger GPT-4.5's rollout because it is “out of GPUs”; the model is wildly expensive, costing $75 per million input tokens

More: BeauHD / Slashdot: OpenAI Sam Altman Says the Company Is ‘Out of GPUs’ · Msmash / Slashdot: OpenAI Rolls Out GPT-4.5

TechCrunch / Kyle Wiggers

Discussion

  • @lemmk Karsten Lemm on bluesky
    GPT-4.5 “is wildly expensive, costing $75 per million input token.”  —  Hopefully the output is worth it?  🤔  —  Oh... 😥  —  www.theverge.com/news/620021/ ...  [embedded post]
  • @edzitron Ed Zitron on x
    Also, $1.30 per hour per GPU is the Microsoft discount rate for OpenAI. Safe to assume there are other costs but raw compute for GPT 4.5 is massive and committing such resources at this time is truly fatalistic. suggests Altman has no other cards to play https://www.theinformatio…
  • @scobleizer Robert Scoble on x
    OpenAI basically said the same thing too. It doesn't have enough NVIDIA. Of course its stock went down 8.5% today in reaction.
  • @edzitron Ed Zitron on x
    Sam Altman is talking about bringing online “tens of thousands” and then “Hundreds of thousands” of GPUs. 10,000 GPUs costs them $113 million a year, 100k $1.13bn, so this is Sam Altman committing to billions of dollars to an expensive model that lacks any real new use cases. [im…
  • @jeremyphoward Jeremy Howard on x
    # The *actual* LLM scaling law. Adding more compute and data to LLMs makes them: - Linearly more expensive, and - Logarithmically more useful. Therefore: - Scaling becomes less useful the more you do it (once you reach a point where cost is non-trivial).
  • @casper_hansen_ Casper Hansen on x
    GPT 4.5 pricing is unhinged. If this doesn't have enormous models smell, I will be disappointed [image]
  • @bindureddy Bindu Reddy on x
    TBH, we should thank OpenAI for dropping the API even when they are GPU constrained THANKS, OAI! 🙏🙏 Still don't have Grok 3 🤷
  • @farzyness Farzad on x
    Why didn't OpenAI wait to get 4.5 down to reasonable pricing before they showed incremental improvement? My guess is because they need to stay relevant in public discourse + with investors due to competition models being just as good, if not better.
  • @inafried Ina Fried on x
    Some details in this post from @sama including a big GPU influx needed - and coming - to serve GPT 4.5
  • r/NvidiaStock on reddit
    OpenAI CEO Sam Altman says the company is ‘out of GPUs’ | TechCrunch
  • r/singularity on reddit
    OpenAI CEO Sam Altman says the company is ‘out of GPUs’
  • r/nvidia on reddit
    OpenAI CEO Sam Altman says the company is ‘out of GPUs’ | TechCrunch
  • r/technology on reddit
    OpenAI CEO Sam Altman says the company is ‘out of GPUs’ | TechCrunch
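The dollar figures traded in the thread can be sanity-checked with quick arithmetic. A minimal sketch, assuming the numbers quoted in the posts above: the $1.30 per GPU-hour rate (described as a Microsoft discount rate for OpenAI, not an official figure), year-round utilization, and GPT-4.5's $75 per million input tokens price. None of these assumptions are confirmed by OpenAI or Microsoft.

```python
# Back-of-the-envelope check of the cost figures quoted in the thread.
# All inputs are the thread's own (unverified) numbers, not official rates.

GPU_HOURLY_RATE = 1.30     # USD per GPU-hour, as quoted by @edzitron
HOURS_PER_YEAR = 24 * 365  # 8,760 hours, assuming 24/7 utilization


def annual_gpu_cost(num_gpus: int) -> float:
    """Raw compute cost per year at the quoted hourly rate."""
    return num_gpus * GPU_HOURLY_RATE * HOURS_PER_YEAR


def input_token_cost(tokens: int, usd_per_million: float = 75.0) -> float:
    """API input cost at the quoted GPT-4.5 rate of $75/M input tokens."""
    return tokens / 1_000_000 * usd_per_million


print(f"10k GPUs:  ${annual_gpu_cost(10_000):,.0f}/yr")    # $113,880,000/yr
print(f"100k GPUs: ${annual_gpu_cost(100_000):,.0f}/yr")   # $1,138,800,000/yr
print(f"1M input tokens: ${input_token_cost(1_000_000):,.2f}")  # $75.00
```

At the quoted rate, the thread's figures roughly check out: 10,000 GPUs come to $113.88M per year (quoted as "$113 million") and 100,000 to $1.1388B (quoted as "$1.13bn"); the small differences are truncation in the posts.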