OpenAI launches GPT-4.5, its “most knowledgeable model yet” in research preview, initially warning it's not a frontier model and may perform below o1 or o3-mini
GPT-4.5 rollout delayed due to lack of processing power.

Gerui Wang / Forbes: OpenAI's GPT-4.5 Drops As AI Race Escalates
Nathan Lambert / Interconnects: GPT-4.5: “Not a frontier model”?
Charlie Guo / Artificial Ignorance: AI Roundup 107: GPT-4.5
Marc Watkins / Rhetorica: The Truth About AI and All Its Ugly
OpenAI: Introducing GPT-4.5 — A research preview of our strongest GPT model. Available to Pro users and developers worldwide.
Kyle Wiggers / TechCrunch: Sam Altman says OpenAI was forced to stagger GPT-4.5's rollout because it is “out of GPUs”; the model is wildly expensive, costing $75 per million input tokens.
Maxwell Zeff / TechCrunch: A look at GPT-4.5's claimed performance, including on coding benchmarks, where it matches or outperforms GPT-4o but falls short of OpenAI's Deep Research.

X:
Sam Altman / @sama: GPT-4.5 is ready! good news: it is the first model that feels like talking to a thoughtful person to me. i have had several moments where i've sat back in my chair and been astonished at getting actually good advice from an AI. bad news: it is a giant, expensive model. we really wanted to launch it to plus and pro at the same time, but we've been growing a lot and are out of GPUs.
Andrej Karpathy / @karpathy: GPT-4.5 + interactive comparison :) Today marks the release of GPT-4.5 by OpenAI. I've been looking forward to this for ~2 years, ever since GPT-4 was released, because this release offers a qualitative measurement of the slope of improvement you get out of scaling pretraining compute...
Bob McGrew / @bobmcgrewai: That o1 is better than GPT-4.5 on most problems tells us that pre-training isn't the optimal place to spend compute in 2025. There's a lot of low-hanging fruit in reasoning still. But pre-training isn't dead; it's just waiting for reasoning to catch up to log-linear returns.
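For developers in the research preview, access is through the API. A minimal sketch of what a chat request could look like, assuming the preview model identifier is `gpt-4.5-preview` (the identifier and the payload shape are assumptions here; check OpenAI's current API documentation). The live call is left commented out so the snippet runs without an API key:

```python
# Sketch of a GPT-4.5 chat-completions request payload.
# Assumption: the research-preview model id is "gpt-4.5-preview".
import json

payload = {
    "model": "gpt-4.5-preview",  # assumed preview identifier
    "messages": [
        {"role": "user", "content": "Give me one piece of thoughtful advice."}
    ],
}

# A live call would look roughly like this (requires the `openai`
# package and an OPENAI_API_KEY in the environment):
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**payload)
#   print(response.choices[0].message.content)

print(json.dumps(payload, indent=2))
```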
Kyle Russell / @kylebrussell: We'll do pre-training every other Nvidia generation.
Sam Altman / @sama: it's very hard to get the math and ML right on a run as big as GPT-4.5, and requires difficult work at the intersection of ML and systems. @ColinWei11, Yujia Jin, and @MikhailPavlov5 did excellent work to make this happen!
@openai: GPT-4.5 has entered the Chat. https://openai.com/live/
Max Zeff / @zeffmax: OpenAI removed language about GPT-4.5 not being a frontier AI model from its white paper. We just updated our article in @TechCrunch on GPT-4.5 with the following note. [image]
Conor / @jconorgrogan: GPT-4.5 seems to glitch out after long discussions invoking the term “explicitly” or “explicit”. This has been reproduced multiple times. [image]
Ethan Mollick / @emollick: Been using GPT-4.5 for a few days and it is a very odd and interesting model. It can write beautifully, is very creative, and is occasionally oddly lazy on complex projects. Feels like Claude 3.7 while Claude 3.7 feels like GPT-4.5.
Aaron Levie / @levie: The AI breakthroughs just keep coming. OpenAI just announced GPT-4.5, and we'll be making it available to Box customers later today in the Box AI Studio. We've been testing GPT-4.5 in early access mode with Box AI for advanced enterprise unstructured data use cases, and have seen [image]
Simon Willison / @simonw: GPT-4.5 just told me it has a training cut-off date of October 2023, is that true? https://github.com/... It also made me this pelican [image]
Scott Wu / @scottwu46: GPT-4.5 has been awesome to work with. On our agentic coding benchmarks it already shows massive improvements over o1 and 4o. Excited to see the models' continued trajectory on code! One interesting data point: though GPT-4.5 and Claude 3.7 Sonnet score similarly on our overall benchmark, we find that GPT-4.5 spikes more heavily on tasks involving architecture and cross-system interactions, whereas Claude 3.7 Sonnet spikes more on raw coding and code editing.
Kyle Russell / @kylebrussell: Tired: multimodal. Wired: multi-model.
@risphereeditor: OpenAI just released GPT-4.5, and let's just say it's a disappointment. GPT-4.5 is, on average, only 5% better on benchmarks than GPT-4o. GPT-4.5 is currently only available to ChatGPT Pro users ($200 per month) and via the API ($75 per million input tokens and $150 per million
Nathan Lambert / @natolambert: So GPT-4.5 tells us that character training gets easier with scaling? Regardless, when character is the only point OpenAI can flex for a release, you know it matters.
Adam Cochran / @adamscochran: The only metric that GPT-4.5 seems slightly better on is hallucination avoidance, and that's only compared to other GPT models. It's notably worse at software and math problems, it struggles with continuity, and is just on par with cheaper models for research. Given the GPU
Ethan Mollick / @emollick: I have been impressed by GPT-4.5's vision ability. It can differentiate and count much better than any other model. It even spotted the butterfly. [image]
Alex Volkov / @altryne: GPT-4.5 is here with some evals! The best chat model “yet”. “By vibes we measure the model's EQ.” “Should be ideal for creative writing.” [image]
Simon Willison / @simonw: First impressions of GPT-4.5 (via the API) are that it feels surprisingly slow [image]
Aidan McLaughlin / @aidan_mclau: welcome, gpt-4.5. i've spent a lot of time playing with this model recently, and it's left me feeling the agi. some thoughts [image]
Alex Volkov / @altryne: “We're out of GPUs” 😵💫 [image]
Bindu Reddy / @bindureddy: OMG! Check out GPT-4.5 pricing. Input per 1M tokens: $75. Output per 1M tokens: $150. This is downright crazy.
Simon Willison / @simonw: Huh, confirmed - October 2023. Same cut-off as o1, o3, gpt-4o and gpt-4o mini https://platform.openai.com/... https://x.com/...
Max Woolf / @minimaxir: I wonder if OpenAI actually ran out of training data, because 1.5-year-old training data at this point is weird.
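The pricing quoted above ($75 per million input tokens, $150 per million output tokens) makes per-request costs easy to estimate. A back-of-envelope sketch using those quoted rates (verify against OpenAI's current pricing page before relying on them):

```python
# GPT-4.5 API cost estimate from the rates quoted in the posts above:
# $75 per 1M input tokens, $150 per 1M output tokens.
INPUT_USD_PER_MTOK = 75.0
OUTPUT_USD_PER_MTOK = 150.0

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens * INPUT_USD_PER_MTOK
            + output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

# A single 10k-token prompt with a 2k-token reply:
print(f"${estimate_cost(10_000, 2_000):.2f}")  # $1.05
```

At these rates even modest multi-turn conversations add up quickly, which is the point the posts above are making.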
Satya Nadella / @satyanadella: Major upgrades for Azure AI Foundry today: GPT-4.5 is now in preview, demonstrating a big step forward in both pre-training and post-training scale. Plus, new models from Cohere, Stability, and Microsoft. As AI becomes core to building every product, we're also rolling out new capabilities for distillation, fine-tuning, and network isolation for our agent service. The app server for AI is here!
Ina Fried / @inafried: New @axios: OpenAI debuts GPT-4.5. This is OpenAI's largest model yet — though the company declined to offer details about its size or the computing resources it took to train it. https://www.axios.com/...
AshutoshShrivastava / @ai_for_success: GPT-4.5 is not a frontier model, but it is OpenAI's largest LLM, improving on GPT-4's computational efficiency by more than 10x. [image]
Alexia Jolicoeur-Martineau / @jm_alexia: GPT-4.5 is as I expected: just more pretraining data. Pretraining scaling for text is mostly saturated. Reasoning (o1, DeepSeek) made us think that progress was going up exponentially, but it's mostly better at tuned benchmarks. Progress continues, but it's not exponential.
Lukasz Olejnik / @lukolejnik: GPT-4.5 is weak at cybersecurity vulnerability aspects? GPT-4.5 also cannot assist in the development of radiological or nuclear weapons. https://cdn.openai.com/... [image]
Alex Northstar / @northstarbrain: GPT-4.5. Wait, this is... not good.
[image]
@fyruzone: GPT-4.5 performs WORSE than Sonnet 3.5 (not even 3.7) on time horizon score [image]
Deedy / @deedydas: From the GPT-4.5 System Card: “GPT-4.5 is not a frontier model, but it is OpenAI's largest LLM, improving on GPT-4's computational efficiency by more than 10x.” It offers: increased world knowledge, improved writing ability, refined personality, and a 2-7% lift over 4o on SWE-Bench. [image]
Trevor / @tmychow: “GPT-4.5 is not a frontier model, but it is OpenAI's largest LLM” [image]
@samsja19: DeepSeek R1 release: open-source o1. Grok 3 release: beats every benchmark. GPT-4.5 release: can hold my hand when I am scared. [image]
@kimmonismus: OpenAI GPT-4.5 system card already revealed! Link in comments [image]
@scaling01: GPT-4.5 System Card: “Our largest and most knowledgeable model yet”; “scales pre-training further” [image]
Andrea Volpini / @cyberandy: Orion? If true, the data scaling effort behind GPT-4.5 would be massive. [image]
Vittorio / @iterintellectus: gpt-4.5 is getting closer to super persuasion [image]
Tanishq Mathew Abraham, Ph.D. / @iscienceluvr: OpenAI GPT-4.5 System Card: “We're releasing a research preview of OpenAI GPT-4.5, our largest and most knowledgeable model yet. Building on GPT-4o, GPT-4.5 scales pre-training further and is designed to be more general-purpose than our powerful STEM-focused reasoning models” [image]
@swyx: idk man, after o1/o3/OAIDR, GPT-4.5 is looking like a step back all the way to Dec 2024 [image]
@tokenbender: congratulations to openai for shipping gpt-4.5. it is july 2024 and everyone is excited at these capabilities. also, the one thing the model is best at is being a wordcel, optimised for “high taste” i believe. [image]
@koltregaskes: Wow, these GPT-4.5 benchmarks are, well, really poor!! I hope this isn't real. [image]
Nabeel S. Qureshi / @nabeelqu: o3 (which backs Deep Research) can successfully perform 42% of OpenAI employees' PR contributions...
🤯 [image]
Tom Warren / @tomwarren: OpenAI has just announced GPT-4.5, but it's warning that it's not a frontier AI model. It's OpenAI's “largest and most knowledgeable model yet,” but “its performance is below that of o1, o3-mini.” Full details 👇 https://www.theverge.com/...

LinkedIn:
Jazz Rasool: OpenAI alludes that ChatGPT 4.5 now has Emotional Intelligence, EQ, yet they promote a prompt demo of hate and anger? The nerve. Check out the video from 2m26s in. …
Boris Hristov: GPT-4.5 was just launched, and the way it was presented was painful to watch. OpenAI, you can do much better. We expect you (or at least I do) to be better. …

Forums:
r/slatestarcodex: OpenAI has released a “research preview” of GPT-4.5
r/artificial: One-Minute Daily AI News 2/27/2025