Thoughts on AI progress and why AI labs' actions hint at a worldview in which AI models will continue to fare poorly at generalization and on-the-job learning
Why I'm moderately bearish in the short term, and explosively bullish in the long term — What are we scaling?

X: @sriramk, @_simonsmith, @dwarkesh_sp, @emollick, @mikeknoop, @bobmcgrewai, @dhruvboruah, and @arikagan_

Sriram Krishnan / @sriramk: really good read from @dwarkesh_sp on the current state of AI. Also worth thinking about how to think about "timelines" broadly — and why, in my personal view, AGI is not a very useful concept when thinking about how these systems develop in the near future. https://www.dwarkesh.com/...

Simon Smith / @_simonsmith: Huge respect for Dwarkesh, and he sparks some great conversations, but one thing I disagree with is how general most human employees are. Over a 25-year career I've managed dozens of people and worked directly with hundreds. The idea that the average person can learn to do any …

Dwarkesh Patel / @dwarkesh_sp: I totally buy that AI has made you more productive. And I buy that if other lawyers were more agentic, they could also get more productivity gains from AI. But I think you're making my point for me. The reason it takes lawyers all this schlep and agency to integrate these models … [image]

Ethan Mollick / @emollick: Interesting post & agree AI has missing capabilities, but I also think this perspective (common in AI) undervalues the complexity of organizations. Many things that make firms work are implicit, unwritten & inaccessible to new employees (or AI systems). Diffusion is actually hard. [image]

Dwarkesh Patel / @dwarkesh_sp: @deredleritt3r I think we might actually agree on the state of models. For example, the reason I haven't hired a research assistant is that I think it would be hard to find someone who is such an improvement on these models in answering my questions that the greater hassle and latency of …

Dwarkesh Patel / @dwarkesh_sp: Robert Conquest had 3 famous laws of politics. The first: "Everyone is conservative about what they know best." In the same vein: "Everyone thinks that the median job is far easier to automate than their own job."

Mike Knoop / @mikeknoop: Great post. It's unfortunate that shifting AGI goalposts is associated with luddism. Pointing out flaws and building theories is how to drive progress — in fact it's a strong bull signal, as we get more humans studying the real issues, increasing the likelihood we solve them.

Bob McGrew / @bobmcgrewai: "Models keep getting more impressive at the rate the short timelines people predict, but more useful at the rate the long timelines people predict." Good post.

Dwarkesh Patel / @dwarkesh_sp: New post: Thoughts on AI progress (Dec 2025). 1. What are we scaling? [image]

Dhruv Boruah / @dhruvboruah: The power/speed of humans to learn & process the million changes in the context window is just incredible. Less memory constraint is when we will get to AGI, as @karpathy mentioned! Great analysis by the boss @dwarkesh_sp. I got nanobanana to give me a summary for my brain to … [image]

Ari Kagan / @arikagan_: I don't see why this has to be true. Isn't it at least possible that RL is a necessary step to build the scaffolding for learning on the job? Sure, humans don't rehearse every software task, but we also don't pop out of the womb fully baked either! [image]