A gap in understanding of AI is growing: casual users cite flaws in old free-tier models, while power users point to new models' staggering gains in technical domains
Judging by my tl, there is a growing gap in understanding of AI capability. The first issue, I think, is recency and tier of use. A lot of people tried the free tier of ChatGPT sometime last year and allowed it to inform their views on AI a little too much. This is
You need to use frontier models with giant context and actually have systems that give them the right context at the right time to understand what's happening now in AI. Everyone else is guessing. There is both massive cost (a $20/mo sub is not going to unlock the awesomeness)
Exactly right. If you are using AI for anything technical, you are flabbergasted by the advancement in its capabilities. If you are using it for anything else, not so much. Although I've also been increasingly using it for legal/business/professional use cases with great amount
I get this, of course, but I think it dismisses some underlying valid criticism that even laypeople have. And we can't just move the standard every two months by saying “well, this model is *so* 2025, so your experience with it can't carry much weight.” The faults with every
Someone recently suggested to me that the reason the OpenClaw moment was so big is that it was the first time a large group of non-technical people (who otherwise knew AI only as synonymous with ChatGPT the website) experienced the latest agentic models.
After building with bleeding-edge AI, I deeply relate to this separation that @karpathy lays out. Family and friends have no idea how good the bleeding edge is. Completely uneducated about AI.
tldr: models are astonishingly good at coding, and kind of bad at a lot of other tasks. I think this should make people at least a little more skeptical of the idea that we're heading toward “AGI.”