OpenAI says its new o3 and o4-mini AI models hallucinate more often than its previous reasoning and traditional models, and the company doesn't know why
OpenAI's internal tests show o3 hallucinated on 33% of person-related questions, double the rate of previous models. Even worse, o4-mini hit 48%.
Meta VP of Generative AI Ahmad Al-Dahle denies a rumor that the company trained Llama 4 Maverick and Scout on test sets, saying that Meta “would never do that”
Pascale Davies / Euronews : From a political shift to a more powerful AI: Everything to know about Meta's Llama 4 models
When asked in an AMA if OpenAI would release model weights and research, Sam Altman said “we are discussing” and “it's also not our current highest priority”