2025-04-20
TechCrunch
17 related
OpenAI says its new o3 and o4-mini AI models hallucinate more often than its previous reasoning and traditional models, and the company doesn't know why
OpenAI's internal tests show o3 hallucinated on 33% of person-related questions, double the rate of previous models. Even worse, o4-mini hit 48%. Mastodon: Aulia Masna / @aulia@mementomori.social : “...
2025-03-14
TechCrunch
15 related
OpenAI calls DeepSeek “state-controlled” and recommends that the US ban “PRC-produced equipment and models that violate user privacy and create security risks”
https://techcrunch.com/... Threads: Vishvanand Subramanian / @vishvanands : trying hard to steelman this position from openai but unless it's possible to hide malware in the model weights, what exactl...