Study: OpenAI's o1 correctly diagnosed 67% of emergency room patients using electronic records and a few sentences from nurses, vs. 50-55% for triage doctors
Researchers say results mark a ‘profound change in technology that will reshape medicine’ — From George Clooney in ER …
The Guardian · Robert Booth
Related Coverage
- Performance of a large language model on the reasoning tasks of a physician Science
- Study: AI Shows Potential in Diagnosing Emergency Room Patients Breitbart · Lucas Nolan
- In real-world test, an AI model did better than doctors at diagnosing patients NPR · Will Stone
- AI outperforms doctors in ER diagnoses Semafor · Tom Chivers
- Harvard Researchers Say Your Next ER Diagnosis May Come From AI—and It Could Be More Accurate Than Human Doctors Inc.com · Amaya Nichole
- AI can reason like a doctor, study says Mashable · Rebecca Ruiz
- AI nailed emergency diagnoses better than doctors in Harvard trials Digital Trends · Vikhyaat Vivek
- AI ‘better at diagnosing patients than doctors’ Telegraph · Cameron Henderson
- New AI Model Beats Doctors at Clinical Reasoning, Diagnosis MedPage Today · Rachael Robertson
- AI Just Beat Doctors at Diagnosing ER Patients. Don't Get All Excited Gizmodo · Matthew Phelan
- AI Outperforms ER Doctors in Diagnostic Cases, Study Points to Collaborative Care CNET · Macy Meyer
- In Real-World Test, an AI Model Did Better Than ER Doctors At Diagnosing Patients Slashdot · BeauHD
- AI outperforms doctors in diagnosis tests — but don't expect it to replace your doctor Straight Arrow News · Devin Pavlou
- AI Is Starting To Outperform Doctors. Here's Why Doctors Are Needed Now More Than Ever Forbes · Omer Awan
- In Harvard study, AI offered more accurate diagnoses than emergency room doctors TechCrunch · Anthony Ha
- OpenAI's o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors Hacker News
Discussion
- Dylan Scott (Vox) on X: A major new study found AI outperformed doctors in ER diagnosis — but there's a catch
- M Mohan (@mukund) on X: Doctors got beat by AI in emergency diagnosis. Not by a little. By a lot. 67% accuracy vs ~50%. And when given more data? AI went to 82%. Emergency medicine is chaos. Incomplete data. Time pressure. High stakes. Humans rely on experience → pattern recognition → intuition.
- Ethan Mollick (@emollick) on X: New paper (on an old AI) tests o1 against doctors on medical benchmarks & real ER cases: “across a variety of scenarios and applications, the large language model outperformed both human physicians and older models” The potential suggests an “urgent need for prospective trials.” …
- M Mohan on LinkedIn: For everyone that thinks the Harvard University study on AI will lower Healthcare costs. NO. — Unlikely. …
- Gilles Frydman on LinkedIn: First of several posts on the new Science paper claiming LLMs have eclipsed the benchmarks of clinical reasoning. …
- Bill Faruki on LinkedIn: Harvard, in Science today: OpenAI's o1 reasoning model out-diagnosed expert ER physicians on real triage cases — 67% vs. 50-55% — and beat them on management planning 89% to 34%. …
- Dare Obasanjo (@carnage4life) on Bluesky: A Harvard study comparing OpenAI's reasoning model to human doctors found that the AI identified the exact or very close diagnosis in 67% of cases, beating the human doctors, who were right only 50%-55% of the time. — This is essentially a pattern recognition problem which is w…
- Mr. Herbert Garrison (@dropthet) on Bluesky: FYI, this model is almost 2 years old now. Studies take at least this long to catch up.