TEXXR

Chronicles

The story behind the story


A test of ChatGPT Health and Claude for Healthcare with data from Apple Health finds the chatbots provided questionable and inconsistent responses

ChatGPT now says it can answer personal questions about your health using data from your fitness tracker and medical records.

Washington Post · Geoffrey A. Fowler

Discussion

  • @geoffreyfowler Geoffrey A. Fowler on x
    You can now connect ChatGPT to an Apple Watch. So I imported 29 mil steps and 6 mil heartbeats into the new ChatGPT Health. It graded my heart an F. Cardiologist @erictopol called it “baseless.” Any bot claiming to give health insights shouldn't be this clueless. Even in beta. 🧵 …
  • @mweinbach Max Weinbach on x
    I'm sorry but ChatGPT Health is worthless. It gave me an F because my RHR is higher than usual (it's not) and I don't do enough steps (I do). I have no idea what's wrong with it. I like ChatGPT a lot but this is absurd. I think OpenAI should pull it down for a bit. [image]
  • @erictopol Eric Topol on x
    The performance of the newly released ChatGPT Health, via a thorough assessment by @geoffreyfowler with his health data, is very disappointing gift link https://www.washingtonpost.com/ ... [image]
  • @hypervisible.blacksky.app @hypervisible.blacksky.app on bluesky
    “...when it comes to your fitness tracker and some health records, the new Dr. ChatGPT seems to be winging it.  That fits a disturbing trend: AI companies launching products that are broken, fail to deliver or are even dangerous.”
  • @four4thefire Andrew Donaldson on bluesky
    The hardest thing about medicine is every single human being is a special, unique, unreplicable case. — LLM search engines are uniquely designed to be particularly terrible at the one thing everyone needs from their medical provider: personalized care www.washingtonpost.com/t…
  • @geoffreyfowler Geoffrey A. Fowler on x
    I asked @EricTopol to look at ChatGPT's analysis. His view: “This is not ready for any medical advice.” The bot leaned heavily on Apple Watch VO₂ max estimates — which independent studies show can run ~13% low on average — and treated fuzzy metrics like hard facts.
  • @geoffreyfowler Geoffrey A. Fowler on x
    The more I used ChatGPT Health, the worse its answers got. When I asked it the same heart-health question repeatedly, its analysis changed. My grade bounced back and forth between F and a B. Same data, same body. Different answers. [image]
  • @jaredrosenblum Jared Rosenblum on x
    Spot on, @geoffreyfowler —this highlights the core risk of AI in medicine: opaque black boxes where more data might refine or degrade outputs unpredictably, with no transparency to verify. At Neurosimplicity, we prioritize rigor and reproducibility in neuroscience imaging,