VOICE ARCHIVE

Jared Rosenblum

@jaredrosenblum
1 post
2026-01-27
Spot on, @geoffreyfowler. This highlights the core risk of AI in medicine: opaque black boxes where more data might refine or degrade outputs unpredictably, with no transparency to verify. At Neurosimplicity, we prioritize rigor and reproducibility in neuroscience imaging,
Washington Post

A test of ChatGPT Health and Claude for Healthcare with data from Apple Health finds the chatbots provided questionable and inconsistent responses

ChatGPT now says it can answer personal questions about your health using data from your fitness tracker and medical records.