VOICE ARCHIVE

Ruben Hassid

@rubenhssd
5 posts
2025-06-09
BREAKING: Apple just proved AI “reasoning” models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well. Here's what Apple discovered: (hint: we're not as close to AGI as the hype suggests) [image]
2025-06-09 View on X
Marcus on AI

Apple researchers detail the limitations of top LLMs and large reasoning models, including on classic problems like the Tower of Hanoi, which AI solved in 1957

LLM “reasoning” is so cooked they turned my name into a verb — quoth Josh Wolfe, well-respected venture capitalist at Lux Capital.

What do you think? Is Apple just “coping” because they've been outpaced in AI developments over the past two years? Or is Apple correct? Comment below and I'll respond to all.
2025-06-09 View on X
2024-09-13
I just tested ChatGPT-5 (o1). I can't believe the length of the answers. There is no way an LLM is capable of this much strategizing. My prompting was absolutely garbage, and I have an entire strategy. [video]
2024-09-13 View on X
Simon Willison's Weblog

OpenAI's o1 models aren't as simple as the next step up from GPT-4o as they introduce major cost and performance trade-offs in exchange for improved “reasoning”

OpenAI released two major new preview models today: o1-preview and o1-mini (that mini one is also a preview …

2024-09-13 View on X
TechCrunch

OpenAI claims that in a qualifying exam for the International Mathematics Olympiad, o1 correctly solved 83.3% of the problems, while GPT-4o solved only 13.4%

Sam Altman says it “doesn't constitute AGI.”

2024-08-11
Everyone is talking about Flux: the new open-source image generator. Some say it's the end of Midjourney reign. But no one shows you how to use it. Here's a step-by-step guide & some of the most realistic examples I've seen: [video]
2024-08-11 View on X
Tom's Guide

Flux, an open-source AI image generator from the startup Black Forest Labs, goes viral for creating ultra-realistic images of people