VOICE ARCHIVE

Karol Hausman

@hausman_k
8 posts
2025-12-20
In https://pi.website/..., we show that, at the scale of robot data, human data acts as another embodiment. This animation shows how human and robot data align with enough robot data diversity 🤯 Full thread: https://x.com/... [video]
2025-12-20 View on X
Physical Intelligence

AI robotics startup Physical Intelligence says it saw improvements in its vision-language-action model by including human video data in the fine-tuning process

Mehdi / @bettercallmedhi: Physical Intelligence may have just triggered the real inflection point for robotics. π0.5 shows emergent human to robot transfer: once the mod...

2024-03-13
🚨 Big news 🚨 Together with a set of amazing folks we decided to start a company that tackles one of the hardest and most impactful problems - Physical Intelligence In fact, we even named our company after that: https://physicalintelligence.company/ or Pi (π) for short 🧵
2024-03-13 View on X
Bloomberg

Source: Physical Intelligence, a startup making an AI model for “any robot or physical device”, raised $70M from Thrive, Khosla, Lux, OpenAI, Sequoia, and more

Physical Intelligence (Pi, π, @physical_int). We're focused on bringing the amazing recent breakthroughs of AI and foundation models into the physical world. Ashlee Vance / @ashlee...

2023-09-01
AI drone racer finally beats human world champions! Recipe:
• state estimation with KF + gate-based measurements as additional KF updates
• small deep RL policy trained in sim
• residual controllers on top
Nature article: https://www.nature.com/... [video]
2023-09-01 View on X
Ars Technica

University of Zürich and Intel researchers reveal Swift, an autonomous drone trained via deep reinforcement learning to beat human champions in FPV drone racing

University creates the first autonomous system capable of beating humans at drone racing.
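The recipe in the post above can be sketched as a standard linear Kalman filter in which gate detections are simply folded in as extra measurement updates through the same update equation as the regular odometry. Everything below is illustrative: the 1-D constant-velocity state, the noise values, and the simulated sensors are assumptions for this sketch, not the actual Swift estimator.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """One KF measurement update; gate detections reuse this same step."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity dynamics
Q = 1e-5 * np.eye(2)                       # process noise (assumed)
H = np.array([[1.0, 0.0]])                 # both sensors observe position
R_odo = np.array([[0.04]])                 # odometry noise (std 0.2 m, assumed)
R_gate = np.array([[0.0025]])              # gate fixes are much tighter (assumed)

x = np.array([0.0, 0.0])                   # estimate: [position, velocity]
P = np.eye(2)
true_v = 2.0                               # simulated ground-truth velocity

for k in range(1, 201):
    true_p = true_v * k * dt
    x, P = F @ x, F @ P @ F.T + Q          # predict
    z_odo = np.array([true_p + rng.normal(0.0, 0.2)])
    x, P = kf_update(x, P, z_odo, H, R_odo)
    if k % 25 == 0:                        # a gate corner observed every 0.25 s
        z_gate = np.array([true_p + rng.normal(0.0, 0.05)])
        x, P = kf_update(x, P, z_gate, H, R_gate)
```

The point of the sketch: the gate-based measurements need no special machinery — a detected gate at a known map location is just one more observation of position, passed through the ordinary KF update with a smaller measurement covariance.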

2023-07-30
PaLM-E or GPT-4 can speak in many languages and understand images. What if they could speak robot actions? Introducing RT-2: https://robotics-transformer2.github.io / our new model that uses a VLM (up to 55B params) backbone and fine-tunes it to directly output robot actions! [video]
2023-07-30 View on X
New York Times

Google launches RT-2 or Robotics Transformer 2, a “vision-language-action” model trained on text and images from the web that can output robotic actions

Our sneak peek into Google's new robotics model, RT-2, which melds artificial intelligence technology with robots.
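The core trick the post describes is letting a VLM "speak robot actions" by representing actions in the model's token space. A minimal sketch of that discretization follows; the 256-bin uniform scheme matches what the RT-2 paper reports, while the toy action dimensions and ranges below are assumptions for illustration.

```python
import numpy as np

N_BINS = 256  # RT-2 discretizes each action dimension into 256 bins

def action_to_tokens(action, low, high):
    """Map a continuous action vector to integer bin indices (token ids)."""
    frac = (np.clip(action, low, high) - low) / (high - low)
    return np.minimum((frac * N_BINS).astype(int), N_BINS - 1)

def tokens_to_action(tokens, low, high):
    """Decode bin indices back to bin-center continuous values."""
    return low + (tokens + 0.5) / N_BINS * (high - low)

# Assumed toy action space: [dx, dy, gripper]
low = np.array([-1.0, -1.0, 0.0])
high = np.array([1.0, 1.0, 1.0])

a = np.array([0.25, -0.5, 1.0])
toks = action_to_tokens(a, low, high)
decoded = tokens_to_action(toks, low, high)
# decoding recovers each value to within one bin width
```

Because each bin index maps onto an existing token in the VLM's vocabulary, the action head needs no new architecture: fine-tuning teaches the model to emit these tokens, and a decoder like the one above turns them back into motor commands.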

Multiple conclusions from these experiments: (1) it turns out with a little bit of robot data we can transfer semantic concepts from vision language web-scale data to robot actions (2) the best VLMs might be the most generalizable robotic controllers
2023-07-30 View on X


2023-02-07
Google's response to ChatGPT: Bard
It looks like there are many more tools coming soon. While it's exciting to see Google share its tech more broadly, I hope that all the parties in the AI race can remain responsible. https://twitter.com/... https://twitter.com/...
2023-02-07 View on X
The Verge

Google debuts a ChatGPT rival named Bard and says the “experimental conversational AI service” will be “more widely available to the public in the coming weeks”

It's official: Google is working on a ChatGPT competitor named Bard.