VOICE ARCHIVE

Chelsea Finn

@chelseabfinn
3 posts
2024-03-13
I'm really excited to be starting a new adventure with multiple amazing friends & colleagues. Our company is called Physical Intelligence (Pi or π, like the policy). A short thread 🧵
Bloomberg

Source: Physical Intelligence, a startup making an AI model for “any robot or physical device”, raised $70M from Thrive, Khosla, Lux, OpenAI, Sequoia, and more

Physical Intelligence (Pi, π, @physical_int). We're focused on bringing the amazing recent breakthroughs of AI and foundation models into the physical world. Ashlee Vance / @ashlee...

2023-07-30
Vision-language ➡️ vision-language-action model By using a pre-trained VLM (e.g. PaLI-X), RT-2 enables robots to generalize to new objects & instructions RT-2 also shows basic reasoning capabilities. (e.g. “place orange in matching bowl”) Paper+videos: https://robotics-transformer2.github.io
New York Times

Google launches RT-2 or Robotics Transformer 2, a “vision-language-action” model trained on text and images from the web that can output robotic actions

Our sneak peek into Google's new robotics model, RT-2, which melds artificial intelligence technology with robots.

2023-07-29
Vision-language ➡️ vision-language-action model By using a pre-trained VLM (e.g. PaLI-X), RT-2 enables robots to generalize to new objects & instructions RT-2 also shows basic reasoning capabilities. (e.g. “place orange in matching bowl”) Paper+videos: https://robotics-transformer2.github.io
New York Times

Google launches RT-2 or Robotics Transformer 2, a “vision-language-action” model trained on text and images from the web that can output robotic actions

Our sneak peek into Google's new robotics model, RT-2, which melds artificial intelligence technology with robots.