Danny Driess
@DannyDriess
169 posts
Research Scientist @physical_int. Formerly Google DeepMind
Joined August 2021
337 Following · 4K Followers
Pinned Tweet
Danny Driess @DannyDriess ·
How to build vision-language-action models that train fast, run fast & generalize? In our new paper, we formalize & analyze the approach of our π-0.5 model & further improve it with a single stage recipe. Blog: pi.website/research/knowl… Paper: pi.website/download/pi05_…
6 replies · 26 reposts · 219 likes · 19.3K views
Danny Driess @DannyDriess ·
And we see strong cross-embodiment generalization for dexterous tasks.
[media]
2 replies · 0 reposts · 0 likes · 133 views
Danny Driess @DannyDriess ·
The most exciting aspect of modern machine learning, in my opinion, is that one can train models that just work for many tasks, without finetuning. π0.7 is a major step in that direction for robots
Physical Intelligence @physical_int

Our newest model, π0.7, has some interesting emergent capabilities: it can control a new robot to fold shirts for which we had no shirt folding data, figure out how to use an appliance with language-based coaching, and perform a wide range of dexterous tasks all in one model!

3 replies · 2 reposts · 40 likes · 7.3K views
Danny Driess reposted
Physical Intelligence @physical_int ·
We developed an RL method for fine-tuning our models for precise tasks in just a few hours or even minutes. Instead of training the whole model, we add an “RL token” output to π-0.6, our latest model, which is used by a tiny actor and critic to learn quickly with RL.
38 replies · 292 reposts · 2.2K likes · 422.9K views
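The post describes attaching a tiny actor and critic to a frozen model's "RL token" output so that only those small heads are trained with RL. The details of π-0.6's setup aren't given, so the sketch below is a toy stand-in: a fixed random projection plays the role of the frozen backbone emitting the RL token embedding, and small linear actor/critic heads learn with a basic TD(0) actor-critic update. All names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT = 32  # dimensionality of the hypothetical "RL token" embedding
ACT = 4    # action dimension (illustrative)

# Frozen backbone stand-in: in the real system this would be the large VLA
# emitting an extra RL token; here, a fixed random projection of the observation.
W_backbone = rng.normal(size=(8, FEAT))

def rl_token(obs):
    return np.tanh(obs @ W_backbone)  # frozen: never updated

# Tiny actor and critic heads: the only trainable parameters.
W_actor = np.zeros((FEAT, ACT))
w_critic = np.zeros(FEAT)

def actor(z):
    return z @ W_actor        # mean action proposed from the RL token

def critic(z):
    return z @ w_critic       # scalar value estimate from the RL token

def td_update(obs, action, reward, next_obs, lr=0.05, gamma=0.99):
    """One actor-critic step: TD(0) critic update, advantage-scaled actor step."""
    z, z2 = rl_token(obs), rl_token(next_obs)
    td_err = reward + gamma * critic(z2) - critic(z)  # one-step advantage estimate
    w_critic[:] += lr * td_err * z                    # semi-gradient TD(0)
    W_actor[:] += lr * td_err * np.outer(z, action - actor(z))  # push mean toward good actions
    return td_err
```

Because only `W_actor` and `w_critic` (a few thousand parameters here) are updated, each step is cheap, which is the property that makes fine-tuning in minutes plausible; the real method's heads and objective may differ.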
Danny Driess reposted
Kyle Vedder @KyleVedder ·
This robustness allows the policy to do diverse, long-horizon tasks in unseen environments. For example, the demo kitchen was built *after* the potatoes policy was fully trained; I just wrote the high-level prompt to tell it where to go look for items, and it did the rest.
[media]
1 reply · 2 reposts · 33 likes · 2.6K views
Danny Driess reposted
Marcel Torné @marceltornev ·
We equipped PI policies with memory! And taught our robots to do long-horizon real world tasks such as preparing the items for a recipe, cooking a grilled cheese and cleaning the kitchen!
Physical Intelligence @physical_int

We’ve developed a memory system for our models that provides both short-term visual memory and long-term semantic memory. Our approach allows us to train robots to perform long and complex tasks, like cleaning up a kitchen or preparing a grilled cheese sandwich from scratch 👇

7 replies · 15 reposts · 89 likes · 9.7K views
Danny Driess reposted
Karl Pertsch @KarlPertsch ·
This one has been a long time coming: today we’re introducing MEM, an approach for giving VLAs short-term and long-term memory. Memory is such an obvious capability, but adding it isn’t easy (most VLAs today are memory-less). A short thread on challenges, solutions, and the new capabilities MEM unlocks for us.
8 replies · 10 reposts · 110 likes · 9.3K views
Danny Driess @DannyDriess ·
One aspect I am particularly excited about is that memory enables the model to adapt its strategy while solving the task, something we might call "in-context adaptation". In this example, it is unclear from a single image whether the fridge opens from the left or the right. Hence, a model without memory (left) might fail to open the fridge repeatedly. In contrast, with memory (right), our model learns "in-context" that the fridge opens differently, and adjusts its strategy accordingly.
0 replies · 0 reposts · 3 likes · 347 views
Danny Driess @DannyDriess ·
The key idea behind Multi-Scale Embodied Memory (MEM): use different modalities to represent memory at different time scales. 📹 For short horizon memory, we developed an efficient video encoder that lets the model remember fine-grained details about its recent interactions. 📜 For long horizon memory, we train the model to summarize events in text, allowing it to remember events for up to 15 min.
[media]
1 reply · 0 reposts · 3 likes · 440 views
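The thread's core idea — different modalities for different time scales, with recent interactions kept as dense visual features and older events compressed to text that survives up to about 15 minutes — can be sketched with a small container. This is a toy illustration of the two-scale structure, not MEM itself; the class and method names, buffer sizes, and eviction rule are all hypothetical.

```python
from collections import deque

class MultiScaleMemory:
    """Toy two-scale memory: a short rolling buffer of frame embeddings plus
    timestamped text summaries retained for a fixed horizon (names hypothetical)."""

    def __init__(self, short_frames=16, long_horizon_s=15 * 60):
        self.short = deque(maxlen=short_frames)  # recent frame embeddings
        self.long = []                           # (timestamp, text summary) pairs
        self.long_horizon_s = long_horizon_s     # ~15 min, per the post

    def observe(self, t, frame_embedding):
        """Fine-grained short-term memory: old frames fall off automatically."""
        self.short.append(frame_embedding)

    def summarize(self, t, text):
        """Coarse long-term memory: store a text summary, drop expired ones."""
        self.long.append((t, text))
        self.long = [(ts, s) for ts, s in self.long
                     if t - ts <= self.long_horizon_s]

    def context(self, t):
        """What a policy would condition on: recent frames + surviving summaries."""
        summaries = [s for ts, s in self.long if t - ts <= self.long_horizon_s]
        return list(self.short), summaries
```

The design choice the sketch mirrors is that video is expensive but precise, so it is kept only briefly, while text is cheap and semantic, so it can cover a much longer horizon.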
Danny Driess @DannyDriess ·
Many real-world tasks require memory to be successful. Yet, most robots don’t have any form of memory. Today, we are going to change that. We developed a system called MEM that introduces memory into VLAs on multiple scales
Physical Intelligence @physical_int

We’ve developed a memory system for our models that provides both short-term visual memory and long-term semantic memory. Our approach allows us to train robots to perform long and complex tasks, like cleaning up a kitchen or preparing a grilled cheese sandwich from scratch 👇

5 replies · 12 reposts · 64 likes · 5.5K views
Danny Driess reposted
Physical Intelligence @physical_int ·
General-purpose AI models are behind some of the most exciting applications we now can't live without. We envision that an analogous “physical intelligence layer” built with models like π0.6 will similarly spur a new wave of applications for the physical world. We’ve recently begun working with a handful of companies that have deployed their robots to do real-world, useful things. pi.website/blog/partner/?…
9 replies · 91 reposts · 744 likes · 174K views
Danny Driess @DannyDriess ·
What I like about this: if I want to explain to someone how to solve a task, I rarely use language alone. I might point at things or wave in the air, without restricting myself to a single interface to communicate my intent. This work brings that idea into VLAs.
1 reply · 0 reposts · 1 like · 236 views
Danny Driess @DannyDriess ·
Check out our latest work on steerable policies. Instead of having only language as the interface to a VLA, steerable policies follow point queries, motion traces, atomic subtasks and more, which allows the VLMs controlling them to be used more effectively. More in @verityw_'s thread
[media]
Will Chen @verityw_

How can robot policies be trained to best leverage VLMs' CoT reasoning and in-context learning for generalization? The key is Steerable Policies: vision-language-action models that can be flexibly controlled in many ways! steerable-policies.github.io 1/9

1 reply · 1 repost · 9 likes · 688 views
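The steering interfaces named in the thread (language, point queries, motion traces, subtasks) all have to reach the policy through some shared conditioning channel. A minimal sketch of that idea, assuming a tokenized conditioning prefix: each command type is serialized into a tagged token sequence the policy could be conditioned on. The tags, formats, and class names are illustrative, not the paper's actual encoding.

```python
from dataclasses import dataclass

@dataclass
class LanguageCmd:
    text: str                 # free-form instruction

@dataclass
class PointCmd:
    x: float                  # normalized image coordinates of a point query
    y: float

@dataclass
class TraceCmd:
    waypoints: list           # list of (x, y) pairs forming a motion trace

def encode_command(cmd):
    """Serialize a steering command into a tagged token prefix for the policy.
    Tags and number formats are made up for illustration."""
    if isinstance(cmd, LanguageCmd):
        return ["<lang>"] + cmd.text.split()
    if isinstance(cmd, PointCmd):
        return ["<point>", f"{cmd.x:.2f}", f"{cmd.y:.2f}"]
    if isinstance(cmd, TraceCmd):
        return ["<trace>"] + [f"{x:.2f},{y:.2f}" for x, y in cmd.waypoints]
    raise TypeError(f"unknown command type: {type(cmd).__name__}")
```

The point of a shared encoding like this is that a VLM planner can pick whichever interface best expresses its intent (a point when language is ambiguous, a trace when geometry matters) while the policy sees one uniform conditioning format.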
Danny Driess @DannyDriess ·
The idea behind significantly improving performance on hard real-world tasks is to train a value function, condition the model on advantages computed from that value function, and run an iterative improvement loop where the model learns from its own data.
[media]
1 reply · 0 reposts · 6 likes · 328 views