David Dobáš
@DDobas

43 posts

Physics, AI, Robotics

Joined August 2014
107 Following · 126 Followers

Pinned Tweet
David Dobáš@DDobas·
Last week, we ran 8 humanoid robots live at DJ Wukong’s Chinese New Year show. 5 on stage. 3 with VIPs. I ran the full pipeline, from mocap to sim to real deployment. Here’s what worked (and what didn’t).
3 replies · 7 reposts · 24 likes · 7.3K views
David Dobáš@DDobas·
@theo_michel42 Nano banana and manual edits. The first image was quite good, but editing further with nano banana was a pain 😁
0 replies · 0 reposts · 0 likes · 34 views
Théo@theo_michel42·
@DDobas How did you do the graphics?
1 reply · 0 reposts · 0 likes · 39 views
David Dobáš@DDobas·
DreamZero vs LeWorldModel? DreamZero has very impressive results, but I still believe we will end up predicting dynamics in compact latent spaces and avoiding expensive pixel prediction. That would allow running models on-edge. The LeWorldModel paper seems to have solved one of the biggest hurdles.

So why is training latent world models hard? Naively, you would just encode your observations into a latent space, predict dynamics in that latent space, and compute a loss between the predicted next latent and the embedding of the next observation. But that leads to collapse: the encoder and predictor can cheat. For example, the encoder maps every observation to the all-zeros latent, so the predictor predicts zero as the next state too. And since the embedding of the next observation is also zero, you get zero loss. The model is happy, but completely useless.

That's why people have spent a lot of time figuring out how to regularize the latent space to avoid collapse, and several losses have been proposed. LeWorldModel proposes only one additional loss, based on projecting latents onto randomly chosen axes and forcing those projections to follow a Normal distribution. Elegant, and it seems to work!

What do you think: pixel dreaming or latent dynamics?
[image attached]
2 replies · 0 reposts · 8 likes · 272 views
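The collapse story and the projection-based fix above can be sketched in a few lines. This is my reconstruction from the tweet's description, not the paper's exact loss: project a batch of latents onto random unit axes and penalize each 1-D projection for deviating from a standard Normal, matched here only through its first two moments. The function name and constants are hypothetical.

```python
import numpy as np

def sliced_normal_reg(latents, n_axes=64, rng=None):
    """Sketch of a sliced-projection regularizer: project latents onto
    random unit axes and penalize deviation of each projection from a
    standard Normal (mean 0, variance 1)."""
    rng = np.random.default_rng(rng)
    d = latents.shape[1]
    axes = rng.standard_normal((n_axes, d))
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)  # unit vectors
    proj = latents @ axes.T                              # (batch, n_axes)
    mean = proj.mean(axis=0)
    var = proj.var(axis=0)
    # Collapsed latents have var ~ 0 on every axis -> large penalty;
    # well-spread, roughly Normal latents -> penalty near 0.
    return float((mean ** 2).mean() + ((var - 1.0) ** 2).mean())

rng = np.random.default_rng(1)
collapsed = np.zeros((256, 32))            # encoder "cheating" with all-zeros
healthy = rng.standard_normal((256, 32))   # well-spread latents
print(sliced_normal_reg(collapsed, rng=0))  # → 1.0 (pure variance penalty)
print(sliced_normal_reg(healthy, rng=0))    # small, near 0
```

The point of the sketch: the naive prediction loss alone is minimized by the collapsed batch, while this regularizer makes collapse expensive and costs a healthy latent distribution almost nothing.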
Chris Paxton@chris_j_paxton·
This is the first time I have seen Skild doing live demos in public! Demonstrating neural nets for precision manufacturing.
19 replies · 51 reposts · 402 likes · 106.1K views
David Dobáš@DDobas·
Happy to have led the workshops at UFB Ghost Trials — covering video motion retargeting, training BeyondMimic in mjlab, latest papers like Sonic/BFM-Zero, plus team support, sim evals & real-robot deployments. Thrilled seeing more students get hands-on with humanoid training! 🚀🤖
Vitaly Bulatov@vitl2907

Yesterday we held the UFB Ghost Trials at @CalHacks Hack Nights — hands-on trials where students learned to retarget motions from videos, train motion imitation policies (BeyondMimic) in mjlab, then competed in sim + real robot deployments!

This is about making advanced humanoid training accessible to students everywhere — not just labs. Top teams impressed, making progress towards permanent placement in the league. Congrats to team Molt, Justin and Alice, for taking the win 🚀🤖

Shoutout to @kevin_zakka (creator of mjlab — the backbone of our sim training) and massive props to @UFBots tech team: @emerson @_patrickrose and @DDobas for powering this and pushing the frontier forward! Video of sim showdowns, real deployments, and winners below 👇

0 replies · 1 repost · 7 likes · 367 views
David Dobáš@DDobas·
Just deployed @nvidia Sonic on the G1 with @UFBots. Zero-shotted our custom move, no training needed. Everything is just one policy. Impressively stable.
27 replies · 49 reposts · 324 likes · 22.8K views
David Dobáš@DDobas·
Also built a custom controller over a WebSocket connection, so we can control the robot from a phone.
2 replies · 0 reposts · 12 likes · 706 views
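The tweet doesn't share the implementation, so here is a minimal sketch of what a phone-to-robot WebSocket bridge like this might look like. Everything here is an assumption: the JSON command schema, the names `parse_command` and `serve_forever`, the velocity limits, and the use of the third-party `websockets` package for the server side.

```python
import json

# Hypothetical schema: the phone sends JSON messages such as
# {"cmd": "move", "vx": 0.3, "vy": 0.0, "wz": 0.1}
ALLOWED = {"move", "stop", "wave"}

def parse_command(message: str) -> dict:
    """Validate one phone message and clamp velocities to safe limits."""
    data = json.loads(message)
    if data.get("cmd") not in ALLOWED:
        raise ValueError(f"unknown command: {data.get('cmd')!r}")
    if data["cmd"] == "move":
        for k in ("vx", "vy", "wz"):
            # Clamp each velocity component to a conservative range.
            data[k] = max(-0.5, min(0.5, float(data.get(k, 0.0))))
    return data

async def serve_forever(host="0.0.0.0", port=8765):
    """Relay server sketch (requires the third-party `websockets` package;
    defined but not called here so the module stays import-safe)."""
    import asyncio
    import websockets

    async def handler(ws):
        async for msg in ws:
            cmd = parse_command(msg)
            # Forward `cmd` to the robot's control loop here.

    async with websockets.serve(handler, host, port):
        await asyncio.Future()  # run until cancelled
```

Validating and clamping on the bridge, rather than trusting the phone client, keeps a fat-fingered touch input from commanding an unsafe velocity.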
David Dobáš@DDobas·
We only saw the stage a few days in advance. Luckily: carpet. When I saw confetti everywhere, I expected at least one slip. Somehow, they all held.
1 reply · 0 reposts · 0 likes · 165 views
Hansen Lillemark@hansenlillemark·
State-of-the-art World Models still lack a unified world memory for representing and predicting dynamics outside their field of view. Why is that, and how can we fix it? Introducing Flow Equivariant World Models: models with memory capable of predicting out-of-view dynamics! 🧵⬇️
17 replies · 104 reposts · 751 likes · 89.1K views