TL;DR After 10+ years pushing the boundaries of RL at @GoogleDeepMind, @Google Brain, and then Gemini, I’m looping back to startup mode and joining @UMA_Robots to push the frontier of human-centric robotics. (1/4)🧵
One of the first AgiBot X2 humanoid robots just landed in our lab 🤖
I like its articulated neck, back handle, battery placement, anthropomorphic wrists, and soft white cover made of what looks like polyurethane foam.
Impressive delivery speed and support from the @AGIBOTofficial team!
I joined UMA, starting less than 36 hours after my last day at Tesla. It’s one of the most exciting opportunities in robotics today, combining a world-class team, a unique go-to-market position, and startup velocity.
What we learned from 3 months of pilots with robotic data companies.
Data collection companies want:
- to manage their own cloud
- more sensors
- the ability to write custom code
So we got rid of the cloud, added wrist cams, and will release the first open-source, Pi-friendly data collection SDK 🛟
Introducing OMGrab: a wearable device and platform that trains robotics world models
Our device captures egocentric vision and motion data in real time. In the cloud, we reconstruct interactions and scene structure in 3D, then train domain-specific world models from that data. These models enable robots to reason about physical interactions over long horizons.
Excited to announce our $4.2M seed round led by @Initialized and the release of our state-of-the-art reranker, zerank-1.
zerank-1 was trained with a novel Elo-inspired training pipeline that treats query-document relevance like a ranking game (literally, just like chess! ♛).
zerank-1 outperforms rerankers twice its size, and even general-purpose LLMs prompted for reranking.
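The thread doesn't spell out the pipeline, but the Elo idea it references can be sketched as follows: for each query, pairwise "games" between candidate documents (the judging source and document names here are hypothetical) are scored with the standard Elo update, and the resulting ratings induce a relevance ranking.

```python
def expected_score(r_a: float, r_b: float) -> float:
    # Standard Elo expectation: probability that A beats B given ratings
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    # Shift ratings toward the observed outcome, scaled by surprise
    e_w = expected_score(r_winner, r_loser)
    return r_winner + k * (1.0 - e_w), r_loser - k * (1.0 - e_w)

# Hypothetical candidates for one query, all starting at the same rating
ratings = {"doc_a": 1500.0, "doc_b": 1500.0, "doc_c": 1500.0}

# Pairwise relevance preferences (winner, loser), e.g. from a judge model
games = [("doc_a", "doc_b"), ("doc_a", "doc_c"), ("doc_b", "doc_c")]
for winner, loser in games:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

# Final ratings order the documents by relevance to the query
ranking = sorted(ratings, key=ratings.get, reverse=True)
print(ranking)  # → ['doc_a', 'doc_b', 'doc_c']
```

These per-query Elo ratings could then serve as soft relevance targets when training the reranker; that last step is my assumption, not something stated in the thread.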
Here is exactly how we trained it, and where you can try it:
🧵👇