Darius Foodeei

51 posts

@dariusfdi

ML @ ETH Zurich, prev. EPFL | ETHRC Humanoid lead

Zurich, Switzerland · Joined December 2025
248 Following · 43 Followers
Darius Foodeei reposted
Andreas Klinger 🦾 @andreasklinger
A student in Zurich sleeps in a camper van outside a hangar every night, just to have more hours to build a humanoid robot. He's obsessive. He's not alone.

2,500 students across Europe are getting into robotics right now. And now they have united and just launched ESRA, the European Student Robotics Association. 🇪🇺🦾 They bring together highly talented young people, give them space and resources, and let them build. Already 13 robotics clubs. 8 countries. 2,500+ students.

I visited several of them over the last weeks to get to know them and let them tell their stories. We've also been helping behind the scenes where we can, because this is exactly what Europe needs. Several multi-billion-dollar companies will come out of the ESRA network. Right here in Europe.

If the Bay Area had a student robotics network like this, they would never shut up about it. Time we do the same. 😤🔥🇪🇺 It only needs a few crazy ones to fix a continent. Turns out they're already building. 🇪🇺
50 replies · 158 reposts · 1.2K likes · 108.1K views
Darius Foodeei reposted
ETH Robotics Club @ethroboticsclub
Humanoid Robot Boxing at the ETH Robotics Club. REAL STEEL: the first human-teleoperated robot boxing tournament in Europe. Thanks to our partners at @virtuals_io and @ZHAW for providing us with the robots for this epic showdown.
[images]
4 replies · 7 reposts · 53 likes · 6.4K views
Darius Foodeei reposted
Cheng Chi @chichengcc
Excited for my talk at ETH!
Oier Mees @oier_mees

Excited to welcome @chichengcc from @sundayrobotics for a Guest Spotlight at @ETH today! Who better to follow up on my lecture on generative models than the lead of Diffusion Policy & UMI? He'll cover "Robotics: Beyond Algorithms" and practical tips hard to learn in academia

3 replies · 9 reposts · 135 likes · 12.3K views
Darius Foodeei @dariusfdi
Find a hackathon with this many G1s. I’ll wait. @ethroboticsclub HACK2026 stay tuned for some real steel action tomorrow.
[image]
1 reply · 1 repost · 4 likes · 131 views
Zhanyi Sun @s_zhanyi
We find that RL post-training can substantially improve BC policies without teaching them anything fundamentally new. So what is RL doing? In DICE-RL, it contracts a broad behavior prior toward high-value modes. (1/n) zhanyisun.github.io/dice.rl.2026/
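The "contracting a broad behavior prior toward high-value modes" idea above can be illustrated with a toy reweighting, in the spirit of advantage-weighted schemes. This is not DICE-RL's actual algorithm; the function, action modes, and numbers below are all made up for illustration.

```python
import math

def contract_prior(prior, values, tau=0.5):
    """Reweight a BC prior over discrete action modes by exp(value/tau).

    Mass shifts toward high-value modes, but no mode the prior assigns
    zero probability to can ever appear: nothing fundamentally new is
    learned, the prior is only sharpened.
    """
    weights = [p * math.exp(v / tau) for p, v in zip(prior, values)]
    z = sum(weights)
    return [w / z for w in weights]

# A broad (uniform) BC prior over 3 hypothetical action modes, and
# their estimated values: the third mode is the high-value one.
prior = [1 / 3, 1 / 3, 1 / 3]
values = [0.0, 0.0, 1.0]
posterior = contract_prior(prior, values)
```

After reweighting, the posterior concentrates on the third mode while the two equal-value modes keep equal (smaller) mass.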
6 replies · 43 reposts · 264 likes · 24.6K views
0x796F @0x796F
You can now train @physical_int style robots in 1 day for only $5k. Anvil’s devkits have all the hardware, software, controls, cameras, and more ready-to-go. (1/5)
21 replies · 71 reposts · 562 likes · 320.5K views
Harrison Kinsley @Sentdex
testing robot policies has never been so much fun
26 replies · 35 reposts · 492 likes · 53.2K views
Darius Foodeei reposted
Andrej Karpathy @karpathy
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago but today it’s something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.
It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
1.6K replies · 4.8K reposts · 37.3K likes · 5.1M views
Darius Foodeei reposted
Zi-ang Cao @ziang_cao
So excited to share what we've been building! 🚀 We designed SONIC to push generalist humanoids to do real work, handling real-time, whole-body loco-manipulation. But honestly, the best part is that it’s well-documented open-source. We put a lot of effort into making the codebase accessible and ready to use. Clone the repo, run it, break it, and let me know what you build! 👇
Yuke Zhu @yukez

We have seen rapid progress in humanoid control — specialist robots can reliably generate agile, acrobatic, but preset motions. Our singular focus this year: putting generalist humanoids to do real work.

To progress toward this goal, we developed SONIC (nvlabs.github.io/GEAR-SONIC/), a Behavior Foundation Model for real-time, whole-body motion generation that supports teleoperation and VLA inference for loco-manipulation. Today, we’re open-sourcing SONIC on GitHub. We are excited to see what the community builds upon SONIC and to collectively push humanoid intelligence toward real-world deployment at scale.

🌐 Paper: arxiv.org/abs/2511.07820
📃 Code: github.com/NVlabs/GR00T-W…

1 reply · 4 reposts · 46 likes · 4K views
Darius Foodeei reposted
Rohan Paul @rohanpaul_ai
This work makes a humanoid robot do simple parkour moves by looking with a depth camera and choosing the right move on the fly. The big deal is that it turns lots of small human moves into long, real-time robot behavior, without hand-coding every transition or retraining for each new course.

A humanoid robot is usually good at steady walking, but it often fails when it has to do fast moves like jumping up, vaulting, or rolling, and then keep going to the next obstacle. The hard part is that you cannot easily collect training data for every possible obstacle shape, distance, and mistake, so robots end up learning a few moves that only work in a narrow setup.

This work starts from short clips of real human parkour moves, like stepping over, vaulting, climbing, and rolling. It uses motion matching, which is basically a smart “pick the next clip that fits best right now” search, to stitch those short clips into a long, smooth plan that looks like a human doing a whole course. Then it trains a controller with reinforcement learning (RL), which means the robot learns by trial and error to copy that plan while staying balanced and not falling.

After training separate expert controllers for different moves, it compresses them into 1 controller that uses only onboard depth sensing and a simple “go this fast in this direction” command. In real tests on a Unitree G1 humanoid, it can clear multiple obstacles in a row, adapt when obstacles get moved, and climb a wall up to 1.25m.
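The "pick the next clip that fits best right now" search described above can be sketched in a few lines. This is a toy illustration of motion matching only, not the paper's code; the feature vectors, clip names, and database entries below are all hypothetical.

```python
# Toy sketch of motion matching: at each step, pick the database entry
# whose features (here just desired speed and turn rate) best match the
# current query, then play that clip. All data is hypothetical example
# data, not the actual parkour controller.

def feature_distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Hypothetical database: (feature vector, clip name) pairs extracted
# from short human motion clips.
DATABASE = [
    ([0.0, 0.0], "idle"),
    ([1.0, 0.0], "step_over"),
    ([2.0, 0.5], "vault"),
]

def select_clip(query):
    """Return the name of the clip whose features best match the query."""
    best = min(DATABASE, key=lambda entry: feature_distance(entry[0], query))
    return best[1]
```

In the pipeline the tweet describes, an RL controller would then be trained to track the stitched sequence of selected clips while keeping balance, and the per-move experts distilled into a single depth-conditioned policy.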
11 replies · 15 reposts · 88 likes · 37K views
Darius Foodeei @dariusfdi
These are steel watch casings we are trying to manipulate, btw.
0 replies · 0 reposts · 1 like · 42 views
Darius Foodeei @dariusfdi
Testing out @nvidia gr00t full-body control teleop today. Much smoother out of the box, and it allows for joystick locomotion, reducing noisy foot movement.
1 reply · 0 reposts · 3 likes · 140 views
Dominique Paul @DominiqueCAPaul
@dariusfdi @ZeYanjie What are the general capabilities of the TWIST policy? Can you specify a command with text, or how does it work?
2 replies · 0 reposts · 0 likes · 136 views
Darius Foodeei @dariusfdi
We’ve started experimenting with TWIST2 from @ZeYanjie @ Stanford. Some more calibration needed and we’ll start recording data.
1 reply · 0 reposts · 8 likes · 341 views
Darius Foodeei @dariusfdi
Important note when using the G1 with the Dex 3-1: the default policies are not trained with these hands but with the rubber ones. Do not run the default policies like dancing or kung-fu, or they will damage the upper leg section like this.
[image]
0 replies · 0 reposts · 3 likes · 140 views
Mustafa @oprydai
@dariusfdi the reply i was waiting for. hell yeahhh
1 reply · 0 reposts · 1 like · 39 views
Mustafa @oprydai
what comes to your mind when you see this picture?
[image]
16 replies · 0 reposts · 26 likes · 3.5K views