Brent Yi
172 posts

Brent Yi retweeted

@brenthyi, who worked on FPO/FPO++, is finishing his PhD and going on the job market 😭✨
He is also the person behind viser, pyroki, egoallo, jaxls, tyro and more!
I can't express how amazing it is to have Brent on your team..! Any team would be incredibly lucky to have him!!
Angjoo Kanazawa@akanazawa
FPO++! We got RL on flow policies working on real robot tasks. Sim2real on humanoids trained from scratch + manipulation finetuning in sim with action chunking. Excited about this direction because we can now use RL with expressive policies to discover new behaviors!
Brent Yi retweeted

New in mjlab from the amazing @ki_ki_ki1: 8 new terrains and a viser-based terrain visualizer 😎
Brent Yi retweeted

We trained diffusion models on a billion LLM activations, and we want you to use them!
New preprint: Learning a Generative Meta-Model of LLM Activations
Joint work with @feng_jiahai, @trevordarrell, @AlecRad, @JacobSteinhardt.
More in thread 🧵
Brent Yi retweeted

One of my favorite robot clips (filmed Oct 2025).
You can train any crazy full-body motions like this with our open-source stack without changing any parameters.
whole_body_tracking: github.com/HybridRobotics…
mjlab: github.com/mujocolab/mjla…

and DexMimicGen:
x.com/SteveTod1998/s…
Zhenyu Jiang@SteveTod1998
How can we scale up humanoid data acquisition with minimal human effort? Introducing DexMimicGen, a large-scale automated data generation system that synthesizes trajectories from a few human demonstrations for humanoid robots with dexterous hands. (1/n)

New project! Flow Policy Gradients for Robot Control
tl;dr: a simple online RL recipe for training and fine-tuning flow policies for robots
co-led w/ @redstone_hong: hongsukchoi.github.io/fpo-control
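
A minimal sketch of what such a recipe can look like, based on the FPO idea of replacing PPO's likelihood ratio with an exponentiated difference of Monte Carlo flow-matching loss estimates. The velocity_net signature, shapes, and helper names below are illustrative assumptions, not the project's actual API:

import torch

def cfm_loss_per_sample(velocity_net, obs, actions, t, noise):
    # Per-sample Monte Carlo estimate of the conditional flow-matching
    # loss, averaged over K shared (t, noise) draws.
    # obs: (B, O); actions: (B, A); t: (B, K, 1); noise: (B, K, A).
    a = actions.unsqueeze(1)                 # (B, 1, A)
    x_t = (1.0 - t) * noise + t * a          # linear interpolation path
    target = a - noise                       # flow-matching regression target
    obs_k = obs.unsqueeze(1).expand(-1, t.shape[1], -1)
    pred = velocity_net(obs_k, x_t, t)       # assumed network signature
    return ((pred - target) ** 2).mean(dim=(1, 2))  # (B,)

def fpo_clipped_loss(velocity_net, old_loss, obs, actions, advantages,
                     t, noise, clip_eps=0.2):
    # PPO-style clipped surrogate with the likelihood ratio replaced by
    # exp(old CFM loss - new CFM loss). The same (t, noise) samples are
    # reused for the old and new estimates to keep the ratio low-variance.
    new_loss = cfm_loss_per_sample(velocity_net, obs, actions, t, noise)
    ratio = torch.exp(old_loss - new_loss)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()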
Brent Yi retweeted

mjlab v1.0.0 is officially out and considered stable.
Huge thanks to everyone who contributed code, reported issues, and gave feedback. This release wouldn’t have happened without you.
github.com/mujocolab/mjlab
Brent Yi retweeted

If you let VLMs experiment on their own, they can do surprising things!
From an image, we let a VLM code a 3D scene from scratch in Blender, and then render to verify/refine in a loop. Even when each step is imperfect, it gets results like this, with zero training.
Haiwen (Haven) Feng@HavenFeng
✨Thinking with Blender~ Meet VIGA: a multimodal agent that autonomously codes 3D/4D Blender scenes from any image, with no human, no training! @berkeley_ai #LLMs #Blender #Agent 🧵1/6
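
A minimal sketch of the code-render-verify loop described above, assuming a headless Blender install and a generic multimodal chat API; vlm.generate and the prompts are stand-ins, not the VIGA codebase:

import pathlib
import subprocess
import tempfile

def code_render_verify(vlm, target_image, max_iters=5):
    # The VLM writes a Blender Python script from the target image, we
    # render it headless, and the VLM critiques the render against the
    # target until it is satisfied (or we run out of iterations).
    script = vlm.generate(
        images=[target_image],
        prompt="Write a Blender Python script reconstructing this scene.",
    )
    workdir = pathlib.Path(tempfile.mkdtemp())
    for _ in range(max_iters):
        script_path = workdir / "scene.py"
        script_path.write_text(script)
        # Run the generated script in background Blender, then render
        # frame 1; '####' is Blender's frame-number placeholder.
        subprocess.run(
            ["blender", "--background", "--python", str(script_path),
             "-o", str(workdir / "render_####"), "-F", "PNG", "-f", "1"],
            check=True,
        )
        render_path = workdir / "render_0001.png"
        feedback = vlm.generate(
            images=[target_image, render_path],
            prompt="Compare the render to the target image. Reply DONE if "
                   "they match; otherwise return a revised full script.",
        )
        if feedback.strip() == "DONE":
            break
        script = feedback
    return script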
Brent Yi retweeted

✨Thinking with Blender~
Meet VIGA: a multimodal agent that autonomously codes 3D/4D Blender scenes from any image, with no human, no training!
@berkeley_ai #LLMs #Blender #Agent 🧵1/6
