Billy Yan
25 posts

Billy Yan
@BillyYYan
Math, CS undergrad @CILVRatNYU #RobotLearning #ComputerVision
Manhattan, NY · Joined July 2021
353 Following · 49 Followers
Billy Yan retweeted

Teleoperation was pioneered ~1950 to remotely handle radioactive material. When we use it today to collect robot trajectories for BC, it is still clumsy. Surely there is a better way! (Hint: human video, RL in sim.) youtube.com/watch?v=Iihxza…

Billy Yan retweeted

✨ Meet YOR: Open-Source Bimanual Mobile Manipulator from @nyuniversity
A fully open-source mobile manipulator with dual 6-DoF PiPER arms from AgileX Robotics, at a BOM cost of only ~$10k!
🌐 yourownrobot.ai
#Robotics #OpenSource #AgileXRobotics #PiPER #NYU
Billy Yan retweeted

Robot foundation models are limited by costly real data, while simulation data is plentiful but visually mismatched to reality. We present Point Bridge, a method that enables zero-shot sim-to-real transfer for robot learning with minimal visual alignment.
pointbridge3d.github.io
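The tweet does not spell out Point Bridge's pipeline, but a common ingredient of point-based sim-to-real methods is to replace RGB observations with depth-derived point clouds, which look nearly identical in sim and reality. A minimal sketch of that step, assuming a pinhole camera model (function name and intrinsics are illustrative, not from the paper):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Backproject a depth image (meters) into an (N, 3) camera-frame
    point cloud using pinhole intrinsics. Zero-depth pixels are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    u, v = u.reshape(-1), v.reshape(-1)
    valid = z > 0
    z, u, v = z[valid], u[valid], v[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Tiny example: a 2x2 depth image with one invalid (zero-depth) pixel.
depth = np.array([[1.0, 0.0],
                  [2.0, 1.0]])
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts.shape)  # (3, 3): one 3D point per valid pixel
```

A policy trained on such point clouds in simulation sees roughly the same geometry at test time, which is what makes minimal visual alignment plausible.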
Billy Yan retweeted

We just released AINA, a framework for learning robot policies from Aria 2 demos, and are now open-sourcing the code: github.com/facebookresear…. It includes:
✅ Aria 2 data processing into 3D observations, as shown
✅ Training of point-based policies
✅Calibration
Give it a try!
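A "point-based policy" here means a network that consumes an unordered point cloud rather than an image. A toy sketch of that idea, PointNet-style (random weights, illustrative dimensions; AINA's actual architecture may differ — see the repo):

```python
import numpy as np

rng = np.random.default_rng(0)

class PointPolicy:
    """Toy point-based policy: a shared per-point linear + ReLU layer,
    symmetric max-pooling over points (so point order cannot matter),
    then a linear action head."""
    def __init__(self, point_dim=3, hidden=32, action_dim=7):
        self.w1 = rng.standard_normal((point_dim, hidden)) * 0.1
        self.w2 = rng.standard_normal((hidden, action_dim)) * 0.1

    def act(self, points):
        feats = np.maximum(points @ self.w1, 0.0)  # per-point features
        pooled = feats.max(axis=0)                 # order-invariant pooling
        return pooled @ self.w2                    # action vector

policy = PointPolicy()
cloud = rng.standard_normal((512, 3))  # e.g. points lifted from Aria 2 frames
action = policy.act(cloud)
print(action.shape)  # (7,)
```

The max-pool is the key design choice: shuffling the input points leaves the action unchanged, which is exactly the invariance a point-cloud observation needs.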
Billy Yan retweeted

When @anyazorin and @irmakkguzey open-sourced the RUKA Hand (a low-cost robotic hand) earlier this year, people kept asking us how to get one.
Open hardware isn’t as easy to share as code.
So we’re releasing an off-the-shelf RUKA, in collaboration with @WowRobo and @zhazhali01.
Billy Yan retweeted

I gave an Early Career talk at CoRL 2025 in Seoul last week, covering my observations from the past decade of robot learning and where the field is headed over the next decade.
In summary, the future of robot learning needs:
(1) Data beyond teleop: We are never going to reach the scale of LLM / VLM data by teleoperating robots. We need to leverage consumer hardware already in people's hands (e.g., iPhones) and emerging devices (e.g., smart glasses).
(2) Observations beyond vision: The hard problem in robotics is dexterity. Dexterity is all about moving objects intricately through contact. The sense of touch is critical for this. Vision can help you acquire objects, but anything more complex will need touch.
(3) Reasoning beyond reactivity: The biggest wins in robot learning have been in reactive policies (both manipulation and locomotion). But the class of models that got us here are generally feed-forward nets. Long-horizon reasoning needs the ability to predict future outcomes and manipulate them. Currently unclear what the right scalable architectures are here, but we are working on it.
(thanks @zacinaction for the pic!)

Billy Yan retweeted

🚀 With minimal data and a straightforward training setup, our VisualTactile Local Policy (ViTaL) fuses egocentric vision + tactile feedback to achieve millimeter-level precision & zero-shot generalization! 🤖✨
Details ▶️ vitalprecise.github.io
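The fusion of egocentric vision and tactile feedback can be pictured as late fusion: encode each modality separately, concatenate, and map to a corrective action. A minimal sketch with random weights (embedding sizes and the single linear head are assumptions, not ViTaL's actual design):

```python
import numpy as np

rng = np.random.default_rng(1)

def fuse_and_act(vision_emb, tactile_emb, w):
    """Late fusion: concatenate the two modality embeddings and map
    them to a bounded low-dimensional corrective action."""
    z = np.concatenate([vision_emb, tactile_emb])
    return np.tanh(z @ w)  # tanh keeps the residual action in [-1, 1]

vision_emb = rng.standard_normal(128)     # e.g. egocentric image encoder output
tactile_emb = rng.standard_normal(16)     # e.g. per-taxel tactile reading
w = rng.standard_normal((144, 6)) * 0.05  # 144 = 128 + 16 fused dims
delta = fuse_and_act(vision_emb, tactile_emb, w)
print(delta.shape)  # (6,)
```

The intuition matches the tweet: vision gets the end-effector close, and the tactile channel supplies the fine signal needed for millimeter-level corrections.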
Billy Yan retweeted

Making touch sensors has never been easier!
Excited to present eFlesh, a 3D printable tactile sensor that aims to democratize robotic touch.
All you need to make your own eFlesh is a 3D printer, some magnets and a magnetometer.
See the thread 👇 and visit e-flesh.com
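The magnets-plus-magnetometer recipe implies a simple sensing principle: pressing the printed structure moves the embedded magnets, which shifts the magnetic field seen by the magnetometer. A minimal contact-detection sketch built on that principle (threshold and field values are made up for illustration; see e-flesh.com for the real calibration):

```python
import numpy as np

def detect_contact(baseline, reading, threshold=5.0):
    """Flag contact when the magnetic-field vector deviates from its
    rest baseline by more than `threshold` (in the magnetometer's
    units, e.g. microtesla). Deformation moves the embedded magnets,
    which perturbs the field at the magnetometer."""
    return float(np.linalg.norm(reading - baseline)) > threshold

baseline = np.array([30.0, -12.0, 45.0])  # field vector at rest
pressed = np.array([38.0, -9.0, 51.0])    # field vector under deformation
print(detect_contact(baseline, baseline))  # False: no deformation
print(detect_contact(baseline, pressed))   # True: field shifted past threshold
```

Going from this binary signal to calibrated force estimates is where the learning comes in, but the raw measurement chain really is just magnets and a magnetometer.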
Billy Yan retweeted

Everyday human data is robotics’ answer to internet-scale tokens.
But how can robots learn to feel—just from videos?📹
Introducing FeelTheForce (FTF): force-sensitive manipulation policies learned from natural human interactions🖐️🤖
👉 feel-the-force-ftf.github.io
1/n


