Peiqi Liu
@LIUPEIQI2
107 posts

@CMU_Robotics Prev: @CILVRatNYU & @hellorobotinc

New York · Joined November 2020
256 Following · 184 Followers
Pinned Tweet
Peiqi Liu @LIUPEIQI2
Thrilled to announce that our paper DynaMem has been selected as the Best Paper at the Lifelong Learning for Home Robots Workshop at #CoRL2024! Thank you for your support. I am currently an intern at @hellorobotinc and will keep optimizing DynaMem there.
Mahi Shafiullah 🏠🤖@notmahi

If you want robots that can just live with you & help 24/7, they need to build & update their memory on the fly. Current semantic memory representations like VoxelMap from OK-Robot can't change with the world. That's why we built DynaMem: dynamic memory for a changing, open world!
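The quoted point is that a static semantic map cannot absorb new observations or forget stale ones. As a rough illustration of what "memory that updates on the fly" means, here is a minimal sketch of a dynamic voxel memory; the class, its fields, and the staleness rule are assumptions for illustration, not DynaMem's actual implementation.

```python
import time
from dataclasses import dataclass, field

import numpy as np


@dataclass
class DynamicVoxelMemory:
    """Toy semantic voxel map that is updated and pruned on the fly."""

    voxel_size: float = 0.1
    # voxel index -> (semantic feature, timestamp of the last observation)
    voxels: dict = field(default_factory=dict)

    def _key(self, xyz):
        return tuple(np.floor(np.asarray(xyz) / self.voxel_size).astype(int))

    def update(self, xyz, feature):
        # Newer observations overwrite older ones, so moved objects get re-localized.
        self.voxels[self._key(xyz)] = (np.asarray(feature), time.time())

    def forget_stale(self, max_age_s=300.0):
        # Drop voxels that have not been re-observed recently (objects may have moved).
        now = time.time()
        self.voxels = {k: v for k, v in self.voxels.items() if now - v[1] <= max_age_s}

    def query(self, text_feature):
        # Return the voxel whose stored feature best matches a (unit-norm) query feature.
        if not self.voxels:
            return None
        keys = list(self.voxels)
        feats = np.stack([self.voxels[k][0] for k in keys])
        return keys[int(np.argmax(feats @ np.asarray(text_feature)))]
```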

Peiqi Liu @LIUPEIQI2
Actually, even SigLIP v1 is strong enough to match object features, not just against other object features but also against text features. This idea gives you a language-conditioned robot navigation system: dynamem.github.io
Yu Xiang@YuXiang_IRVL

We noticed that DINOv3 was surprisingly strong at matching object features. This inspired L2G (Local Matches to Global Masks). With a few reference images, a robot can search a room for the target object. 🔗 Project: irvlutd.github.io/L2G/ 💻 Code: github.com/IRVLUTD/L2G
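For context on the feature-matching point: open-vocabulary pipelines of this kind typically score stored image crops against a text query in a shared embedding space. A minimal sketch with the public Hugging Face SigLIP checkpoint is below; the checkpoint name, placeholder image path, and prompt set are assumptions, and this is not the DynaMem or L2G code.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

ckpt = "google/siglip-base-patch16-224"          # assumed public SigLIP checkpoint
model = AutoModel.from_pretrained(ckpt)
processor = AutoProcessor.from_pretrained(ckpt)

crop = Image.open("object_crop.jpg")             # hypothetical object crop stored in memory
queries = ["a red mug", "a stapler", "a water bottle"]

inputs = processor(text=queries, images=crop, padding="max_length", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# SigLIP is trained with a sigmoid loss, so per-pair scores are read off with a
# sigmoid rather than a softmax over prompts.
scores = torch.sigmoid(out.logits_per_image)[0]
print(dict(zip(queries, scores.tolist())))
```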

Peiqi Liu retweeted
Mahi Shafiullah 🏠🤖 @notmahi
Why buy a robot when you can build your own? Meet YOR, our new open-source bimanual mobile manipulator robot – built for researchers and hackers alike for only ~$10k. 🧵👇
Peiqi Liu retweeted
Omar Rayyan @omarrayyann
It’s hard to find true zero-shot end-to-end policies – ones that work without any fine-tuning in fully novel, simulated environments, even for single tasks! We test two policy families, the π family from @physical_int and the recent Contact-Anchored Policies (CAP) from NYU & UCB. On all our tasks, we are making steady progress – but we are nowhere close to saturation yet.
Peiqi Liu @LIUPEIQI2
DynaMem (dynamem.github.io) ran in the background of this video. It combines picking, placing, and navigation skills to solve long-horizon mobile manipulation. Integrating CAP can significantly improve DynaMem's manipulation robustness and speed!
Mahi Shafiullah 🏠🤖@notmahi

CAP 🧢 works well on our academic data, compute, and parameter budget – training 3 general policies for pick, open, and close on only 23 hrs of data. Fun fact: one of them has already won a best demo award in CVPR'25 after doing picks all day. It's only gotten better since then (4/n)
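As a rough illustration of the "combines picking, placing, and navigation" point, a long-horizon request can be decomposed into a fixed sequence of skill calls; every function below is a hypothetical stub meant only to show the control flow, not the DynaMem or CAP API.

```python
def navigate_to(query: str) -> bool:
    print(f"[nav] driving to '{query}'")      # stand-in for a navigation skill
    return True


def pick(query: str) -> bool:
    print(f"[pick] grasping '{query}'")       # stand-in for a learned pick policy
    return True


def place(query: str) -> bool:
    print(f"[place] placing on '{query}'")    # stand-in for a learned place policy
    return True


def fetch_and_place(obj: str, destination: str) -> bool:
    """Chain navigation and manipulation skills for one 'move A to B' request."""
    steps = [
        lambda: navigate_to(obj),
        lambda: pick(obj),
        lambda: navigate_to(destination),
        lambda: place(destination),
    ]
    # Stop at the first failed skill so a higher-level planner can replan or retry.
    return all(step() for step in steps)


fetch_and_place("coffee mug", "kitchen counter")
```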

Peiqi Liu retweeted
Irmak Guzey @irmakkguzey
We just released AINA, a framework for learning robot policies from Aria 2 demos, and are now open-sourcing the code: github.com/facebookresear…. It includes: ✅ Aria 2 data processing into 3D observations, as shown ✅ Training of point-based policies ✅ Calibration. Give it a try!
Peiqi Liu retweeted
Irmak Guzey @irmakkguzey
Dexterous manipulation by directly observing humans - a dream in AI for decades - is hard due to visual and embodiment gaps. With simple yet powerful hardware - Aria 2 glasses 👓 - and our new work AINA 🪞, we are now one significant step closer to achieving this dream.
Peiqi Liu retweeted
Raunaq Bhirangi @Raunaqmb
When @anyazorin and @irmakkguzey open-sourced the RUKA Hand (a low-cost robotic hand) earlier this year, people kept asking us how to get one. Open hardware isn’t as easy to share as code. So we’re releasing an off-the-shelf RUKA, in collaboration with @WowRobo and @zhazhali01.
Peiqi Liu retweeted
Jiahui (Jim) Yang @Jiahui_Yang6709
Just arrived in Hangzhou for #IROS2025! I'll present Neural MP at TuAT1.1 (Award Finalists Session 1); super excited that we were nominated as Best Paper and Best Student Paper finalists!
🗓️ Oct. 21, 10:30-10:35 AM 📍 Room 401 🔗 Neural MP: mihdalal.github.io/neuralmotionpl…
I'll also give a spotlight talk on DRP at the LeaPRiDE workshop!
🗓️ Oct. 20, 10:10-10:20 AM 📍 Room 102A 🔗 DRP: deep-reactive-policy.com 🔗 LeaPRiDE Workshop: leapride.robot-learning.net
Looking forward to meeting old and new friends!
Peiqi Liu retweeted
Lerrel Pinto @LerrelPinto
I gave an Early Career talk at CoRL 2025 in Seoul last week, where I shared my observations from the past decade of robot learning and where the field is headed over the next decade. In summary, the future of robot learning needs:
(1) Data beyond teleop: we will never reach the scale of LLM/VLM data by teleoperating robots. We need to leverage consumer hardware already in people's hands (e.g. iPhones) and emerging devices (e.g. smart glasses).
(2) Observations beyond vision: the hard problem in robotics is dexterity, and dexterity is all about moving objects intricately through contact. The sense of touch is critical for this: vision can help you acquire objects, but anything more complex will need touch.
(3) Reasoning beyond reactivity: the biggest wins in robot learning have been reactive policies (both manipulation and locomotion), but the class of models that got us here are mostly feed-forward nets. Long-horizon reasoning needs the ability to predict future outcomes and manipulate them. It is currently unclear what the right scalable architectures are, but we are working on it.
(Thanks @zacinaction for the pic!)
Peiqi Liu retweeted
Saumya Saxena @saxena_saumya
🎉We will be presenting GraphEQA at #CoRL2025 in Seoul! Curious about how to utilize 3D scene graphs for context-aware navigation in unexplored 3D environments? 👋Come visit us @ Spotlight 2 on Sept 28 (Poster 66)! Website: bit.ly/4nOiRfG Code: bit.ly/4mDqswT
Saumya Saxena@saxena_saumya

Can 3D scene graphs act as effective online memory for solving EQA tasks in⚡️real-time? Presenting GraphEQA🤖, a framework for grounding Vision Language Models using multimodal memory for real-time embodied question answering.
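To make the "3D scene graphs as online memory" idea concrete, here is a minimal sketch of such a memory and how it could be serialized into a VLM prompt; the node and edge fields are illustrative assumptions, not GraphEQA's actual schema.

```python
from dataclasses import dataclass, field


@dataclass
class SceneNode:
    """One object or region observed in the environment."""
    node_id: int
    label: str                                   # e.g. "mug", "kitchen"
    position: tuple                              # (x, y, z) in the map frame


@dataclass
class SceneGraph:
    """Tiny online scene-graph memory: nodes plus labeled relations."""
    nodes: dict = field(default_factory=dict)
    edges: set = field(default_factory=set)      # (parent_id, child_id, relation)

    def add_observation(self, node, parent_id=None):
        # New detections extend the graph instead of rebuilding it from scratch.
        self.nodes[node.node_id] = node
        if parent_id is not None:
            self.edges.add((parent_id, node.node_id, "contains"))

    def to_prompt(self):
        # Serialize the graph so a VLM can be grounded on it when answering questions.
        objs = [f"{n.label} at {n.position}" for n in self.nodes.values()]
        rels = [f"{self.nodes[a].label} {r} {self.nodes[b].label}" for a, b, r in self.edges]
        return "\n".join(objs + rels)


graph = SceneGraph()
graph.add_observation(SceneNode(0, "kitchen", (0.0, 0.0, 0.0)))
graph.add_observation(SceneNode(1, "mug", (1.2, 0.4, 0.9)), parent_id=0)
print(graph.to_prompt())
```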

Peiqi Liu retweeted
Skild AI @SkildAI
We built a robot brain that nothing can stop. Shattered limbs? Jammed motors? If the bot can move, the Brain will move it, even if it's an entirely new robot body. Meet the omni-bodied Skild Brain:
Peiqi Liu retweeted
Ilir Aliu @IlirAliu_
From dexterous hands to imitation from internet videos, his group keeps dropping breakthroughs that set the tone for the field. @LerrelPinto’s lab at NYU has quietly reshaped robotic learning. A breakdown 🧵 [📍SAVE MEGA THREAD FOR LATER📍]
Peiqi Liu retweeted
Chuanyang Jin @chuanyang_jin
Missed our #RSS workshop on Continual Robot Learning from Humans? ✨ Or want to rewatch 📷 your favorite talks? We released all recordings on YouTube 👇 youtube.com/playlist?list=…
Chuanyang Jin@chuanyang_jin

Excited to announce the 1st Workshop on Continual Robot Learning from Humans @ #RSS2025 in LA! We're bringing together interdisciplinary researchers to explore how robots can continuously learn through human interactions! Full details: …-robot-learning-from-humans.github.io @RoboticsSciSys

Peiqi Liu retweeted
Hello Robot @hellorobotinc
The Hello Robot team is ready to go at the AI for Good Global Summit! Excited to connect with innovators from around the world and share how Hello Robot is building useful, inclusive robots that make a real difference. Can’t wait to get underway! #AIforGood #HelloRobot @AIforGood
Peiqi Liu retweeted
Lerrel Pinto @LerrelPinto
It is difficult to get robots to be both precise and general. We just released a new technique for precise manipulation that achieves millimeter-level precision while being robust to large visual variations. The key is a careful combination of visuo-tactile learning and RL. 🧵👇
Peiqi Liu retweeted
Zifan Zhao @Zifan_Zhao_2718
🚀 With minimal data and a straightforward training setup, our VisualTactile Local Policy (ViTaL) fuses egocentric vision + tactile feedback to achieve millimeter-level precision & zero-shot generalization! 🤖✨ Details ▶️ vitalprecise.github.io
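A rough sketch of what "fusing egocentric vision with tactile feedback" can look like at the architecture level; the embedding sizes, the concatenation-based fusion, and the action dimension are assumptions for illustration, not the ViTaL implementation.

```python
import torch
import torch.nn as nn


class VisuoTactilePolicy(nn.Module):
    """Toy policy head that fuses a vision embedding with a tactile reading."""

    def __init__(self, vision_dim: int = 512, tactile_dim: int = 15, action_dim: int = 7):
        super().__init__()
        self.tactile_encoder = nn.Sequential(nn.Linear(tactile_dim, 64), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(vision_dim + 64, 256), nn.ReLU(),
            nn.Linear(256, action_dim),      # e.g. end-effector delta pose + gripper
        )

    def forward(self, vision_feat: torch.Tensor, tactile: torch.Tensor) -> torch.Tensor:
        # Concatenate the two modalities and regress an action.
        fused = torch.cat([vision_feat, self.tactile_encoder(tactile)], dim=-1)
        return self.head(fused)


# Usage with random stand-in features (a pretrained visual encoder would supply vision_feat).
policy = VisuoTactilePolicy()
action = policy(torch.randn(1, 512), torch.randn(1, 15))
print(action.shape)  # torch.Size([1, 7])
```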
Peiqi Liu retweeted
Chuanyang Jin @chuanyang_jin
Come join us tomorrow! 🗓️ June 21 | 8:50 AM – 12:30 PM PT 📍 USC (OHE 132) & Zoom (wse.zoom.us/j/95095685281)
Tianmin Shu@tianminshu

The #RSS2025 Workshop on Continual Robot Learning from Humans is happening on June 21. We have an amazing lineup of speakers discussing how we can enable robots to acquire new skills and knowledge from humans continuously. Join us in person and on Zoom (info on our website)!

Siddharth Karamcheti @siddkaramcheti
Thrilled to share that I'll be starting as an Assistant Professor at Georgia Tech (@ICatGT / @GTrobotics / @mlatgt) in Fall 2026. My lab will tackle problems in robot learning, multimodal ML, and interaction. I'm recruiting PhD students this next cycle – please apply/reach out!
Peiqi Liu retweeted
Venkatesh @venkyp2000
Making touch sensors has never been easier! Excited to present eFlesh, a 3D-printable tactile sensor that aims to democratize robotic touch. All you need to make your own eFlesh is a 3D printer, some magnets, and a magnetometer. See the thread 👇 and visit e-flesh.com
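The sensing principle behind magnet-plus-magnetometer skins like this is that deforming the printed lattice displaces the embedded magnets, which changes the flux the magnetometer measures; a small regression model can then map flux readings to contact signals. The sketch below shows that calibration step on synthetic data; it is not the eFlesh code, and the data shapes and model choice are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-in: 3-axis magnetometer readings paired with ground-truth normal force.
rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.3, 1.5])                # pretend flux-to-force relationship
flux = rng.normal(size=(500, 3))                   # (Bx, By, Bz) samples
force = flux @ true_w + rng.normal(scale=0.05, size=500)

# Fit the calibration model on the first 400 samples, evaluate on the rest.
model = Ridge(alpha=1e-3).fit(flux[:400], force[:400])
pred = model.predict(flux[400:])
print("mean abs error (N):", np.abs(pred - force[400:]).mean())
```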