Jiaying Fang

13 posts


@jiaying_fang0

Robotics PhD @Cornell | MS EE @Stanford | RA @StanfordIPRL

Joined March 2025
106 Following · 74 Followers
Jiaying Fang retweeted
Bo Ai @BoAi0110
Excited about our new release: π0.7. One result I’m especially excited about is cross-embodiment transfer. 🤖

We train on laundry folding data from lightweight, easy-to-teleoperate bimanual robots, yet the model transfers the skill to a bimanual UR5e — heavier, bulkier, and harder to control due to large joint inertia — with no task data on that robot. It matches the performance of our most experienced teleoperators attempting this task on UR5e for the first time.

Early on, when we started exploring cross-embodiment transfer with @allenzren and @QuanVng, shirt folding on UR5e felt out of reach. Seeing the first signs that it might work was exciting — pushing it to this level was even more so.

To me, this is both scientifically and practically exciting: robots are no longer isolated data sources. Experience from one platform can directly benefit another. Instead of fragmented, per-robot data collection, we move toward a shared learning process where every embodiment both contributes to and benefits from the whole.

Proud of the team @physical_int ! 🎉🎉🎉
Physical Intelligence @physical_int

Our newest model, π0.7, has some interesting emergent capabilities: it can control a new robot to fold shirts even though we had no shirt folding data for that robot, figure out how to use an appliance with language-based coaching, and perform a wide range of dexterous tasks, all in one model!

Jiaying Fang retweeted
Kushal @kushalk_
🤖 Can a single robot policy manipulate diverse tools without ever seeing them before? Introducing SimToolReal 🔨: a generalist dexterous manipulation policy that transfers zero-shot sim→real to unseen tools and unseen tasks. All videos are 1x speed (60 Hz control). 🧵👇
Jiaying Fang retweeted
Tapomayukh "Tapo" Bhattacharjee
Physical caregiving is one of robotics' hardest frontiers: it is contact-rich, physically intensive, long-horizon, safety-critical, and full of deformable objects. Physical caregiving tasks such as bathing, dressing, transferring, toileting, and grooming require professional training and considerable practical experience. Yet, no existing dataset captures how expert caregivers perceive, interact, and adapt in real time when performing these tasks, in a form that robots can learn from.

✨ We introduce OpenRoboCare at #IROS2025, the first expert-collected, multi-task, multimodal dataset for physical robot caregiving, featuring:
🩺 21 expert occupational therapists demonstrating caregiving procedures
🛠️ 15 caregiving tasks across 5 Activities of Daily Living (bathing, dressing, transferring, toileting, grooming)
🧍 2 hospital-grade manikins for safety and repeatability
🎥 5 synchronized sensing modalities: RGB-D, pose tracking, eye gaze, tactile sensing, and expert task & action annotations
📂 315 sessions · 19.8 hrs · 31,185 samples

Beyond raw data, OpenRoboCare distills core physical caregiving insights:
- 3 core principles followed by occupational therapists: pre-positioning, anticipation of body mechanics, and task efficiency.
- 4 key physical techniques: the bridge strategy, segmental rolling, wheelchair recline, and stabilization of key control points.
- Quantitative patterns in task duration, predictive gaze behavior that precedes physical contact, and the timing, magnitude, and spatial distribution of contact forces across body regions and task phases.

The dataset will be made openly accessible through the AWS Open Data Sponsorship Program soon.

🌐 Check out our project website for more visuals and insights: emprise.cs.cornell.edu/robo-care/

This work is led by @xiaoyul14, @RealZiangLiu, and Kelvin Lin. This is a collaboration with Harold Soh's group from NUS and @DimitropoulouDr from CUIMC. @EmpriseLab @Cornell_CS @IROS2025 @awscloud
Jiaying Fang retweeted
Pranav Thakkar @pranavnnt
Excited to share our work on multimodal perception for real robots, CLAMP, now accepted at #CoRL2025 ! Do come say hi at our poster session on Sep 30 if you're attending!
Tapomayukh "Tapo" Bhattacharjee @TapoBhat

Introducing CLAMP: a device, dataset, and model that bring large-scale, in-the-wild multimodal haptics to real robots. Haptic/tactile data is more than just force or surface texture, and capturing this multimodal haptic information can be useful for robot manipulation.

Check out @pranavnnt’s work “CLAMP: Crowdsourcing a LArge-scale in-the-wild haptic dataset with an open-source device for Multimodal robot Perception” at #CoRL2025.

The CLAMP device is an open-source, low-cost (<$200), portable (0.59 kg) tool that can sense 5 haptic modalities along with vision and language. Users can take it home and log haptic data via a PiTFT screen and buttons.

As far as we know, the CLAMP dataset is the largest multimodal haptic dataset in the robotics literature, with a total of 12.3 million data points from 5357 objects in 41 homes, collected by 16 CLAMP devices.

The CLAMP model is a material recognition model that outperformed GPT-4o, CLIP, and PG-VLM in our experiments, and generalized to haptic data from three different robot embodiments (WidowX and Franka with different grippers). A finetuned CLAMP model enabled a 7-DoF Franka Panda to robustly perform three real-world manipulation tasks involving clutter, occlusion, and visual ambiguity.

@EmpriseLab @Cornell_CS @corl_conf
🗣️ Spotlight presentation at #CoRL2025 on Sep 30 (spotlight session 5)
📊 Poster session at #CoRL2025 on Sep 30 (poster session 3)
🌐 Website: emprise.cs.cornell.edu/clamp/
📄 Paper: arxiv.org/pdf/2505.21495

Check this thread for more details (1/6) 🧵
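To make the "vision + language + five haptic modalities" structure concrete, here is a minimal Python sketch of how one such crowdsourced sample might be organized. The class name, field names, and types are illustrative assumptions for this sketch, not the released CLAMP dataset schema.

```python
from dataclasses import dataclass, field

@dataclass
class HapticSample:
    """Hypothetical layout of one vision + language + multimodal haptic record."""
    image_path: str                      # RGB frame of the grasped object
    description: str                     # free-form language label entered by the user
    haptics: dict[str, list[float]] = field(default_factory=dict)
    # One time series per haptic modality the device records
    # (five modalities in total, per the announcement).
    home_id: int = 0                     # which of the 41 homes the sample came from
    object_id: int = 0                   # which of the 5357 objects was grasped

# Example usage with made-up values:
sample = HapticSample(
    image_path="frames/mug_0001.png",
    description="ceramic mug, smooth and rigid",
    haptics={"force": [1.2, 1.3, 1.25], "vibration": [0.01, 0.02, 0.015]},
    home_id=7,
    object_id=412,
)
```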

Jiaying Fang retweeted
Tapomayukh "Tapo" Bhattacharjee
During a meal, food may cool down, resulting in a change in its physical properties, even though visually it may look the same! How can robots reliably pick up food when it looks the same but feels different — such as steak 🥩 getting firmer as it cools?

🍴 Check out @ZhanxinWu0725's work SAVOR: Skill Affordance Learning from Visuo-Haptic Perception for Robot-Assisted Bite Acquisition — an oral at #CoRL2025.

SAVOR introduces a novel method to learn skill affordances, which capture how suitable a manipulation skill (e.g., skewering, scooping) is for a utensil–food interaction. Skill affordances arise from the combination of tool affordances (what a utensil can do) and food affordances (what the food allows). Using this method, SAVOR improves bite acquisition success by 13% over state-of-the-art methods.

@EmpriseLab @Cornell_CS @corl_conf
🗣️ Oral presentation at #CoRL2025 — join us on Sep 28 (afternoon session)
🌐 Website: emprise.cs.cornell.edu/savor/
📄 Paper: arxiv.org/pdf/2506.02353

Check this thread for more details (1/6) 🧵
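The idea that a skill affordance combines a tool affordance with a food affordance can be sketched in a few lines of Python. The lookup table, toy firmness-based food affordance, and multiplicative combination below are illustrative assumptions meant only to show the shape of the idea, not SAVOR's actual learned model.

```python
# Toy tool affordances: how well each utensil supports each skill.
TOOL_AFFORDANCE = {
    ("fork", "skewer"): 0.9,
    ("fork", "scoop"): 0.2,
    ("spoon", "skewer"): 0.1,
    ("spoon", "scoop"): 0.9,
}

def food_affordance(skill: str, firmness: float) -> float:
    """Toy food affordance: firm food favors skewering, soft food favors scooping."""
    return firmness if skill == "skewer" else 1.0 - firmness

def skill_affordance(skill: str, utensil: str, firmness: float) -> float:
    """Combine what the utensil can do with what the food's current state allows."""
    return TOOL_AFFORDANCE[(utensil, skill)] * food_affordance(skill, firmness)

# Example: steak that has cooled and firmed up (firmness estimated from
# visuo-haptic perception) scores higher for skewering than scooping.
print(skill_affordance("skewer", "fork", firmness=0.8))   # 0.72
print(skill_affordance("scoop", "spoon", firmness=0.8))   # 0.18
```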
Jiaying Fang retweeted
Tapomayukh "Tapo" Bhattacharjee
NERC 2025 is happening @Cornell this year. Here is the website with more details: nerc2025.cis.cornell.edu

We have a fantastic set of keynote speakers from a variety of backgrounds: @Majumdar_Ani from @Princeton, Victoria Webster-Wood from @CarnegieMellon, @wendyju from @cornell_tech, and @HerlantLaura from RAI.

With poster presentations of extended abstracts, Rising Star spotlight talks, demos and booths, and support from our generous sponsors @FourierRobots @rai_inst @clearpathrobots @UnitreeRobotics @CornellCIS and @CornellCOE, this event is going to be exciting!

Do register (Early Deadline: September 10th, Late Deadline: October 3rd) and come enjoy this event at the beautiful @Cornell Campus in Ithaca on October 11th 🎉🎉
NERC 2026 @NERC_Robotics

UPDATE: The regular registration deadline has been extended to Wednesday, September 10th! Join us at Cornell on Oct 11 for a day of robotics talks, posters, and connections across the Northeast. 🔗 events.ces.scl.cornell.edu/event/NERC

Jiaying Fang @jiaying_fang0
The internet is full of human videos, but how can we use them to teach robots? 🤖💡 Check out our new work, Masquerade 🎭. We "replace" human arms with robot arms!
Marion Lepert @marionlepert

Introducing Masquerade 🎭: We edit in-the-wild videos to look like robot demos, and find that co-training policies with this data achieves much stronger performance in new environments. ❗Note: No real robots in these videos❗It’s all 💪🏼 ➡️ 🦾 🧵1/6
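Co-training on a mixture of real robot demonstrations and edited human videos can be sketched roughly as below. The 50/50 mixing ratio, list-of-episodes representation, and batch construction are assumptions for illustration, not the actual Masquerade training recipe described in the thread.

```python
import random

ROBOT_FRACTION = 0.5  # assumed fraction of each batch drawn from real robot demos

def sample_cotraining_batch(robot_demos, edited_human_videos, batch_size=64):
    """Return a shuffled batch mixing real robot demos with edited human-video 'demos'."""
    n_robot = int(batch_size * ROBOT_FRACTION)
    batch = random.sample(robot_demos, n_robot)
    batch += random.sample(edited_human_videos, batch_size - n_robot)
    random.shuffle(batch)
    return batch

# Example usage with placeholder episode IDs:
robot_demos = [f"robot_ep_{i}" for i in range(200)]
edited_videos = [f"edited_video_ep_{i}" for i in range(2000)]
batch = sample_cotraining_batch(robot_demos, edited_videos)
```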

Dima Damen @dimadamen
When googling the authors to tag them I realized this is an all-female author list, which made me even more excited... Big shout out to female rising stars @marionlepert @jiaying_fang0 and of course their advisor @leto__jean
Jiaying Fang retweeted
Marion Lepert @marionlepert
Introducing Masquerade 🎭: We edit in-the-wild videos to look like robot demos, and find that co-training policies with this data achieves much stronger performance in new environments. ❗Note: No real robots in these videos❗It’s all 💪🏼 ➡️ 🦾 🧵1/6
Jiaying Fang retweeted
Marion Lepert @marionlepert
Introducing Phantom 👻: a method to train robot policies without collecting any robot data — using only human video demonstrations. Phantom turns human videos into "robot" demonstrations, making it significantly easier to scale up and diversify robotics data. 🧵1/9