EmPRISE Lab

80 posts

@EmpriseLab

EmPRISE Lab in CS at Cornell University. We are a full-stack robotics lab. Our vision is to EMpower People with Robots and Intelligent Shared Experiences.

Ithaca, NY · Joined October 2020
47 Following · 203 Followers
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
Physical caregiving is one of robotics' hardest frontiers: it is contact-rich, physically intensive, long-horizon, safety-critical, and full of deformable objects. Physical caregiving tasks such as bathing, dressing, transferring, toileting, and grooming require professional training and considerable practical experience. Yet, no existing dataset captures how expert caregivers perceive, interact, and adapt in real-time when performing these tasks, in a form that robots can learn from.

✨ We introduce OpenRoboCare at #IROS2025, the first expert-collected, multi-task, multimodal dataset for physical robot caregiving, featuring:
🩺 21 expert occupational therapists demonstrating caregiving procedures
🛠️ 15 caregiving tasks across 5 Activities of Daily Living (bathing, dressing, transferring, toileting, grooming)
🧍 2 hospital-grade manikins for safety and repeatability
🎥 5 synchronized sensing modalities: RGB-D, pose tracking, eye gaze, tactile sensing, and expert task & action annotations
📂 315 sessions · 19.8 hrs · 31,185 samples

Beyond raw data, OpenRoboCare distills core physical caregiving insights:
- 3 core principles followed by occupational therapists: pre-positioning, anticipation of body mechanics, and task efficiency.
- 4 key physical techniques: the bridge strategy, segmental rolling, wheelchair recline, and stabilization of key control points.
- Quantitative patterns in task duration, predictive gaze behavior that precedes physical contact, and the timing, magnitude, and spatial distribution of contact forces across body regions and task phases.

The dataset will be made openly accessible through the AWS Open Data Sponsorship Program soon.

🌐 Check out our project website for more visuals and insights: emprise.cs.cornell.edu/robo-care/

This work is led by @xiaoyul14, @RealZiangLiu, and Kelvin Lin. This is a collaboration with Harold Soh's group from NUS and @DimitropoulouDr from CUIMC. @EmpriseLab @Cornell_CS @IROS2025 @awscloud
3 replies · 22 reposts · 106 likes · 6.5K views
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
Robots often rely on whole-arm contact for caregiving tasks such as bed-bathing and transferring. Humans exhibit contact preferences in terms of where and how much force to exert for comfortable interactions. But comfort isn’t one-size-fits-all: preferences vary across people, and for the same person, across different body parts. How can a robot adapt to these preferences on the fly?

Excited to share our work “PrioriTouch: Adapting to User Contact Preferences for Whole-Arm Physical Human-Robot Interaction” led by @rishabhmadan96! #CoRL2025

💡 Core idea: Treat user contact preferences as a ranking over control objectives. PrioriTouch learns this priority ordering online and executes it with Hierarchical Operational Space Control (H-OSC), so higher-priority contacts (the ones closer to causing discomfort) are protected while others yield.

🧠 Learning to rank, safely: We introduce LinUCB-Rank, a contextual bandit that updates the priority ordering from sparse user feedback (“I feel uncomfortable around my abdomen”). To keep people safe, risky exploration occurs first in a digital twin (simulation-in-the-loop) and then is deployed on the real robot.

📊 What does this buy us? A sample-efficient way of reasoning about contact preferences. In sim and hardware: fewer force-threshold violations, fewer feedback signals to reach the right ordering, and sustained task efficiency.

@EmpriseLab @ToyotaResearch @Cornell_CS @corl_conf
🗣️ Spotlight: Sep 29 (Session 4)
📊 Poster: Sep 29 (Session 2)
🌐 Website: emprise.cs.cornell.edu/prioritouch
📝 Paper: arxiv.org/abs/2509.18447
(1/3) 🧵
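For readers unfamiliar with the contextual-bandit machinery LinUCB-Rank builds on, here is a minimal sketch of a generic (disjoint) LinUCB learner. This is not the paper's algorithm: how arms, contexts, and rewards map onto priority orderings over control objectives is an assumption here, not something taken from the work above.

```python
import numpy as np

class LinUCB:
    """Minimal generic LinUCB contextual bandit (illustrative sketch only).

    Each arm keeps a ridge-regression model (A, b); arm selection
    maximizes the upper confidence bound on the predicted reward.
    In PrioriTouch's setting an "arm" might be a candidate priority
    ordering and the reward might encode user comfort feedback --
    those mappings are hypothetical here.
    """

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha  # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]     # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]   # per-arm reward sums

    def select(self, context):
        """Return the arm with the highest upper confidence bound."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b  # ridge-regression estimate
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(theta @ context + bonus)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        """Incorporate one (sparse) feedback signal for the chosen arm."""
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```

The appeal for sparse-feedback settings like this one is that each user report updates only one arm's model, and the confidence bonus steers exploration toward orderings the robot is still uncertain about.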
1 reply · 10 reposts · 33 likes · 4.5K views
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
Introducing CLAMP: a device, dataset, and model that bring large-scale, in-the-wild multimodal haptics to real robots. Haptic/tactile data is more than just force or surface texture, and capturing this multimodal haptic information can be useful for robot manipulation.

Check out @pranavnnt’s work “CLAMP: Crowdsourcing a LArge-scale in-the-wild haptic dataset with an open-source device for Multimodal robot Perception” at #CoRL2025.

The CLAMP device is an open-source, low-cost (<$200), portable (0.59 kg) tool that can sense 5 haptic modalities along with vision and language. Users can take it home and log haptic data via a PiTFT screen and buttons.

As far as we know, the CLAMP dataset is the largest multimodal haptic dataset in the robotics literature, with a total of 12.3 million data points from 5357 objects in 41 homes, collected by 16 CLAMP devices.

The CLAMP model is a material recognition model that outperformed GPT-4o, CLIP, and PG-VLM in our experiments, and generalized to haptic data from three different robot embodiments (WidowX and Franka with different grippers). A finetuned CLAMP model enabled a 7-DoF Franka Panda to robustly perform three real-world manipulation tasks involving clutter, occlusion, and visual ambiguity.

@EmpriseLab @Cornell_CS @corl_conf
🗣️ Spotlight presentation at #CoRL2025 on Sep 30 (spotlight session 5)
📊 Poster session at #CoRL2025 on Sep 30 (poster session 3)
🌐 Website: emprise.cs.cornell.edu/clamp/
📄 Paper: arxiv.org/pdf/2505.21495
Check this thread for more details (1/6) 🧵
1 reply · 16 reposts · 41 likes · 5.7K views
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
During a meal, food may cool down, resulting in a change in its physical properties, even though visually it may look the same! How can robots reliably pick up food when it looks the same but feels different, such as steak 🥩 getting firmer as it cools?

🍴 Check out @ZhanxinWu0725's work SAVOR: Skill Affordance Learning from Visuo-Haptic Perception for Robot-Assisted Bite Acquisition, an oral at #CoRL2025.

SAVOR introduces a novel method to learn skill affordances, which capture how suitable a manipulation skill (e.g., skewering, scooping) is for a utensil-food interaction. Skill affordances arise from the combination of tool affordances (what a utensil can do) and food affordances (what the food allows). Using this method, SAVOR improves bite acquisition success by 13% over state-of-the-art methods.

@EmpriseLab @Cornell_CS @corl_conf
🗣️ Oral presentation at #CoRL2025: join us on Sep 28 (afternoon session)
🌐 Website: emprise.cs.cornell.edu/savor/
📄 Paper: arxiv.org/pdf/2506.02353
Check this thread for more details (1/6) 🧵
2 replies · 11 reposts · 30 likes · 3.6K views
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
NERC 2025 is happening @Cornell this year. Here is the website with more details: nerc2025.cis.cornell.edu

We have a fantastic set of keynote speakers from a variety of backgrounds: @Majumdar_Ani from @Princeton, Victoria Webster-Wood from @CarnegieMellon, @wendyju from @cornell_tech, and @HerlantLaura from RAI.

With poster presentations of extended abstracts, Rising Star spotlight talks, demos and booths, and support from our generous sponsors @FourierRobots @rai_inst @clearpathrobots @UnitreeRobotics @CornellCIS and @CornellCOE, this event is going to be exciting!

Do register (Early Deadline: September 10th, Late Deadline: October 3rd) and come enjoy this event at the beautiful @Cornell campus in Ithaca on October 11th 🎉🎉
NERC 2025 @NERC_Robotics

UPDATE: The regular registration deadline has been extended to Wednesday, September 10th! Join us at Cornell on Oct 11 for a day of robotics talks, posters, and connections across the Northeast. 🔗 events.ces.scl.cornell.edu/event/NERC

0 replies · 4 reposts · 24 likes · 2.7K views
EmPRISE Lab retweeted
Rajat Kumar Jenamani @rkjenamani
Really excited to share that FEAST won the Best Paper Award at #RSS2025! Huge thanks to everyone who’s shaped this work, from roboticists to care recipients, caregivers, and occupational therapists. ❤️
Rajat Kumar Jenamani @rkjenamani

Most assistive robots live in labs. We want to change that. FEAST enables care recipients to personalize mealtime assistance in-the-wild, with minimal researcher intervention across diverse in-home scenarios. 🏆 Outstanding Paper & Systems Paper Finalist @RoboticsSciSys 🧵1/8

17 replies · 8 reposts · 128 likes · 9.5K views
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
And we won the #RSS 2025 Best Paper Award! Congrats @rkjenamani and the entire @EmpriseLab team @CornellCIS 🎉🎉
Tapomayukh "Tapo" Bhattacharjee @TapoBhat

Congrats @rkjenamani and the entire team @EmpriseLab on this impressive accomplishment and on being nominated for the Best Paper Award and Best Systems Paper Award at #RSS 2025!

This project took almost 2.5 years to get to this stage, and I am incredibly proud of what we have achieved in terms of real-world deployment of a meal-assistance system with real users in their homes, with minimal researcher intervention, leveraging in-the-wild personalization. The key insight is that for in-the-wild deployment of user-centered systems, adaptation and personalization need to go hand-in-hand with transparency and safety.

More technical details are in the thread 🧵 below. @rkjenamani will be presenting this work on Monday (June 23, 2025) @RoboticsSciSys 2025 in the #HRI session. Do attend :-)

Website: emprise.cs.cornell.edu/feast
@CornellCIS @Cornell_CS @EmpriseLab

7 replies · 11 reposts · 161 likes · 20.3K views
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
So proud of what we achieved with the PhyRC Challenge at #ICRA2025 with the support of our generous sponsors @KinovaRobotics and @hellorobotinc! From simulation to real-world assistive care tasks (robot-assisted dressing and bed-bathing), this was an ambitious undertaking.

Huge congrats to all the teams who won a Stretch 3 robot (sponsored by @hellorobotinc) and a Gen 3 robot (sponsored by @KinovaRobotics) as prizes 👏. The teams were impressive, but these tasks are extremely challenging, and no team was able to completely finish them.

Want to give these tasks a try during the next iteration of #PhyRC? Stay tuned for the next version :-) @EmpriseLab @CornellCIS
Ruolin Ye @Nekovowo

🚀 Wrapping up an incredible journey at the PhyRC Challenge (emprise.cs.cornell.edu/rcareworld/cha…) at #ICRA2025! Over the past few months, teams from around the world have been tackling challenging tasks in assistive robotics: dressing and bed-bathing.

The journey began in RCareWorld (emprise.cs.cornell.edu/rcareworld/), our human-centric simulation platform, where teams first demonstrated their solutions virtually. The top performers then advanced to the real-world phase at ICRA 2025, executing the tasks with a manikin in a physical setting. The winners were awarded real robots as prizes, including a Kinova Gen 3 robot sponsored by @KinovaRobotics for the dressing winners and a Stretch 3 robot from @hellorobotinc for the bed-bathing winners. A huge thank you to the sponsors!

🏆 Congratulations to our winning teams:
RoboNotts, Bed-Bathing Track (Jialin Chen, Koyo Fujii, Kalu Stephen, Areeb Akhter, Zakaria Taghi, Liz Felton, Luis Figueredo, @praminda, Aly Magassouba from University of Nottingham)
UniChAMPions, Dressing Track (Maria Fernanda Paulino Gomes, César Bastos da Silva, Elton Cardoso do Nascimento, Ervin Bolivar H., Esther Luna Colombini, Paula Dornhofer Paro Costa, Leonardo Rocha Olivi, and Eric Rohmer from Universidade Estadual de Campinas, @FEECUnicamp, and HIAAC UNICAMP)

We’d also like to recognize the incredible efforts of the RALLA Team (Eunice Firewood, John Bateman, @jihongzhu, Jian Zhao, and @kefhuang from University of York) and the UWMTR Team (Cheng Tang, Hao Tian, Hasan Khan, Siha Pyo, Eddy Zhang, and Chao Tang from @UWaterloo) for their participation throughout the challenge.

Thank you to all the organizing team members who made the PhyRC Challenge possible! @tomssilver @rishabhmadan96 @rohanbbanerjee @ZhanxinWu0725 @TapoBhat

Check out the highlight video below to relive some of the incredible moments!

0 replies · 3 reposts · 20 likes · 2.2K views
EmPRISE Lab retweeted
Rajat Kumar Jenamani @rkjenamani
Excited to share our work on continual, flexible, active, and safe robot personalization w/ @tomssilver, @RealZiangLiu, Ben Dodson & @TapoBhat. Also: @tomssilver is starting a lab at Princeton!! I HIGHLY recommend joining — thoughtful, kind, and an absolute joy to work with!
Tom Silver @tomssilver

Happy to share a new preprint: "Coloring Between the Lines: Personalization in the Null Space of Planning Constraints" w/ @rkjenamani, @RealZiangLiu, Ben Dodson, and @TapoBhat. TLDR: We propose a method for continual, flexible, active, and safe robot personalization. Links 👇

1 reply · 4 reposts · 19 likes · 1.8K views
EmPRISE Lab retweeted
Tom Silver @tomssilver
Happy to share a new preprint: "Coloring Between the Lines: Personalization in the Null Space of Planning Constraints" w/ @rkjenamani, @RealZiangLiu, Ben Dodson, and @TapoBhat. TLDR: We propose a method for continual, flexible, active, and safe robot personalization. Links 👇
1 reply · 14 reposts · 37 likes · 5.2K views
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
Congrats @rohanbbanerjee and the team on being nominated for the Best Paper Award at #ICRA2025 for their work “To ask or not to ask: Human-in-the-loop contextual bandits with applications in robot-assisted feeding”. Check out his presentation in Room 302 on Tuesday at the Awards Finalists 2 session at 11:15am.

In robot-assisted feeding (and many other real-world tasks in unstructured settings), full robot autonomy can be fragile, as algorithms and systems may fail. Can a robot leverage a user, who is already present, for help instead of acting autonomously?

In our new work, we introduce LinUCB-QG: a human-in-the-loop contextual bandit algorithm that adapts its querying to both task uncertainty and a user-specific workload predicted by a querying model learned from a newly collected, diverse dataset.

The key takeaway is that users with mobility limitations may experience higher workload when queried compared to users without mobility limitations, so the robot queries them less, prioritizing lower querying workload even at some cost to task performance. However, user satisfaction remains high. Our system improves bite acquisition success while reducing user workload, validated via simulation and real-world user studies.

This work was also done with @rkjenamani, Sid Vasudev, Amal Nanavati, @DimitropoulouDr, and Sarah Dean.

Website: emprise.cs.cornell.edu/hilbiteacquisi…
@EmpriseLab @CornellCIS @Cornell_CS @ieee_ras_icra @ieeeras
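The ask-or-act trade-off described above can be caricatured as a simple cost-benefit test. This is a deliberately simplified, hypothetical sketch: the actual LinUCB-QG algorithm folds the predicted workload into a contextual bandit's decision rather than using a fixed threshold, and every name and weight below is an illustrative assumption.

```python
def should_query(task_uncertainty, predicted_workload,
                 benefit_weight=1.0, cost_weight=1.0):
    """Hypothetical ask-or-act rule (not the paper's LinUCB-QG).

    Ask the user for help only when the expected benefit of resolving
    task uncertainty outweighs the workload the query is predicted to
    impose on this particular user. Inputs are assumed normalized to
    [0, 1]; the weights are illustrative tuning knobs.
    """
    expected_benefit = benefit_weight * task_uncertainty
    expected_cost = cost_weight * predicted_workload
    return expected_benefit > expected_cost
```

Under this caricature, a user whose predicted querying workload is high (as the dataset suggests for users with mobility limitations) would be queried only under high task uncertainty, matching the pattern reported above.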
1 reply · 10 reposts · 49 likes · 4.3K views
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
How can caregiving robots adapt their control policies across people with diverse mobility limitations, balancing robot assistance with user agency in action across various tasks?

Check out @RealZiangLiu et al.'s work @EmpriseLab @CornellCIS: GRACE: Generalizing Robot-Assisted Caregiving with User Functionality Embeddings, where we represent a user’s mobility using learned functional embeddings to enable personalized, adaptive robot assistance. This was presented @HRI_Conference last week.

We first collect DataGRACE, an open-source dataset of functional range of motion, built from motion capture data and functional scores administered by occupational therapists across 11 users simulating 4 different medical conditions using occupational therapy practices with resistive bands.

We use DataGRACE to develop our algorithm GRACE, which learns functional embeddings to represent a user’s mobility and uses them to adapt downstream robot control policies for people with diverse mobility limitations.

Using four simulated tasks (handover, rehab, dressing, bed-bathing) and a real-world handover task, we evaluate GRACE’s ability to generalize caregiving policies across diverse users. GRACE significantly improved user agency in action and task success compared to relevant heuristic-based baselines.

This work is a collaborative effort with Prof. Katherine Dimitropoulou, an occupational therapist by training, from Columbia University Irving Medical Center (CUIMC).

Website: emprise.cs.cornell.edu/grace/
0 replies · 3 reposts · 15 likes · 1.3K views
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
How can a caregiving robot with limited torque help people with mobility limitations and older adults transfer from a hospital bed to their wheelchair?

Check out @Nekovowo's work @HRI_Conference: CART-MPC: Coordinating Assistive Devices for Robot-Assisted Transferring with Multi-Agent Model Predictive Control, where we took a first step by enabling a robot to transfer a manikin from a bed to a wheelchair while coordinating with assistive devices that exist in people's homes. @EmpriseLab @CornellCIS @Cornell_CS

Website: emprise.cs.cornell.edu/cart-mpc/
Check this thread for more details (1/7) 🧵
1 reply · 12 reposts · 38 likes · 4.1K views
EmPRISE Lab retweeted
Rajat Kumar Jenamani @rkjenamani
🚨 Call for Papers: Physical Caregiving Robots Workshop 🚨
📍 HRI 2025 | Melbourne, Australia 🌏🦘
📄 Submit a 2-page extended abstract on topics related to Physical Caregiving Robots
🔗 Key Links:
Website: caregivingrobots.github.io
Register interest: forms.gle/JAur9YyV7hiRvz…
0 replies · 8 reposts · 19 likes · 2.5K views
EmPRISE Lab retweeted
Tapomayukh "Tapo" Bhattacharjee
We finalized our Phase 2 (Real-world Phase) venue! Our competition proposal was accepted at ICRA 2025 in Atlanta 🎉 @ieee_ras_icra

Here’s the timeline:
🗓️ Phase 1 ends: Dec 23, 2024
🗓️ Phase 2 participants announced: Dec 30, 2024

So far, 56 teams are participating, tackling:
Track 1: Fixed-base Manipulation for Robot-assisted Dressing
Track 2: Mobile Manipulation for Robot-assisted Bed Bathing

Top teams from Phase 1 will advance to compete in Phase 2, with real robot prizes sponsored by @KinovaRobotics and @hellorobotinc.

🌟 Huge thanks to everyone contributing to this challenge: let's keep the momentum going! We’re thrilled to see participants from around the globe using our simulation platform. If you haven’t joined yet, there’s still time: registration is open, and it’s not too late to be part of this exciting journey! We have regular office hours to help you with your questions. 🚀

🔗 Learn more: emprise.cs.cornell.edu/rcareworld/cha…
#ICRA2025 #Robotics #PhyRC
Tapomayukh "Tapo" Bhattacharjee @TapoBhat

The @EmpriseLab is thrilled to announce the PhyRC Challenge, a competition to facilitate innovation in physical robotic caregiving. The competition has two tracks (Track 1: Fixed-base Manipulation for Robot-assisted Dressing and Track 2: Mobile Manipulation for Robot-assisted Bed Bathing), each evaluated through two phases, namely Phase 1: Simulation Phase and Phase 2: Real Robot Phase. We would like to thank @KinovaRobotics for generously sponsoring a Gen 3 7-DoF robot arm for the Track 1 winning team and @hellorobotinc for generously sponsoring a Stretch 3 robot for the Track 2 winning team. (1/4)

0 replies · 5 reposts · 38 likes · 4.2K views