Jishnu Jaykumar Padalunkal
@jishnu_jaykumar

304 posts

PhD Candidate | Research focus on improving #RobotPerception, #RobotLearning | Previously @rai_inst @iitkgp @NVIDIA @IISc @iiitvadodarasm

Texas, USA · Joined March 2012
2.2K Following · 184 Followers

Pinned Tweet
Jishnu Jaykumar Padalunkal @jishnu_jaykumar
🎉 Unveiling #HRT1 — a #oneshot human-to-robot trajectory transfer system that enables autonomous #mobilemanipulation in novel environments with #zerotraining. 🔗irvlutd.github.io/HRT1/ Excited to see where this leads #Robotics. @IRVLUTD @UTDCompSci @UT_Dallas @XPENGRobotics
Intelligent Robotics and Vision Lab @ UTDallas @IRVLUTD

Many #robot_learning works use human videos but need lots of data/retraining. We present #HRT1 — a robot learns from just one human video and performs mobile manipulation tasks in new environments with relocated objects — via trajectory transfer.🔗 irvlutd.github.io/HRT1/ (1/11)

Jishnu Jaykumar Padalunkal @jishnu_jaykumar
Stepping out of the lab and into the real world. Today at #ECSAIDays2026, we’ll be presenting a live demo of #HRT1 — a system that transfers human demonstrations to robot actions for mobile manipulation. 🔗 irvlutd.github.io/HRT1/

This short timelapse is a behind-the-scenes glimpse of what it actually takes. From moving the robot across campus, navigating buildings, setting up hardware, testing repeatedly, to making everything work outside controlled environments — a lot goes into what eventually looks like a “simple demo.” Especially in smaller labs, it’s all hands-on, end-to-end effort.

Also, when it’s just two of us running a two-person setup and trying to film it, the camera doesn’t always cooperate 😄 — but the live demo will be much more fun.

Grateful to be building this with @saihaneesh_allu at the @IRVLUTD. If you’re around, drop by and see it live.
📍 ECSW, UT Dallas
⏳ Apr 30, 2026

Thanks to @Sriraam_UTD, @VibhavGogate, @YuXiang_IRVL and Tyler Summers for the opportunity and support. @UT_Dallas @UTDCompSci @UTDResearch
Jishnu Jaykumar Padalunkal retweeted
RAI Institute @rai_inst
Watch AthenaZero juggle barehanded using on-board sensory feedback only. No motion capture. No funnels. No help adding the third ball. The robot learns to adapt to the uncertainties from contact and the appropriate hand-eye coordination. Learn more: rai-inst.com/resources/blog…
Jishnu Jaykumar Padalunkal @jishnu_jaykumar
🛠️ While working on #iTeach (lnkd.in/gj9s2eJR), where unseen object instance segmentation (UOIS) was a key task, I kept hitting the same wall: every UOIS dataset had its own loader and quirks. Getting data ready for training meant writing custom glue for each — time that should have gone into the models themselves.

✨ So I put together uois_toolkit — a small PyTorch library that wraps 5 popular UOIS datasets (Tabletop, OCID, OSD, Robot Pushing, iTeach-HumanPlay) behind one API.

🚀 Features:
 📦 Load any dataset in 3 lines
 📊 Compute F1, IoU, Precision, and Recall with a single call
 ⚡ Plug directly into PyTorch Lightning
 🤖 Works out of the box with robotics pipelines

💡 The idea is simple — take the friction out of data pipelines so more time can go into model building.

🎉 It has picked up around 2K downloads on PyPI since release, which was a nice surprise. 🙏 Sharing it a bit more openly now in case others find it helpful.

❤️ Huge thanks to the original dataset authors, whose open codebases made this possible. And to Avaya Aggarwal and Animesh Maheshwari for testing and feedback along the way.

🔗 GitHub: lnkd.in/grPU7rx5
📥 PyPI: lnkd.in/gugFzGWr

@IRVLUTD @UT_Dallas #Robotics #RobotPerception #ComputerVision #PyTorch #Lightning #OpenSource
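For context on the metrics the toolkit reports: F1, IoU, precision, and recall for segmentation masks all derive from the same pixel-overlap counts. The sketch below shows the standard formulation; `mask_metrics` is an illustrative name, not the actual uois_toolkit API.

```python
# Standard per-mask segmentation metrics from pixel-overlap counts.
# Illustrative only -- not the toolkit's actual implementation.
def mask_metrics(pred: set, gt: set) -> dict:
    """pred/gt are sets of pixel indices belonging to an object mask."""
    tp = len(pred & gt)   # pixels correctly labeled foreground
    fp = len(pred - gt)   # predicted foreground, actually background
    fn = len(gt - pred)   # missed foreground pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / len(pred | gt) if pred | gt else 0.0  # intersection over union
    return {"precision": precision, "recall": recall, "f1": f1, "iou": iou}
```

For example, masks `{1, 2, 3, 4}` and `{3, 4, 5, 6}` overlap in 2 of 6 total pixels, giving precision = recall = F1 = 0.5 and IoU = 1/3.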
clem 🤗 @ClementDelangue
Looking for some beta testers for our new robotics dataset hosting features. Who's currently hosting large robotics datasets and would be down to try? 🤖🤖🤖
Jishnu Jaykumar Padalunkal @jishnu_jaykumar
🤖 Robots don't fail in the lab. They fail in the wild — clutter, occlusion, constantly changing environments. The real question: Can robots learn directly from these failures during deployment? How about teaching robots the way we'd teach a child — by showing them where they went wrong? 🧵👇
Jishnu Jaykumar Padalunkal @jishnu_jaykumar
6/7: Real robot results on SceneReplica:
🟠 +7% pick-and-place success
🟠 +3% grasping improvement
🟠 Direct gains in real-world performance
🚀 This isn't about a new model — it's about a new way to collect data and adapt perception using failure-driven, human-guided refinement.
Jishnu Jaykumar Padalunkal @jishnu_jaykumar
5/7: Key insight: Data quality > quantity
With human-guided scene interaction and targeted failure collection, we need significantly fewer iterations and samples for strong improvements.
Results on UOIS:
✅ Strong gains from few targeted samples
✅ Better real-world generalization
Jishnu Jaykumar Padalunkal @jishnu_jaykumar
4/7: The iTeach loop:
🔍 Human observes failures in situ
🎮 Quick HumanPlay interaction (5-10 s)
👁️ Eye-gaze + voice annotation (final frame only)
🔄 SAM2 propagates labels across the sequence
⚡ Model fine-tuned and redeployed
→ Loop continues until the human deems performance satisfactory 🎯
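The control flow of a teaching loop like this can be sketched in a few lines. Everything below is a hypothetical stub, not the actual iTeach code: the real system runs on a robot with HoloLens gaze/voice input and SAM2 for label propagation, but the human-in-the-loop structure is the same.

```python
# Minimal sketch of a failure-driven interactive teaching loop.
# All callables are injected stubs standing in for the real components.
def teaching_loop(model, deploy, human_play, annotate_final_frame,
                  propagate_labels, fine_tune, satisfied):
    while True:
        failures = deploy(model)          # run the model, collect failures in situ
        if satisfied(failures):           # human judges performance
            return model
        for failure in failures:
            seq = human_play(failure)     # quick 5-10 s HumanPlay interaction
            label = annotate_final_frame(seq)     # gaze + voice on the last frame
            data = propagate_labels(seq, label)   # SAM2-style propagation backward
            model = fine_tune(model, data)        # quick update, then redeploy
```

With toy stubs where each fine-tune step improves the model, the loop terminates once deployment produces no failures.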
Jishnu Jaykumar Padalunkal @jishnu_jaykumar
3/7: Our minimalist system setup is designed to enable in-the-wild interactive teaching:
🤖 Fetch mobile manipulator (RGB-D)
👓 HoloLens 2 (eye-gaze + voice)
💻 Single RTX 4090 laptop
Everything runs onboard — no cloud dependency needed for real deployment scenarios!
Jishnu Jaykumar Padalunkal @jishnu_jaykumar
2/7: We built iTeach — a failure-driven interactive teaching framework for robot perception.
TL;DR: Humans do quick "HumanPlay" interactions when robots fail → annotate the final frame with eye-gaze + voice → SAM2 propagates labels → the robot learns immediately. 3× better perception! 📈
Sanskar Pandey @sanskxr02
@peteflorence My fav part of the blog - excited to see more and more people building on top of GEN-1 soon!
Jishnu Jaykumar Padalunkal retweeted
RAI Institute @rai_inst
Happy #NationalRoboticsWeek! Last year, we gave you a sneak peek at AthenaZero, our robotic manipulator built to tackle dynamic tasks like a human arm. Learn more about how this fast, precise robot can switch in an instant from a gentle touch to high force depending on what the task requires: rai-inst.com/resources/blog…