Sehoon Ha

472 posts

@sehoonha

Assistant Professor at Georgia Institute of Technology https://t.co/GWRmzIyQrc

Atlanta · Joined August 2009

337 Following · 902 Followers
Sehoon Ha retweeted
Tianyu Li EasyPaperSniper
Tianyu Li EasyPaperSniper@SniperPaper·
We believe the best way to move forward is to bring others along. That's why we're launching our first @DexmateAI Research Grant Program! Selected research proposals will receive access to Vega U, our dual-arm manipulation platform, to explore and advance the frontiers of physical AI and embodied intelligence. Who's eligible: Full-time faculty at accredited universities or degree-granting research institutions with a US mailing address. 📅 Deadline: April 5, 11:59 PM PST 🔗 Apply: dexmate.ai/research-grant Not a PI? Pass this along to your lab lead — and tag anyone who should apply! #Dexmate #VegaU #Robotics #PhysicalAI #EmbodiedAI #ResearchGrant
Dexmate@DexmateAI

🤖 Robotics faculty and labs, we’re launching our first Dexmate Research Grant Program. The future of AI will not live only in software. It will move, interact, and operate in the physical world. At Dexmate, we’re building toward that future by supporting the researchers shaping it. This program is designed to accelerate breakthroughs in physical AI, embodied intelligence, and real-world robotics applications. Selected applicants will receive access to Vega U, our dual-arm manipulation platform, to explore, build, and push the boundaries of what robots can do. 🧪 Applications are now open. Apply here: dexmate.ai/research-grant ✨ Eligibility: Full-time faculty at accredited universities and degree-granting research institutions with a US mailing address. 📅 Deadline: April 5, 11:59 PM PST Not a faculty PI? Share this with your PI or lab lead. Tag someone who should apply. Let’s see what your lab builds next 🚀 #Dexmate #VegaU #robotics #PhysicalAI #EmbodiedAI #ResearchGrant #Innovation

Sehoon Ha retweeted
Danfei Xu
Danfei Xu@danfei_xu·
Introducing EgoVerse: an ecosystem for robot learning from egocentric human data. Built and tested by 4 research labs + 3 industry partners, EgoVerse enables both science and scaling: 1300+ hrs, 240 scenes, 2000+ tasks, and growing. Dataset design, findings, and ecosystem 🧵
Sehoon Ha retweeted
Humphrey Shi
Humphrey Shi@humphrey_shi·
Decisions for @CVPR 2026 are out—congratulations to all authors. I’m excited to share a community step forward: the new CVPR Findings Track. Area Chairs recommended 1717 papers for potential inclusion, creating a principled pathway to recognize and share valuable work that may not be the best fit for the main program—while still enabling authors to publish and present through integrated Findings poster sessions. As our field scales, we need not only better models—but better community infrastructure. This effort is led collectively by the Findings organizing team—Bryan Plummer, Kevin Shih, @anand_bhattad, @jccaicedo, @Grigoris_c, @BoqingGo, @liuziwei7, and me. Huge thanks to the CVPR General Chairs, Program Chairs, and especially the Area Chairs for supporting this step forward. Looking forward to seeing many of you at CVPR 2026—across the main program, Findings, and workshops.
Sehoon Ha
Sehoon Ha@sehoonha·
Georgia Tech’s School of Interactive Computing is launching a major search for a new faculty member in Computer Graphics this year. Despite a challenging job market, we are committed to recruiting an exceptional colleague who will help shape the future of our graphics community. We are especially interested in candidates who • have a strong presence in the Computer Graphics / SIGGRAPH community, and • bring research directions that complement—not overlap with—our current faculty (Sehoon Ha, Bo Zhu, Greg Turk). If you know outstanding scholars, we would be grateful for your recommendations. We warmly encourage applicants from all backgrounds and institutions. 📌 Official posting: ic.gatech.edu/faculty-hiring 📩 Contact: sehoonha@gatech.edu / bo.zhu@gatech.edu / turk@cc.gatech.edu
Sehoon Ha retweeted
RAI Institute
RAI Institute@rai_inst·
See Spot perform dynamic whole-body manipulation. Using a combination of reinforcement learning (RL) and sampling-based control, the robot is able to autonomously drag, roll, and stack tires weighing 15 kg (33 lb), well above its maximum arm lift capacity. Learn more about coordinating locomotion and manipulation processes: rai-inst.com/resources/blog…
Sehoon Ha retweeted
Maks
Maks@itsmaksX·
We taught Spot to stack 15kg car tires autonomously. It uses its whole-body and shows some dynamic manipulation!
Sehoon Ha
Sehoon Ha@sehoonha·
✅ With RL + MAR, our controller achieves a forward speed of 1.5 m/s and robust locomotion across diverse indoor & outdoor experiments, including slippery, sloped, uneven, and sandy terrains.
Sehoon Ha
Sehoon Ha@sehoonha·
What if we could combine the periodic style of model-based controllers with the robustness of learning-based policies? In our #RAL paper, PPF: Pre-training and Preservative Fine-tuning of Humanoid Locomotion via Model-Assumption-based Regularization, we address this challenge. 🔑 Key idea: imitate the model-based controller, then fine-tune with RL while preserving motion style via Model-Assumption-based Regularization (MAR) to avoid forgetting. Our controller is verified through comprehensive simulation tests and hardware experiments on a full-size humanoid robot, Digit, demonstrating a forward speed of 1.5 m/s and robust locomotion across diverse terrains, including slippery, sloped, uneven, and sandy terrains. 🙋 Authors: Hyunyoung Jung* @hyunyoungjung, Zhaoyuan Gu* @gu_zy14, Ye Zhao @GT_LIDAR, Hae-Won Park, and Sehoon Ha @sehoonha 🌐 Website: hyunyoungjung.github.io/projects/ppf/p… 📄 arXiv: arxiv.org/abs/2508.134443
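The pre-train-then-fine-tune recipe in the tweet above can be pictured as a regularized fine-tuning loss. This is a minimal illustrative sketch, not the paper's actual formulation: the quadratic penalty, the weight `lam`, and the function name are all assumptions made for illustration.

```python
import numpy as np

def mar_regularized_loss(rl_loss, policy_action, model_action, lam=0.1):
    """Hypothetical MAR-style objective: the RL loss plus a penalty
    that keeps the fine-tuned policy's action close to the action the
    model-based controller would have taken, so the periodic motion
    style learned during pre-training is not forgotten."""
    mar_penalty = np.mean((policy_action - model_action) ** 2)
    return rl_loss + lam * mar_penalty
```

With `lam = 0`, this reduces to plain RL fine-tuning; raising `lam` trades task reward for fidelity to the model-based controller's style.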
Sehoon Ha retweeted
naveen manwani
naveen manwani@NaveenManwani17·
🚨 Paper Alert 🚨 ➡️Paper Title: EMMA: Scaling Mobile Manipulation via Egocentric Human Data 🌟Few pointers from the paper 🎯Scaling mobile manipulation imitation learning is bottlenecked by expensive mobile robot teleoperation. 🎯Authors of this paper presented “Egocentric Mobile MAnipulation (EMMA)”, an end-to-end framework that trains mobile manipulation policies from human mobile manipulation data combined with static robot data, sidestepping mobile teleoperation. 🎯To accomplish this, they co-train human full-body motion data with static robot data. 🎯In their experiments across three real-world tasks, EMMA demonstrates comparable performance to baselines trained on teleoperated mobile robot data (Mobile ALOHA), achieving equal or higher full-task success. 🎯They found that EMMA generalizes to new spatial configurations and scenes, and they observe positive performance scaling as the hours of human data increase, opening new avenues for scalable robotic learning in real-world environments. 🏢Organization: @GeorgiaTech 🧙Paper Authors: Lawrence Y. Zhu, Pranav Kuppili, @ryan_punamiya , Patcharapong Aphiwetsa, Dhruv Patel, @simar_kareer , @sehoonha , @danfei_xu 📝 Read the Full Paper here: arxiv.org/abs/2509.04443 🗂️ Project Page: ego-moma.github.io 🎥 Be sure to watch the attached Demo Video - Sound on 🔊🔊 Find this Valuable 💎 ? ♻️QT and teach your network something new Follow me 👣, @NaveenManwani17 , for the latest updates on Tech and AI-related news, insightful research papers, and exciting announcements.
Sehoon Ha
Sehoon Ha@sehoonha·
As a result, SDAX allows an agent to learn challenging motor skills without reward shaping, demonstrations, or curricula. We showcase hardware results for crawl, leap, and climb, and simulation results for super-agile wall-jumping. For more details, please visit our website: seungeunrho.github.io/projects/SDAX/
Sehoon Ha
Sehoon Ha@sehoonha·
“Skill collapse—but in a positive way.” One challenge in skill discovery is selecting the right skill from a distribution once training ends. With SDAX, task rewards act like a magnet, aligning diverse skills toward solving the task. We call this positive skill collapse—thanks to it, we could easily pick the right learned skills.
Sehoon Ha
Sehoon Ha@sehoonha·
Can robots learn to jump, crawl, and climb—without reward engineering, curriculum learning, or reference data? In our #CoRL2025 paper SDAX, Skill Discovery as eXploration, we show how unsupervised RL can learn these challenging motor skills without human priors, and transfer them to real robots. 📷 Unsupervised Skill Discovery as Exploration for Learning Agile Locomotion Seungeun Rho* @ross_rho , Kartik Garg* @_pacificax, Morgan Byrd, Sehoon Ha @sehoonha seungeunrho.github.io/projects/SDAX/