Hung-Chieh Fang

17 posts

@hungchiehfang

BS @NTU_TW | Prev. intern @StanfordAILab | Robot learning; Representation learning

Taipei, Taiwan · Joined September 2021
302 Following · 48 Followers
Pinned Tweet
Hung-Chieh Fang@hungchiehfang·
Manipulation demands dexterous in-hand tool use, rich contact handling, and long-horizon stability. Introducing DexDrummer, our sim2real framework that unifies these skills in a drumming testbed. w/ @amberxie_, @jenngrannen, Kenneth Llontop, @DorsaSadigh (Videos at 1x speed)
Hung-Chieh Fang@hungchiehfang·
@amberxie_ @jenngrannen @DorsaSadigh (6/7) Finger-driven control allows fine-grained control of the stick to follow precise trajectories, while arm-driven control cannot easily achieve such precision.
Hung-Chieh Fang@hungchiehfang·
@amberxie_ @jenngrannen @DorsaSadigh (5/7) The low-level policy learns dexterous skills with object-centric rewards and a contact curriculum to handle interactions. The curriculum is designed to first master dexterous skills, then progressively handle contacts, preventing early training failures.
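The curriculum described above (master dexterous skills first, then progressively introduce contacts) can be sketched as a simple schedule on the contact-reward weight. The warmup length, ramp length, and linear shape below are illustrative assumptions, not values from the paper.

```python
def contact_reward_weight(step, warmup_steps=50_000, ramp_steps=100_000):
    """Weight on contact-interaction reward terms.

    Zero during warmup (pure dexterous-skill, object-centric rewards),
    then ramped linearly to 1.0 so contact handling is introduced only
    after in-hand control is stable, avoiding early training failures.
    """
    if step < warmup_steps:
        return 0.0
    return min(1.0, (step - warmup_steps) / ramp_steps)


def total_reward(r_object, r_contact, step):
    """Object-centric reward plus curriculum-weighted contact reward."""
    return r_object + contact_reward_weight(step) * r_contact
```

With this schedule, the contact term contributes nothing for the first 50k steps and reaches full weight at 150k.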
Hung-Chieh Fang@hungchiehfang·
@amberxie_ @jenngrannen @DorsaSadigh (4/7) The high-level policy retargets object trajectories for motion planning, while residual RL on the arm enables small corrections crucial for high-dynamics tasks.
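The residual-correction idea can be sketched as adding a small, bounded learned offset on top of the planned arm action. The tanh squashing, the 0.05 scale, and the clipping limit are my illustrative choices, not the paper's.

```python
import numpy as np

def apply_residual(planned_action, residual, scale=0.05, limit=1.0):
    """Residual RL on the arm: a learned policy outputs `residual`, which
    is squashed and scaled so it can only make small corrections to the
    motion-planned action; the result is clipped to actuator limits."""
    corrected = planned_action + scale * np.tanh(residual)
    return np.clip(corrected, -limit, limit)
```

Bounding the residual keeps the planner in charge of the coarse trajectory while the learned policy handles the fast, small corrections that high-dynamics contact demands.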
Hung-Chieh Fang@hungchiehfang·
@amberxie_ @jenngrannen @DorsaSadigh (3/7) Simulation enables scalable data collection, but suffers from a large exploration space. We address this with a hierarchical framework:
- A high-level policy handles long-horizon coordination
- A low-level policy handles dexterous control and contact-rich interaction
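A minimal sketch of the two-level decomposition, assuming the high level re-plans a subgoal every `k` steps while the low level acts at every step. The policy signatures and the re-planning interval are hypothetical, not from the paper.

```python
class HierarchicalController:
    """High-level policy picks a subgoal (e.g., which drum to strike next)
    every `k` steps; the low-level policy outputs an action toward the
    current subgoal at every step."""

    def __init__(self, high_policy, low_policy, k=10):
        self.high_policy = high_policy
        self.low_policy = low_policy
        self.k = k
        self.subgoal = None
        self.t = 0

    def act(self, obs):
        # Re-plan the subgoal only every k steps (long-horizon coordination);
        # the low level handles the fine control in between.
        if self.t % self.k == 0:
            self.subgoal = self.high_policy(obs)
        self.t += 1
        return self.low_policy(obs, self.subgoal)
```

The split shrinks the exploration space: the high level searches over a short sequence of subgoals rather than raw joint commands.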
Hung-Chieh Fang@hungchiehfang·
@amberxie_ @jenngrannen @DorsaSadigh (2/7) Learning a drum-playing policy is hard!
- Teleoperating dynamic, contact-rich, in-hand tasks is extremely difficult
- Learning from human demos is also challenging for in-hand, contact-rich tasks
- Planning-based policies struggle with the stochasticity of stick contacts
Hung-Chieh Fang retweeted
Amber Xie@amberxie_·
Introducing HandelBot 🎹🤖, a real-world piano playing robot! Piano is extremely hard (even for humans!). We take a small but exciting step to replicate this beautiful skill w HandelBot. Our insight is combining sim priors w real world refinement & RL. w/ @haozhiq @DorsaSadigh
Hung-Chieh Fang retweeted
Jenn Grannen@jenngrannen·
Drumming 🥁 is a rich test bed for dexterous robot finger and hand control! This exciting work was led by @hungchiehfang, who is applying to PhDs this cycle. We're hoping to have more fun real-world videos soon! (think Smoke on the Water 🤘)
Ken Goldberg@Ken_Goldberg

Interesting new task for multi-fingered robot hands! DexDrummer: In-Hand, Contact-Rich, and Long-Horizon Dexterous Robot Drumming - goo.gl/scholar/xdc9gb

Hung-Chieh Fang retweeted
Suvir Mirchandani@suvir_m·
Data collection remains a bottleneck in imitation learning for robotics: it’s tedious & often needs access to a robot. Can we make the data collection process more accessible and engaging? We introduce RoboCade, a platform for gamifying remote robot data collection 🎮🤖 (1/6)
Hung-Chieh Fang retweeted
Jenn Grannen@jenngrannen·
Meet Scanford 📚🤖: a robot that improves foundation models by doing useful work in the wild. Deployed for 2 weeks in the Stanford East Asia Library, Scanford scans books, helps librarians, and continually improves the VLM it relies on. 🔗 scanford-robot.github.io 🧵1/8
Hung-Chieh Fang@hungchiehfang·
We study how to improve representation generalization in decentralized settings where data distributions are non-IID across clients. Our key idea is to enhance representation uniformity to maximize information through a "soft" regularization term that preserves semantic alignment. I’m presenting in Hall 1, #274 — feel free to stop by and chat if you’re at ICCV!
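The uniformity idea in this tweet is in the spirit of the standard log-Gaussian-potential uniformity loss on the unit hypersphere. The sketch below shows that common formulation, not the paper's exact "soft" regularizer or its alignment-preserving term.

```python
import numpy as np

def uniformity_loss(z, t=2.0):
    """Log of the mean Gaussian potential over pairwise squared distances
    of L2-normalized features. More negative = features spread more
    uniformly on the hypersphere; 0 = all features collapsed together."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    i, j = np.triu_indices(len(z), k=1)
    sq_dists = np.sum((z[i] - z[j]) ** 2, axis=1)
    return float(np.log(np.mean(np.exp(-t * sq_dists))))
```

Minimizing such a term pushes client representations to cover the sphere evenly, which counteracts the collapse that non-IID client data tends to cause.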
Hung-Chieh Fang retweeted
Jubayer Ibn Hamid@jubayer_hamid·
Exploration is fundamental to RL. Yet policy gradient methods often collapse: during training they fail to explore broadly, and converge into narrow, easily exploitable behaviors. The result is poor generalization, limited gains from test-time scaling, and brittleness on tasks where strategic exploration is necessary. We introduce a framework for training a policy over sets of generations and use it to induce exploration. Work with @ifdita_hasan (co-lead), @ellenjxu_ , @chelseabfinn and @DorsaSadigh at Stanford 🧵
Hung-Chieh Fang retweeted
Amber Xie@amberxie_·
Introducing Importance Weighted Retrieval, a simple modification to existing retrieval methods! Our importance-sampling-inspired approach helps us more effectively retrieve from prior datasets for few-shot imitation learning! #CoRL2025 Oral w/ Rahul Chand @DorsaSadigh @JoeyHejna
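One way to read "importance-sampling-inspired retrieval" is to weight each prior-dataset sample by a normalized similarity to the few-shot target set and retrieve the top-weighted samples. The sketch below is my illustrative interpretation; the max-over-targets similarity, temperature, and normalization are assumptions, not the paper's method.

```python
import numpy as np

def importance_weighted_retrieval(prior_embs, target_embs, k=2, temp=0.1):
    """Weight prior samples by exp(similarity-to-nearest-target / temp),
    normalize the weights to sum to 1 (importance-sampling style), and
    return indices of the k highest-weight samples to add to training."""
    sims = prior_embs @ target_embs.T           # (N_prior, N_target)
    weights = np.exp(sims.max(axis=1) / temp)   # closest target match
    weights /= weights.sum()
    return np.argsort(-weights)[:k]
```

Compared with a hard nearest-neighbor cutoff, soft weights let samples that are merely close to the target distribution still contribute.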
Hung-Chieh Fang retweeted
Shao-Hua Sun@shaohua0116·
Kicking off #RLC2025 with our Workshop on Programmatic Reinforcement Learning! This workshop explores how programmatic representations can improve interpretability, generalization, efficiency, and safety in RL.
Hung-Chieh Fang@hungchiehfang·
Come check out our #ICML2025 poster tomorrow! We explore how domain alignment under distribution shifts - including both domain shift (e.g., covariate shift) and category shift (non-overlapping label sets) - can struggle in extreme cases where the category shift is large. The challenge arises from inaccurate estimation of uncertainty scores, which leads to large negative transfer during alignment. We tackle this from the representation learning perspective and show that it can steadily improve robustness across different shift scenarios.
Jul 15, 11am - 1:30pm, East Exhibition Hall A-B, #E-2001
Project page: dc-unida.github.io
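The "uncertainty scores" used to decide which samples to align are often entropy-based in universal domain adaptation. A generic sketch of such a score (my assumption, not the paper's exact estimator):

```python
import numpy as np

def entropy_uncertainty(probs, eps=1e-12):
    """Normalized prediction entropy in [0, 1]. High values flag samples
    that may belong to private (non-shared) categories; mistakenly
    aligning such samples is what causes negative transfer."""
    p = np.clip(probs, eps, 1.0)
    h = -np.sum(p * np.log(p), axis=-1)
    return h / np.log(p.shape[-1])
```

When the category shift is large, many target samples sit far from every source class, and estimators like this become unreliable, which is the failure mode the poster addresses at the representation level.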