Mingxi Jia

140 posts

@MingxiJiaa

PhD student @BrownCSDept | Previously at @Northeastern. I work on building robots that are helpful and lovable.

Providence, RI · Joined March 2021
1.1K Following · 207 Followers
Pinned Tweet
Mingxi Jia @MingxiJiaa
Very excited to present our work at #ICRA2023! We introduce a novel 3D data augmentation method and equivariant policy learning, leading to high data efficiency. Our video shows the method learning manipulation tasks from fewer than 10 demos, trained from scratch! 1/n
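(For readers curious what such an augmentation can look like, here is a minimal sketch of rotation-based point-cloud augmentation for demonstration data. The function names and the restriction to rotations about the gravity axis are illustrative assumptions, not the paper's exact pipeline.)

```python
# Hedged sketch: rotation-based 3D data augmentation for a point-cloud
# demonstration. Rotating observation AND action label together yields
# new, geometrically consistent training examples for free.
import numpy as np

def rotate_z(theta: float) -> np.ndarray:
    """3x3 rotation about the gravity (z) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def augment(points: np.ndarray, grasp_pose: np.ndarray,
            rng: np.random.Generator):
    """points: (N, 3) scene cloud; grasp_pose: (4, 4) action label."""
    R = rotate_z(rng.uniform(0.0, 2.0 * np.pi))
    aug_points = points @ R.T                  # rotate the observation
    aug_pose = grasp_pose.copy()
    aug_pose[:3, :3] = R @ grasp_pose[:3, :3]  # rotate action orientation
    aug_pose[:3, 3] = R @ grasp_pose[:3, 3]    # rotate action position
    return aug_points, aug_pose
```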
Mingxi Jia @MingxiJiaa
Kei’s career was a masterclass in resilience. He’s been a massive inspiration to me, showing that heart and hustle can overcome any power advantage on the court.
Kei Nishikori @keinishikori
Mingxi Jia retweeted
Chris Paxton @chris_j_paxton
NovaFlow is a very different kind of approach from many other learning-based methods, in that it uses generated videos *directly*, without fine-tuning on the robot, and then uses estimates of how points move in space to control the robot. This makes it "zero shot" across a wide range of tasks. Really interesting ideas in this work; lots of fun having @jiahuifu_carol and @Hongyu_Lii on @RoboPapers w/ @micoolcho
RoboPapers @RoboPapers
The holy grail of robotics is to be able to perform previously-unseen, out-of-distribution manipulation tasks "zero shot" in a new environment. NovaFlow proposes an approach which (1) generates a video, (2) computes predicted flow — how points move through the scene — and (3) uses this flow as an objective to generate a motion. Using this procedure, NovaFlow generates motions in unseen scenes, for unseen tasks, and can transfer across embodiments. To learn more, we are joined by @Hongyu_Lii and @jiahuifu_carol from RAI. Watch Episode #63 of RoboPapers with @chris_j_paxton and @micoolcho now!

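(The "flow as an objective" step has a classical backbone: if the predicted flow lies on a rigid object, a least-squares rigid fit recovers the transform the object should undergo, which can then be turned into a robot motion target. A minimal sketch under that rigidity assumption; NovaFlow's actual optimization may differ.)

```python
# Hedged sketch: turn predicted 3D point flow into a rigid motion target
# via the Kabsch least-squares fit. Assumes the flow is on one rigid object.
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Best-fit rotation R and translation t mapping src -> dst, both (N, 3)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Usage idea: src = tracked object points now, dst = src + predicted flow;
# the fitted (R, t) becomes the end-effector motion target.
```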
Mingxi Jia retweeted
Aashish Rai @aashishrai3799
We’re excited to announce the First Workshop on 4D World Models: Bridging Generation and Reconstruction, to be held in conjunction with CVPR 2026.
Mingxi Jia retweeted
Boce Hu @boce_hu
Closed-loop visuomotor control with wrist cameras is widely adopted and powerful, but leveraging full 3D symmetry from RGB alone is still hard. Introducing our #NeurIPS2025 Spotlight paper, Image-to-Sphere Policy (ISP), for equivariant policy learning from eye-in-hand RGB images.
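(A toy version of the symmetry ISP targets, reduced to the simplest in-plane case: rotating the wrist-camera image should rotate the predicted planar action correspondingly. `policy` and the 2D action parameterization are stand-ins; the paper works with full 3D symmetry.)

```python
# Hedged sketch: numerically check in-plane equivariance of a policy.
# A rotation of the input image should produce a rotated action.
import numpy as np
from scipy.ndimage import rotate as rotate_image

def equivariance_error(policy, image: np.ndarray, deg: float = 90.0) -> float:
    """Mismatch between rotate-then-act and act-then-rotate (near 0 if
    equivariant, up to interpolation error and rotation-sign convention)."""
    theta = np.deg2rad(deg)
    c, s = np.cos(theta), np.sin(theta)
    R2 = np.array([[c, -s], [s, c]])
    a = policy(image)                                        # (2,) planar action
    a_rot = policy(rotate_image(image, deg, reshape=False))  # act on rotated view
    return float(np.linalg.norm(R2 @ a - a_rot))
```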
Mingxi Jia retweeted
Ahmed Jaafar @ahmed__jaafar
Wouldn't it be great if robots were more data-efficient? Manipulation has gotten more data-efficient, but mobile manipulation (MoMa) is lagging. Introducing LAMBDA (λ): a long-horizon benchmark of realistically sized datasets to push the limits of MoMa models. 🤖 #IROS2025 🧵1/N
Mingxi Jia retweeted
David Tao @Taodav
What does it mean to be "better at" partial observability in RL? Existing benchmarks don't always provide a clear signal for progress. We fix that. Our new work (at RLC 2025 🤖) introduces a new property that ensures your gains come from learning better memory rather than other factors. AND we provide a new JAX benchmark with environments that all have this property! 🧵1/5
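(One way to read the property the thread describes: on environments that truly isolate partial observability, the gap between a memory-equipped agent and a memoryless baseline is the signal that memory drove the gains. A hypothetical helper for that comparison; the agents and return lists are placeholders, not the paper's protocol.)

```python
# Hedged sketch: isolate the contribution of memory by differencing a
# recurrent agent's returns against a memoryless (Markov) baseline.
from statistics import mean

def memory_gap(returns_recurrent: list[float],
               returns_markov: list[float]) -> float:
    """Positive gap ~ improvement attributable to memory on such environments."""
    return mean(returns_recurrent) - mean(returns_markov)
```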
Mingxi Jia retweeted
Hongyu Li @Hongyu_Lii
We interact with dogs through touch -- a simple pat can communicate trust or instruction. Shouldn't interacting with robot dogs be as intuitive? Most commercial robots lack tactile skins. We present UniTac: a method to sense touch using only existing joint sensors! [1/5]
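(For context, a classical baseline for sensing touch from joint sensors alone is torque-residual contact detection: compare measured joint torques against a dynamics model's free-space prediction and flag large residuals. The threshold and signatures below are illustrative; UniTac's learned method goes well beyond this.)

```python
# Hedged sketch: proprioceptive contact detection from joint torques.
import numpy as np

def detect_contact(tau_measured: np.ndarray,
                   tau_model: np.ndarray,
                   threshold: float = 0.5) -> bool:
    """True if any joint's torque residual exceeds the threshold (N·m).

    tau_measured: torques read from joint sensors
    tau_model:    torques predicted by a dynamics model in free space
    """
    return bool(np.any(np.abs(tau_measured - tau_model) > threshold))
```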
Mingxi Jia retweeted
Shivam Vats @ShivaamVats
How can 🤖 learn from human workers to provably reduce their workload in factories? Our latest @RoboticsSciSys paper answers this question by proposing the first cost-optimal interactive learning (COIL) algorithm for multi-task collaboration.
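(The trade-off at the heart of cost-optimal interactive learning can be pictured with a back-of-the-envelope decision rule: query the human only when the expected cost of the robot acting alone exceeds the cost of a demonstration. This is just the intuition; COIL's actual criterion carries optimality guarantees, and the variables below are hypothetical.)

```python
# Hedged sketch: a myopic ask-the-human rule for interactive learning.
def should_ask_human(p_success: float, cost_attempt: float,
                     cost_failure: float, cost_demo: float) -> bool:
    """Ask for a demonstration when it is cheaper in expectation than
    letting the robot attempt the task and possibly fail."""
    expected_robot_cost = cost_attempt + (1.0 - p_success) * cost_failure
    return cost_demo < expected_robot_cost
```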
Mingxi Jia retweeted
Hongyu Li @Hongyu_Lii
Can we robustly track an object’s 6D pose in contact-rich, occluded scenarios? Yes! Our solution, V-HOP, fuses vision and touch through a visuo-haptic transformer for precise, real-time tracking. arXiv: arxiv.org/abs/2502.17434 Project: lhy.xyz/projects/v-hop/
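(A visuo-haptic transformer can be pictured as token-level fusion: embed vision and touch as token sequences, concatenate, and let self-attention mix the modalities before a pose head. A minimal PyTorch sketch under those assumptions; V-HOP's real encoders, architecture, and losses are specified in the paper.)

```python
# Hedged sketch: fuse vision and touch tokens with a transformer encoder
# and regress a 6D pose (translation + unit quaternion).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisuoHapticFuser(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8, layers: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.pose_head = nn.Linear(dim, 7)          # xyz + quaternion

    def forward(self, vision_tokens: torch.Tensor,
                touch_tokens: torch.Tensor) -> torch.Tensor:
        # vision_tokens: (B, Nv, dim); touch_tokens: (B, Nt, dim)
        tokens = torch.cat([vision_tokens, touch_tokens], dim=1)
        fused = self.encoder(tokens)                # cross-modal self-attention
        pose = self.pose_head(fused.mean(dim=1))    # (B, 7)
        quat = F.normalize(pose[:, 3:], dim=-1)     # valid unit quaternion
        return torch.cat([pose[:, :3], quat], dim=-1)
```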
Andreas Illiger @AndreasIlliger
Today is the 14th anniversary of Tiny Wings! A big thank you to all the players who still play and love my game! Also: There might be a big content update coming very soon.. maybe even in the coming days..🌕🐣🫧
Mingxi Jia retweeted
Jing-Jing Li @drjingjing2026
1/3 Today, an anecdote shared by an invited speaker at #NeurIPS2024 left many Chinese scholars, myself included, feeling uncomfortable. As a community, I believe we should take a moment to reflect on why such remarks in public discourse can be offensive and harmful.
Mingxi Jia retweeted
Haojie Huang @HaojieHuang13
Generate the goal state, then infer the manipulation pick-place action. Feel free to check out Poster Session 4, poster #36, for details.
Helping Hands Lab @ Northeastern @HelpingHandsLab
#CoRL2024 IMAGINATION POLICY: Using Generative Point Cloud Models for Learning Manipulation Policies, led by @HaojieHuang13. A key-frame multi-task policy can generate key poses (imagine) and perform manipulation precisely with sample efficiency. Presenting at Poster Session 4.

Mingxi Jia retweeted
Naman Shah @shah__naman
🚀 Excited to announce the Learning Effective Abstractions for Planning workshop at #CoRL2024 in Munich on Nov 9th! 📅 Join us as we explore cutting-edge research on learning abstractions for robot planning 🤖🧠. 🔗 leap-workshop.github.io
Mingxi Jia @MingxiJiaa
Saw this at MIT in 2023. Very powerful words.