Xiaolong Wang

1.5K posts

@xiaolonw

Co-founder of Assured Robot Intelligence (ARI) · Associate Professor @UCSDJacobs · Postdoc @berkeley_ai · PhD @CMU_Robotics

San Diego, CA · Joined March 2016
1.4K Following · 20.2K Followers
Xiaolong Wang retweeted
Xueyan Zou@xyz2maureen·
Our Latent Encoder-Decoder codebase is fully open-sourced; you can train and visualize the latent space. Code ⚙️: github.com/EmptyBlueBox/D… ArXiv 📚: arxiv.org/abs/2603.10158 #CVPR2026
Guangqi Jiang@LuccaChiang

Ever wanted a single policy that controls diverse robots and different dexterous hands, or to observe the emergent behavior under cross-embodiment training? Introducing our #CVPR2026 paper XL-VLA: Cross-Hand Latent Representation for Vision-Language-Action Models.

1 reply · 33 reposts · 123 likes · 16.6K views
Xiaolong Wang retweeted
Mahi Shafiullah 🏠🤖@notmahi·
Why buy a robot when you can build your own? Meet YOR, our new open-source bimanual mobile manipulator robot – built for researchers and hackers alike for only ~$10k. 🧵👇
8 replies · 22 reposts · 170 likes · 36.3K views
Xiaolong Wang retweeted
Changwei Jing@cwj99770123·
Can we bridge the sim-to-real gap in complex manipulation without explicit system ID? 🤖 Presenting Contact-Aware Neural Dynamics, a diffusion-based framework that grounds simulation with real-world touch.
Implicit alignment: no tedious parameter tuning.
Tactile-driven: captures non-smooth contact events.
Consistent: stable predictions in contact-rich tasks.
6 replies · 48 reposts · 312 likes · 41.5K views
Xiaolong Wang retweeted
Sourish Jasti@SourishJasti·
1/ General-purpose robotics is the rare technological frontier where the US / China started at roughly the same time and there's no clear winner yet. To better understand the landscape, @zoeytang_1007, @intelchentwo, @vishnuman0 and I spent the last ~8 weeks creating a deep dive on humanoid robotics hardware and flew to China to see the supply chain firsthand. Here's everything we've created + our takeaways about the components, humanoid comparisons, supply chains, and geopolitics👇
71 replies · 260 reposts · 1.8K likes · 818.2K views
Xiaolong Wang retweeted
James Zou@james_y_zou·
Standard AI learns to imitate. We introduce a new framework that trains AI to make new discoveries in science + engineering. Learning-to-discover + open source LM led to: 🥇best new bound on Erdos min overlap problem 🥇fastest GPU kernels 🥇better single-cell denoising + more!
James Zou tweet media
14 replies · 121 reposts · 717 likes · 52.2K views
Xiaolong Wang@xiaolonw·
TTT now beats AlphaEvolve on math problems, with a few hundred dollars and an open model. With Test-Time Training (TTT) + RL, the model continues learning on the job from its own attempts.
Mert Yuksekgonul@mertyuksekgonul

How to get AI to make discoveries on open scientific problems? Most methods just improve the prompt with more attempts. But the AI itself doesn't improve. With test-time training, AI can continue to learn on the problem it’s trying to solve: test-time-training.github.io/discover.pdf

0 replies · 11 reposts · 112 likes · 15.6K views
Xiaolong Wang retweeted
Karan Dalal@karansdalal·
Test-time training is inevitable. We’re heading toward models that truly learn from experience: TTT for LLM memory (TTT-E2E), and now for open scientific problems. TTT-Discover is a powerful new algorithm that combines TTT with RL. With <$500 and an open-source model, it produced novel bounds on AC inequality problems, outperforming AlphaEvolve (Gemini + Terry Tao). Imagine the breakthroughs when TTT-Discover is deployed in frontier systems.
Mert Yuksekgonul@mertyuksekgonul

How to get AI to make discoveries on open scientific problems? Most methods just improve the prompt with more attempts. But the AI itself doesn't improve. With test-time training, AI can continue to learn on the problem it’s trying to solve: test-time-training.github.io/discover.pdf

4 replies · 20 reposts · 165 likes · 18.3K views
Xiaolong Wang retweeted
Mert Yuksekgonul@mertyuksekgonul·
How to get AI to make discoveries on open scientific problems? Most methods just improve the prompt with more attempts. But the AI itself doesn't improve. With test-time training, AI can continue to learn on the problem it’s trying to solve: test-time-training.github.io/discover.pdf
Mert Yuksekgonul tweet media
25 replies · 171 reposts · 751 likes · 372K views
Xiaolong Wang retweeted
NVIDIA AI Developer@NVIDIAAIDev·
We are entering a new era for LLM memory. 🧠 In our latest research, End-to-End Test-Time Training, LLMs keep learning at test time via next-token prediction on the context – compressing what they read directly into their weights. Learn more: nvda.ws/4sHb8na
NVIDIA AI Developer tweet media
52 replies · 185 reposts · 1.5K likes · 124.9K views
Xiaolong Wang@xiaolonw·
After years of research, there is finally a solution to long context! TTT-E2E is how robotics will work in the future. Humanoids ingest vision, touch, audio – almost everything humans do. It only makes sense if robot memory works like human memory: learning during deployment.
Karan Dalal@karansdalal

LLM memory is considered one of the hardest problems in AI. All we have today are endless hacks and workarounds. But the root solution has always been right in front of us.

Next-token prediction is already an effective compressor. We don’t need a radical new architecture. The missing piece is to continue training the model at test time, using the context as training data.

Our full release of End-to-End Test-Time Training (TTT-E2E) with @NVIDIAAI, @AsteraInstitute, and @StanfordAILab is now available.
Blog: nvda.ws/4syfyMN
Arxiv: arxiv.org/abs/2512.23675

This has been over a year in the making with @arnuvtandon and an incredible team.

5 replies · 24 reposts · 200 likes · 34.1K views
Xiaolong Wang retweeted
Kehlani Fay@Kehlani_Fay·
What makes robot hands dexterous? 🤖🖐️ We generate robot hands + control, sim-to-real, in under 24 hours. Paper: Cross-Embodied Co-Design for Dexterous Hands 🔥 Rapid evaluation w/ cross-embodied policies 🦾 Open-source modular hand platform 💡 Automated full-hand generation
7 replies · 34 reposts · 152 likes · 18.7K views
Xiaolong Wang retweeted
Karan Dalal@karansdalal·
Our new paper, “End-to-End Test-Time Training for Long Context,” is a step towards continual learning in language models.

We introduce a new method that blurs the boundary between training and inference. At test time, our model continues learning from the given context using the same next-token prediction objective as training.

With this end-to-end objective, our model can efficiently compress substantial context into its weights and still use it effectively, unlocking extremely long context windows for complex reasoning and applications in agents and robotics.

Paper: test-time-training.github.io/e2e.pdf
Code: github.com/test-time-trai…
Karan Dalal tweet media
42 replies · 212 reposts · 1.2K likes · 182.1K views
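The idea announced above (a model that keeps doing gradient updates on its own context at test time, with the ordinary next-token prediction loss) can be illustrated on a toy bigram model. This is a hypothetical sketch, not the TTT-E2E implementation, which fine-tunes a full LLM's weights: here `tt_train`, the bigram logit table `W`, and the integer token IDs are all invented for illustration.

```python
import math

def tt_train(W, context, lr=0.5, epochs=50):
    """Test-time training sketch: adapt a toy bigram model to its context.

    W[prev][nxt] holds the unnormalized log-probability of token `nxt`
    following token `prev`. We run SGD on the next-token cross-entropy
    over the context, i.e. the same objective as pretraining.
    """
    for _ in range(epochs):
        for prev, nxt in zip(context, context[1:]):
            exps = [math.exp(w) for w in W[prev]]
            z = sum(exps)
            probs = [e / z for e in exps]  # softmax over next tokens
            # Cross-entropy gradient: probs - one_hot(nxt); step toward it.
            for j in range(len(probs)):
                W[prev][j] -= lr * (probs[j] - (1.0 if j == nxt else 0.0))
    return W

vocab = 4
W = [[0.0] * vocab for _ in range(vocab)]  # uniform "pretrained" toy model
context = [0, 1, 0, 1, 0, 1]               # in-context pattern: 0 is followed by 1
W = tt_train(W, context)
pred = max(range(vocab), key=lambda j: W[0][j])  # → 1 after adaptation
```

Before adaptation the model is indifferent about what follows token 0; after a few test-time gradient steps the context's pattern is compressed into the weights, so it predicts token 1. The point of the sketch is only the training loop's placement: the update happens at inference time, on the context itself.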
Xiaolong Wang retweeted
Lerrel Pinto@LerrelPinto·
It was wonderful being on the @RoadToAutonomy podcast and chatting with Hugh Nguyen and Grayson Brulte. We talked about humanoid robots, why enterprise robots come before home robots, key technological unlocks, and the work we are doing at ARI to deploy frontier humanoids today.
The Road to Autonomy®@RoadToAutonomy

The Million Robot Bet: Why Enterprise Wins Before Homes

Hugh Nguyen, Partner, Automotive Technology & Mobility, @KPMG and @LerrelPinto, Co-Founder, Assured Robot Intelligence (ARI) joined Grayson Brulte on The Road to Autonomy podcast to discuss why the immediate future of humanoid robotics lies in enterprise applications, rather than consumer homes.

Episode Chapters
0:00 Humanoid Robot Market
6:44 Humanoid Due Diligence
9:40 Humanoid Value Chain
12:08 Humanoids Size and Hands
16:52 Building Humanoids
18:52 Humanoid Personalities
20:24 Managing Humanoid Risk
22:24 Humanoid Fleets
25:36 Humanoid Use Cases
29:58 China
33:20 Humanoid Policy
38:42 Chips
45:44 Deploying Humanoids in the Workplace
49:28 Future of Humanoids

1 reply · 5 reposts · 41 likes · 8.5K views