
Wenhao Ding
@wenhaoding95
Research Scientist @NVIDIA | Ph.D.@CarnegieMellon | B.E. @tsinghua_uni


Jensen today announced Alpamayo 1.5 at #NVIDIAGTC! #Alpamayo 1.5 is a major update to Alpamayo 1, @nvidia's open 10B-parameter chain-of-thought reasoning VLA model first introduced at #CES. Built on the #Cosmos-Reason2 VLM backbone and post-trained with RL, it adds support for navigation guidance, flexible multi-camera setups, configurable camera parameters, and user question answering. The result is an interactive, steerable reasoning engine for the AV community. We're also releasing post-training scripts to help researchers and developers adapt the model.

Additionally, we've significantly expanded the Alpamayo open platform across data and simulation, including releasing the highly requested reasoning labels for the PhysicalAI Autonomous Vehicles dataset (huggingface.co/datasets/nvidi…), as well as our chain-of-causation auto-labeling pipeline.

🔎 Learn more about Alpamayo 1.5 and the latest extensions to the Alpamayo open platform: huggingface.co/blog/drmapavon… (Please note that most of the links will become active in the next few days.)

Happy building, and stay tuned for more in the coming months! @NVIDIADRIVE @NVIDIAAI




💨 How fast can an autonomous vehicle think? With Alpamayo 1, NVIDIA's 10B-parameter chain-of-thought reasoning model, the distilled version can reason in real time. Hear Marco Pavone (@drmapavone), Yan Wang, Yurong You, and Wenhao Ding from our AV Research team break down Alpamayo 1 and what's next for reasoning in autonomous driving. 🔁 Watch the replay: nvda.ws/3O5gKb3
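For a rough sense of what "real time" means here: the Alpamayo-R1 post below quotes 99 ms end-to-end inference latency. A quick back-of-the-envelope check of what that implies (the 30 m/s highway speed is my illustrative assumption, not a figure from the posts):

```python
# Back-of-the-envelope: what 99 ms end-to-end inference latency means
# for an AV planner. The 99 ms figure is from the Alpamayo-R1 post;
# the highway speed below is an illustrative assumption.
latency_s = 0.099                 # end-to-end inference latency (99 ms)
replan_hz = 1.0 / latency_s       # max replanning rate if run back-to-back
speed_mps = 30.0                  # ~108 km/h, assumed highway speed
dist_per_inference_m = speed_mps * latency_s

print(f"{replan_hz:.1f} Hz")              # ≈ 10.1 Hz replanning
print(f"{dist_per_inference_m:.2f} m")    # ≈ 2.97 m traveled per inference
```

In other words, at that latency the planner can refresh its decision roughly ten times per second, with the vehicle covering only a few meters between updates.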




If Alpamayo is trained properly, it looks quite usable in Japan too! This is the era of open source for both autonomous driving and world models!



🚀 Exciting news from #CES2026! In his keynote today, Jensen announced @nvidia Alpamayo, a *fully open* ecosystem of models, simulation tools, and datasets designed to accelerate reasoning-based autonomous vehicle (AV) architectures and advance the path to Level 4 autonomous driving.

Alpamayo brings together several technologies we've developed to enable reasoning-based vision-language-action (VLA) models for AVs. Our goal is to provide researchers and developers with a flexible, fast, and scalable platform for evaluating and training reasoning-based AV architectures in realistic closed-loop settings.

Explore Alpamayo:
-- Press Release: nvidianews.nvidia.com/news/alpamayo-…
-- Hugging Face Blog: huggingface.co/blog/drmapavon…
-- Tech Blog: developer.nvidia.com/blog/building-…
-- Alpamayo 1 reasoning model: research.nvidia.com/publication/20…
-- Physical AI AV Dataset: huggingface.co/datasets/nvidi…
-- AlpaSim simulator: github.com/NVlabs/alpasim

I'm incredibly proud of the @nvidia AV Research team (research.nvidia.com/labs/avg/) and our many @nvidia collaborators whose contributions made this possible. More releases and features are coming soon; we can't wait to see what the community builds with Alpamayo!

💡 Want to help grow the Alpamayo ecosystem? We're hiring:
[Sr.] Research Scientist: nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAEx…
[Sr.] Research Engineer: nvidia.wd5.myworkdayjobs.com/en-US/NVIDIAEx…

#AutonomousVehicles #AutonomousDriving #AI #Simulation #ReasoningAI #OpenEcosystem #Alpamayo @NVIDIAAI @NVIDIADRIVE


We've just released @nvidia #DRIVE Alpamayo-R1 (AR1), the world's first industry-scale open #reasoning #VLA model for autonomous-vehicle (AV) research. AR1 integrates Chain-of-Causation reasoning with trajectory planning to improve decision-making in complex driving scenarios. Built on @nvidia #Cosmos #Reason, AR1 is designed as a customizable foundation for a broad range of AV applications, from instantiating an end-to-end backbone for autonomous driving to powering advanced, reasoning-based auto-labeling tools.

Resources:
Model: huggingface.co/nvidia/Alpamay…
Inference Code: github.com/NVlabs/alpamayo
Paper: research.nvidia.com/publication/20…
Blog Post: blogs.nvidia.com/blog/neurips-o…

A subset of the data used to train and evaluate AR1 is available in the @nvidia Physical AI Open Datasets: huggingface.co/datasets/nvidi…

AR1 can be evaluated using AlpaSim (github.com/NVlabs/alpasim), @nvidia's newly released open-source AV simulation framework built specifically for research and development. (Separate post on AlpaSim coming soon.)

This release completes @nvidia's trifecta of model, data, and simulator to accelerate research and development in the autonomous-vehicle domain. Happy developing, and stay tuned for more! Huge thanks to the phenomenal team that made this possible. @NVIDIAAI @nvidia


Excited to unveil @nvidia's latest work on #Reasoning Vision-Language-Action (#VLA) models: Alpamayo-R1! Alpamayo-R1 is a new #reasoning VLA architecture featuring a diffusion-based action expert built on top of the #Cosmos-#Reason backbone. It is one of the core technologies driving NVIDIA's push toward Level 4 autonomy and robotaxis (nvidianews.nvidia.com/news/nvidia-ub…), as announced by Jensen Huang at #gtc DC last week.

📄 Paper: Alpamayo-R1 research.nvidia.com/publication/20…

We present:
- Architecture & Design: How to transform a VLM into a driving-ready reasoning VLA
- Chain-of-Causation Labeling: A new framework enabling reasoning-based learning
- Training Strategy: From internet-scale pre-training → AV-specific SFT → RL-based post-training
- Extensive Evaluation: From closed-loop simulation to real-world, on-vehicle testing

📈 Results: Alpamayo-R1 delivers significant performance gains over end-to-end baselines, especially in rare, safety-critical scenarios, all while maintaining real-time inference (99 ms end-to-end latency).

Coming soon: releases of model variants and reasoning metadata built on top of the Physical AI Dataset (huggingface.co/datasets/nvidi…), with more updates on the way. Stay tuned!

🙌 Huge thanks to Wenjie Luo and @yan_wang_9 (project co-leads); the @nvidia AV Research team (@iamborisi, @YurongYou, @xinshuoweng, @tianran_, @wenhaoding95, and many others); collaborators across @nvidia Research (@liu_mingyu, @visualyang, @PavloMolchanov, and many others); and the @nvidia AV Product team (Sarah Tariq, Patrick Liu, Jack Huang, and many more). The full contributor list is in the paper's appendix. @NVIDIADRIVE @NVIDIAAI
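To make the "diffusion-based action expert conditioned on reasoning" idea concrete, here is a toy sketch of the shape of that computation: a noisy trajectory is iteratively refined toward a prediction that depends on a reasoning embedding. Everything below is illustrative; the real Alpamayo-R1 action expert is a learned network conditioned on Cosmos-Reason features, not this hand-written update rule.

```python
# Toy sketch: a diffusion-style action expert turning Gaussian noise into
# a driving trajectory, conditioned on reasoning output. All names,
# shapes, and the update rule are illustrative stand-ins, NOT the
# Alpamayo-R1 implementation.
import numpy as np

rng = np.random.default_rng(0)
T, D = 8, 2                                   # 8 future waypoints, (x, y) each

# Stand-ins for the two ingredients the post describes:
reasoning_embedding = rng.normal(size=16)     # chain-of-causation reasoning tokens
target = np.stack([np.linspace(1, 8, T),      # the trajectory a trained expert
                   np.zeros(T)], axis=1)      # would predict ("keep lane")

def denoise_step(traj, step):
    """One reverse-diffusion step: pull the noisy trajectory toward the
    model's prediction. A real expert would predict `pred` with a network."""
    pred = target + 0.01 * reasoning_embedding.mean()  # conditioning nudges output
    return traj + (pred - traj) / (step + 1)

traj = rng.normal(size=(T, D))                # start from pure Gaussian noise
for step in reversed(range(10)):              # iterative denoising, coarse → fine
    traj = denoise_step(traj, step)

# After the final step (step == 0) the trajectory matches the prediction.
print(np.allclose(traj, target + 0.01 * reasoning_embedding.mean()))  # True
```

The key property this illustrates is that the action head does not emit waypoints in one shot; it refines a sample over several steps, which is what lets a single model represent multimodal maneuvers (e.g. nudge left vs. brake) rather than averaging them.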


We've just released the @nvidia Physical AI Autonomous Vehicles Dataset! huggingface.co/datasets/nvidi…

Highlights:
- 1,727 hours of driving data collected by @nvidia
- Spanning 25 countries and 2,500+ cities
- Capturing diverse traffic, weather, and driving scenarios
- Includes camera, LiDAR, and radar data

This is just the beginning: features, tools, and challenges will continue to evolve. Stay tuned! @NVIDIADRIVE @NVIDIAAI
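For a sense of how that coverage distributes, the headline numbers imply roughly 69 hours of driving per country on average, and (since "2,500+" is a lower bound on cities) at most about 0.7 hours per city on average; a quick check:

```python
# Quick scale check on the dataset highlights from the post:
# 1,727 hours across 25 countries and 2,500+ cities.
hours = 1727
countries = 25
cities = 2500          # "2,500+", so the per-city average is an upper bound

print(round(hours / countries, 1))   # 69.1 hours per country on average
print(round(hours / cities, 2))      # at most 0.69 hours per city on average
```

The per-city figure is the more telling one: the dataset is spread thin geographically by design, favoring scenario diversity over deep coverage of any single location.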
