Harsh
999 posts

Harsh
@HSlifelearner
CV/DL @nvidia | Robograd @CMU_Robotics | Investing in startups
San Francisco, CA · Joined June 2009
2.6K Following · 923 Followers
Harsh retweeted
What if a world model could render not an imagined place, but the actual city?
We introduce Seoul World Model, the first world simulation model grounded in a real-world metropolis.
TL;DR: We made a world model that does retrieval-augmented generation (RAG) over millions of street-views.
proj: seoul-world-model.github.io
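A minimal sketch of what "a world model that does RAG over street-views" can mean: embed the camera's current view, retrieve the nearest street-view embeddings from a large index, and let those condition the renderer. Every name and shape below is an illustrative assumption, not the project's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend index: N street-view embeddings (e.g. from a vision encoder),
# L2-normalized so a dot product is cosine similarity.
N, D = 10_000, 64
index_embeddings = rng.standard_normal((N, D)).astype(np.float32)
index_embeddings /= np.linalg.norm(index_embeddings, axis=1, keepdims=True)

def retrieve(query_emb: np.ndarray, k: int = 8) -> np.ndarray:
    """Return indices of the k most similar street-views (cosine similarity)."""
    q = query_emb / np.linalg.norm(query_emb)
    scores = index_embeddings @ q
    return np.argsort(-scores)[:k]

# Embedding of the camera's current pose/view; the retrieved street-views
# would then condition the world model's generation step.
query = rng.standard_normal(D).astype(np.float32)
neighbors = retrieve(query, k=8)
print(neighbors.shape)  # (8,)
```

The point of grounding in retrieval is that the model renders the actual city rather than a hallucinated one: generation is anchored to real imagery near the queried location.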
Harsh retweeted

#NVIDIA just released a whole ecosystem for human(oid) motion and robot learning from human data. 🚀🦾
Data, as we all know, is the key to scaling AI models. To accelerate the field of Embodied AI, we have open-sourced a full stack of models and tools to capture, generate, retarget, and simulate human(oid) motion data at scale, along with a massive high-quality dataset and a standard human skeletal representation, SOMA, to make them all seamlessly communicate with each other.
The entire suite is available under the Apache 2.0 license.
1️⃣ SOMA: A universal interface to unify all parametric human body models (SOMA-shape, SMPL, MHR, etc.) into a standard skeletal representation, eliminating the need for custom adapters or model-specific retargeting.
🔗 lnkd.in/gsxhiJnn
2️⃣ Kimodo: High-fidelity, controllable text-to-motion generation for both humans and humanoid robots.
🔗 lnkd.in/gCc84XnX
3️⃣ GEM: A global human pose estimation method from in-the-wild videos, natively compatible with SOMA.
🔗 lnkd.in/g_QAvRjn
4️⃣ Bones-SEED: A massive dataset of 150k+ motions in SOMA format, including data already retargeted for the Unitree G1, created with our partners at Bones Studio.
🔗 lnkd.in/gfx-QD-w
🔗 lnkd.in/gyNdTwQx
5️⃣ SOMA Retargeter: A dedicated tool for seamless motion retargeting from the SOMA skeleton to the Unitree G1.
🔗 lnkd.in/gqz9Na-H
6️⃣ ProtoMotions: Our high-performance simulation framework for training digital human(oid)s via RL, now with native SOMA support.
🔗 lnkd.in/gmvMikMU
This is just the beginning, and we have much more in the pipeline. Excited to see what the community builds next!
#NVIDIA #GTC #GTC2026 #Robotics #EmbodiedAI #PhysicalAI @NVIDIAAI
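A hypothetical sketch of what a "universal skeletal interface" like SOMA enables: map model-specific joint names onto one shared skeleton so every downstream tool speaks a single format. The joint names and mapping here are toy assumptions for illustration, not SOMA's actual specification.

```python
import numpy as np

# The shared target skeleton (toy subset, illustrative only).
STANDARD_SKELETON = ["pelvis", "spine", "head", "l_hip", "r_hip"]

# One mapping per source body model: source joint name -> standard name.
# (Names below mimic SMPL-style conventions but are assumptions.)
SMPL_TO_STANDARD = {
    "Pelvis": "pelvis",
    "Spine1": "spine",
    "Head": "head",
    "L_Hip": "l_hip",
    "R_Hip": "r_hip",
}

def to_standard(joints: dict, mapping: dict) -> np.ndarray:
    """Reorder source joint positions into the standard skeleton's order."""
    by_std = {mapping[name]: pos for name, pos in joints.items()
              if name in mapping}
    return np.stack([by_std[name] for name in STANDARD_SKELETON])

# A dummy SMPL-style pose: one 3D position per joint.
smpl_pose = {name: np.zeros(3) for name in SMPL_TO_STANDARD}
standard_pose = to_standard(smpl_pose, SMPL_TO_STANDARD)
print(standard_pose.shape)  # (5, 3)
```

With one such mapping per body model, pose estimators, motion generators, and retargeters only ever need to consume the standard representation, which is exactly what lets the tools in the list above interoperate.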
Harsh retweeted

288 hours of high-quality, text-annotated human motion data is now available! 140k motion sequences!
Did you know that a large part of SONIC's training data is now open-sourced?
Check out the dataset here 👇🏻 from our friends at Bones Studio!
Full human + G1 retargeted motion!
Site🌐: bones.studio/datasets/seed
Data💿:huggingface.co/datasets/bones…
SONIC training code coming VERY VERY soon!
Harsh retweeted

Nemoclaw is now released 🤙
Sandboxes and policy controls to run OpenClaw safely. Leverages a local GPU if available.
It’s still in alpha, but check it out
github.com/NVIDIA/NemoClaw
Nader Khalil🍊@NaderLikeLadder
Lobster has entered the building 🦞💚🤙 @steipete @NVIDIAAIDev
Harsh retweeted

Welcome to NVIDIA GTC. Where a hockey stadium fills up to see Jensen speak. Three hours before keynote starts.
I'm here covering it as media for funds.rayliant.com/cnqq/, a Chinese investor media outlet.

Harsh retweeted

I (finally) put together a new LLM Architecture Gallery that collects the architecture figures all in one place!
sebastianraschka.com/llm-architectu…


Came to judge a robotics hackathon and saw people using SONIC @NVIDIARobotics to zero-shot a Video->GMR->policy pipeline. Every few months the demos keep getting easier.
Harsh retweeted

We collaborated with @NVIDIA to teach you about Reinforcement Learning and RL environments.
Learn:
• Why RL environments matter + how to build them
• When RL is better than SFT
• GRPO and RL best practices
• How verifiable rewards and RLVR work
Blog: unsloth.ai/blog/rl-enviro…
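A minimal sketch of the "verifiable rewards" idea (RLVR) mentioned above: instead of a learned reward model, score each completion by programmatically checking it against ground truth. The extraction rule and reward values below are illustrative assumptions, not the blog's actual recipe.

```python
import re

def verifiable_reward(completion: str, answer: str) -> float:
    """Reward 1.0 if the completion's final number matches the gold answer.

    Takes the last number in the completion as the model's final answer;
    real setups use stricter formats (e.g. a required \\boxed{} answer).
    """
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    if not numbers:
        return 0.0
    return 1.0 if numbers[-1] == answer else 0.0

print(verifiable_reward("Step 1: 3*4=12. So the answer is 12", "12"))  # 1.0
print(verifiable_reward("I think it's 13", "12"))                      # 0.0
```

Because the reward is a deterministic check rather than a learned model, it can't be reward-hacked in the usual sense, which is a big part of why RLVR works well for math and code tasks.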
