Pinned Tweet
Sitarama Chekuri
3.1K posts

Sitarama Chekuri
@meetsitaram
Senior Software Engineer, Bloomberg. Hobby robotics in the garage.
California, USA · Joined September 2009
137 Following · 140 Followers
Sitarama Chekuri retweeted

🚀 Unitree open-sources UnifoLM-WBT-Dataset — a high-quality real-world humanoid robot whole-body teleoperation (WBT) dataset for open environments.
🥳Publicly available since March 5, 2026, the dataset will continue to receive high-frequency rolling updates. It aims to establish the most comprehensive real-world humanoid robot dataset in terms of scenario coverage, task complexity, and manipulation diversity.
👉 Explore the dataset here: huggingface.co/collections/un…
Sitarama Chekuri retweeted

Just built a script that auto-calibrates the SO-100/SO-101 arm. No more manual joint alignment, much better reproducibility. Works from any starting position. For really weird ones, just run it twice. 🦾
github.com/umbra-robotics…
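The script itself lives in the linked repo; as a rough illustration of the idea (all names here are hypothetical, not that repo's API), auto-calibration typically drives each joint to its mechanical stops, records the raw encoder readings, and derives a homing offset that centers the usable range:

```python
def compute_homing_offset(raw_min, raw_max, resolution=4096):
    """Offset that maps the midpoint of the measured mechanical
    range to the encoder's centered tick (resolution / 2)."""
    mid = (raw_min + raw_max) / 2
    return int(resolution / 2 - mid)

# Example: a joint whose stops read 100 and 3100 on a 12-bit encoder
# gets an offset of 448, centering its travel around tick 2048.
offset = compute_homing_offset(100, 3100)
```

Because the offset is derived from the measured stops rather than from a hand-aligned pose, the result is the same from any starting position, which is what makes the procedure reproducible.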
Sitarama Chekuri retweeted

@theonlyAyo Didn’t realize there is a full build kit for this.

@theonlyAyo @Ryan_Resolution @nikmel2803 @tnkrdotai I would definitely recommend the SO-101 as a starting point instead. This is more of an intermediate-level build. I assembled one of these, and it is a bit difficult to get everything working.

Wrapping up the Open Duck build with @nikmel2803 and all I can say is, if you want to get into robotics this is a perfect starting point.
Project cuts across Mechanical, Electrical, Software and Sim2Real Learning. More to come soon @tnkrdotai
Ayo @theonlyAyo
Everything you need to build your first robot in one box. 👀
Sitarama Chekuri retweeted

In simulation, the Diffusion Policy was flawless. In deployment, it was failing most of its tasks.
I’ve talked to dozens of teams who have the same story: the Diffusion Policy was perfect in sim, but it’s failing 60% of its tasks in the real world.
The immediate reaction is always to assume the model isn't "smart" enough. We think we need more parameters, more compute, or just a massive pile of random new data.
But here’s the reality: you can’t fix what you can’t see.
Most robotics teams are flying blind. They treat their training data like a black box: a directory full of thousands of files they *hope* contain the answer. When the robot stutters or misses a grasp, they have no way of knowing whether the model actually saw that lighting condition during training, or whether the object's starting pose was a total blind spot in the distribution.
If you’re debugging a policy by just throwing more random trajectories at it, you’re not engineering, you’re gambling.
Visibility is the first step to high fidelity control.
We built the @Neuracore_AI Dataset Viewer because we were tired of scrubbing through raw, disconnected logs to find a single failure point. We wanted a way to turn those files into a map.
It brings vision, joint states, and actions into a single, high-fidelity timeline where everything is perfectly synchronized. It allows you to see exactly where the proprioception drifted from the visual truth, or identify distribution gaps as physical maps rather than abstract numbers.
We open-sourced the viewer because we think every researcher should have access to these tools. Is your team's time best spent building custom playback infrastructure from scratch, or would you rather spend it solving the actual physics of manipulation?
#Neuracore #Robotlearning
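The viewer itself is at the link above; to make the "distribution blind spot" point concrete, here is a minimal, hypothetical sketch (not Neuracore's API) of how empty regions in a training set's start-pose distribution can be surfaced with a coarse histogram:

```python
import numpy as np

def coverage_gaps(start_poses, bins=8, workspace=((0.0, 1.0), (0.0, 1.0))):
    """Bin episode start positions (N x 2 array of x, y) over the
    workspace; bins containing zero episodes are candidate blind spots."""
    hist, _, _ = np.histogram2d(
        start_poses[:, 0], start_poses[:, 1],
        bins=bins, range=workspace)
    return np.argwhere(hist == 0)  # (row, col) indices of empty cells
```

If a failure case's start pose falls inside one of the returned cells, the fix is targeted data collection in that region rather than a massive pile of random new trajectories.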
Sitarama Chekuri retweeted

Auto-calibration update for @LeRobotHF: v1 had a ~30% failure rate from Feetech servos stalling and restarting. Also got feedback that it was moving too fast and damaging servos.
Fixed both. It now works reliably on leader and follower arms, and when mounted on the XLeRobot.
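The "moving too fast" half of such a fix usually comes down to velocity-limiting the setpoints instead of jumping straight to the goal; a generic sketch (a hypothetical helper, not the actual LeRobot patch):

```python
def ramp_targets(current, target, max_step):
    """Break one large position jump into velocity-limited intermediate
    setpoints, so a servo is never slammed to the goal in a single
    command (one reported v1 failure mode)."""
    steps = []
    pos = current
    while abs(target - pos) > max_step:
        pos += max_step if target > pos else -max_step
        steps.append(pos)
    steps.append(target)
    return steps
```

Sending these intermediate targets at a fixed control rate caps joint velocity without touching servo firmware.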

@grok @NVIDIARobotics lol. So you don’t support supergrok subscribers. Thank you for the information.

@meetsitaram @NVIDIARobotics Ask Grok is currently available to Premium and Premium+ subscribers only. Subscribe to unlock this feature: x.com/i/premium_sign…

Imagine a robot that’s a jack of all trades but still an expert in its field. 🤖
With the open NVIDIA Isaac platform, robotics developers get the technology they need to build these generalist‑specialist robots and deploy them at scale.
These open models, libraries and frameworks can run in the cloud or at the edge on Jetson, and can be integrated into long‑running agents like OpenClaw to power continuous learning and real‑world autonomy. 🦞
Learn more ➡️ nvda.ws/4bxFSiH
#NVIDIAGTC
Sitarama Chekuri retweeted

Excited to introduce OmniClone, a robust teleoperation system for humanoid mobile manipulation. While systems like TWIST2 and SONIC paved the way, we put efforts into solving the critical stability and scaling gaps.
1/ 📊 Moving past "vibe-based" testing. We’ve built a comprehensive diagnostic benchmark to systematically evaluate whole-body teleoperation. No more trial-and-error—get the actionable insights needed for true policy optimization.
2/ 👤 Universal Human-to-Robot Mapping. Teleop often breaks when switching operators. OmniClone mitigates biases from hardware fluctuations and, crucially, diverse human body shapes, ensuring high-stability control regardless of the person in the suit.
3/ 🚀 System Optimizations for Whole-body Manipulation Policy. By optimizing for affordability and reproducibility, OmniClone provides the high-fidelity pipeline necessary to collect data and train humanoid whole-body policies at scale.
The model checkpoints and deployment code are now fully released. Feel free to play with it! 📦
📄 Paper: arxiv.org/abs/2603.14327
🌐 Project: omniclone.github.io
💻 Code: github.com/yixxuan-li/Omn…
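OmniClone's actual operator mapping is in the code linked above; one standard ingredient in making teleop robust to diverse body shapes is normalizing retargeted motion by limb length, sketched here with hypothetical names:

```python
import numpy as np

def retarget_scale(human_limb_lengths_m, robot_limb_lengths_m):
    """Per-limb scale factors: multiplying an operator's end-effector
    displacements by these maps them into the robot's workspace
    consistently, regardless of the operator's proportions."""
    return np.asarray(robot_limb_lengths_m) / np.asarray(human_limb_lengths_m)
```

With a fixed mapping instead, a long-armed operator would routinely command targets outside the robot's reachable workspace, which is one way operator switches break teleop.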
Sitarama Chekuri retweeted

Training humanoid robots?
You need motion data. Real, high-fidelity human motion data. And until now, there was no open dataset purpose-built for humanoid robotics.
For 5 years, we've been building the largest enterprise-grade human motion and behavior datasets for embodied AI. Our data powered breakthrough SONIC research.
Today, at GTC, with @NVIDIARobotics, we're opening a piece of it to the world.
BONES-SEED:
→ 142,200 motion capture animations
→ Up to 6 natural language descriptions per motion
→ Temporal segmentation of every action
→ Curated for humanoid robotics
→ In NVIDIA SOMA and Unitree G1 (MuJoCo) formats
From text to action. Now yours.
Go build → bones.studio/datasets/seed
#NVIDIAGTC
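The tweet doesn't specify the on-disk layout; below is a hypothetical loader for a JSON-style index pairing each clip with its captions and temporal segments (all field names assumed — check the dataset card at the link for the real schema):

```python
import json
from dataclasses import dataclass

@dataclass
class MotionClip:
    name: str
    descriptions: list  # up to 6 natural-language captions per motion
    segments: list      # (start_frame, end_frame, action_label) tuples

def load_index(path):
    """Parse a hypothetical JSON index into MotionClip records."""
    with open(path) as f:
        raw = json.load(f)
    return [
        MotionClip(c["name"], c["descriptions"],
                   [tuple(s) for s in c["segments"]])
        for c in raw["clips"]
    ]
```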
Sitarama Chekuri retweeted

🚀 Fast SAM 3D Body — accelerating SAM 3D Body for real-time human mesh recovery!
⚡ 10.25× faster 3D body estimation
⚡ 10,426× faster MHR→SMPL conversion
⏱️ ~65ms end-to-end
🤖 Deployable on humanoid robots
#Robotics
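As a sanity check on those numbers (assuming the 10.25× figure applies to the full pipeline): a quoted speedup lets you back out the implied pre-optimization latency, and ~65 ms end-to-end corresponds to roughly 15 Hz, which is what makes on-robot deployment plausible:

```python
def implied_baseline_ms(accelerated_ms, speedup):
    """Pre-optimization latency implied by a quoted speedup factor."""
    return accelerated_ms * speedup

def hz(latency_ms):
    """Throughput in frames per second for a given per-frame latency."""
    return 1000.0 / latency_ms

# 65 ms at 10.25x implies a ~666 ms baseline (~1.5 Hz) before the work.
```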
Sitarama Chekuri retweeted

it's time to drop three new #opensource robotic hands! this time with tactile sensors! Tweak it, 3D print it, and use them in your robotics and physical AI research! Here are some wild examples ↓↓↓

Physical AI hackathon starting in less than 12 hours. See you all at the robotics gym, Frontier Tower. @GetSoloTech @UFBots @OpenDroids

