The Humanoid Hub

6.9K posts


@TheHumanoidHub

Humanoid Robots: Tech, Business, and Social Dynamics. Click the “𝕊𝕦𝕓𝕤𝕔𝕣𝕚𝕓𝕖” button on the profile to support. Run by @dev_and_

Joined July 2023
897 Following · 105.2K Followers
Pinned Tweet
The Humanoid Hub @TheHumanoidHub
New Episode: @elvisnavah is an ML expert and Co-Founder and CTO of Mimic Robotics, an ETH Zurich spin-off creating advanced humanoid hands and AI for dexterous manipulation tasks. We dive into the Audi rubber-strip demo, video model backbones, Elvis’s outlook on AGI in the physical world, and more.

0:58 Intro
1:38 Elvis's Personal Journey
3:25 ETH Zurich Legacy
5:23 Why Five-Finger Hands
7:13 Hands vs Full Humanoids
8:27 Tendon-Driven Advantages
10:41 Audi Rubber Strip Demo
13:30 Video Model for Policy Backbone
18:44 10x Data Efficiency
21:58 Long-Term Vision for Mimic
24:08 Physical AGI Timeline
12 replies · 20 reposts · 137 likes · 24.8K views
The Humanoid Hub @TheHumanoidHub
@cixliv Wow they brought it up!! I gotta check out the booth again
1 reply · 0 reposts · 2 likes · 349 views
CIX 🦾 @cixliv
Damn the H2 is a big boy. More videos below of him fighting at GTC
6 replies · 9 reposts · 52 likes · 12K views
The Humanoid Hub @TheHumanoidHub
At the GTC 2026 Skild booth, @shikharbahl & @kmarinou_ demo Skild Brain operating autonomously from pixels to robot actions, doing busbar assembly for the NVIDIA GB300 compute tray. Skild uses the same omni-bodied base model for humanoids, quadrupeds, and a variety of industrial robots.
3 replies · 12 reposts · 65 likes · 6.6K views
Barrak @BarrakAli
@TheHumanoidHub Getting photobombed by Jensen Huang at GTC is honestly a better story than a regular selfie would have been.
1 reply · 0 reposts · 1 like · 67 views
The Humanoid Hub @TheHumanoidHub
Managed to snag a selfie with Jensen! Okay, more like a photobomb.
15 replies · 0 reposts · 73 likes · 5.4K views
The Humanoid Hub @TheHumanoidHub
Ashok Elluswamy, Tesla's AI lead, during a GTC discussion, highlighting the fundamental similarity in AI approaches for self-driving cars and humanoid robots:
- Hierarchical decision making is useful, but it has to be done as part of the same decision-making process as lower-level controls.
- We haven't seen the long tail of humanoid robotics, but Tesla has seen the long tail of self-driving, where high- and low-level decisions have to be jointly made at a pretty high framerate.
- Optimus's architecture is designed in a similar way: there's a hierarchy, but it all runs as part of the same model, and the latencies involved in decision making are well modeled.
- This architecture will scale quite well with humanoid robots.
- The distinction between decision-making levels exists only in the developer's mind. For the model, it's a continuous space of decision making, with dials available to make decisions finer or coarser.
- Humanoids have more sensor modalities and higher degrees of freedom than self-driving cars, but the fundamental constraint remains the same: you need to make real-time decisions. There's obviously a hierarchy to these control signal outputs, but the lowest frequency cannot be too low, because the safety of the robot cannot depend on things running at very low frequencies.
120 replies · 381 reposts · 1.6K likes · 471.6K views
The Humanoid Hub @TheHumanoidHub
It's a sign of things to come: a world where companion robots are relatable and engaging and can evoke a genuine emotional response. This was easily the most adorable robot I’ve ever seen in person.
The Humanoid Hub @TheHumanoidHub

How Disney Research brought the animated character Olaf to life, achieving an accurate, stylized gait alongside robust balance, low noise, and thermal safety. This sets a new standard for animated-to-physical robotic characters.

1 reply · 3 reposts · 32 likes · 3.5K views
The Humanoid Hub @TheHumanoidHub
AGIBOT World Challenge @ ICRA 2026
$530,000 global robotics competition. Registration open now.

The Challenges
- Reasoning to Action Challenge: Online simulation to real-robot testing on AGIBOT G2. Focus: robust physical interaction in complex environments (sim-to-real emphasis).
- World Model Challenge: Fully online competition focused on predicting future visual states from initial observation + action sequence.

Developer Resources
- AGIBOT World open dataset
- ACoT-VLA base model
- Genie Sim 3.0

Who Should Apply
University labs, research groups, tech companies, and startups working on embodied AI, world models, sim-to-real transfer, or manipulation/locomotion intelligence.

Key Dates
• Submissions Close: April 20, 2026
• Finals: June 1, 2026, at ICRA (Vienna, Austria)

@AGIBOTofficial #AGIBOT #AGIBOTWorldChallenge #ICRA2026
4 replies · 15 reposts · 63 likes · 7.3K views
The Humanoid Hub @TheHumanoidHub
GR00T is moving away from VLM-based backbones in favor of integrated world models. Jensen Huang teased GR00T N2 during his keynote: NVIDIA's next-gen foundation model built on DreamZero research. Utilizing a new world-action model architecture, it succeeds at novel tasks in unfamiliar environments over 2x more often than leading VLAs. Currently ranked #1 on MolmoSpaces and RoboArena, GR00T N2 is slated for release by year-end.
The Humanoid Hub @TheHumanoidHub

Not the flashiest demos, but what’s under the hood represents a foundational shift for general-purpose robotics. World models are the next-gen foundation of Physical AI, not the VLM backbones found in typical VLAs.

DreamZero is a 14B-parameter World Action Model (WAM) by NVIDIA that treats robotics as a joint video-and-action prediction task. Unlike traditional Vision-Language-Action (VLA) models that map images directly to motor commands, DreamZero leverages a pretrained video diffusion backbone to predict future world states and actions simultaneously.
- Achieves 2× better zero-shot generalization to unseen tasks and environments compared to state-of-the-art VLAs.
- Learns effectively from heterogeneous, non-repetitive data (500 hours), breaking the need for thousands of repeated demonstrations.
- Adapts to new robot embodiments with just 30 minutes of play data.
- Enables 7Hz closed-loop control via system optimizations and "DreamZero-Flash," making high-capacity diffusion models viable for real-time use.

7 replies · 25 reposts · 222 likes · 22K views
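The VLA-versus-WAM distinction above can be made concrete with a toy: a VLA exposes only an observation-to-action mapping, while a world-action model also emits its own forecast of the next observation, which can be scored against reality. The 1-D dynamics and all function names here are invented for illustration; none of this is NVIDIA's DreamZero code.

```python
# Toy contrast between the two designs. Both policies drive a 1-D state
# toward zero; only the WAM additionally predicts the consequence of its
# own action, so it can be trained (and evaluated) on world prediction
# as well as on the action itself.

def vla_policy(obs: float) -> float:
    """VLA-style: direct mapping, observation in, motor command out."""
    return -0.5 * obs

def wam_policy(obs: float) -> tuple[float, float]:
    """WAM-style: jointly output (action, predicted next observation)."""
    action = -0.5 * obs
    predicted_next = obs + action  # the model's internal forecast
    return action, predicted_next

def world_step(obs: float, action: float) -> float:
    """Ground-truth dynamics the WAM's forecast is compared against."""
    return obs + action

obs = 4.0
action, predicted = wam_policy(obs)
actual = world_step(obs, action)
# A WAM can be supervised on both outputs; a VLA only exposes the action.
prediction_error = abs(predicted - actual)
```

In this toy the model's forecast matches the true dynamics exactly, so the prediction error is zero; in practice the video-prediction loss on that gap is exactly the extra training signal a WAM gets over a VLA.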
The Humanoid Hub
The Humanoid Hub@TheHumanoidHub·
Jensen just said NVIDIA’s $1T projection for 2025-27 covers only Blackwell and Rubin to keep it consistent with the previous projection. He mentioned he could have included Groq in that number: "so if I would've included that, theoretically, not actually, but theoretically, that one trillion could have been $1.2 trillion." He stated that the $1 trillion figure does not include "various CPUs standalone. It does not include Groq. It does not include storage, [or] BlueField." $NVDA Reporting from Jensen's press conference at GTC.
The Humanoid Hub @TheHumanoidHub

Nvidia targets data center revenue of $1+ trillion for 2025-2027. That’s already quite ridiculous, with the physical AI world only in its zeroth innings. $NVDA

4 replies · 14 reposts · 86 likes · 25.1K views
The Humanoid Hub @TheHumanoidHub
Jensen says he can't think of a company building robots that isn't working with Nvidia.
15 replies · 17 reposts · 160 likes · 32.9K views
The Humanoid Hub @TheHumanoidHub
We are the media now!
Ilir Aliu @IlirAliu_

I’ll be at GTC this week, here with my friend Devang (@TheHumanoidHub), looking forward to seeing great sessions on robotics. Waiting for Jensen right now. One session I’m especially curious about today: ‘From Concept to Production: Humanoid Robotics at Scale’. nvidia.com/gtc/session-ca… With @pathak2206 (Skild AI), @chelseabfinn (Stanford / Physical Intelligence), Pras Velagapudi (Agility Robotics), and Amit Goel from @Nvidia. Humanoid robotics needs to move from research to production. Let’s see what the people actually building it have to say….

4 replies · 3 reposts · 47 likes · 7K views
The Humanoid Hub @TheHumanoidHub
Algos are not sure how to feel about this
2 replies · 3 reposts · 34 likes · 4.8K views
The Humanoid Hub @TheHumanoidHub
Nvidia targets data center revenue of $1+ trillion for 2025-2027. That’s already quite ridiculous, with the physical AI world only in its zeroth innings. $NVDA
7 replies · 12 reposts · 105 likes · 17.3K views
The Humanoid Hub @TheHumanoidHub
Jensen: “Nvidia is the first vertically integrated but horizontally open company.” This strategy positions Nvidia as the backbone of robotics without stifling innovation. Vertical integration ensures cutting-edge performance on each layer of the AI stack. Horizontal openness builds a thriving open-source community around tools like Isaac and Omniverse - ultimately driving faster adoption in areas like humanoid robots.
13 replies · 12 reposts · 114 likes · 7.1K views
The Humanoid Hub @TheHumanoidHub
Jensen is cementing the idea that Nvidia-powered AI is now the backbone of every major industry. He said robotics alone will be a $50 trillion industry.
16 replies · 19 reposts · 97 likes · 7.9K views