Johnny Núñez

6.5K posts

Johnny Núñez

@johnnync13

Robotics and AI at @NVIDIA

Barcelona · Joined August 2015
1.7K Following · 560 Followers
Pinned Tweet
Johnny Núñez
Johnny Núñez@johnnync13·
@sama @OpenAI My 93-year-old grandfather discovers ChatGPT Voice Mode for the first time, and the results are nothing short of amazing. He loved it, and it made him so happy. Affective computing and AI like this will transform life for older adults. #MerryChristmas
0 replies · 0 reposts · 9 likes · 1.3K views
Zeno
Zeno@ZenoInMotion·
We're launching the European Student Robotics Association (ESRA, @esra_robotics): 13 universities, 8 countries, 2.5K+ members! We're young, driven, and together we're tackling the European fragmentation problem head on.
Who are we?
- ETH Robotics Club (Zürich) @ethroboticsclub
- RoboTUM (Munich)
- EPFL AI Team (Lausanne) @epflaiteam
- Unaite (Paris)
- Team Polar (Eindhoven)
- TU Wien Robotics Club (Vienna)
- Robotics Collective (Aachen) @robocollectiv
- KTH AI Society (Stockholm) @KTHAISociety
- Delft Robotics Student Association
- KN CybAiR (Poznań)
- AEA Polimi (Milan)
What do we do?
→ Pan-European robotics competitions
→ Cross-border technical project collaborations
→ Coordinated access to funding opportunities across Europe
And this is just the beginning! Thanks @andreasklinger, @lukas_m_ziegler and @IlirAliu_ for helping us spread the word :)
21 replies · 58 reposts · 542 likes · 29.4K views
Johnny Núñez retweeted
The Linux Foundation
The Linux Foundation@linuxfoundation·
Announcing the general availability of #Newton 1.0, the open-source, extensible physics engine for robot learning.
Key features:
• Stable articulated mechanism simulation
• Hydroelastic contact modeling
• Deformable body simulation (cables, cloth, rubber)
• Accelerated robot learning at scale
Newton is a Linux Foundation project developed by Disney Research, Walt Disney Imagineering, @GoogleDeepMind and @nvidia
Learn more: bit.ly/4cMm39x #NVIDIAGTC
5 replies · 51 reposts · 272 likes · 18.4K views
Johnny Núñez retweeted
The Humanoid Hub
The Humanoid Hub@TheHumanoidHub·
Amit Goel is the Head of Robotics and Edge Computing Ecosystem at @NVIDIARobotics. We sat down for a chat at GTC 2026 to discuss the future of edge computing and the model ecosystem for humanoids.
0:21 Favorite part of Jensen's keynote
1:19 Edge computing inflection points
2:47 Unique humanoid challenges on the edge
4:06 Future hardware priorities for Jetson
5:50 Humanoids want cheaper Jetson Thor
7:32 GR00T model adoption
9:17 Video prediction integrated into GR00T
10:16 NVIDIA competing with model companies?
12:07 Imminent data-compute imbalance
3 replies · 12 reposts · 68 likes · 19.5K views
Johnny Núñez retweeted
The Humanoid Hub
The Humanoid Hub@TheHumanoidHub·
GR00T is moving away from VLM-based backbones in favor of integrated world models. Jensen Huang teased GR00T N2 during his keynote: NVIDIA's next-gen foundation model built on DreamZero research. Using a new world-action model architecture, it succeeds at novel tasks in unfamiliar environments over 2x more often than leading VLAs. Currently ranked #1 on MolmoSpaces and RoboArena, GR00T N2 is slated for release by year-end.
The Humanoid Hub@TheHumanoidHub

Not the flashiest demos, but what’s under the hood represents a foundational shift for general-purpose robotics. World models are the next-gen foundation of Physical AI, not the VLM backbones found in typical VLAs.
DreamZero is a 14B-parameter World Action Model (WAM) by NVIDIA that treats robotics as a joint video-and-action prediction task. Unlike traditional Vision-Language-Action (VLA) models that map images directly to motor commands, DreamZero leverages a pretrained video diffusion backbone to predict future world states and actions simultaneously.
- Achieves 2× better zero-shot generalization to unseen tasks and environments compared to state-of-the-art VLAs.
- Learns effectively from heterogeneous, non-repetitive data (500 hours), breaking the need for thousands of repeated demonstrations.
- Adapts to new robot embodiments with just 30 minutes of play data.
- Enables 7 Hz closed-loop control via system optimizations and "DreamZero-Flash," making high-capacity diffusion models viable for real-time use.

7 replies · 27 reposts · 239 likes · 25.1K views
Johnny Núñez retweeted
NVIDIA Robotics
NVIDIA Robotics@NVIDIARobotics·
Newton 1.0 is now generally available. 🙌 Take robot learning to the next level with:
🤖 Stable Articulated & Complex Mechanism Simulation – accurate, reliable machine modeling.
🖐️ High-Fidelity Hydroelastic Contact Modeling – realistic soft contact and touch-based interactions.
🧵 Deformable Body Simulation – simulate cables, cloth, rubber, and other elastic materials with VBD.
⚡ Accelerated Robot Learning at Scale – seamless integration with the open simulation and learning frameworks NVIDIA Isaac Sim and Isaac Lab for scalable workflows.
Learn how to integrate this open-source physics engine into your workflow: nvda.ws/3NGTzUo #NVIDIAGTC
25 replies · 160 reposts · 1.1K likes · 79.7K views
Johnny Núñez retweeted
Kaiming Cheng
Kaiming Cheng@KaimingCheng·
We just published the KernelAgent blog on the PyTorch site 🚀
🧠 Core approach: KernelAgent integrates GPU hardware performance signals into a closed-loop multi-agent workflow to guide Triton kernel optimization.
📈 Key results:
- 2.02× speedup over the correctness-focused KernelAgent
- 1.56× faster than out-of-the-box torch.compile
- 88.7% hardware roofline efficiency on NVIDIA H100
🌐 Codebase: Our entire stack is fully open-sourced: github.com/meta-pytorch/K…, along with the optimization artifacts: github.com/kaiming-cheng/…
We hope this work helps advance practical, scalable kernel optimization in the PyTorch ecosystem.
🙏 Acknowledgements: This work was developed at the Meta Superintelligence Labs – PyTorch team with Laura Wang, Jack Khuu, Mark Saroufim, Wenyuan Chi, Jiannan Wang, and Joe Isaacson. We thank Paulius Micikevicius, Yang Wang, Lu Fang, Jie Liu, Zacharias Fisches, Alec Hammond, Richard Li, Chris Gottbrath, Davide Italiano, Joe Spisak, and John Myles White for helpful discussions and feedback.
⬇️ See the blog for more details
PyTorch@PyTorch

Building on the previous correctness-focused pipeline, KernelAgent can now integrate GPU hardware-performance signals into a closed-loop multi-agent workflow to guide the optimization for Triton Kernels. Learn more: hubs.la/Q045Wsqq0 @KaimingCheng @marksaroufim

0 replies · 5 reposts · 69 likes · 7.1K views
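The closed-loop workflow the KernelAgent post describes (generate candidate kernels, gate on correctness, feed performance signals back into selection) can be sketched in plain Python. This is an illustrative toy, not KernelAgent's actual pipeline: the "kernels" here are ordinary Python functions, the profiler is a wall-clock timer standing in for GPU hardware counters, and names like `closed_loop_select` are invented for the sketch.

```python
import time

def measure_runtime(fn, data, warmup=3, iters=20):
    """Crude wall-clock timing, standing in for real GPU profiler signals."""
    for _ in range(warmup):
        fn(data)
    start = time.perf_counter()
    for _ in range(iters):
        fn(data)
    return (time.perf_counter() - start) / iters

# Two hypothetical "kernel" candidates computing the same reduction.
def kernel_naive(xs):
    total = 0
    for x in xs:
        total += x * x
    return total

def kernel_builtin(xs):
    return sum(x * x for x in xs)

def closed_loop_select(candidates, data, rounds=2):
    """Gate each candidate on correctness first, then feed timing signals
    back into repeated selection rounds, keeping the fastest variant."""
    reference = candidates[0](data)
    best = candidates[0]
    for _ in range(rounds):
        scored = []
        for fn in candidates:
            assert fn(data) == reference  # correctness check before tuning
            scored.append((measure_runtime(fn, data), fn))
        best = min(scored, key=lambda t: t[0])[1]
    return best

data = list(range(1_000))
best = closed_loop_select([kernel_naive, kernel_builtin], data)
```

The real system replaces the timer with roofline and hardware-counter feedback and the candidates with agent-generated Triton kernels, but the loop structure is the same.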
Johnny Núñez retweeted
NVIDIA Robotics
NVIDIA Robotics@NVIDIARobotics·
Curious about the Newton Physics Engine? Join Moritz Baecher from Disney Research, Walt Disney Imagineering, alongside key industry experts to learn how Newton and Disney Research’s Kamino solver enable physics-based reinforcement learning for robots. 🦿
📆 Thursday, March 19 | 11:00 a.m. PT
Add to schedule ➡️ nvda.ws/3OUXCN9
2 replies · 19 reposts · 146 likes · 6.1K views
Johnny Núñez
Johnny Núñez@johnnync13·
See you there!
Seeed Studio@seeedstudio

🚀 The 2026 #EmbodiedAI #Hackathon is live! No access to a #ReachyMini? Here it is! 👉 luma.com/zsaa3r3d
Join us right after #NVIDIA #GTC at @seeedstudio’s US office in Santa Clara, co-organized with @NVIDIARobotics, @huggingface, and @pollenrobotics. Build up your personal Reachy Mini to be an AI agent, compete for $6K+ in prizes, and get hands-on experience with NVIDIA open model deployment and validation (Mar 21–22; registration deadline Mar 7).
👉 Register: docs.google.com/forms/d/e/1FAI…

0 replies · 0 reposts · 3 likes · 79 views
Johnny Núñez retweeted
Jim Fan
Jim Fan@DrJimFan·
We trained a humanoid with 22-DoF dexterous hands to assemble model cars, operate syringes, sort poker cards, fold/roll shirts, all learned primarily from 20,000+ hours of egocentric human video with no robot in the loop. Humans are the most scalable embodiment on the planet.
We discovered a near-perfect log-linear scaling law (R² = 0.998) between human video volume and action prediction loss, and this loss directly predicts real-robot success rate. Humanoid robots will be the end game, because they are the practical form factor with minimal embodiment gap from humans.
Call it the Bitter Lesson of robot hardware: the kinematic similarity lets us simply retarget human finger motion onto dexterous robot hand joints. No learned embeddings, no fancy transfer algorithms needed. Relative wrist motion + retargeted 22-DoF finger actions serve as a unified action space that carries through from pre-training to robot execution.
Our recipe is called "EgoScale":
- Pre-train GR00T N1.5 on 20K hours of human video, mid-train with only 4 hours (!) of robot play data with Sharpa hands. 54% gains over training from scratch across 5 highly dexterous tasks.
- Most surprising result: a *single* teleop demo is sufficient to learn a never-before-seen task. Our recipe enables extreme data efficiency.
- Although we pre-train in 22-DoF hand joint space, the policy transfers to a Unitree G1 with 7-DoF tri-finger hands. 30%+ gains over training on G1 data alone.
The scalable path to robot dexterity was never more robots. It was always us. Deep dives in thread:
145 replies · 282 reposts · 1.7K likes · 268.6K views
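A log-linear scaling law of the kind the EgoScale post reports (loss falling linearly in the log of data volume) is just an ordinary least-squares fit in log space. The numbers below are synthetic, invented only to illustrate what fitting such a law and computing R² means; they are not the paper's data.

```python
import math

# Hypothetical (hours of video, action-prediction loss) pairs that
# follow a roughly log-linear trend. Illustrative data, NOT from the paper.
hours = [100, 500, 2000, 8000, 20000]
loss  = [0.90, 0.76, 0.64, 0.52, 0.44]

xs = [math.log10(h) for h in hours]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(loss) / n

# Ordinary least squares: loss ≈ intercept + slope * log10(hours)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, loss)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Coefficient of determination R² measures how well the line explains the data.
pred = [intercept + slope * x for x in xs]
ss_res = sum((y - p) ** 2 for y, p in zip(loss, pred))
ss_tot = sum((y - mean_y) ** 2 for y in loss)
r2 = 1 - ss_res / ss_tot
print(f"slope={slope:.3f}  R^2={r2:.4f}")
```

A negative slope with R² near 1 is what "near-perfect log-linear scaling" cashes out to: each order of magnitude of video shaves a roughly constant amount off the loss.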
Johnny Núñez retweeted
Jim Fan
Jim Fan@DrJimFan·
Announcing DreamDojo: our open-source, interactive world model that takes robot motor controls and generates the future in pixels. No engine, no meshes, no hand-authored dynamics. It's Simulation 2.0. Time for robotics to take the bitter lesson pill.
Real-world robot learning is bottlenecked by time, wear, safety, and resets. If we want Physical AI to move at pretraining speed, we need a simulator that adapts to pretraining scale with as little human engineering as possible.
Our key insights: (1) human egocentric videos are a scalable source of first-person physics; (2) latent actions make them "robot-readable" across different hardware; (3) real-time inference unlocks live teleop, policy eval, and test-time planning *inside* a dream.
We pre-train on 44K hours of human videos: cheap, abundant, and collected with zero robot-in-the-loop. Humans have already explored the combinatorics: we grasp, pour, fold, assemble, fail, retry—across cluttered scenes, shifting viewpoints, changing light, and hour-long task chains—at a scale no robot fleet could match.
The missing piece: these videos have no action labels. So we introduce latent actions: a unified representation inferred directly from videos that captures "what changed between world states" without knowing the underlying hardware. This lets us train on any first-person video as if it came with motor commands attached. As a result, DreamDojo generalizes zero-shot to objects and environments never seen in any robot training set, because humans saw them first.
Next, we post-train onto each robot to fit its specific hardware. Think of it as separating "how the world looks and behaves" from "how this particular robot actuates." The base model follows the general physical rules, then "snaps onto" the robot's unique mechanics. It's kind of like loading a new character and scene assets into Unreal Engine, but done through gradient descent, and it generalizes far beyond the post-training dataset.
A world simulator is only useful if it runs fast enough to close the loop. We train a real-time version of DreamDojo that runs at 10 FPS, stable for over a minute of continuous rollout. This unlocks exciting possibilities:
- Live teleoperation *inside* a dream. Connect a VR controller, stream actions into DreamDojo, and teleop a virtual robot in real time. We demo this on a Unitree G1 with a PICO headset and one RTX 5090.
- Policy evaluation. You can benchmark a policy checkpoint in DreamDojo instead of the real world. The simulated success rates strongly correlate with real-world results - accurate enough to rank checkpoints without burning a single motor.
- Model-based planning. Sample multiple action proposals → simulate them all in parallel → pick the best future. Gains +17% real-world success out of the box on a fruit packing task.
We open-source everything!! Weights, code, post-training dataset, eval set, and whitepaper with tons of details to reproduce. DreamDojo is based on NVIDIA Cosmos, which is open-weight too. 2026 is the year of World Models for physical AI. We want you to build with us. Happy scaling! Links in thread:
80 replies · 176 reposts · 1.2K likes · 201.9K views
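The "sample proposals → simulate them all → pick the best future" planning recipe in the DreamDojo thread reduces to a few lines once you have any world model to roll out in. The sketch below substitutes a trivial 1-D toy for the learned model; `world_model`, `reward`, and `plan` are hypothetical names for illustration, not the released API.

```python
import random

def world_model(state, actions):
    """Toy stand-in for a learned world model: roll an action sequence
    forward from a state and return the imagined final state."""
    x = state
    for a in actions:
        x = x + a  # pretend dynamics: position shifts by the action
    return x

def reward(final_state, goal):
    """Score an imagined future: closer to the goal is better."""
    return -abs(final_state - goal)

def plan(state, goal, n_proposals=64, horizon=5, rng=None):
    """Sample action proposals, simulate each in the world model,
    and return the sequence whose imagined future scores best."""
    rng = rng or random.Random(0)
    proposals = [
        [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        for _ in range(n_proposals)
    ]
    return max(proposals, key=lambda acts: reward(world_model(state, acts), goal))

best = plan(state=0.0, goal=3.0)
print(world_model(0.0, best))  # imagined final state under the best proposal
```

In the real system the rollouts are pixel-space video predictions evaluated in parallel on a GPU, but the selection logic is this same sample-simulate-rank loop.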
Johnny Núñez retweeted
NVIDIA HPC Developer
NVIDIA HPC Developer@NVIDIAHPCDev·
🎉 We’re thrilled to announce the release of CuPy v14.0.0 — packed with new features, major performance boosts, important bug fixes, and more.
✅ Full compliance and support for NumPy v2 and the Python Array API standard
✅ Support for CUDA pip wheels – a much better installation footprint and interoperability with CUDA Python and PyTorch
✅ Support for structured dtypes & ML dtypes (starting with bfloat16)
✅ Better coverage of new NumPy & SciPy APIs
✅ Support for new cuFFT JIT callbacks – works on both Linux and Windows!
✅ New %gpu_timeit magic for Jupyter notebook users to properly benchmark GPU code
This release marks CuPy's 10th anniversary, highlighting our close collaboration with @PreferredNet and the wider open-source community. @CuPy_Team 🥳
Accelerate your Python program on our GPUs. 👇
CuPy@CuPy_Team

Announcing CuPy v14! 🚀
🔹 NumPy v2 semantics
🔹 CUDA pip wheels support
🔹 bfloat16 & structured dtypes
🔹 Enhanced NumPy/SciPy API coverage
Read more on our blog for the full details! 👇

2 replies · 8 reposts · 83 likes · 7.1K views
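Because CuPy v14 tracks NumPy v2 semantics and the Python Array API standard, array code can be written once against a neutral alias and pointed at either library. A minimal sketch: it runs on NumPy as-is, and on a CUDA machine with CuPy installed (an assumption, not shown here) swapping the import for `import cupy as xp` is the only change.

```python
import numpy as xp  # on a GPU box: `import cupy as xp` and nothing else changes

def normalize_rows(a):
    """Scale each row of a 2-D array to unit L2 norm, written
    library-agnostically via the `xp` alias."""
    norms = xp.linalg.norm(a, axis=1, keepdims=True)
    return a / norms

a = xp.asarray([[3.0, 4.0], [6.0, 8.0]])
out = normalize_rows(a)
print(out)
```

This alias pattern is a common way to keep one code path for CPU (NumPy) and GPU (CuPy) execution; the NumPy v2/Array API alignment in v14 is what makes the two surfaces line up.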
Johnny Núñez
Johnny Núñez@johnnync13·
@_guillecasaus Yes, and we are optimizing it at the CUDA level so it gets even faster
0 replies · 0 reposts · 0 likes · 53 views
Guillermo Casaus
Guillermo Casaus@_guillecasaus·
🚨 NVIDIA just released PersonaPlex-7B. A voice model that can listen to you while it responds, holding real-time conversations like a human. It's free and 100% open-source 👇
27 replies · 282 reposts · 2K likes · 125.2K views
Johnny Núñez retweeted
Sirui Xu
Sirui Xu@xu_sirui·
Humanoids need autonomy + versatility + generalization to be truly useful. Loco-manipulation makes that hard. InterPrior is our step toward bridging the gap — one policy, no reference. Could be promising for immersive games 🎮 and real robots 🤖 🔗 sirui-xu.github.io/InterPrior 📜 arxiv.org/abs/2602.06035 [1/9]
5 replies · 43 reposts · 219 likes · 35.1K views
Johnny Núñez retweeted
Lukas Ziegler
Lukas Ziegler@lukas_m_ziegler·
Is Zurich really the Silicon Valley of robotics? Well, if you look at its robotics ecosystem, it seems true!
Zurich is a great place to start a robotics company because everything you need is close and well connected. It has top engineering talent, mainly from ETH Zürich, one of the best robotics and AI universities in the world. Many successful robotics startups come directly from ETH research. The presence of Disney Research and the RAI Institute also helps keep the city on the frontier of physical AI.
The city also has strong industry and customers nearby. Switzerland is home to global companies in robotics, manufacturing, and automation, such as ABB Robotics, which often work with startups as partners or early customers.
Zurich offers good access to funding, especially for deep tech and robotics. Investors here are used to long development cycles and complex hardware products. 💰
Finally, Zurich is known for stability and quality of life. It is safe, well organized, and centrally located in Europe, making it easier to attract international talent and scale globally.
Listing some of the Zurich-based companies here:
→ @anybotics - builds autonomous four-legged robots for industrial inspection and maintenance, and has raised $150M+ (€127M+) in funding.
→ @GravisRobotics - develops AI-powered autonomy systems that turn heavy construction machines into self-driving robots, and has raised ~$27M in total funding.
→ Verity - builds autonomous indoor drones for warehouse inventory tracking and has raised ~$60M+ in funding.
→ @mimicrobotics - develops AI-powered robotic picking systems that enable fast, reliable item handling for warehouse and logistics operations.
→ @BotaSystems - develops high-precision force-torque sensors and sensing software that give robots real-time tactile awareness for manipulation.
→ @duatic_ag - develops advanced human-scale robotic arms and mobile manipulation systems.
→ @rivr_tech - builds autonomous Physical AI-powered wheeled-legged robots for last-mile doorstep delivery and has raised about $22M in seed funding.
→ @FlexionRobotics - builds the autonomous AI "brain" and software stack for humanoid robots, and has raised about $50M in funding.
→ FORGIS - develops AI-powered edge software that makes industrial machines and factory systems autonomous and intelligent, and has raised ~€3.8M (~$4.5M) in pre-seed funding.
→ Voliro - builds autonomous aerial robots for industrial inspection and maintenance and has raised ~$23M in Series A funding.
→ @7Srobotics - built AI-powered 3D vision navigation tech for autonomous mobile robots and was acquired by ABB Robotics in January 2024.
→ Tethys Robotics - builds autonomous underwater robots (hybrid ROV/AUVs) for automated inspection and subsea missions, and has raised a €3.5M pre-seed round to scale its technology.
→ Embotech - develops Level 4 autonomous driving systems for industrial logistics and has raised about CHF 23.5M (~$27M) in confirmed Series B funding.
→ Ascento - builds autonomous outdoor security patrolling robots with AI for industrial sites and has raised about $4.3M in pre-seed funding.
→ @Wingtra - makes high-precision VTOL aerial mapping and surveying drones and has raised $20M–$25M+ in Series B/B1 funding.
→ @auterion - builds a common AI-enabled operating system and software stack to power and coordinate autonomous drone and robotic fleets, and has raised a $130M Series B to scale its platform.
→ Loki Robotics - builds autonomous robots that clean complex real-world environments like commercial restrooms and shared spaces, and has raised about $1.6M in pre-seed funding.
→ Nautica Technologies - develops autonomous underwater robots that clean and inspect ship hulls to improve fuel efficiency and sustainability, and has raised about $4M in seed funding.
I'm aware that some companies might be missing, and the map is not complete!
~~ ♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com
46 replies · 234 reposts · 1.5K likes · 82.1K views
Johnny Núñez retweeted
Lukas Ziegler
Lukas Ziegler@lukas_m_ziegler·
Paris loves robots! 🫶🏼 ... and of course, vice versa!
What is the first thing that comes to mind when you think of Paris? Probably warm croissants and the Eiffel Tower glittering at night. What if I told you it should be robotics? 🤯
Paris is a great place to launch a robotics startup because it mixes world-class tech talent, big startup energy, and strong public support for deep tech.
First, Paris has excellent research and engineers coming from places like Inria (a major research institute in robotics and AI) and top schools around Paris/Saclay. This matters because robotics startups often start from research and need smart people who can build "real" systems.
Second, Paris is one of Europe's biggest startup hubs thanks to STATION F (one of the world's largest startup campuses). It gives founders access to mentors, partners, and lots of investors, which makes it easier to meet the right people fast.
Third, France is unusually strong in government-backed deep-tech funding through programs like French Tech 2030 and support from Bpifrance. Robotics takes time and money (prototypes, hardware, testing), so this kind of support is a big advantage.
Here's the landscape of robotics companies in Paris:
→ Genesis AI is a physical-AI lab building generalist robots + robotics foundation models, and launched with $105M in funding.
→ @StanleyRobotics builds robotic "valet parking" systems that autonomously move and park cars.
→ @UMA_Robots is building general-purpose mobile/humanoid robots for industrial and real-world tasks.
→ @LeRobotHF (@huggingface) is an open-source robotics stack with datasets and models to lower the barrier to robot learning; they also have @pollenrobotics in their portfolio.
→ @EnchantedTools builds friendly humanoid robots (e.g., Mirokaï) for service environments and raised a ~$17M seed.
→ Fuzzy Logic Robotics makes a platform to quickly program/deploy industrial robots and raised a €2.5M seed.
→ @in_bolt builds real-time 3D vision guidance for industrial robotic arms and raised a ~$16–17M Series A.
→ @heex_io builds "smart data" software that captures only the most useful edge data to improve AI systems and raised a €6M seed.
→ @WandercraftHQ builds advanced robotic exoskeletons for mobility/rehab (and humanoid robotics R&D) and has $75M in total funding.
→ SoftBank Robotics UK Ltd (EMEA HQ) develops and commercializes service robots like Pepper and NAO for customer interaction and education.
→ Galam Robotics builds modular warehouse storage automation robots/systems and raised a €10M Series A.
→ Exwayz builds GPS-free 3D localization/SLAM software for autonomous robots and raised €1M.
→ @phospho_ai (YC W24) builds tools/software to easily connect, control, and train AI models on real robots ("brains for robots").
→ General Robotics builds industrial robotics solutions for automation and integration.
→ Gobano Robotics develops AI-powered robots to automate dexterous industrial tasks and raised a ~€3M pre-seed.
→ Marso Robotics builds brownfield-ready warehouse companion robots to automate hard manual logistics tasks.
→ @DeplaceAI provides data + semantic guidance that turns large-scale demonstrations into training signal for robotics/physical AI.
I'm aware that some companies might be missing, and the map is not complete! More in comments...
~~ ♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com
15 replies · 96 reposts · 513 likes · 52.5K views