Jason Ma

838 posts

@JasonMa2020

Co-founder @DynaRobotics Prev: @GoogleDeepMind, @NVIDIAAI, @MetaAI, @Penn, @Harvard.

Joined August 2018
968 Following · 13.4K Followers
Pinned Tweet
Jason Ma @JasonMa2020
Introducing Dynamism v1 (DYNA-1) by @DynaRobotics, the first robot foundation model built for round-the-clock, high-throughput dexterous autonomy. Here is a time-lapse video of our model autonomously folding 850+ napkins in a span of 24 hours, with:
• 99.4% success rate, zero human intervention
• 60% of human throughput speed
• 4.3/5 quality ratings (set by the client)
A thread on our motivation, insights, and results:
Tony Zhao @tonyzzhao
We raised $165M at a $1.15B valuation to stop doing demos. 2026 is about 1) deployment and 2) research. We will start shipping Memo with our new frontier models in a few months. Our Series B is led by Coatue, with Thomas Laffont joining the board. -> 🧵
Tongzhou Mu 🤖🦾🦿 @tongzhou_mu
Proud to share what I’ve been working on with my colleagues at Rhoda AI: Direct Video-Action Models (DVA). TL;DR:
- We pre-train causal video models from scratch to control robots
- They handle complex production tasks for hours without intervention
- They use only ~10 hours of robot data
How? 🧵👇
Rhoda AI @rhoda_ai_

To bring generalist intelligent robots to the real world, we have to overcome the data scarcity problem. At Rhoda, we are solving it by reformulating robot policies as video generation. Today, we introduce the Direct Video-Action Model (DVA)

underscore advait patel @_advaitpatel
Today is a good day for me to announce that I’ve joined Mind Robotics as a researcher. Excited to build general purpose robots for manufacturing and beyond!
RJ Scaringe @RJScaringe

I am excited to announce Mind Robotics’ $500M financing, co-led by @Accel and @a16z!  Mind is focused on building the world’s leading industrial robotics platform, capable of performing dexterous, variable, and reasoning-intensive tasks. Existing industrial robotics can perform repeatable, dimensionally stable tasks, but a large share of industrial value-add work requires human-like dexterity, adaptation, and physical reasoning that classical robotics cannot address.  We are building AI-powered robots—models, hardware, and deployment infrastructure—that will perform real tasks, in real plants, at real scale.

Jason Ma @JasonMa2020
@YXWangBot Congrats Yixuan, super impressive work!
Yixuan Wang @YXWangBot
1/ World models are getting popular in robotics 🤖✨ But there’s a big problem: most are slow and break physical consistency over long horizons.
2/ Today we’re releasing Interactive World Simulator: an action-conditioned world model that supports stable long-horizon interaction.
3/ Key result: ✅ 10+ minutes of interactive prediction ✅ 15 FPS ✅ on a single RTX 4090 🔥
4/ Why this matters: it unlocks two critical robotics applications: 🚀 scalable data generation for policy training 🧪 faithful policy evaluation
5/ You can play with our world model NOW at yixuanwang.me/interactive_wo…. NO git clone, NO pip install, NO python. Just click and play!
NOTE ⚠️ ALL videos here are generated purely by our model in pixel space! They are **NOT** from a real camera.
More details coming 👇 (1/9) #Robotics #AI #MachineLearning #WorldModels #RobotLearning #ImitationLearning
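For readers new to the term, here is a toy sketch of what the "action-conditioned world model" interface above amounts to. The dynamics, policy, and names below are our own illustrative stand-ins, not the Interactive World Simulator: the model maps (state, action) to a predicted next state, and rolling it out autoregressively lets you evaluate a policy entirely in imagination.

```python
# Toy action-conditioned world model: the "frame" is just a 2-D position.
# A real model would predict pixels from past frames and the commanded action.
def toy_world_model(state, action):
    return (state[0] + action[0], state[1] + action[1])

def rollout(model, policy, init_state, horizon):
    """Autoregressive rollout: every state after the first is predicted
    by the model, never observed from a real camera or robot."""
    state, trajectory = init_state, [init_state]
    for _ in range(horizon):
        action = policy(state)
        state = model(state, action)
        trajectory.append(state)
    return trajectory

goal = (5.0, 3.0)
# toy policy: step 0.5 toward the goal on each axis until it is reached
policy = lambda s: (0.5 if s[0] < goal[0] else 0.0,
                    0.5 if s[1] < goal[1] else 0.0)
traj = rollout(toy_world_model, policy, (0.0, 0.0), horizon=20)
assert traj[-1] == goal  # the policy reaches the goal purely in imagination
```

This is why a fast, physically consistent model matters: both uses named in the thread (data generation and policy evaluation) are just long rollouts of this loop.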
Selina @Selinaliyy
Finally got pic in front of iconic outdoor YC sign! (Lowkey terrible angle though 😭) 18 days left till demo day 😳
Jason Ma retweeted
Jason Ma @JasonMa2020
The holy grail of robotics is a self-improving system that curiously explores and learns from experience. In Tether, we demonstrated that robots can scaffold foundation models' knowledge into high-quality exploratory data and autonomously improve themselves from zero to hero. Some highlights:
1. autonomously plays for 24 hrs, needing only 5 interventions
2. produces 1000+ successful trajectories
3. downstream closed-loop policy learning goes from 0 to 90%
This is my last project from grad school, and I am really excited about the results and what they mean for future agentic robotic systems! Congrats to @willjhliang @sam_wang23 @johnnywang_16 for the herculean effort in pulling this off!
Will Liang @willjhliang

Introducing Tether 🪢, a fun little idea to scale data by having our robot “play” in the real world for over 24 hours, throughout the day and overnight—improving policies from zero to mastery with minimal supervision! But play is messy, with out-of-distribution scenarios that are hard to anticipate. To perform autonomous functional play in the real world, from just a handful of demos, we propose a highly robust few-shot imitation method that warps demo trajectories using visual correspondences. Then, continuously running it within a multi-task VLM-guided cycle, we generate a data stream that produces 1000+ expert-level demos. This generated data is finally funneled downstream to train imitation learning policies, which improve from zero to near-perfect success rates. We’ll be presenting Tether at #ICLR2026 in just a few weeks! But before that, deep dive with me… 🧵
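To make the "warp demo trajectories using visual correspondences" idea concrete, here is a deliberately minimal sketch. The rigid-shift assumption and all names are ours, not Tether's actual method (which uses dense visual correspondences): if keypoint matching says the object moved by (dx, dy) between the demo scene and the live scene, shift the demonstrated end-effector waypoints by the same offset.

```python
# Illustrative trajectory warping via a single matched keypoint.
# Assumes a rigid 2-D translation; a real system would estimate a richer
# transform from many correspondences and work in 3-D.
def warp_trajectory(demo_waypoints, demo_keypoint, live_keypoint):
    dx = live_keypoint[0] - demo_keypoint[0]
    dy = live_keypoint[1] - demo_keypoint[1]
    return [(x + dx, y + dy) for x, y in demo_waypoints]

demo = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5)]  # toy grasp, slide, lift
warped = warp_trajectory(demo, demo_keypoint=(0.0, 0.0),
                         live_keypoint=(2.0, 1.0))
assert warped == [(2.0, 1.0), (2.5, 1.0), (2.5, 1.5)]
```

The appeal is that one demo plus a correspondence model yields a usable trajectory in many out-of-distribution object placements, which is what makes unattended 24-hour "play" feasible.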

Chris Paxton @chris_j_paxton
General-purpose reward "foundation models" would let you scale real-world reinforcement learning. But getting the dense ground-truth labels that reinforcement learning needs is basically impossible, so what do you do with huge piles of data that are all successes or all failures? There's no signal. Instead, learn to compare trajectories.
Jesse Zhang @Jesse_Y_Zhang

A reward model that works, zero-shot, across robots, tasks, and scenes? Introducing Robometer: Scaling general-purpose robotic reward models with 1M+ trajectories. Enables zero-shot: online/offline/model-based RL, data retrieval + IL, automatic failure detection, and more! 🧵 (1/12)
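A minimal sketch of what "learn to compare trajectories" means in practice: fit a scorer so that preferred trajectories outscore dispreferred ones under a Bradley-Terry (pairwise preference) loss. The linear scorer and 2-D features below are toy stand-ins of ours, not Robometer's actual architecture or training setup.

```python
import math

# Toy linear trajectory scorer: reward = w · features(trajectory).
def score(w, feats):
    return sum(wi * fi for wi, fi in zip(w, feats))

def preference_step(w, feats_win, feats_lose, lr=0.1):
    """One gradient step on the Bradley-Terry loss -log P(win > lose),
    where P(win > lose) = sigmoid(score_win - score_lose)."""
    margin = score(w, feats_win) - score(w, feats_lose)
    p = 1.0 / (1.0 + math.exp(-margin))
    # d(-log p)/dw = -(1 - p) * (f_win - f_lose), so descend by adding it
    return [wi + lr * (1.0 - p) * (fw - fl)
            for wi, fw, fl in zip(w, feats_win, feats_lose)]

w = [0.0, 0.0]
# toy features: (task progress, jerkiness) for a success vs. a failure clip
success, failure = [0.9, 0.1], [0.2, 0.8]
for _ in range(200):
    w = preference_step(w, success, failure)
assert score(w, success) > score(w, failure)
```

The point of the pairwise formulation is exactly the one in the tweet: a batch of all-success or all-failure episodes has no per-episode label signal, but any success/failure pair still yields a training comparison.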

Jason Ma retweeted
Jesse Zhang @Jesse_Y_Zhang
A reward model that works, zero-shot, across robots, tasks, and scenes? Introducing Robometer: Scaling general-purpose robotic reward models with 1M+ trajectories. Enables zero-shot: online/offline/model-based RL, data retrieval + IL, automatic failure detection, and more! 🧵 (1/12)
Soroush Nasiriany @snasiriany
Proud to share the final project of my PhD, RoboCasa365!

There has been so much progress in robot learning in the last couple of years, but it’s starting to feel like running large-scale experimentation is increasingly out of reach for independent researchers, and there is no consensus yet on benchmarking in the field. During my PhD I wanted to build something that would allow myself and other researchers to study robot learning on large datasets in a meaningful way. So I dedicated my time to building RoboCasa, a large-scale simulation framework for training and benchmarking generalist robot policies. We released the original framework in 2024, and today we are releasing a major new release, RoboCasa365. Compared to the original release, RoboCasa365 feels a lot more like a “full stack” simulation framework:
- 2500 kitchen scenes
- 365 everyday tasks
- 600+ hours of teleoperation data and 1600+ hours of synthetically generated trajectories
- Benchmarking of state-of-the-art VLA models

By current industry standards, 600 hours of teleoperation data is considered modest, but I think this is a good sweet spot of data to study how well robot foundation models can adapt to downstream applications. Right now the benchmark is far from solved. This makes it a useful testbed for developing the next algorithms and architectures to push the boundaries of robot learning, be it VLAs, world models, RL algorithms, etc. There is a lot of work left to push generalization, reliability, and throughput for general-purpose robots.

I was incredibly lucky to have the support of my advisor @yukez, who gave me all the creative freedom, resources, and time to build RoboCasa. I am also very fortunate to work with two hardworking and passionate students, @abhirammaddukur and my brother @SepNasiriany. We spent countless long nights together on this project, and it was really fun working as a lean team.

I also want to thank @bgxc and the team at @LightwheelAI for being a major supporting force in sourcing assets and collecting the data that we used in this project. Thank you also to the RPL lab and NVIDIA for supporting our work. You can now check out RoboCasa365 at robocasa.ai!
Jiafei Duan @DJiafei
Instead of asking a VLM to output a progress estimate, TOPReward reads the model’s internal belief directly from token logits. No in-context learning. No fine-tuning. No reward training. 📈 We introduce TOPReward, a zero-shot reward modeling approach for robotics using token probabilities from pretrained video VLMs. The simplest way of doing reward modeling for robotics! Project: topreward.github.io/webpage/ 🧵👇
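To illustrate the general idea of a token-probability reward (this is our own sketch, not the TOPReward code): prompt a VLM with something like "Is the task making progress?", read the logits at the answer position, and renormalize the probability mass on the positive answer token against the negative one to get a scalar reward in [0, 1] with no training at all.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a dict of token -> logit."""
    m = max(logits.values())
    exps = {t: math.exp(v - m) for t, v in logits.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def token_prob_reward(logits, pos="yes", neg="no"):
    """Reward = P(pos) / (P(pos) + P(neg)), i.e. the model's belief
    restricted to the two answer tokens."""
    probs = softmax(logits)
    return probs[pos] / (probs[pos] + probs[neg])

# Toy logits as if read off the model's output head at the answer position
# early vs. late in a successful episode (illustrative values).
early = {"yes": 0.2, "no": 1.5, "the": 0.3}
late = {"yes": 2.1, "no": -0.4, "the": 0.3}
assert token_prob_reward(late) > token_prob_reward(early)
```

Renormalizing over just the two answer tokens keeps the reward well-scaled even when most probability mass sits on unrelated continuations.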
Jason Ma @JasonMa2020
Really awesome tool to supercharge your company’s internal workflows! We use @bubblelab_ai’s Pearl daily at Dyna and find it super useful; it’s getting a lot of traction with our ops team. Try it out!
Y Combinator @ycombinator

.@bubblelab_ai supercharges your ops work in Slack. Deploy Pearl in one click, connect it to tools like Notion, Jira, Stripe, and let it run tasks and automations for your team directly in Slack. Congrats on the launch, @Selinaliyy and @zhubzyz! ycombinator.com/launches/PWl-b…

Jason Ma @JasonMa2020
Some examples from our work at Dyna over the past year: x.com/DynaRobotics/s…
Dyna Robotics @DynaRobotics

Wrapped up Day 1 at @corl_conf. Our booth was buzzing all day, and this is why. ⬇️ Here’s a timelapse of Dynasaur folding continuously, shrugging off every interruption researchers threw at it. It’s thrilling to see the crowd erupt in applause after the robot nails a flawless fold. Don't miss it! We're here all week at booth 28. #CoRL2025 #Robotics #AI

Jason Ma @JasonMa2020
@ericjang11 Congrats on the great tenure at 1X, Eric! Excited to see what you do next!
Eric Jang @ericjang11
Life update: I've decided to leave 1X. It's been an honor helping grow the company.

I joined Halodi Robotics in 2022 (the prior name of the company) as the only California-based employee. At the time, we were about 40 people based out of Norway and 2 in Texas. My first hire and I worked from my garage for a few months to save money. Today, 1X is hundreds of people, with hardware, design, software, AI, manufacturing, and product all relocated to the SF Bay Area, firing on all cylinders and working on getting NEO ready for the home. A big thank you to all the colleagues I worked with.

It was a hard decision to leave. When working at an exciting startup that is growing fast, there's always so much to do and never a perfect time to move on. We have several works in the pipeline that are so exciting because they greatly advance general autonomy and the scalability of our deployment approach and really show a realistic path towards the product working. The recent World Model autonomy update is one example, and there's more coming. The 1X factory is so exciting. Things are accelerating at a speed that would have surprised me a few years ago.

In 2022, most technologists, researchers, and VCs were skeptical about humanoids and large-scale imitation learning. "Why legs?" "How could end-to-end learning ever be good enough?" "Why go for the home and not the factory?" "How will we ever gather enough data?" The Overton window on general-purpose robotics has shifted a lot since then. Although we are still early in our mission, I remain confident that soon, house robots will be as commonplace as air conditioners, cars, and ChatGPT. Just talk to the bot, and it will go and quietly get it done. Entire economies will eventually re-organize around this technology. People get it now.

What's next? I believe that progress in applied deep learning generally rides on "harnessing the magic" of a few magical objects. These magical objects possess far more generalization power than one might normally expect. Just asking the LLM to understand what you want is magic. Video generation models are magic. Reasoning is magic. You don't run into a magic object every day, but when you do, you make sure to grab it and put it to work to make something useful in the robot somehow.

A lot of my early conviction about where robotics was headed came from working on BC-Z from 2018-2021. The "magical object" I bet on at the time was the surprising data-absorption capability of supervised learning and "just ask for generalization". This pioneered a lot of the standard ingredients we see in VLAs today:
- Generalization to unseen language commands
- Human-guided DAgger for policy improvement
- Open-loop auxiliary predictions + receding horizon control, AKA action chunking
- Manipulation keypoints to improve servoing
- A simple ResNet18 with FiLM conditioning on multi-modal inputs

The next "magical object" we bet on at 1X was video models, because they are clearly magical objects that learn a data distribution not too dissimilar from what a robot needs to learn. They generalize surprisingly well. I am once again feeling that there are more magical objects in play now, which opens up a lot of new possibilities for robotics and beyond.

I'm taking a few months to empty my cup of priors and gain a fresh perspective. When I left Google in 2022, I spent about 2 weeks deciding what to do next. This time, I want to take a lot more time to catch up on what has happened in the broader AI + robotics space. I've been re-implementing some deep learning papers. I'm working on a big tutorial for my blog. I'm learning all the Claude power-user tricks. I'm reading the Thinking Machines blog posts to understand what kinds of experiments are being run at frontier labs. I'm reading Ben Katz's 2016 thesis on the Mini-cheetah actuator. I'm traveling to China in March to meet incredible companies in the Chinese robotics ecosystem.

Now, more than ever, is the time for both humans and machines to learn. The next token of my life sequence will be an important one. To colleagues and investors that bet on 1X early, even before we became a household name - I thank you from the bottom of my heart. I won't forget it ♥️
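For readers who haven't met FiLM conditioning (one of the BC-Z ingredients listed above), here is a minimal sketch of the mechanism. The toy values are ours: a conditioning vector (e.g. a language-command embedding) is mapped to a per-channel scale gamma and shift beta that modulate the vision features.

```python
# FiLM (feature-wise linear modulation): out_c = gamma_c * feat_c + beta_c.
# In a real network, gamma and beta are predicted by a small conditioning
# network from the command embedding; here they are hard-coded toy values.
def film(features, gamma, beta):
    return [g * f + b for f, g, b in zip(features, gamma, beta)]

visual_features = [0.2, -1.0, 0.7]
# illustrative modulation: pass channel 0 through, flip channel 1,
# suppress channel 2 and replace it with a constant
gamma, beta = [1.0, -1.0, 0.0], [0.0, 0.0, 0.5]
assert film(visual_features, gamma, beta) == [0.2, 1.0, 0.5]
```

The appeal is that a single vision backbone can be steered per-command by just two small vectors per layer, which is what made "just ask for generalization" cheap to wire in.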
Junyao Shi @JunyaoShi
A bit more insight into this: This is a long-horizon, end-to-end policy. Making scrambled eggs is a highly challenging task for current robot models: it involves a long, multi-stage workflow with many different subtasks; picking up eggs and cracking them both require fine-grained manipulation; whisking the eggs demands sufficiently fast arm motions; and the robot must also understand when to stop mixing the eggs in the bowl and in the pan. For robot models, this task serves as a comprehensive test of multiple complex capabilities.
Junyao Shi @JunyaoShi

Here's what I've been "cooking" 🍳 since I started my internship 3 months ago at @SkildAI! It was incredibly satisfying to see the robot policy I trained make real scrambled eggs — taste-tested by me at the end of the video. Stay tuned for more "cooking" from our robot master chef 👨‍🍳!
