EgoScale @EgoScale
27 posts
First-person (POV) data from real humans for robot learning and embodied AI.

Joined November 2025
11 Following · 441 Followers
EgoScale @EgoScale:
Not just video. Training-ready 3D human behavior. From raw POV to structured 3D trajectories & actions. Built for embodied models.
Jim Fan @DrJimFan:
We trained a humanoid with 22-DoF dexterous hands to assemble model cars, operate syringes, sort poker cards, and fold/roll shirts, all learned primarily from 20,000+ hours of egocentric human video with no robot in the loop. Humans are the most scalable embodiment on the planet.

We discovered a near-perfect log-linear scaling law (R² = 0.998) between human video volume and action prediction loss, and this loss directly predicts real-robot success rate. Humanoid robots will be the end game, because they are the practical form factor with minimal embodiment gap from humans. Call it the Bitter Lesson of robot hardware: the kinematic similarity lets us simply retarget human finger motion onto dexterous robot hand joints. No learned embeddings, no fancy transfer algorithms needed. Relative wrist motion + retargeted 22-DoF finger actions serve as a unified action space that carries through from pre-training to robot execution.

Our recipe is called "EgoScale":
- Pre-train GR00T N1.5 on 20K hours of human video, mid-train with only 4 hours (!) of robot play data with Sharpa hands. 54% gains over training from scratch across 5 highly dexterous tasks.
- Most surprising result: a *single* teleop demo is sufficient to learn a never-before-seen task. Our recipe enables extreme data efficiency.
- Although we pre-train in 22-DoF hand joint space, the policy transfers to a Unitree G1 with 7-DoF tri-finger hands. 30%+ gains over training on G1 data alone.

The scalable path to robot dexterity was never more robots. It was always us. Deep dives in thread:
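The log-linear scaling law claimed above (action-prediction loss falling linearly in the log of human-video hours) can be illustrated with a quick least-squares fit. The numbers below are synthetic, invented purely to show the shape of such a fit; they are not the actual EgoScale/GR00T N1.5 measurements.

```python
import numpy as np

# Synthetic (hours, loss) pairs following a log-linear trend.
# Illustrative values only, NOT the real EgoScale/GR00T data.
hours = np.array([100.0, 500.0, 1000.0, 5000.0, 10000.0, 20000.0])
loss = np.array([0.92, 0.78, 0.71, 0.57, 0.51, 0.45])

# Fit loss ≈ a + b * log10(hours) via ordinary least squares.
X = np.stack([np.ones_like(hours), np.log10(hours)], axis=1)
(a, b), *_ = np.linalg.lstsq(X, loss, rcond=None)

# Coefficient of determination (R²) for the fitted line.
pred = X @ np.array([a, b])
ss_res = np.sum((loss - pred) ** 2)
ss_tot = np.sum((loss - loss.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"loss ≈ {a:.3f} + ({b:.3f}) * log10(hours), R² = {r2:.4f}")
```

A negative slope `b` with R² near 1 is the signature of the scaling behavior the thread describes: each decade of additional human video buys a roughly constant drop in action-prediction loss.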
EgoScale @EgoScale:
We’ve reached 25K hours of real-world egocentric (POV) human activity data. Covering multiple agents × environments × strategies: the same goal, different paths; the same scene, different decisions. If your model must generalize, diversity is essential.
EgoScale @EgoScale:
@DrJimFan We provide large-scale, highly diverse egocentric (POV) human video datasets — multi-user, real-world, long-horizon. Open to collaboration.
EgoScale @EgoScale:
@niccruzpatane A big part of why robots can achieve this now is the data. When training includes large-scale, real-world human behavior with natural corrections, noise, and long-horizon structure, these kinds of behaviors stop being “impressive demos” and start being learnable.
Nic Cruz Patane @niccruzpatane:
These are the types of humanoid robot demos we want to see. 😂
EgoScale @EgoScale:
@oprydai Want robots to generalize? Start from how humans actually act in the real world.
Mustafa @oprydai:
if you’re learning:
• robotics
• actuators, motors, gears, transmissions
• sensors, IMUs, encoders, cameras, LiDAR
• kinematics (forward, inverse)
• dynamics & control (PID, MPC)
• ROS2, rviz2, gazebo
• motion planning (RRT*, A*, trajectory optimization)
• SLAM (lidar, visual, fusion)
• drones
• autonomy stacks (perception → planning → control)
• embedded systems & microcontrollers
• power systems & batteries
• CAD & mechanical design
• supply chain and manufacturing
• product design and development
• full stack engineering (software for hardware)
welcome. this account is for you.
EgoScale @EgoScale:
@saranormous A huge part of the “divergence” is data + distribution: what real-world behavior looks like vs. what we train on. The gap is still massively underpriced.
sarah guo @saranormous:
the divergence of opinion in how robotics plays out is one of the biggest money making (and career making) opportunities in AI
Chris Paxton @chris_j_paxton:
What an utterly bleak post. I share the skepticism for anything that separates robot "brain" from "body," but:
- a lot of the best whole-body control work is coming from the USA and Europe; dancing robots are not useful. Of the Chinese companies, UBTech seems like the leader in deploying commercially viable humanoids, and they're mostly absent from the discussion
- there will be many successful robotics companies, not just in China
- the future is not set in stone; American companies can in fact build robots. We just saw the public fauna release, for example, and the most impressive CES demo was Boston Dynamics!
- it's an open secret that most of these companies have not found product-market fit, on either side of the ocean, even though there's huge demand for robotic solutions
- robotics is also more than humanoids: AI-for-science robotics has exploded recently and is again dominated by Western startups
Are there headwinds? Yes. Is the robotics supply chain in China? Clearly also yes, in part. But the supply chain is global; it always will be, we live in a globalized world. I think a lot of people want to cede the robotics race prematurely.
Benjamin Bolte @benjamin_bolte:

The obvious end state for this path is Chinese body, Chinese brain. I'm actually pretty excited to see what happens this year, it will probably result in some really amazing stuff being built and it seems a lot more useful than a bunch of Chinese Cluelys. Just going by the demos at CES this year compared to last year, the bar is moving up so rapidly and there are so many little details getting figured out. But yea, if you have any degree of intellectual honesty you can tell that many of the best robotics software demos are coming from China, particularly for full-body control, for the same reason that the best LLMs are from America - modern AI is mostly an infrastructure problem, not a methodological problem. I am quite worried that the future of robotics in the US looks a lot like the current electric car situation, and we're stuck with expensive, worse, "premium-only" options because no one actually really wants to do the hard, boring infrastructure work. I don't have much faith that America will be able to put together any kind of coherent industrial policy to do something different when the people involved are so obviously self-motivated and interested in regulatory capture for the status quo. The 200+ humanoid startups in China aren't trying to become the next Foxconn and I have no idea why so many smart people in Silicon Valley convinced themselves that this was the case before even talking to any of them. A good example was the Astribot - Pi "partnership". I met the Astribot CEO a few months before that got announced and it was obvious that they were an extremely ambitious full-stack team that had no intention of being the Cursor to Pi's Anthropic unless there was some exclusivity on the table. The brain companies don't have much leverage. Just look at the margins they're paying for hardware. I'm pretty sure American VCs have helped incubate dozens of Chinese companies. It all just feels kind of depressing, watching from the outside. 
Shenzhen really seems like Detroit in its heyday and I'm kind of jealous of everyone that has decided to move there in the last year or two. Anyway, all this is to say that I'm a Figure stan now and I hope they don't blow up. And Sunday and Bot Co of course. There are several former K-Scale people at Bot Co now and I am very excited for their launch.

EgoScale @EgoScale:
@CyberRobooo DualWorld looks amazing for whole-body control. Massive unscripted human POV data could take its predictive power to the next level. Loving the progress in this space!
CyberRobo @CyberRobooo:
Another world model for whole-body motion control: DualWorld. It attempts to integrate pre-planning and rapid action in humanoid robots. In the video, the Fourier GR-3 can handle complex household chores. The world model gives the humanoid robot two brains: one is responsible for predicting what the world might look like in the next few seconds, while the other reacts in real time to ensure accurate movements. This reduces trial and error and increases foresight, and it allows the humanoid robot to act more like a human.
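The "two brains" split described above (a slow predictive world model plus a fast reactive controller) is a common control pattern. Below is a minimal, hypothetical sketch of such a two-rate loop; the function names, rates, and toy dynamics are invented for illustration and are not DualWorld's actual API.

```python
# Hypothetical two-rate control loop: a slow model plans a short-horizon
# target, and a fast controller tracks it every tick.
# All names and dynamics here are illustrative, not DualWorld's interface.

def slow_world_model(state: float) -> float:
    """Slow brain: predict where the system should be a few steps ahead."""
    return state + 1.0  # toy prediction: advance one unit

def fast_controller(state: float, target: float) -> float:
    """Fast brain: react in real time, stepping halfway toward the target."""
    return state + 0.5 * (target - state)

def run(steps: int, replan_every: int = 5) -> float:
    state, target = 0.0, 0.0
    for t in range(steps):
        if t % replan_every == 0:               # occasional foresight
            target = slow_world_model(state)
        state = fast_controller(state, target)  # reaction every tick
    return state

print(f"final state after 20 ticks: {run(20):.3f}")
```

The design point is that the expensive predictive model only runs once per `replan_every` ticks, while the cheap tracking step runs at full rate, which is one way to get both foresight and responsiveness.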
EgoScale @EgoScale:
@atajik “Data. Data. Data.” is a hard constraint. Most generalization failures come from not seeing real, messy, long-horizon behavior in the wild.
Arash Tajik @atajik:
One of the best robotics panels in SF so far this year.
- Data. Data. Data. Deploying robots in real environments is key.
- General-purpose robots are far out or near depending on how you define general purpose! Anywhere between 2 and 20 years depending on the task and complexity.
- VLAs are our best option today, but video, world models, and future architectures could be the future.
@chris_j_paxton, Jason Ma, Adrian Li-Bell, Daniel Ho, @1x_tech @agilityrobotics @physical_int @DynaRobotics
EgoScale @EgoScale:
We recently assembled a real-world POV manipulation demo (home and kitchen) for quick sanity checks.
- Continuous, unscripted human behavior.
- Task-level and action-level temporal annotations.
- Diverse users and environments.
If you’re working on embodied or manipulation models, this is the kind of data you want to look at. Happy to share the demo if useful.
EgoScale @EgoScale:
@_akhaliq Exactly. We’re building large-scale, real-world egocentric (POV) human behavior data to address long-horizon generalization.
AK @_akhaliq:
Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization
EgoScale @EgoScale:
@SkildAI Long-horizon generalization is less about clever architectures, and more about seeing enough diverse, real-world behavior over time.
Skild AI @SkildAI:
Humans learn by watching. Robots should too.
EgoScale @EgoScale:
For teams looking to go deeper, we can keep expanding diverse, high-value data and tailor the processing pipeline to specific training needs, efficiently and at scale.
EgoScale @EgoScale:
We’re nearing 5,000 hours of real-world egocentric POV manipulation data. Collected across different people and real-world environments, with varied object layouts and execution styles, all from natural, unscripted first-person behavior.
EgoScale @EgoScale:
Human data matters when it is captured at scale across diverse real-world scenes and behaviors. At EgoScale, we deliver large-scale real-world egocentric (POV) data across users, scenes, and behaviors. It is already used by multiple teams to train and evaluate real-world policies.
EgoScale @EgoScale:
@minchoi Impressive speed and precision. What often gets overlooked is how much this depends on real human manipulation data at scale.
Min Choi @minchoi:
It's over. Robot hands can now achieve beyond-human speed and precision 🤯
EgoScale @EgoScale:
@BrianRoemmele Bimanual fine-motor skills like this need real-world human demos. EgoScale collects unscripted egocentric POV workflows for embodied robots.
Brian Roemmele @BrianRoemmele:
China-based TARS Robotics demonstrated a humanoid robot performing two-handed hand embroidery during a public showcase on December 22, marking a notable step in fine motor control for humanoid systems. The robot threaded a needle and stitched a logo using both hands with sub-millimeter accuracy, working with soft, flexible materials that are difficult for traditional industrial robots to handle due to deformation and variability. According to the company, this capability is enabled by a closed-loop "Data-AI-Physics" system that connects real-world data collection, embodied AI models, and physical execution, reducing the gap between simulation and real deployment. The models are said to be open-sourced when released. Founded in February 2025, TARS Robotics has moved quickly from research to live demonstrations and has raised over $240 million in early funding from the Chinese government, reflecting growing interest in general-purpose humanoid robots for precision and dexterous tasks.