
@abra55513382 I think CSPs are the next big disruption target. Distributed compute in a world of AI seems like an obvious outcome. We’re just waiting for its “Hello world” moment so we know it is possible.
Midcentury (@MidcenturyAI)
Multimodal data research lab for real-world superintelligence.

Introducing Egocentric-1M: the largest egocentric video dataset in the world, and our next step in building the internet for physical AI.

Introducing Eastworlds. We help robots leave the lab faster. Discover more: eastworlds.io


ARC-AGI-3 Insights: @arcprize is part of a dying breed of benchmarkers actually focused on identifying genuine intelligence, rather than raw capability alone. If you ask most benchmarkers about the mission or purpose of their benchmark, they wouldn't have an answer for you. For ARC-AGI, the answer is simple: they want to keep identifying the "human-AI gap" until it doesn't exist anymore. Their thesis is that once this gap no longer exists, we will have reached AGI. Not only is this a strong, well-defined mission statement, it's also one of the only falsifiable definitions of AGI in the research community.

Today, they went a step further and showed that the "human-AI gap" is still MUCH wider than we originally thought, with humans scoring 100% and AI scoring 1% on ARC-AGI-3. What this tells us is that some fundamental part of the equation is still missing. So the question remains: what is the missing piece?

After attending the ARC-AGI-3 launch party and listening to the panel discussion, the answer seems clear to me: continual learning. Continual learning was a huge theme in both the conversations that took place and the design of ARC-AGI-3. @GregKamradt said in his talk that one of the hardest aspects of ARC-AGI-3 is that each game is multi-level, and each level builds on the concepts of the previous one. The implication is that any model that can pass ARC-AGI-3 must have some ability to learn from its past actions in a meaningful way. The lack of this ability without a harness is one of the biggest blockers to performance on ARC-AGI-3 today.

Following this, @deedydas asked the panel how close they thought we were to AGI, and @sama gave an interesting response: he believes we are the majority of the way there, and we're just missing one crucial piece, continual learning.

There are many interpretations of continual learning in today's landscape. Some people think continual learning is simply an engineering feat, while others think it should be an inherent quality of our model architectures. Whatever the answer, continual learning will undoubtedly be a large component of future general intelligence.
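To make the multi-level point concrete, here is a toy sketch, with entirely hypothetical names (this is not ARC-AGI-3 code), of what a minimal continual-learning harness does: experience gathered on one level persists and informs actions on the next, instead of every level starting from scratch.

```python
# Hypothetical sketch of a continual-learning harness. A real agent would
# generalize from memory rather than match states exactly; this only
# illustrates why persistent experience across levels matters.

class ContinualAgent:
    """Accumulates (state, action, success) memories across levels."""

    def __init__(self):
        self.memory = []  # experience persists between levels

    def act(self, state):
        # Reuse the most recent action that succeeded in this state.
        for past_state, action, success in reversed(self.memory):
            if success and past_state == state:
                return action
        return "explore"  # no applicable prior experience

    def record(self, state, action, success):
        self.memory.append((state, action, success))


agent = ContinualAgent()
# Level 1: the agent discovers that "pull" works in state "door".
agent.record("door", "pull", True)
# Level 2 reuses level-1 experience instead of rediscovering it.
assert agent.act("door") == "pull"
assert agent.act("window") == "explore"
```

A model that passes ARC-AGI-3 would need this kind of memory-conditioned behavior built in, not bolted on by an external loop like the one above.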


Some of the most important intelligence technology in the world was developed by @drfeifei. Her track record speaks for itself. And she's not stopping anytime soon. Join us at SPC on March 18th. RSVP below.

We trained a humanoid with 22-DoF dexterous hands to assemble model cars, operate syringes, sort poker cards, and fold and roll shirts, all learned primarily from 20,000+ hours of egocentric human video with no robot in the loop. Humans are the most scalable embodiment on the planet.

We discovered a near-perfect log-linear scaling law (R² = 0.998) between human video volume and action-prediction loss, and this loss directly predicts real-robot success rate.

Humanoid robots will be the end game, because they are the practical form factor with minimal embodiment gap from humans. Call it the Bitter Lesson of robot hardware: the kinematic similarity lets us simply retarget human finger motion onto dexterous robot hand joints. No learned embeddings, no fancy transfer algorithms needed. Relative wrist motion plus retargeted 22-DoF finger actions serve as a unified action space that carries through from pre-training to robot execution.

Our recipe is called "EgoScale":
- Pre-train GR00T N1.5 on 20K hours of human video, then mid-train with only 4 hours (!) of robot play data with Sharpa hands: 54% gains over training from scratch across 5 highly dexterous tasks.
- Most surprising result: a *single* teleop demo is sufficient to learn a never-before-seen task. Our recipe enables extreme data efficiency.
- Although we pre-train in 22-DoF hand-joint space, the policy transfers to a Unitree G1 with 7-DoF tri-finger hands: 30%+ gains over training on G1 data alone.

The scalable path to robot dexterity was never more robots. It was always us. Deep dives in thread:
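A log-linear scaling law of the kind described means the loss falls linearly in log(video hours). Here is a minimal sketch of how such a fit and its R² are computed; the data below is synthetic and the coefficients are made up, since the post reports only the R² value, not the fitted parameters.

```python
# Sketch of fitting a log-linear scaling law: loss ≈ a + b * log(hours).
# Hours and losses here are synthetic stand-ins, not the paper's data.
import numpy as np

hours = np.array([100.0, 500.0, 2000.0, 8000.0, 20000.0])
# Synthetic losses following the assumed trend plus small noise.
loss = 2.5 - 0.18 * np.log(hours) + np.array([0.01, -0.02, 0.015, -0.005, 0.0])

x = np.log(hours)
b, a = np.polyfit(x, loss, 1)          # slope and intercept of the fit
pred = a + b * x

# Coefficient of determination (R²) of the log-linear fit.
ss_res = np.sum((loss - pred) ** 2)
ss_tot = np.sum((loss - loss.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"loss ~ {a:.3f} + {b:.3f}*log(hours), R^2 = {r2:.3f}")
```

An R² near 1 on such a fit is what lets you extrapolate: plug a planned data volume into the fitted line to predict the loss, which (per the post) in turn predicts real-robot success rate.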

I built the first AI that earns its existence, self-improves, and replicates without a human. I wrote about the technology that finally gives AI write access to the world, The Automaton, and the new web for exponential sovereign AIs. WEB 4.0: the birth of superintelligent life.

Introducing Q Labs, a research lab focused on solving generalization. Alongside others (SSI, Flapping Airplanes), we see data efficiency as the key problem, but we're taking an unconventional approach to solve it: a new learning algorithm approximating Solomonoff induction.
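For readers unfamiliar with the term: Solomonoff induction weights every hypothesis consistent with the data by 2^(-description length), so shorter programs dominate the posterior. Below is a toy illustration of that weighting over a tiny hand-made hypothesis class; the post gives no details of Q Labs' actual algorithm, so everything here (the hypotheses, their bit lengths) is invented for illustration.

```python
# Toy sketch of the idea behind approximating Solomonoff induction:
# prior weight 2^(-length in bits) per hypothesis, zeroed out when the
# hypothesis contradicts the observed sequence, then renormalized.
# Hypotheses and lengths are illustrative, not any real algorithm.

# Each hypothesis: (description length in bits, next-bit predictor).
hypotheses = {
    "all_ones":  (3, lambda hist: 1),
    "alternate": (5, lambda hist: 1 - hist[-1] if hist else 0),
    "copy_last": (4, lambda hist: hist[-1] if hist else 0),
}

def posterior(observed):
    """Posterior over hypotheses after seeing a binary sequence."""
    weights = {}
    for name, (length, predict) in hypotheses.items():
        w = 2.0 ** -length  # simplicity prior
        for t in range(1, len(observed)):
            if predict(observed[:t]) != observed[t]:
                w = 0.0  # contradicted by the data
                break
        weights[name] = w
    total = sum(weights.values()) or 1.0
    return {k: v / total for k, v in weights.items()}

post = posterior([1, 1, 1, 1])
# "all_ones" and "copy_last" both fit the data, but the shorter
# program ("all_ones", 3 bits) gets the larger posterior weight.
```

True Solomonoff induction runs this mixture over all computable programs and is uncomputable, which is why any practical learning algorithm can only approximate it.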

SF is officially the capital of Physical AI. 🌉 Thrilled to announce @cline and @virtuals_io are joining us as sponsors for the Physical AI Hackathon in SF on Jan 31 – Feb 1! 400+ builders have already applied to get their hands on real hardware and multimodal data. If you're building in Physical AI, this is the room you need to be in. RSVP: luma.com/8ca2z1rr More details: physicalaihack.com


World generation is a bottleneck for robotics. We’re exploring how generative 3D worlds can reduce manual simulation setup and enable broader, more realistic evaluation 🧵

World Model + Autonomy update: we can now prompt NEO to perform autonomous behaviors on unseen tasks. This is motion transfer from Internet videos: NEO has never been trained on lifting up toilet seats, but the world model (WM) and inverse dynamics model (IDM) are able to generate trajectories that do this. A huge unlock in generalization for home robots.