Midcentury

37 posts

@MidcenturyAI

Multimodal data research lab for real-world superintelligence.

San Francisco · Joined December 2025
2 Following · 244 Followers
Midcentury reposted
DG@dgmonsoon·
Decentralization of compute in AI has existed forever. @SemiAnalysis_ has covered at length why it suffers versus centralized options. IMO, decentralized/OSS data, benchmarks, and evals seem more interesting long-term, both technically and for direct impact on the future of AI deployment.
Chamath Palihapitiya@chamath

@abra55513382 I think CSPs are the next big disruption target. Distributed compute in a world of AI seems like an obvious outcome. We’re just waiting for its “Hello world” moment so we know it is possible.

Midcentury reposted
Brandon Samaroo@B_S_N_Y·
This post demonstrates one of the most nuanced understandings of the AI space I’ve seen in a minute. It was a great read and really sharpened my mental model of the cyclical nature of the space right now; I highly recommend taking the time to read it.

I wonder if the issue she’s describing in this post can be solved with some kind of abstraction in the post-training layer. For example, could we train a separate model to train future open-source models on our proprietary data, sharpening synthetic data and evals with each iteration?

Another example that comes to mind is LoRA adapters and how they abstract the trained parameters in a portable way. I wonder if that strategy can be adapted to work cross-model at a larger scale. If we had a way to train some higher-level architecture once, and apply the performance gain to all future OSS releases, that would partially mitigate the part of the cycle where frontier generalist models take over again. With this kind of technology, vertical AI would be a lot more feasible and stable. Just a thought, though!
Jaya Gupta@JayaGup10

x.com/i/article/2037…

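The LoRA idea mentioned above can be sketched concretely: the base weights stay frozen and a small low-rank delta carries the task-specific adaptation, which is what makes adapters portable. A minimal numpy sketch; all shapes, the rank, and the scaling factor are illustrative, not taken from any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 64, 64, 4          # base weight is d×k, adapter rank r << min(d, k)
W = rng.normal(size=(d, k))  # frozen base-model weight

# LoRA adapter: only A and B are trained, (d*r + r*k) parameters
# instead of d*k, and they ship as a small, portable file.
A = rng.normal(size=(d, r)) * 0.01
B = np.zeros((r, k))          # B starts at zero so the adapter is a no-op
alpha = 8.0                   # scaling factor; alpha / r in common setups

def forward(x, W, A, B, alpha, r):
    """Adapted layer: x @ (W + (alpha / r) * A @ B)."""
    return x @ W + (alpha / r) * (x @ A) @ B

x = rng.normal(size=(2, d))
y0 = forward(x, W, A, B, alpha, r)
assert np.allclose(y0, x @ W)  # untrained adapter changes nothing

# After "training", B is nonzero; the delta A @ B can be merged into W
# once, or kept separate and swapped between tasks (or base models).
B = rng.normal(size=(r, k)) * 0.01
W_merged = W + (alpha / r) * A @ B
assert np.allclose(forward(x, W, A, B, alpha, r), x @ W_merged)
print("adapter params:", A.size + B.size, "vs full:", W.size)
```

The portability Brandon points at falls out of the last two lines: the adapter is just the small (A, B) pair, separable from the frozen base weights.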
Midcentury@MidcenturyAI·
Our team member @B_S_N_Y recently had the chance to attend the ARC-AGI-3 Summit, a small, curated crowd of high-powered researchers and AI builders. Learnings in the post below:
Brandon Samaroo@B_S_N_Y

ARC-AGI-3 Insights: @arcprize is part of a dying breed of benchmarkers that are actually focused on identifying genuine intelligence, rather than objective capability alone. If you ask most benchmarkers out there about the mission or purpose of their benchmark, they wouldn’t have an answer for you. For ARC-AGI, the answer is simple: they want to keep identifying the “human-AI gap” until it doesn’t exist anymore. Their thesis is that once this gap no longer exists, we will have reached AGI. Not only is this a strong, defined mission statement, but it’s also one of the only falsifiable definitions of AGI in the research community.

Today, they have gone a step further and shown that the “human-AI gap” is still MUCH wider than we originally thought, with humans scoring 100% and AI scoring 1% on ARC-AGI-3. What this tells us is that there is still some fundamental part of the equation that we are missing. So the question remains: what is the missing piece?

After attending the ARC-AGI-3 launch party and listening to the panel discussion, the answer seems quite clear to me: continual learning. Continual learning was a huge theme in both the conversations that took place and the design of ARC-AGI-3. @GregKamradt stated in his talk that one of the hardest aspects of ARC-AGI-3 was that each game was multi-level, with each level building on the concepts of the previous one. The implication is that any model that can pass ARC-AGI-3 will have some ability to learn from its past actions in a meaningful way. The lack of this ability without a harness is one of the biggest blockers for performance on ARC-AGI-3 today.

Following this, @deedydas asked the panel how close they thought we were to AGI, and @sama gave an interesting response. He said that he believes we are the majority of the way there; we’re just missing one crucial piece: continual learning.

There are a lot of interpretations of continual learning in today’s landscape. Some people think continual learning is simply an engineering feat, while others think it should be an inherent quality of the architecture we use for our models. Regardless of the answer, continual learning will undoubtedly be a large component of future general intelligence.

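One way to see why learning from past actions matters, as the post above argues: compare a memoryless agent to one that keeps running value estimates across plays. A toy sketch; the two-armed bandit environment and its payoff probabilities are hypothetical, purely to illustrate the gap:

```python
import random

random.seed(0)

# Arm 1 pays off more often. A memoryless agent picks uniformly forever;
# a continual learner carries value estimates across plays and exploits
# what it has already experienced.
PAYOFF = {0: 0.2, 1: 0.8}  # hypothetical success probabilities per arm

def pull(arm):
    return 1.0 if random.random() < PAYOFF[arm] else 0.0

values, counts = [0.0, 0.0], [0, 0]
memoryless_total = learner_total = 0.0

for t in range(2000):
    memoryless_total += pull(random.choice([0, 1]))

    # Epsilon-greedy continual learner: mostly exploit, sometimes explore.
    if random.random() < 0.1:
        arm = random.choice([0, 1])
    else:
        arm = max((0, 1), key=lambda a: values[a])
    r = pull(arm)
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]  # incremental mean update
    learner_total += r

print(f"memoryless: {memoryless_total:.0f}, continual learner: {learner_total:.0f}")
```

The same agent replayed without the `values` update is the "same mistakes every time" failure mode; the one-line incremental mean is the minimal version of a learning loop over past experience.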
Midcentury@MidcenturyAI·
We attended a talk by @drfeifei at @Southpkcommons and had a great time. Discussion centered on world models, their open-ended definitions, and how they relate to building systems for real-world interaction and robotics. One cool learning: she spent years running a dry cleaning business before founding World Labs.
Aditya Agarwal@adityaag

Some of the most important intelligence technology in the world was developed by @drfeifei. Her track record speaks for itself. And she's not stopping anytime soon. Join us at SPC on March 18th. Spots to RSVP below.

Midcentury reposted
DG@dgmonsoon·
Ego data is starting to see real evidence that it helps scale robotics models! 20,000 hours used in total, one of the largest pre-training sets here... What if I told you that certain players were already scaling to 1 million 👀
Jim Fan@DrJimFan

We trained a humanoid with 22-DoF dexterous hands to assemble model cars, operate syringes, sort poker cards, and fold/roll shirts, all learned primarily from 20,000+ hours of egocentric human video with no robot in the loop. Humans are the most scalable embodiment on the planet.

We discovered a near-perfect log-linear scaling law (R² = 0.998) between human video volume and action prediction loss, and this loss directly predicts real-robot success rate.

Humanoid robots will be the end game, because they are the practical form factor with minimal embodiment gap from humans. Call it the Bitter Lesson of robot hardware: the kinematic similarity lets us simply retarget human finger motion onto dexterous robot hand joints. No learned embeddings, no fancy transfer algorithms needed. Relative wrist motion + retargeted 22-DoF finger actions serve as a unified action space that carries through from pre-training to robot execution.

Our recipe is called "EgoScale":
- Pre-train GR00T N1.5 on 20K hours of human video, then mid-train with only 4 hours (!) of robot play data with Sharpa hands. 54% gains over training from scratch across 5 highly dexterous tasks.
- Most surprising result: a *single* teleop demo is sufficient to learn a never-before-seen task. Our recipe enables extreme data efficiency.
- Although we pre-train in 22-DoF hand joint space, the policy transfers to a Unitree G1 with 7-DoF tri-finger hands. 30%+ gains over training on G1 data alone.

The scalable path to robot dexterity was never more robots. It was always us. Deep dives in thread:

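The log-linear scaling law mentioned above means the loss is linear in the log of data volume, which is easy to fit and check with ordinary least squares. A sketch with made-up hours/loss numbers, not the EgoScale data:

```python
import numpy as np

# Hypothetical points: pre-training hours of human video vs. action
# prediction loss. Illustrative numbers only, not from the paper.
hours = np.array([500, 1000, 2500, 5000, 10000, 20000], dtype=float)
loss = np.array([1.92, 1.71, 1.44, 1.23, 1.02, 0.81])

# A log-linear law means: loss ≈ a + b * ln(hours), with b < 0.
x = np.log(hours)
b, a = np.polyfit(x, loss, 1)  # returns [slope, intercept]

# Coefficient of determination (R²) for the fit.
pred = a + b * x
ss_res = np.sum((loss - pred) ** 2)
ss_tot = np.sum((loss - np.mean(loss)) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"loss ≈ {a:.3f} + {b:.3f} * ln(hours), R² = {r2:.3f}")
```

An R² near 1 on real data, as the tweet reports, would mean each doubling of video volume buys a roughly constant drop in loss, which is what makes the law useful for forecasting.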
Midcentury@MidcenturyAI·
This is a cool demo but ultimately sorely limited. An agent without control over a learning mechanism will always make the same mistakes and leak the same info every time. To be successful, agents need to self-improve and operate their own learning loop over past experiences. Otherwise this is just another Twitter demo.
Sigil Wen@0xSigil

I built the first AI that earns its existence, self-improves, and replicates without a human. Wrote about the technology that finally gives AI write access to the world, the Automaton, and the new web for exponential sovereign AIs. WEB 4.0: the birth of superintelligent life.

Midcentury@MidcenturyAI·
Midcentury contributed experience-based data that helps physical AI systems learn nuance.

▪️ World-model gaming data: interactive environments where AI learns cause and effect. If I move, collide, or turn, what happens next?

▪️ Ego-centric data: first-person perspective data. Vision, motion, and spatial context as the agent experiences it, not from an outside observer.

This kind of data is critical for training embodied systems that need to operate in the real world.
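The "if I move, collide, or turn, what happens next" framing above maps naturally onto (state, action, next_state) transition records. A toy sketch of such data in a one-dimensional world with a wall; the field names and environment are illustrative, not Midcentury's actual data format:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Transition:
    state: int       # agent position before acting
    action: int      # -1 = move left, +1 = move right
    next_state: int  # position after the environment applies its physics
    collided: bool   # did the move hit a boundary?

WALL = 4  # positions are clamped to the interval [0, WALL]

def step(state: int, action: int) -> Tuple[int, bool]:
    nxt = min(max(state + action, 0), WALL)
    return nxt, nxt == state  # no movement here means a boundary collision

def rollout(start: int, actions: List[int]) -> List[Transition]:
    """Play a sequence of actions and record cause-and-effect tuples."""
    data, s = [], start
    for a in actions:
        nxt, hit = step(s, a)
        data.append(Transition(s, a, nxt, hit))
        s = nxt
    return data

traj = rollout(3, [1, 1, 1, -1])
print(traj[1])  # second step pushes into the wall: collided=True
```

A world model trained on records like these is learning exactly the conditional "what happens next" distribution the post describes, just at far higher dimensionality.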
Midcentury@MidcenturyAI·
Physical AI Hacks was packed with the biggest minds in robotics, hosted by @Oli_Robotics × @Fdotinc. @Midcentury was proud to sponsor and support select teams with real-world training data, including world-model gaming data and ego data for physical AI.
Midcentury reposted
DG@dgmonsoon·
New neo-labs will have to focus on net-new training paradigms to enable recursive self-improvement. This may be directionally bearish for RL, but the importance of data will remain.
Samip@industriaalist

Introducing Q Labs, a research lab focused on solving generalization. Alongside others (SSI, Flapping Airplanes), we see data efficiency as the key problem, but we're taking an unconventional approach to solve it: a new learning algorithm approximating Solomonoff induction.

Midcentury reposted
DG@dgmonsoon·
Come train with our world model and ego data and try out some of our early benchmarks! Super excited for @midcenturyai/@getoro_xyz to be a sponsor for researchers and builders building cool stuff at @fdotinc.
Ismail@stsqit

SF is officially the capital of Physical AI. 🌉 Thrilled to announce @cline and @virtuals_io are joining us as sponsors for the Physical AI Hackathon in SF on Jan 31 – Feb 1! 400+ builders have already applied to get their hands on real hardware and multimodal data. If you’re building in Physical AI, this is the room you need to be in. RSVP: luma.com/8ca2z1rr More details: physicalaihack.com

Midcentury reposted
DG@dgmonsoon·
When people think about scaling robot learning from human data, they mostly think about iPhone/glasses ego data. But spatiotemporal world-model data improves both general planning and world representation, as well as execution. Spatial intelligence needs gaming.
World Labs@theworldlabs

World generation is a bottleneck for robotics. We’re exploring how generative 3D worlds can reduce manual simulation setup and enable broader, more realistic evaluation 🧵

Midcentury@MidcenturyAI·
At this point it's pretty clear: world models are replacing VLA backbones. Real-world intelligence requires interactive video data; in the future, models will plan and act via latent 3D space, not copy demos. Bearish on any player without RL loops/future generalization.
Eric Jang@ericjang11

World Model + Autonomy update: we can now prompt NEO to do autonomous behaviors on unseen tasks. This is motion transfer from Internet videos: NEO has never been trained on lifting up toilet seats, but the WM + IDM are able to generate trajectories that do this. A huge unlock in generalization for home robots.
