Mecka

88 posts

@MeckaAI

The Data Platform For Robotics Dba Mecka AI

Joined January 2024
100 Following · 21.5K Followers
Mecka@MeckaAI·
We couldn't be more excited to have @markgrinev on the team. Mark has spent years building 0-1 at early stage startups — during his time at Lazer they got to work with some of the incumbents in each domain: Abridge, Polymarket, Decagon, Coinbase. Now he's bringing that same energy to robotics.
Mark Grinev@markgrinev

excited to announce i've joined my dear friend @jasontheutopian to build @MeckaAI. robotics will be the next trillion dollar market and we're building the infrastructure to make it happen. more to come soon 👀

Mecka@MeckaAI·
Slowly then all at once
Jim Fan@DrJimFan

Announcing DreamDojo: our open-source, interactive world model that takes robot motor controls and generates the future in pixels. No engine, no meshes, no hand-authored dynamics. It's Simulation 2.0. Time for robotics to take the bitter lesson pill.

Real-world robot learning is bottlenecked by time, wear, safety, and resets. If we want Physical AI to move at pretraining speed, we need a simulator that adapts to pretraining scale with as little human engineering as possible.

Our key insights: (1) human egocentric videos are a scalable source of first-person physics; (2) latent actions make them "robot-readable" across different hardware; (3) real-time inference unlocks live teleop, policy eval, and test-time planning *inside* a dream.

We pre-train on 44K hours of human videos: cheap, abundant, and collected with zero robot-in-the-loop. Humans have already explored the combinatorics: we grasp, pour, fold, assemble, fail, retry—across cluttered scenes, shifting viewpoints, changing light, and hour-long task chains—at a scale no robot fleet could match.

The missing piece: these videos have no action labels. So we introduce latent actions: a unified representation inferred directly from videos that captures "what changed between world states" without knowing the underlying hardware. This lets us train on any first-person video as if it came with motor commands attached. As a result, DreamDojo generalizes zero-shot to objects and environments never seen in any robot training set, because humans saw them first.

Next, we post-train onto each robot to fit its specific hardware. Think of it as separating "how the world looks and behaves" from "how this particular robot actuates." The base model follows the general physical rules, then "snaps onto" the robot's unique mechanics. It's kind of like loading a new character and scene assets into Unreal Engine, but done through gradient descent and generalizes far beyond the post-training dataset.
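The latent-action idea (inferring "what changed between world states" from two consecutive frames, with no knowledge of the underlying hardware) can be sketched as an inverse-dynamics encoder with a discrete codebook. Everything below (the linear projection, the codebook, all names and shapes) is a hypothetical stand-in for illustration, not DreamDojo's actual architecture:

```python
import numpy as np

# Toy latent-action model: infer "what changed" between two frames by
# projecting the frame difference into a latent space and quantizing it
# against a small codebook of discrete latent actions.
rng = np.random.default_rng(0)

FRAME_DIM = 64      # flattened frame features (hypothetical)
LATENT_DIM = 8      # latent action dimensionality (hypothetical)
CODEBOOK_SIZE = 16  # number of discrete latent actions (hypothetical)

W_proj = rng.standard_normal((FRAME_DIM, LATENT_DIM)) * 0.1
codebook = rng.standard_normal((CODEBOOK_SIZE, LATENT_DIM))

def infer_latent_action(frame_t, frame_next):
    """Map a pair of consecutive frames to a discrete latent action id."""
    delta = frame_next - frame_t             # what changed between states
    z = delta @ W_proj                       # project into latent space
    dists = np.linalg.norm(codebook - z, axis=1)
    return int(np.argmin(dists))             # nearest codebook entry

frame_t = rng.standard_normal(FRAME_DIM)
frame_next = frame_t + 0.5 * rng.standard_normal(FRAME_DIM)
action_id = infer_latent_action(frame_t, frame_next)
print(action_id)  # a hardware-agnostic "action label" for this transition
```

The point of the quantization step is that the same discrete vocabulary can label any first-person video, which is what makes unlabeled human footage trainable as if it carried motor commands.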
A world simulator is only useful if it runs fast enough to close the loop. We train a real-time version of DreamDojo that runs at 10 FPS, stable for over a minute of continuous rollout. This unlocks exciting possibilities:
- Live teleoperation *inside* a dream. Connect a VR controller, stream actions into DreamDojo, and teleop a virtual robot in real time. We demo this on Unitree G1 with a PICO headset and one RTX 5090.
- Policy evaluation. You can benchmark a policy checkpoint in DreamDojo instead of the real world. The simulated success rates strongly correlate with real-world results, accurate enough to rank checkpoints without burning a single motor.
- Model-based planning. Sample multiple action proposals → simulate them all in parallel → pick the best future. Gains +17% real-world success out of the box on a fruit-packing task.

We open-source everything!! Weights, code, post-training dataset, eval set, and whitepaper with tons of details to reproduce. DreamDojo is based on NVIDIA Cosmos, which is open-weight too. 2026 is the year of World Models for physical AI. We want you to build with us. Happy scaling! Links in thread:
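The model-based planning loop in the last bullet (sample action proposals, roll each out inside the world model, pick the best future) can be sketched like this; the toy linear dynamics, the distance-to-goal score, and all names here are illustrative assumptions, not DreamDojo's API:

```python
import numpy as np

rng = np.random.default_rng(1)

HORIZON = 5       # planning horizon, in steps (hypothetical)
N_PROPOSALS = 32  # candidate action sequences per decision (hypothetical)
ACTION_DIM = 4

def world_model_step(state, action):
    """Stand-in for one world-model rollout step; a real system would
    decode the next frame, but toy linear dynamics suffice here."""
    return 0.9 * state + 0.1 * action

def score(state, goal):
    """Stand-in success predictor: negative distance to a goal state."""
    return -np.linalg.norm(state - goal)

def plan(state, goal):
    """Sample proposals, simulate each sequence, return the best first action."""
    proposals = rng.standard_normal((N_PROPOSALS, HORIZON, ACTION_DIM))
    best_score, best_first_action = -np.inf, None
    for seq in proposals:
        s = state.copy()
        for a in seq:          # roll the whole sequence out "inside the dream"
            s = world_model_step(s, a)
        r = score(s, goal)
        if r > best_score:
            best_score, best_first_action = r, seq[0]
    return best_first_action

state = np.zeros(ACTION_DIM)
goal = np.ones(ACTION_DIM)
action = plan(state, goal)
print(action.shape)  # (4,)
```

In practice the proposals would be batched and simulated in parallel on the GPU rather than looped over; the sequential loop here just keeps the control flow readable.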

Mecka@MeckaAI·
We’re hiring across hardware, software, and operations to build the data layer for Physical AI. Roles across Toronto/NYC/Shenzhen. Some of our roles:
- Senior Full Stack Engineer
- Computer Vision Engineer
- ML Engineer
- VLA & LLM Engineers
- Roboticists
We’ve got some of the best customers in the world and get to glimpse into the future… Come join us to build on the frontier! Apply below or send a DM about anything extraordinary you’ve built. jobs.ashbyhq.com/mecka.ai
Mecka reposted
Jim Fan@DrJimFan·
New milestone: we trained a robot foundation model on a world model backbone, and enabled zero-shot, open-world prompting capability for new verbs, nouns, and environments. If the world model can "dream" the right future in pixels, then the robot can execute well in motors. We call it "DreamZero", our first World Action Model (WAM).

Our team had tons of fun at the lab typing anything we like into an open text prompt, and watching the robot perform tasks it was never trained on. An emergent capability we didn't quite expect. Obviously not GPT-3 reliable yet, but we are marching into the GPT-2 era.

Discoveries:
- Model and data recipe co-evolve. Compared to VLAs, WAMs learn best from diverse data, breaking away from the conventional wisdom that lots of repeated demos per task are the bread and butter. Diversity >> repetitions.
- X-embodiment is extremely hard. Pixels are the answer. Different robot morphologies traditionally have a hard time sharing knowledge well. But if we put video first, pixels become the universal bridge connecting different hardware, even videos of human first-person view. DreamZero shows significant robot2robot and human2robot transfer. With only 55 trajectories on a *new*, unseen hardware (~30 min of teleop), it adapts quickly and retains zero-shot prompting ability.

Yesterday I posted about the "Second Pre-training Paradigm": world models are the next-gen foundation of Physical AI, not language backbones. Today, we are proving it works. And 2026 has just begun.

Paper: World Action Models are Zero-Shot Policies. Read it now: (thread)
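The few-shot adaptation described above (fitting a new, unseen robot from ~55 teleop trajectories while keeping the pretrained backbone's knowledge) can be illustrated with a minimal sketch: freeze the backbone and fit only a small action head. The linear head, shapes, synthetic data, and training loop below are assumptions for illustration, not DreamZero's actual recipe:

```python
import numpy as np

rng = np.random.default_rng(2)

FEAT_DIM, ACT_DIM = 32, 6      # hypothetical feature/action sizes
N_TRAJ, STEPS = 55, 20          # 55 short teleop trajectories, as in the post

def frozen_backbone(obs):
    """Stand-in for pretrained world-model features (weights never updated)."""
    return np.tanh(obs)

# Synthetic (observation, motor command) pairs from the "new" hardware
obs = rng.standard_normal((N_TRAJ * STEPS, FEAT_DIM))
true_W = rng.standard_normal((FEAT_DIM, ACT_DIM)) * 0.3
actions = frozen_backbone(obs) @ true_W \
    + 0.01 * rng.standard_normal((N_TRAJ * STEPS, ACT_DIM))

# Fit only the small action head by gradient descent on mean-squared error;
# the backbone stays frozen, so zero-shot prompting ability is preserved.
feats = frozen_backbone(obs)
W = np.zeros((FEAT_DIM, ACT_DIM))
for _ in range(200):
    pred = feats @ W
    grad = feats.T @ (pred - actions) / len(obs)
    W -= 0.5 * grad

mse = float(np.mean((feats @ W - actions) ** 2))
print(mse)
```

The design point is that adaptation touches only the lightweight robot-specific head, which is why a half hour of teleop data can suffice.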
Eric Jang@ericjang11·
the sheer panic of running out of thinking tokens in my subscription and having to resort to manual thinking
Ty@TyneeWorld·
2025 was a wild year. And 2026 is going to top it 🤖
Mecka@MeckaAI·
We are at ICRA 2025! Reach out if you'd like to chat 🤖🧠
Mecka tweet media
Mecka@MeckaAI·
@TheHumanoidHub "Robotic data-collection emerges as a prominent theme in AI"... slowly then all at once
dar@radbackwards·
I’ve long been frustrated with all you nerds using ‘based’ as a proxy for whatever the opposite of ‘woke’ is… But on this day in the robot factory, Lil B aka Lil B from the Pack aka BASED GOD (@LILBTHEBASEDGOD) reminded me that no matter who adopts and adapts BASED— the term will never lose the love and positivity he packed into its syllables… So keep going nerd burgers. Make all things based. Swag to the whole team and shout out to the Based God #TYBG
a@siliconvmg

Lil B explaining the term “Based” to Eric Jang, VP of AI at 1X Technologies

Mecka@MeckaAI·
Time for a refresh 🤖
Mecka tweet media