mimic
@mimicrobotics
47 posts
Physical AI that automates your most tedious tasks, from manufacturing to logistics. Our robots intuitively learn new skills from you and operate fully autonomously.

Joined June 2023
7 Following · 2.6K Followers

Pinned Tweet
mimic @mimicrobotics
First look at our collaboration with @AudiOfficial on bringing AI-driven robotics into industrial production. Our end-to-end pixel-to-action model, running on our bi-manual platform, is capable of performing a complex, dexterous and long-horizon insertion task.
16 replies · 44 reposts · 306 likes · 62.6K views
mimic retweeted
Elvis Nava @elvisnavah
Seeing the insane @mimicrobotics team bring this to life on such a compressed timeline was truly something else. Super proud to be working with @AudiOfficial on the bleeding edge of end-to-end manipulation!
mimic @mimicrobotics
First look at our collaboration with @AudiOfficial on bringing AI-driven robotics into industrial production. Our end-to-end pixel-to-action model, running on our bi-manual platform, is capable of performing a complex, dexterous and long-horizon insertion task.
6 replies · 1 repost · 37 likes · 4.7K views
mimic retweeted
Elvis Nava @elvisnavah
I just wrote a blog post on mimic-video, @mimicrobotics' answer to a question many have been asking: Why are state-of-the-art VLAs for robotics built on top of Vision-Language Model (VLM) backbones, if those backbones are not pre-trained with physical knowledge in mind?
11 replies · 17 reposts · 163 likes · 16.2K views
mimic retweeted
Lukas Ziegler @lukas_m_ziegler
Robots might learn better from video than from language! 📼

Most Vision-Language-Action (VLA) models learn what to do from text, but still struggle with how things move in the real world. That makes them data-hungry and slow to train.

@mimicrobotics' mimic-video takes a different route. Instead of grounding robot control in text, it grounds it in video, using large pre-trained video models that already capture physical motion and dynamics. The idea is straightforward: let the video model handle “what will happen next,” and let a smaller control model focus only on turning that visual plan into robot actions.

The result is big gains in practice. Robots trained this way need 10× less data, converge twice as fast, and perform better on both simulated benchmarks and real bimanual manipulation tasks. If robots can “imagine” motion using video, control becomes a much simpler problem.

Shoutout to Jonas Pai, Liam Achenbach, Oier Mees, @elvisnavah, and the rest of the team!

Here's the project page: mimic-video.github.io

♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com
14 replies · 49 reposts · 369 likes · 49.7K views
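The division of labor described in the thread above (a frozen, pre-trained video backbone that predicts what happens next, plus a small control model that decodes that visual plan into actions) can be sketched roughly as follows. This is a minimal illustration of the general pattern, not mimic-video's actual architecture; every module name, dimension, and interface here is an assumption.

# Minimal sketch of a video-grounded policy: a frozen video backbone
# supplies a latent "visual plan", a small head decodes it into actions.
# All names and dimensions are illustrative, not from mimic-video.
import torch
import torch.nn as nn

class VideoGroundedPolicy(nn.Module):
    def __init__(self, video_backbone: nn.Module, latent_dim: int = 1024,
                 action_dim: int = 14, horizon: int = 16):
        super().__init__()
        self.backbone = video_backbone            # pre-trained on internet video
        for p in self.backbone.parameters():      # keep its physical knowledge frozen
            p.requires_grad = False
        # small control model: latent plan -> chunk of robot actions
        self.action_head = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.GELU(),
            nn.Linear(512, horizon * action_dim),
        )
        self.horizon, self.action_dim = horizon, action_dim

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, C, H, W) recent RGB observations; we assume the
        # backbone maps them to a (B, latent_dim) summary of predicted motion
        with torch.no_grad():
            plan = self.backbone(frames)
        actions = self.action_head(plan)
        return actions.view(-1, self.horizon, self.action_dim)

Because only the small head sees robot data in a setup like this, the expensive physics learning happens once, during video pretraining, which is one plausible reading of the reported 10× sample-efficiency gain.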
mimic retweeted
Elvis Nava @elvisnavah
Today @mimicrobotics and friends are excited to share mimic-video, a new class of Video-Action Model that elevates video model backbones to first-class citizens for robot learning!
17 replies · 42 reposts · 318 likes · 83.2K views
mimic retweeted
Ilir Aliu @IlirAliu_
Most robot foundation models still learn physics the hard way: from robot data only. This paper takes a different path.

mimic-video uses large-scale internet video to learn motion and physical dynamics first, then maps that into robot actions.

• Policies are grounded in video, not static images
• Physical dynamics are learned during pretraining, not patched later
• 10× better sample efficiency than typical VLAs
• 2× faster convergence on real and simulated robots
• Works across grippers and dexterous hands

Instead of asking a VLA to infer time, causality, and motion from sparse robot rollouts, this approach starts where motion already lives: video. It feels like a quiet but important shift in how embodied intelligence might scale.

Work by @mimicrobotics with collaborators from Microsoft Zurich, @ETH_en, @UofCalifornia, @UCBerkeley, and @NVIDIARobotics. Led by Jonas Pai, Liam Achenbach, Oier Mees, @elvisnavah, and team.

Paper and project links in the comments.
7 replies · 24 reposts · 186 likes · 36K views
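The "dynamics first, actions second" recipe in the thread above amounts to a two-stage pipeline. A rough sketch of the second stage follows, assuming a backbone already pretrained on internet video (stage one) and a policy shaped like the earlier sketch in this feed; the dataset interface, loss, and hyperparameters are assumptions, not details from the paper.

# Sketch of the action-learning stage: behavior cloning on robot demos
# with the video backbone frozen. Illustrative only.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def train_action_head(policy, demos: DataLoader, epochs: int = 10,
                      lr: float = 3e-4):
    # only the small action head is trainable; the frozen backbone
    # is why comparatively few robot demonstrations should suffice
    opt = torch.optim.AdamW(
        (p for p in policy.parameters() if p.requires_grad), lr=lr)
    for _ in range(epochs):
        for frames, expert_actions in demos:   # observation clip, action chunk
            pred = policy(frames)
            loss = F.mse_loss(pred, expert_actions)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy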
mimic @mimicrobotics
We’re excited to showcase mimic-video, a new class of Video-Action Model, paving the way for scaling robot learning from pure RGB video pretraining. Read more:
Elvis Nava @elvisnavah
Today @mimicrobotics and friends are excited to share mimic-video, a new class of Video-Action Model that elevates video model backbones to first-class citizens for robot learning!
1 reply · 3 reposts · 6 likes · 1.1K views
mimic retweeted
Follow the Gradient @followgradient
When @elvisnavah said, ‘the only way the research could even be done was the company,’ we knew we had to talk.
1 reply · 2 reposts · 5 likes · 1.3K views
mimic @mimicrobotics
Thank you to our incredible team, investors, and partners! Onwards and upwards.
1 reply · 0 reposts · 5 likes · 1.3K views
mimic @mimicrobotics
Today, we’re excited to announce our $16M seed funding round, accelerating our mission to deploy frontier physical AI across industries. Led by @Elaia_Partners alongside @speedinvest, with participation from Founderful, 1st Kind, 10x Founders, 2100 Ventures, and Sequoia Scout Fund.
6 replies · 8 reposts · 39 likes · 4.6K views
mimic retweeted
Elvis Nava @elvisnavah
@mimicrobotics is going to be in Seoul for CoRL and Humanoids! 🦾 🇰🇷 We’re also hosting an exclusive mimic apéro in Seoul on 30th September - a chance to step away from the conference floor and continue conversations about cutting-edge robotics & AI in a more relaxed setting.
1 reply · 1 repost · 7 likes · 1K views
mimic @mimicrobotics
How can we leverage the wealth of cross-embodiment robot data to efficiently train policies that natively support highly dexterous hands? Erik worked on this exact question during his time at mimic. Check out the paper:
Erik Bauer @erikbauerr
How can robots with different end-effectors efficiently learn from each other? In our latest work, we propose a new approach for learning imitation policies across end-effectors that enables efficient cross-embodiment skill transfer in two steps: 🧵
1 reply · 0 reposts · 14 likes · 2.7K views
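Erik's thread above is truncated, so the paper's actual two-step method is not reproduced here. For illustration only, one common pattern for cross-embodiment transfer is a shared policy trunk with one small action head per end-effector; the sketch below shows that generic pattern, with all names and dimensions assumed, and should not be read as the paper's approach.

# Generic cross-embodiment policy: shared trunk, per-end-effector heads.
# NOT the method from the paper above; purely illustrative.
import torch
import torch.nn as nn

class CrossEmbodimentPolicy(nn.Module):
    def __init__(self, obs_dim: int = 512, hidden: int = 256,
                 action_dims: dict | None = None):
        super().__init__()
        # e.g. a 7-DoF parallel-gripper arm vs. a 23-DoF dexterous hand
        action_dims = action_dims or {"gripper": 7, "dexterous_hand": 23}
        # shared trunk learns embodiment-agnostic skill features, so
        # demonstrations collected with one end-effector benefit the others
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
        )
        # one lightweight head per embodiment maps shared features
        # into that end-effector's own action space
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden, dim) for name, dim in action_dims.items()
        })

    def forward(self, obs: torch.Tensor, embodiment: str) -> torch.Tensor:
        return self.heads[embodiment](self.trunk(obs))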
mimic @mimicrobotics
mimic x manufacturing

We are proud to be working with established partners in the manufacturing space to automate complex assembly tasks. We believe that by pioneering state-of-the-art dexterity, we in turn unlock new possibilities in scalable learning directly from human demos.
4 replies · 16 reposts · 132 likes · 12.9K views