Igor Kulakov
@ihorbeaver

1.3K posts

Building MicroFactory — assembling robot units that bring Factorio into the real world. Seed soon.

San Francisco, CA · Joined November 2007
529 Following · 8.4K Followers
Pinned Tweet
Igor Kulakov@ihorbeaver·
Introducing MicroFactory DevKit: the first production version of MicroFactory, a general-purpose robot designed to automate manual work. With a glimpse of self-replication.
Igor Kulakov retweeted
Hubert Thieblot@hthieblot·
In 2026, I’m inviting 1,000 founders to San Francisco to start their companies. Are you in?
Igor Kulakov@ihorbeaver·
@johndevor Yes, a self-contained small factory in a container sounds like a good product
John Devor@johndevor·
@ihorbeaver What if we scale it up to container-sized and put more than just arms in the containers?
Igor Kulakov@ihorbeaver·
A Salesforce Tower full of MicroFactories could replace Shenzhen for the US. A single MicroFactory occupies about 3×2×2 feet (12 cubic feet). The Salesforce Tower in San Francisco is about 22 million cubic feet, so it could fit around 1.83 million MicroFactories. Meanwhile, Shenzhen has 1.8 million people employed in manufacturing. Sure, factories need extra space for elevators, material storage, etc., but Shenzhen also ships only 20% of its output to the US, and not everyone there is doing physical work; many hold managerial positions.
David@DavidSHolz

5 million humanoid robots working 24/7 can build Manhattan in ~6 months. now just imagine what the world looks like when we have 10 billion of them by 2045. now imagine the year 2100.

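The tower arithmetic in the post above checks out in a couple of lines (all figures are the tweet's own rough estimates):

```python
# Back-of-envelope check of the numbers in the post above.
unit_volume_ft3 = 3 * 2 * 2            # one MicroFactory: 3 × 2 × 2 ft = 12 ft³
tower_volume_ft3 = 22_000_000          # Salesforce Tower, rough estimate

units = tower_volume_ft3 / unit_volume_ft3
print(f"{units:,.0f}")  # → 1,833,333
```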
Yunzhu Li@YunzhuLiYZ·
Teleoperation is often used to scale robot data collection. But anyone who has actually done fine-grained teleop knows how hard it is, even for experienced operators: precise alignment, axis-constrained rotation, contact regulation, and millimeter-level control are extremely unforgiving.

In this work, we introduce Residual Copilot, a real2sim2real shared-autonomy framework that learns low-level corrections to assist teleoperators in contact-rich manipulation. residual-copilot.github.io

User study results:
🔩 Nut Threading: 40% → 100% success
⚙️ Gear Meshing: 16.4s → 10.9s completion
📌 Peg Insertion: 30.3s → 18.5s completion

Great work led by @shashuo0104 👇
Shuo Sha@shashuo0104

[1/5] Fine-grained teleop is slow, error-prone, and frustrating even for experts. We introduce a real2sim2real shared autonomy framework that learns a residual copilot for low-level corrections. It enables:
🎮 fine-grained teleop for novices
🤖 a copilot learned from <5 min of teleop data
📈 higher-quality demonstrations for imitation learning
🔗 residual-copilot.github.io

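The shared-autonomy idea in the thread above can be sketched in a few lines: the executed command is the operator's raw action plus a small learned correction. This is a minimal illustration of the concept only — the function names (`shared_autonomy_step`, `toy_residual`) and the blending scheme are my assumptions, not the paper's API:

```python
import numpy as np

def shared_autonomy_step(teleop_action, obs, residual_policy, alpha=1.0):
    """Blend the operator's command with a learned low-level correction.

    teleop_action: raw operator command (e.g. a Cartesian delta)
    residual_policy: model mapping (obs, teleop_action) -> small correction
    alpha: how much of the correction to apply (0 = pure teleop)
    """
    residual = residual_policy(obs, teleop_action)
    return teleop_action + alpha * residual

# Toy stand-in "policy" that damps commands while in contact.
def toy_residual(obs, action):
    return -0.5 * action if obs["in_contact"] else np.zeros_like(action)

cmd = np.array([1.0, 0.0, -0.2])
print(shared_autonomy_step(cmd, {"in_contact": True}, toy_residual))
```

The real system learns the residual from teleop data in simulation; the toy policy here just shows where such a correction slots into the control loop.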
Andreas Klinger 🦾@andreasklinger·
Let's do a robots watch party! 😉 This is a whole video where we go through videos of real-world use cases for robotics – industry per industry – 20 minutes – only the coolest robot stuff out there. 🤖🔥

And I have the perfect person to join me: @lukas_m_ziegler, 300k+ followers across platforms, millions of views, one of the leading influencer voices bringing robotics mainstream.

We start with five markets that are mature: 📦 Logistics, ⚡ Energy, 🌾 Farming, 🏗️ Construction, 🔒 Security. Robotics, real world, actually happening. Not pitch decks. We explain the context and what opportunities we see. 👀

Then we get into the weird stuff. Robot skin made from human cells. Brain-controlled robots. Robots in brains. Fight clubs. A happy robot whose only job is to stop grain bins from exploding.

If you ever wondered what the robotics landscape actually looks like right now: this is it.

00:00 Intro
00:37 Logistics 📦
03:04 Energy ⚡
05:22 Farming 🌾
07:47 Construction 🏗️
09:46 Security 🔒
11:20 Humanoids 🧍‍♂️
13:48 😵 Freak section
Ari Wasch@ariwasch·
I had to respin a PCB because I missed a stupid error. Cost me $2K and 1 month. So I built hardware.dog: catch design issues and design your next board faster.
Igor Kulakov@ihorbeaver·
Preparing the second MicroFactory unit so they can work in a line. Real-world Factorio is coming.
Eric Jang@ericjang11·
Life update: I've decided to leave 1X. It's been an honor helping grow the company.

I joined Halodi Robotics in 2022 (the prior name of the company) as the only California-based employee. At the time, we were about 40 based out of Norway and 2 in Texas. My first hire and I worked from my garage for a few months to save money. Today, 1X is hundreds of people, with hardware, design, software, AI, manufacturing, and product all relocated to the SF Bay Area, firing on all cylinders and working on getting NEO ready for the home.

A big thank you to all my colleagues that I worked with. It was a hard decision to leave. When working at an exciting startup that is growing fast, there's always so much to do and never a perfect time to move on. We have several works in the pipeline that are so exciting because they greatly advance general autonomy and the scalability of our deployment approach and really show a realistic path towards the product working. The recent World Model autonomy update is one example, and there's more coming. The 1X factory is so exciting.

Things are accelerating at a speed I would have been surprised by a few years ago. In 2022, most technologists and researchers and VCs were skeptical about humanoids and large-scale imitation learning. "Why legs?" "How could end-to-end learning ever be good enough?" "Why go for the home and not the factory?" "How will we ever gather enough data?" The Overton window on general-purpose robotics has shifted a lot since then. Although we are still early in our mission, I remain confident that soon, house robots will be as commonplace as air conditioners, cars, and ChatGPT. Just talk to the bot, and it will go and quietly get it done. Entire economies will eventually re-organize around this technology. People get it now.

What's next? I believe that progress in applied deep learning generally rides on "harnessing the magic" of a few magical objects. These magical objects possess way more generalization power than one might normally expect. Just asking the LLM to understand what you want is magic. Video generation models are magic. Reasoning is magic. You don't run into a magic object every day, but when you do, you make sure to grab it and put it to work to make something useful in the robot somehow.

A lot of my early conviction for where robotics was headed came from working on BC-Z from 2018-2021. The "magical object" I bet on at the time was the surprising data-absorption capability of supervised learning and "just ask for generalization". This pioneered a lot of the standard ingredients we see in VLAs today:
- Generalization to unseen language commands
- Human-guided DAgger for policy improvement
- Open-loop auxiliary predictions + receding horizon control, AKA action chunking
- Manipulation keypoints to improve servoing
- Simple ResNet18 with FiLM conditioning on multi-modal inputs

The next "magical object" we bet on at 1X was video models, because they are clearly magical objects that learn a data distribution not too dissimilar from what a robot needs to learn. They generalize surprisingly well. I am once again feeling that there are more magical objects in play now, which opens up a lot of new possibilities for robotics and beyond.

I'm taking a few months to empty my cup of priors and gain fresh perspective. When I left Google in 2022, I spent about 2 weeks deciding what to do next. This time, I want to take a lot more time to catch up on what has happened in the broader AI + robotics space. I've been re-implementing some deep learning papers. I'm working on a big tutorial for my blog. I'm learning all the Claude power-user tricks. I'm reading the Thinking Machines blog posts to understand what kinds of experiments are being run at frontier labs. I'm reading Ben Katz's 2016 thesis on the Mini Cheetah actuator. I'm traveling to China in March to meet incredible companies in the Chinese robotics ecosystem.

Now, more than ever, is the time for both humans and machines to learn. The next token of my life sequence will be an important one. To colleagues and investors that bet on 1X early, even before we became a household name - I thank you from the bottom of my heart. I won't forget it ♥️
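The "open-loop auxiliary predictions + receding horizon control, AKA action chunking" ingredient mentioned above can be sketched as a control loop: the policy predicts a chunk of future actions, only the first few are executed, then it re-predicts from a fresh observation. This is a hypothetical toy to show the pattern, not BC-Z's actual code:

```python
import numpy as np

def receding_horizon_control(policy, env, execute=4, steps=32):
    """Predict an action chunk, execute only its first few steps, re-plan.

    policy(obs) returns a (horizon, action_dim) chunk of future actions;
    re-predicting every `execute` steps keeps the controller closed-loop
    overall while each chunk is executed open-loop.
    """
    obs = env.reset()
    executed = []
    while len(executed) < steps:
        chunk = policy(obs)
        for a in chunk[:execute]:
            obs = env.step(a)
            executed.append(a)
    return executed

# Toy environment and constant policy, just to show the control flow.
class ToyEnv:
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t

actions = receding_horizon_control(lambda obs: np.ones((8, 2)), ToyEnv())
print(len(actions))  # → 32
```

The trade-off is the `execute` parameter: longer open-loop stretches amortize inference cost, shorter ones react faster to the world.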
Igor Kulakov@ihorbeaver·
Robotic insight 3: There’s a widely held view among leading robotics thinkers: the current priority is to automate some commercially profitable work to get real-world data. If you set the goal as “the first profitable deployment that also generates data,” you get a few insights:
- People with industry experience may be better positioned to nail this than academic researchers.
- You can look for a non-obvious initial domain that is easier to deploy in practice while still providing useful data.
Igor Kulakov@ihorbeaver

Robotic insight 2: One undocumented feature of VLA models is that they can generalize pretty well from very little data if you run them on good hardware. Here, with 35 demonstrations (5 minutes in total), the robot can pick up a small PCB from any position and at any rotation angle.

Igor Kulakov@ihorbeaver·
@elias_ab_ 1 variation per episode. In total: 7 positions + 8 rotations in 2 positions + 4 rotations in 2 positions
Elias@elias_ab_·
@ihorbeaver How many variations were recorded in the demonstrations (or how many demonstrations per each variation/position)?
Igor Kulakov@ihorbeaver·
Robotic insight 2: One undocumented feature of VLA models is that they can generalize pretty well from very little data if you run them on good hardware. Here, with 35 demonstrations (5 minutes in total), the robot can pick up a small PCB from any position and at any rotation angle.
Igor Kulakov@ihorbeaver

The advantage of arms with industrial internals is that they don’t wobble, so the AI model can control them faster by just multiplying frames per second. Here we multiplied the FPS by 3× compared to teleoperation (180 instead of 60).

Igor Kulakov@ihorbeaver·
@paravn Not quite. Cheap arms have backlash, so they overshoot from inertia, go back and forth and take time to settle. Arms with no backlash and stiff structure just reach the position cleanly.
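Backlash isn't literally low damping, but the overshoot-and-settle behaviour described in the reply above is the classic second-order step response. A toy simulation (semi-implicit Euler, made-up parameters, function name mine) showing how a lightly damped joint rings while a well-damped one reaches the target much sooner:

```python
def settle_time(zeta, wn=50.0, tol=0.02, dt=1e-4, t_max=2.0):
    """Last time the unit-step response of x'' + 2ζωx' + ω²x = ω²
    sits outside ±tol of the target, i.e. when ringing stops."""
    x = v = t = last_outside = 0.0
    while t < t_max:
        a = wn * wn * (1.0 - x) - 2.0 * zeta * wn * v
        v += a * dt          # semi-implicit Euler: stable for oscillators
        x += v * dt
        t += dt
        if abs(x - 1.0) > tol:
            last_outside = t
    return last_outside

print(settle_time(zeta=0.1))   # lightly damped ("wobbly") joint: long ring-down
print(settle_time(zeta=1.0))   # stiff, well-damped joint: settles much sooner
```

That settling time is dead time on every motion, which is part of why cheap-arm demos run slowly.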
Parav@paravn·
@ihorbeaver What you mean by "don't wobble"? Meaning they have high repeatability?
Igor Kulakov@ihorbeaver·
The advantage of arms with industrial internals is that they don’t wobble, so the AI model can control them faster by just multiplying frames per second. Here we multiplied the FPS by 3× compared to teleoperation (180 instead of 60).
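One way to read "multiplying frames per second" in the tweet above: a command stream recorded at 60 Hz can be resampled to 180 Hz, and a stiff, low-backlash arm will track the denser stream without exciting oscillation. A minimal sketch — the function name and the use of linear interpolation are my assumptions, not the actual pipeline:

```python
import numpy as np

def upsample_actions(actions_60hz, factor=3):
    """Linearly interpolate a 1-D action stream from 60 Hz to 180 Hz."""
    n = len(actions_60hz)
    t = np.arange(n)                                  # original sample times
    t_fine = np.linspace(0, n - 1, factor * (n - 1) + 1)
    return np.interp(t_fine, t, actions_60hz)

demo = np.array([0.0, 1.0, 0.5])   # three setpoints recorded at 60 Hz
print(upsample_actions(demo))       # 7 setpoints at 180 Hz
```

A wobbly arm fed the same 180 Hz stream would overshoot each intermediate setpoint, which is why the trick only pays off on rigid hardware.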
Igor Kulakov@ihorbeaver·
@GPTJustin Yes. The problem is that current models don’t have memory, so they don’t know what stage of the wobble they’re in at the moment. The result is a lot of slow demos in robotics.
Justin Strong@GPTJustin·
@ihorbeaver Do you think "can approximate where the wobble ends up" is a likely emergent property of models that generalize well?
Igor Kulakov@ihorbeaver·
Industrial UR5e arm inside our box for size comparison. Industrial arms are good because they are rigid and have low backlash, so they do not shake during operation. However, they are quite bulky. One of the reasons to build our own arms is to combine the best of both worlds: compact size and stability.