Max Wittstamm
@MWtstm
42 posts

Building the Workflow Planning Layer for Physical AI Product Creation

Berlin · Joined May 2022
65 Following · 15 Followers
Max Wittstamm @MWtstm
@uncover_ai 1X's world model framing is the right foundation. The gap that remains: once Neo is in your home, what derives the task sequence, fixture constraints, and failure recovery plan from the environment geometry? That workflow layer is what turns a capable robot into a reliable one.
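What that layer has to produce is easy to sketch even if deriving it is not. A minimal Python sketch of the output, with hypothetical types (nothing here is 1X's actual stack; derive_plan is a stub for the open problem):

    from dataclasses import dataclass, field

    @dataclass
    class TaskStep:
        action: str                     # e.g. "grasp", "place", "insert"
        target: str                     # object id from the scene graph
        fixtures: list = field(default_factory=list)   # what must hold still
        recovery: str = "retry_from_last_stable_pose"  # failure-recovery policy

    def derive_plan(scene_graph: dict) -> list[TaskStep]:
        """Hypothetical workflow layer: environment geometry in; an ordered,
        fixture-aware, recoverable task sequence out. This function is the gap."""
        return [TaskStep(action="grasp", target=obj["id"])
                for obj in scene_graph.get("manipulable_objects", [])]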
Uncover AI @uncover_ai
The founder of 1X said something during this humanoid robot factory tour that genuinely shocked me to my core: “I think over the next decade, the entire substrate of society will change.”

Then, he added: “When you get your Neo, it’ll be able to attempt almost anything. You could say: ‘Hey Neo, can you go upstairs and make me a coffee?’”

That’s when the video stopped feeling like a tech demo. This company is building humanoid robots designed for consumer homes. And the craziest part? He thinks this will become “useful” for average consumers by 2027. Not 2040.

The whole video feels like watching the first iPhone demo. Especially when he explains the “world model”: “It’s trained on how the world works in a dynamical sense.” Meaning the robot doesn’t just memorize tasks. It learns reality itself.

At one point, he says: “Sometimes it will succeed, and sometimes it will fail. But that, of course, is how you learn.” That line stayed with me. Because every mistake becomes training data. Every house becomes a classroom. Every robot makes the next robot smarter.

The video gets even crazier when they walk through the 58,000-square-foot space. A factory that builds everything in-house. One line from the founder explains why this matters: “How quickly can you learn and iterate and make this better and better… that is what is going to define the next decade.”

Not gonna lie. This is one of the few AI videos that genuinely made me feel like the future has already arrived quietly while everyone was distracted arguing about chatbots...
Roberto Nickson@rpnickson

I toured the only factory in America building humanoid robots from scratch. Last month, I got an exclusive first look at the facility making NEO, the humanoid robot launching into consumer homes later this year. It is the most vertically integrated humanoid robot factory in America. Every critical component is done in-house.

CEO Bernt Børnich walked us through the full story: from building soapbox cars with his dad at age 11 to leading the only end-to-end humanoid robot factory in the United States. VP of Operations Vikram Kothari then took us inside the entire build process, where every motor, limb, circuit, and sensor comes together under one roof on a rapid four-week cycle from CAD to finished robot.

[TIMESTAMPS]
(00:00) Welcome to the NEO Factory
(01:00) 1X: from childhood dream to reality
(02:21) The World Model difference: true general intelligence
(03:48) Making everything in America: why is it so important?
(05:01) Walking the factory floor
(06:04) Safety and privacy
(07:57) A more abundant future: the real impact of NEO
(09:26) Bernt's story and the 1X North Star

Huge thank you to @Berntbornich, Vikram, and the entire @1X_tech team for opening the doors and showing us everything!

Max Wittstamm @MWtstm
@Opanarchyai Visual + AI-generated robotic workflows is exactly the layer the product creation chain has been missing. CAD outputs geometry. Humanoids execute motion. The orchestration between them - sequence, fixtures, DFA, feasibility - is where the chain breaks.
Opanarchy @Opanarchyai
First preview of Agentic Workflow Engine. Build robotic workflows visually. Or generate them instantly with AI. The orchestration layer for embodied AI.
[image attached]
Max Wittstamm @MWtstm
@zachdive Striking how far code + screenshot already gets you. Next feedback loops to close the chain: simulation against physical user requirements (load, thermal, kinematics), then manufacturability and assembly (DFA, fixtures, tolerance stack-up). Same trick, harder modalities.
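The tolerance stack-up step, at least, is textbook arithmetic; the hard part is wiring it into the loop. A minimal sketch (the part tolerances are invented for illustration):

    import math

    def stackup(tolerances):
        """Worst-case and RSS (root-sum-square) stack-up for a 1D tolerance chain."""
        worst_case = sum(abs(t) for t in tolerances)
        rss = math.sqrt(sum(t * t for t in tolerances))
        return worst_case, rss

    # Three mating parts in a chain, each with a +/- tolerance in mm.
    wc, rss = stackup([0.05, 0.10, 0.02])
    print(f"worst case +/-{wc:.2f} mm, RSS +/-{rss:.3f} mm")  # 0.17 / 0.114
    # A DFA gate would compare these numbers against the clearance the
    # assembly step actually needs before releasing it to a robot.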
Max Wittstamm @MWtstm
@aphysicist Factorio for wire harnesses is exactly the right frame. Routing logic, sequence, tooling - that's DFA made playable. The gap is exporting that plan to a humanoid that can execute it. Solve that bridge and you close the loop from design to assembly.
Aaron Slodov @aphysicist
this is dope, factorio for wire harnesses
Lucas Crupi@lucas_crupi

@loombotic We’re launching the world’s first quick-turn, high-mix, fully automated wire harness production line. Our goal is to make wire harnesses as fast and easy to order as PCBs or sheet metal. Customers can already upload a harness design and get an instant quote. Starting today, these parts can now be produced on our automated line. We’re starting with Mini-Fit Jr., with more connectors coming soon.

Max Wittstamm @MWtstm
@robotsdigest Simulation-ready articulation at generation time compresses the pipeline significantly. Most text-to-mesh tools hand you geometry and stop. PhysForge handing you kinematics the simulator can actually run is a different starting point for workflow planning.
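The check is cheap if the asset really ships as URDF or similar. A sketch with PyBullet ("chair.urdf" stands in for a hypothetical PhysForge export, not a real file):

    import pybullet as p

    p.connect(p.DIRECT)              # headless physics server
    body = p.loadURDF("chair.urdf")  # hypothetical simulation-ready export

    # A mesh-only asset loads with zero joints; a simulation-ready one
    # exposes articulation a planner can actually actuate.
    for j in range(p.getNumJoints(body)):
        info = p.getJointInfo(body, j)
        print(info[1].decode(), "type:", info[2])  # joint name and type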
Robots Digest 🤖 @robotsdigest
Most 3D generation models stop at geometry. PhysForge goes further: generating simulation-ready assets with built-in materials, articulation, and kinematics. Not just a chair mesh, but a chair the simulator actually understands.
Max Wittstamm @MWtstm
@Inbarium Spectacle vs. ROI is the right cut. A humanoid that can run a half-marathon still needs someone to derive the assembly sequence, fixture layout, and DFA constraints from the CAD before it touches a real part. That derivation is where deployment stalls.
Elad Inbar @Inbarium
At the Beijing Half Marathon, a humanoid robot just ran nearly 7 minutes faster than the human world-record time. I run a robotics company. I still won't put one in your business yet. Here's the deployment line that separates spectacle from ROI:
Max Wittstamm @MWtstm
@Baris Stateful, temporal, heterogeneous - world model serving is harder than LLM serving in every dimension. The rollout management problem alone is unsolved at scale. Whoever cracks clean state handoff between action loops and world model updates owns a real infrastructure layer.
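For concreteness, a hypothetical session interface for that handoff (a sketch of the problem shape, not reactor.inc's actual API; `model` is an assumed object with initial_state() and rollout() methods):

    import uuid

    class WorldModelSession:
        """Unlike stateless LLM calls, every step must carry latent state
        forward between the action loop and the world model."""

        def __init__(self, model):
            self.session_id = str(uuid.uuid4())
            self.model = model
            self.latent = model.initial_state()  # persistent rollout state

        def step(self, observation, action):
            # The runtime, not the client, owns state continuity: the latent
            # from step t is the only valid input to step t+1.
            pred, self.latent = self.model.rollout(self.latent, observation, action)
            return pred

        def checkpoint(self):
            """Snapshot for migration/failover: the handoff problem in one method."""
            return {"session_id": self.session_id, "latent": self.latent}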
Baris @Baris
The missing layer in the physical AI stack is starting to emerge: reactor.inc

LLMs became useful to developers only after the serving layer matured. Inference providers abstracted away GPU orchestration, batching, routing, scaling, and model-specific deployment. This led to several multi-$B startups 💰

World models need their own version of that layer. But this is not just “LLM inference with video” 🚫 World models are stateful, temporal, and heterogeneous. They involve video tokens, latent states, action loops, rollouts, streaming generation, multimodal inputs, and very different latency/compute profiles across models.

The next generation of physical AI experiences will need a runtime that can orchestrate these models, preserve state, manage latency, hide backend complexity, and expose a clean developer surface.

The foundation models for physical AI are being built. Now the experience layer needs its execution engine. @reactorworld is starting to reveal that layer today... Check out the demo here 👀 @taiuti @_bschmidtchen
Max Wittstamm @MWtstm
@ChefRobotics Asynchronous inference fixing physical jerk in a VLA is a genuinely sharp result. 64.9% velocity discontinuity reduction with no added inference cost - that's the kind of latency accounting that separates demo robots from ones that can repeat a task 10,000 times.
Chef Robotics @ChefRobotics
When we built a physical AI system that assembles a burger in under a minute, we ran into an unexpected issue: the robot was shaking.

We traced it back to latency: a lag between our vision-language-action model (VLA) predicting action chunks and the robot executing those chunks. By the time our robot had carried out specific actions, our VLA's predictions were already stale. This delay came from three sources: our VLA's model inference time, a leader–follower lag during teleoperated data collection, and asynchrony between the different cameras our physical AI system was using.

To fix this problem, we measured the total latency and shifted the prediction target forward. Instead of waiting for one action chunk to be carried out before making the next prediction, we adopted asynchronous inference, issuing the next prediction before the current action chunk was completed. This approach reduced velocity discontinuity by 64.9% and acceleration jerk by 30.8% on our physical AI system with no added inference cost.

Learn more about this problem and how we solved it in our latest tech blog: chefrobotics.ai/post/latency-a… #physicalai #robotics #techblog
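The fix compresses to a few lines of control-loop logic. A paraphrased sketch of the idea (not Chef's code; the policy/robot interfaces and the 120 ms figure are assumptions):

    import time

    LATENCY = 0.12  # assumed total measured lag: inference + leader-follower + camera skew

    def control_loop(policy, robot, chunk_duration=0.5):
        """Asynchronous inference: request the next action chunk before the
        current one finishes, predicting for where the robot WILL be."""
        chunk = policy.predict_chunk(robot.observe(),
                                     target_time=time.time() + LATENCY)
        while True:
            started = time.time()
            robot.begin_chunk(chunk)  # executes in the background
            # Issue the next prediction now, shifted forward by the measured
            # latency, so it arrives fresh instead of stale.
            chunk = policy.predict_chunk(robot.observe(),
                                         target_time=started + chunk_duration + LATENCY)
            robot.wait_until_chunk_done()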
Max Wittstamm @MWtstm
@campedersen Depends what you need. Text-to-FeatureScript via Claude + Onshape gets you parametric CAD you can actually modify. Leaf71 if you want physics-grounded geometry. Most tools give you a mesh - check whether the output is editable or a dead solid before committing.
🩷 @campedersen
Hey guys is anyone working on text to CAD? What’s the best tool
Max Wittstamm @MWtstm
@soft_servo Nice. Text → URDF → IK closes the design-to-motion loop for the robot itself. The next step: text → product CAD → assembly sequence → assembly trajectory. SDF analysis on the product, not the robot. Sequence derivation, not path planning. Working on this - DM if interested.
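The shape of that chain in code, with stubs standing in for the two unsolved steps (all functions here are toy placeholders, not existing tools):

    def text_to_cad(prompt: str) -> dict:
        # Exists today in early form (see the quoted thread); toy stand-in.
        return {"parts": ["base", "bracket", "cover"]}

    def derive_sequence(cad: dict) -> list[str]:
        # Open problem 1: order parts by mating/contact constraints
        # (SDF analysis on the product). Toy heuristic: given order.
        return cad["parts"]

    def derive_fixtures(cad: dict, seq: list[str]) -> dict:
        # Open problem 2: what holds each part while the next is placed.
        return {part: f"clamp_{i}" for i, part in enumerate(seq)}

    def plan_trajectories(seq: list[str], fixtures: dict) -> list[str]:
        # Exists today: IK / path planning, e.g. via ros2 / moveit2.
        return [f"place {part} using {fixtures[part]}" for part in seq]

    cad = text_to_cad("two-part bracket assembly")
    seq = derive_sequence(cad)
    print(plan_trajectories(seq, derive_fixtures(cad, seq)))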
Jake (in sf) @soft_servo
Updates to text-to-cad:
• New robot-motion skill, supports ros2 / moveit2 for IK and path planning
• Refactored harness into standalone skills
• 3mf exports
• More themes

I also deployed a demo so you can play around with the robot arm! Link below
Max Wittstamm @MWtstm
@chris_j_paxton Dexterous sim-to-real is closing fast. The next constraint surfaces one layer up: translating a CAD design into the contact sequence and force profile the robot needs to execute it. Geometry is solved faster than the work instruction that follows it.
Max Wittstamm @MWtstm
@Yuvrajcrypto01 @vivianrobotics Real-world correction data beats sim on edge cases - no argument. The gap I'd watch: raw operator sessions capture motion, not intent. Deriving assembly sequence, fixture needs, and tooling from that data is where the pipeline still needs work.
Yuvraj | (❖,❖) @Yuvrajcrypto01
Physical AI won’t be born inside simulations; it will be built through real-world action. PrismaX is building a live intelligence engine where human operators train machines directly in real environments. Every movement, adjustment, decision, and correction becomes valuable data that teaches robots how to understand, respond, and perform in the physical world.

◻️ Real-World Learning > Simulated Guesswork
Training inside controlled virtual spaces creates limits, but learning in messy, changing, unpredictable environments creates intelligence that is practical, reliable, and ready for deployment from day one.

◻️ Human Skill Becomes Scalable Intelligence
Every operator session captures timing, judgment, intent, strategy, and problem-solving patterns. What was once trapped inside human experience becomes reusable training data for AI systems at scale.

◻️ Repetition Builds Higher-Order Autonomy
Small repetitive actions are the raw material of advanced robotics. When enough of these actions are learned and connected together, robots begin performing complex multi-step workflows with independence and precision.

◻️ Mistakes Become the Fastest Teacher
Failures reveal hidden weaknesses, unseen scenarios, and decision gaps. By learning from corrections and recoveries, robots improve resilience, adaptability, and performance far faster than success alone could provide.

◻️ Global Usage Creates a Compounding Moat
Every new participant adds knowledge to the network. More human input creates smarter models, smarter models improve robot performance, and better performance attracts even more adoption.

This is more than automation. Human interaction is becoming the foundation layer of Physical AI, where digital intelligence finally gains real-world bodies. @PrismaXai
[image attached]
Yuvraj | (❖,❖)@Yuvrajcrypto01

PrismaX just dropped one of the clearest frameworks for Physical AI data creation. The biggest takeaway: models don’t replace data pipelines, they amplify them.

◻️ Teleoperation = Gold Standard
Direct human control over robots creates the highest-quality action data. Operators can intentionally shape scenarios, edge cases, and behaviors. Downside: expensive, slower, and requires both human + robot every session.

◻️ Egocentric Video = Scalable Fuel
First-person human video is cheap, smooth, and diverse. Great for learning intent, motion priors, and task understanding. But raw action labels are noisy, and most vendors fail to separate signal from chaos.

◻️ UMI / Reverse Teleop = Middle Layer
Better scalability, lower capex, smoother collection. But embodiment mismatch + camera placement issues mean it can’t fully replace premium teleop data. Strong supplement, not final answer.

◻️ Sim & World Models = Amplifiers, Not Sources
Simulation and world models can multiply existing data efficiency ~10x, helping generalization and policy learning. But without grounded real-world data first, they collapse into synthetic guesswork.

◻️ The Real Moat = Data Quality Infrastructure
Data is valuable, expensive, and full of slop. Winning isn’t collecting the most data; it’s filtering noise, validating signal, and matching it across real embodiments. That’s where PrismaX positions itself: vetted robots + embodiment access + eval pipelines + clean data loops.

The future of robotics won’t be built by bigger models alone. It’ll be built by whoever owns the best bridge between human intelligence and machine execution. | @PrismaXai | @vivianrobotics @shayebackus |

Max Wittstamm @MWtstm
@asimahmed @NianticSpatial @SpatialJP High-fidelity scene representation solves localization and context. The next hard layer: deriving from that scene what a robot needs to do - event sequence, reachability, fixture constraints. Gaussian splats give you the world. Work instructions are still hand-authored.
asim ᯅ @asimahmed
at @nianticspatial, we're building the real-world model for physical AI. we believe the next generation of AI will move beyond the screen, into real-world environments where some of the hardest problems need to be solved. with high-fidelity gaussian splats as the foundation – robots, AI agents, & autonomous systems can see, localize, understand, & operate in the environments that matter most.
Max Wittstamm @MWtstm
@osaAtwi Good point. We need a benchmark that serves a physical use case, including manufacturing. CAD data is fundamentally different from a digital asset for the metaverse.
OsamaAtwi @osaAtwi
Many are building CAD AI. Nobody can prove their model works. The reason: no shared benchmarks. Every demo is cherry-picked, every claim is unverifiable, and the standard of proof is “look at this cool render.” A good approach is using parts of increasing complexity to see where the models succeed and where they fail. Similar to what @emm0sh did with @sendcutsend (but for other reasons). @OpenAI functional 3D generation feels like a missing eval. Worth considering.
[image attached]
Max Wittstamm @MWtstm
@JonMSchwartz Supply constraint is right, but it compounds. Deployment speed depends on how fast you can derive task sequences from new customer environments. Installation and task performance hit a ceiling when every new site needs custom workflow engineering.
Jon Miller Schwartz @JonMSchwartz
I'm noticing a trend in companies that deal with atoms. Same way every software company now has an AI transformation initiative, every atoms company is getting top-down pressure to push physical AI transformation. Customers are showing up to sales calls with lists of tedious, repetitive, manual tasks they want our robots to automate.

The forward bottleneck won't be customer demand. It'll be the ability to scalably produce and deploy useful robots. At Ultra, we're on a mission to supply the world of atoms with the most useful and deployable robots. To do that, we're focused on:
- Installation speed
- Task performance
- Robot availability (if you don't have robots to sell/deploy, someone else will beat you to it)

Demand is the easy part. Supply is the game.
Max Wittstamm @MWtstm
@swstica @dimensionalos @rivr_tech Infrastructure opening is real. The next pressure point: once design and assembly both get cheap, the bottleneck moves to deriving the work sequence from the geometry. That layer hasn't had its open-source moment yet.
Swastika Yadav @swstica
- asimov open sourced their humanoid robot.
- 3d printing is making custom hardware accessible.
- china just had its biggest robotics exhibition.
- tools like @dimensionalos are connecting ai agents with the physical world.
- amazon acquired @rivr_tech, a swiss robotics startup for doorstep delivery.

the infrastructure is opening, costs are dropping, and the developer tooling is finally catching up to the hardware. watch out!
Asimov@asimovinc

We're open-sourcing Asimov v1, a humanoid robot. With Asimov v1, you can build it, train on it, and make it your own humanoid robot. It's the first step of building a humanoid labor force for the rest of us.

Asimov v1 is 1.2 m tall, 35 kg, with 25 actuated degrees of freedom. Structural parts machined in 7075 aluminium and 3D-printed in MJF PA12 nylon. We're releasing the mechanical design and simulation files. Ready for locomotion policy training out of the box.

The BOM is open too. Source everything yourself, or order the DIY Kit. All components, ready to assemble. $499 deposit, $15,000 target price. Ships end of summer 2026.

GitHub: github.com/asimovinc/asim…
Manual: manual.asimov.inc
DIY Kit: asimov.inc/diy-kit

Most humanoid robots are controlled by the companies that build them. Asimov v1 is built for the rest of us. Build it, test it, and share your feedback with the community.

Max Wittstamm @MWtstm
@MecAgent Text-to-CAD collapses design iteration from weeks to hours. The second-order effect: the bottleneck shifts from geometry generation to knowing which geometry is assemblable. DFA constraints don't write themselves yet.
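What machine-checkable DFA could look like, in miniature (thresholds and fields are illustrative, loosely in the spirit of Boothroyd-Dewhurst part-count questions, not a real rule set):

    def dfa_flags(parts):
        """Flag assemblability risks in a toy part list; each part carries
        data a text-to-CAD tool could plausibly emit."""
        flags = []
        axes = {p["insert_axis"] for p in parts}
        if len(axes) > 1:
            flags.append(f"{len(axes)} insertion directions -> reorientation needed")
        fasteners = sum(p["fasteners"] for p in parts)
        if fasteners > 2 * len(parts):
            flags.append(f"{fasteners} fasteners for {len(parts)} parts -> consider snap fits")
        return flags

    print(dfa_flags([
        {"name": "base",  "insert_axis": "-z", "fasteners": 4},
        {"name": "cover", "insert_axis": "+x", "fasteners": 3},
    ]))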
MecAgent @MecAgent
Since everyone seems to be talking about text-to-cad...
Max Wittstamm @MWtstm
@bryceagrant1 @iclr_conf Mechanistic interpretability on VLAs is underexplored and overdue. If we can't read why a VLA chose a particular motion primitive, we can't trust it on varied assembly tasks. Curious what features your study found doing the most work in action selection.
Bryce Grant @ ICLR 2026 @bryceagrant1
Presenting two mechanistic interpretability works today at @iclr_conf Giving a talk from 2-2:15 at the Multimodal Intelligence workshop (204C) on our work “Not All Features are Created Equal: A Mechanistic Study of Vision-Language-Action Models.” Will also be at the Unifying Concept Representation Learning workshop (209) presenting “Gluing Local Contexts into Global Meaning: A Sheaf-Theoretic Decomposition of Transformer Representations.” Feel free to stop by if you want to talk VLAs, Robotics, or Algebraic Topology!
Max Wittstamm @MWtstm
@Vader_AI_ VLA models give you flexible path execution. The gap before that: deriving the event sequence from the part geometry itself - what to grasp, in what order, with what fixture. Without that derivation, VLA has nowhere to start.
Vader @Vader_AI_
Physical AI is not built from data alone. Raw demonstrations are only the beginning. To turn physical data into useful models, teams need training workflows designed for embodied AI:
🌎 World Action Models
👁️ Vision-Language-Action models
☑️ Task-specific policy networks
[GIF attached]
Max Wittstamm @MWtstm
@0xRiRoyal @StrikeRobot_ai Data flywheel framing is right, but the compounding only kicks in if the data is labeled at the event level - not just motion capture. Raw trajectories train policies. Annotated assembly events train something you can actually improve by design.
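The difference in one schema (labels are hypothetical; the point is what a raw trajectory log cannot tell you):

    from dataclasses import dataclass

    @dataclass
    class RawSample:
        """What teleop capture gives you: motion, no intent."""
        t: float
        joint_positions: list
        gripper: float

    @dataclass
    class AssemblyEvent:
        """Event-level label: what the motion was FOR. This is what feeds
        back into the design (DFA), not just the policy."""
        t_start: float
        t_end: float
        event: str    # e.g. "insert", "fasten", "reorient"
        part: str     # which part of the product
        outcome: str  # "success" / "retry" / "operator_correction"

    print(AssemblyEvent(3.2, 5.8, "insert", "cover", "retry"))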
riRoyal.Base.eth @0xRiRoyal
What a day. A lot of Physical AI projects will be judged by the robot. The harder question is who owns the learning loop. That is where Strike Robot gets interesting.

@StrikeRobot_ai is building a humanoid intelligence platform, but the deeper play looks like data infrastructure for embodied AI. Robots generate motion, perception, and interaction data that can be captured and improved over time.

→ A decentralized data marketplace gives contributors a way to monetize that data instead of leaving the upside to closed labs.
→ SafeGuard ASF turns that loop into a real product for high-risk industrial inspection and response.
→ The SR token sits at the center of access, incentives, and ecosystem coordination.

Right now matters because the system is moving from concept to participation. Epoch 2 is live, the token just printed its all-time high this week, and the docs were updated days ago. The winners in Physical AI may not be the ones with the best demo. They may be the ones that own the data flywheel.
[image attached]