Ben Giudice
@ben_giudice
30 posts
Perth, Western Australia · Joined January 2026
61 Following · 36 Followers
Ben Giudice reposted
Ed Henderson @ed0henderson
1B humanoids by 2050 (Morgan Stanley 2025). $32T spent on wages for physical labor each year (World Bank 2025). 900x drop in the cost of inference since 2021 for certain AI models (Epoch AI). These were the stats we led with for our presentation at @_Arrayah showcase day; live demos always win! #robotics #physicalAI #AI
[image attached]
Replies 3 · Reposts 4 · Likes 14 · Views 1.1K
Ben Giudice reposted
sam @SamuelBeek
If you wanna get started in hardware - just get this kit (and @schematikio)
Replies 8 · Reposts 9 · Likes 112 · Views 6.7K
Ben Giudice reposted
Ed Henderson @ed0henderson
Robot jenga 🧩 Things that happened today:
- Broke one of our linear grippers (guess that's why force feedback exists).
- @ben_giudice set up two parallel static bimanual workstations so we can pair-compete on robot training.
- Spoke to @sanskxr02, who's creating awesome data pipelines that leverage internet-scale data. Chat to him!
- Spoke to someone creating non-neodymium motors to de-risk supply chains.
#robotics #physicalAI #robot #embodiedAI
Replies 3 · Reposts 2 · Likes 22 · Views 751
Ben Giudice reposted
Ed Henderson @ed0henderson
For rapid prototyping of low-cost 3D-printed robot arms, what setups & settings are people using? @ben_giudice and I use:
- 2 x Bambu P1S (1 with AMS)
- Started with PLA+ but now mainly use PLA Basic (have had some breakages)
- Tree supports
- 2 x skirt loops
- 15% infill
- Bed temp 55˚C
- Nozzle temp 200-210˚C
Replies 2 · Reposts 2 · Likes 10 · Views 1.1K
Ben Giudice reposted
Ryan Chan @Ryan_Resolution
After building 25+ XLeRobots for hackathons, we redesigned almost every structural part except the arms and head. And we're open-sourcing the hardware upgrade behind our XLeRobot build. It's the actual internal print file we use for customer builds and hackathons. We made it:
- Easier to print: complete .3mf, tested on Bambu P1S/P2S/H2S/H2D.
- Easier to build: square side-loaded nut traps that don't fall out during disassembly, snap-on driver board covers (no tools), dedicated USB hub mounts for clean cable routing, quick-release power bank mount, etc.
- More stable base: bearing-supported omni wheel modules so the axle is loaded from both sides, replacing the stock LeKiwi chassis that wobbles under full XLeRobot weight.
Thanks to: @IsaacSin12 @ThomasSchicksal @xu280589 @QILIU9203 @VectorWang2 @LeRobotHF
GitHub repo in reply. Technical specs in thread. Here is what we updated:
[image attached]
Replies 14 · Reposts 41 · Likes 345 · Views 19.1K
Ben Giudice reposted
stash @stash_pomichter
Our robot dog guards the office now with autonomous dimOS agents. Prompt: “Patrol the office continuously, if you see someone in a hoodie follow them, sound the alarm, and alert the police” Fully open source. Vibecode the world in natural language.
Replies 15 · Reposts 93 · Likes 866 · Views 49.7K
Ben Giudice reposted
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
Replies 2.8K · Reposts 7K · Likes 58.1K · Views 20.8M
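Karpathy mentions vibe-coding a "small and naive search engine over the wiki". A minimal sketch of what such a tool could look like, assuming a wiki/ directory of .md files; this is an illustration with a simple TF-IDF ranking, not his actual script, and the directory name and query are made up:

```python
import math
import re
from collections import Counter
from pathlib import Path

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

# Index every .md file in the wiki as a bag of words.
def build_index(wiki_dir: str) -> dict[Path, Counter]:
    return {p: Counter(tokenize(p.read_text(encoding="utf-8")))
            for p in Path(wiki_dir).rglob("*.md")}

# Score documents by summed TF-IDF of the query terms.
def search(index: dict[Path, Counter], query: str, k: int = 5) -> list[tuple[Path, float]]:
    n_docs = len(index)
    scored = []
    for path, tf in index.items():
        score = 0.0
        for term in tokenize(query):
            df = sum(1 for counts in index.values() if term in counts)
            if df:
                score += tf[term] * math.log(n_docs / df)
        if score > 0:
            scored.append((path, score))
    return sorted(scored, key=lambda s: -s[1])[:k]

if __name__ == "__main__":
    index = build_index("wiki")
    for path, score in search(index, "relative trajectory actions"):
        print(f"{score:8.2f}  {path}")
```

On a real wiki you would precompute document frequencies rather than recomputing them per query, but at the ~100-article scale the post describes, this brute-force pass is plenty, and the same function doubles as a CLI tool an LLM agent can call.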
Ben Giudice reposted
LeRobot @LeRobotHF
Releasing the Unfolding Robotics blog! Time to unfold robotics: we trained a robot to fold clothes using 8 bimanual setups, 100+ hours of demonstrations, and 5k+ GPU hours. Flashy robot demos are everywhere. But you rarely see the real story: the data, the failures, the engineering. We’re sharing everything: code, data, and details in the blog → huggingface.co/spaces/lerobot…
Replies 30 · Reposts 140 · Likes 786 · Views 274.6K
Ben Giudice reposted
Ed Henderson @ed0henderson
What happens when you try to control an SO-101 robot arm using only Claude Code? No VLA, no LeRobot, no policy. Just a VLM reading camera feeds and writing servo commands. The goal was simple: get the arm to stand upright. Here's what we learned:
1) Visual perception was surprisingly competent. Claude could interpret frames from both the wrist cam and a MacBook camera, identify joint states, and reason about what corrections to make. The vision-to-reasoning pipeline worked.
2) It had no persistent motor state. Claude generated an action trajectory, executed it, then released torque. The arm became a limp biscuit between inference cycles. We fixed this.
3) From a single frontal camera, Claude had no reliable representation of the side plane. It couldn't tell what "vertical" meant. I had to physically move my MacBook to give it a side view before it could reason about the arm's orientation in 3D.
4) Latency is the fundamental bottleneck. We were running at roughly 0.5 to 1 Hz. Closed-loop servo control needs 50 to 100 Hz. A few orders of magnitude too slow for anything resembling continuous feedback.
What Claude was doing is closer to a vision-conditioned code generator. It reasons about what it sees, writes a motor program, executes it open-loop, then looks again. I wonder if token generation speed could ever get to a point where this technique would even be remotely useful? #robotics #embodiedAI #physicalAI #SO101 #claudecode
Replies 13 · Reposts 6 · Likes 93 · Views 9.2K
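For readers trying to picture the "vision-conditioned code generator" loop the post describes, here is a minimal structural sketch. Every function below is a hypothetical stub (no real camera, VLM, or servo API is named); it only shows the look, reason, execute-open-loop cycle and the torque-hold fix from point 2:

```python
import time

# Hypothetical stubs -- stand-ins for the wrist/MacBook cameras, the VLM call,
# and the servo bus described in the post. None of these are a real API.
def capture_frames() -> list[bytes]:
    return []  # wrist cam + MacBook cam frames would go here

def ask_vlm(frames: list[bytes], goal: str) -> list[dict]:
    return []  # the VLM reasons over the frames and emits a motor program

def execute(commands: list[dict], hold_torque: bool) -> None:
    pass  # write commands to the bus; keep torque enabled between cycles

GOAL = "stand the arm upright"

# Open-loop cycle at ~0.5-1 Hz: look, reason, write a motor program, execute,
# then look again. hold_torque=True is the fix for the "limp biscuit" failure
# mode, where torque was released between inference cycles.
while True:
    frames = capture_frames()
    commands = ask_vlm(frames, GOAL)
    execute(commands, hold_torque=True)
    time.sleep(1.0)  # VLM latency dominates; nowhere near 50-100 Hz servo control
```

The structure makes the latency argument concrete: the inner loop can only run as fast as one full VLM round trip, so continuous feedback control is out of reach regardless of how good the perception is.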
Ben Giudice reposted
Ed Henderson @ed0henderson
This significantly lowered the barriers to entry for building in physical intelligence. Thank you for open-sourcing @LeRobotHF, freaking insane! My biggest takeaways:
1. Data quality matters more than everything else. Algorithmic improvements moved success rates by 5 to 20 points. Fine-tuning on a curated dataset one-fifth the size (1,200 vs 5,688 episodes) moved it by 50.
2. Strategy alignment matters more than data volume. Operators converging on a single consistent folding technique with the same grip points, sequence, and timing led to significant improvements.
3. Match your action representation to pretraining. Switching to relative trajectory actions (offsets from current state, following the UMI approach) jumped π0.5 from 20% to 35% on its own. π0.5 was pretrained with relative actions, so fine-tuning with absolute actions was fighting the model's priors. (A minimal sketch of this re-expression follows after this post.)
4. Learned reward models help with data curation. SARM uses a CLIP backbone to predict task progress from 0 to 1 at every timestep. It replaces manual curation.
5. From the HF team: "Record/teleoperate at higher frequency. We'd record at 50 fps instead of 30 fps if we had to do it again. Folding is dynamic and higher record rates capture transitions better." The model sees the intermediate states more clearly, which means it can learn a sharper policy around the moments that actually matter for task success.
#embodiedAI #physicalAI #robotics @LeRobotHF @huggingface #robot
Quoting LeRobot @LeRobotHF's Unfolding Robotics announcement (same post reposted above).
Replies 4 · Reposts 10 · Likes 60 · Views 7.7K
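The relative-action point in takeaway 3 is easy to state concretely. A minimal numpy sketch of the offset re-expression, assuming joint-position actions; the actual UMI / π0.5 representations involve end-effector poses and more bookkeeping, and the numbers below are invented for illustration:

```python
import numpy as np

def to_relative_actions(joint_targets: np.ndarray, current_state: np.ndarray) -> np.ndarray:
    """Re-express an action chunk as offsets from the current joint state.

    joint_targets: (T, D) absolute joint positions for a chunk of T steps.
    current_state: (D,) joint positions at the moment the chunk is predicted.
    """
    return joint_targets - current_state  # broadcasts over the time axis

# Example: a 3-step chunk for a 2-DOF arm.
chunk = np.array([[0.10, 0.50],
                  [0.20, 0.55],
                  [0.30, 0.60]])
state = np.array([0.05, 0.45])
print(to_relative_actions(chunk, state))
# [[0.05 0.05]
#  [0.15 0.1 ]
#  [0.25 0.15]]
```

At execution time the policy's predicted offsets are added back onto the measured state, which is what makes the representation invariant to where the arm happens to start, and why fine-tuning a relative-action pretrained model on absolute targets fights its priors.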
Ben Giudice reposted
Ed Henderson @ed0henderson
We need your support to convince investors to buy some robot arms. Add your name and signature, link in comments! 🦾
Right now, builders in Australia have almost zero access to robotic hardware designed for dexterous manipulation. Countries like China and the US have hundreds / thousands of arms purpose-built for physical AI. We don't. This directly hinders our ability to solve real physical problems + build companies in the space.
@ben_giudice and I have been building with @LeRobotHF's SO-101. It's a great entry point, but it maxes out at a 400 g payload. This is very limited when it comes to real-world tasks.
The hardware we need is @enactic_ai's OpenArm. It's a 7-DOF, open-source, humanoid-style bimanual arm pair with a 6 kg payload per arm. That's 15x more load! It's fully open-source hardware + software, and designed for the kind of dexterous manipulation that vision-language-action models are trained on.
We want to install a leader-follower pair in the o1 Hardware Lab here at Arrayah (Syd Hacker House). If you sign to show your support, we'll make sure you get access to the arms as part of the open lab. ✍️ Let's make it happen. 🚀
#robotics #physicalAI #embodiedAI
Replies 10 · Reposts 7 · Likes 55 · Views 4.2K
Ben Giudice reposted
Ed Henderson @ed0henderson
Finally got around to reading @JacobZietek's essay. Would recommend. A few thoughts:
1. @ben_giudice and I often discuss deployment challenges in our home country, Australia. We have two globally competitive primary industries, (i) mining (iron ore) and (ii) agriculture (crops + beef), that make up ~15% of GDP. These will benefit hugely from physical intelligence / robotics. Deployments in each of these industries, especially in harsher environments (e.g. the Pilbara), will create unique tests of reliability.
2. Having worked in an analogous industry (i.e. cultured meat) which similarly had research-heavy origins (in tissue engineering / cellular biology) but went through an operational scale-up ~2015-2022, I appreciate the thesis of this essay. I saw science-led organisations get caught in the trap of designing unscalable, complex bioprocesses with large numbers of steps to final product. Research often values the best solutions; the market values something it can use immediately.
3. For consumer robotics products, the collision of hard tech with creativity & design won't be optional; IMO it will be essential. Creativity & design have a way of softening the whiplash that new technology creates, through familiarisation, anthropomorphisation, humour, aesthetics & beauty.
4. As an operator I'm biased, but the idea of getting to take technology and scale it 100,000x is such an exciting challenge, and it's what startups are well positioned to solve as it relies on commercials, speed & execution.
Jacob Zietek @JacobZietek
*per capita a16z.news/p/robotics-nee…
Replies 0 · Reposts 1 · Likes 2 · Views 284
Ben Giudice reposted
Ed Henderson @ed0henderson
I recorded my first ever training episode for a robotic arm!!! 🦾🦾 One step closer to contributing to the dissemination of physical intelligence. Feeling sooo excited!! Any tips for how I can produce high quality datasets? Currently using @LeRobotHF, @modal, @huggingface and @BuildRerun to train an ACT (Action Chunking with Transformers) imitation learning policy. Starting small. #physicalAI #embodiedAI #robot #robotics
Replies 5 · Reposts 10 · Likes 70 · Views 3.2K
Ben Giudice reposted
Ed Henderson @ed0henderson
My current workspace setup for robotic arm demonstrations:
- 2 MP 30 fps 100˚ wrist cam
- GoPro Hero 10 Black in wide webcam mode
- iPhone 17e
Let the training begin. Note: Claude Code discovered and turned on my iPhone camera itself. #robotics #physicalAI #embodiedAI
Replies 9 · Reposts 8 · Likes 29 · Views 1.6K
Antoine 🤖 @antoinemarcel
@ed0henderson @ben_giudice I think black will win! And I suggest printing the gripper in green; you will see a huge improvement in policies
Replies 2 · Reposts 0 · Likes 1 · Views 43
Ed Henderson @ed0henderson
Which arm will win (white or black)? @ben_giudice and I just upgraded our VLA training environment. Still with a heavy dose of scrappy. Maybe we were too naive, but we have been very shocked by how non-generalisable off-the-shelf VLA models are. The fact that you have to "control" your environments to train a good policy seems quite counterintuitive. But for now the learnings are the most important. #physicalAI #robotics #embodiedAI
[image attached]
Replies 5 · Reposts 1 · Likes 6 · Views 560