PRB

83 posts

PRB

@builds_robots

Prev. built a GPU cloud (https://t.co/1AXdrjWBsx) and another hardware store (@partabot_)

San Francisco · Joined March 2025
310 Following · 127 Followers
Pinned Tweet
PRB@builds_robots·
Task progression tracking does wonders! Here are 6 autonomous napkin folds that a bimanual robot did - it could have done way more, but I had to go. Some outstanding things:
- the data was collected elsewhere
- the lighting isn't sufficient and doesn't resemble the training data; at moments there is shadow from the robot's EE when it is making precise picks
- this is stock hardware, nothing was modified
- the grippers can barely slide underneath the napkins
C Zhang@ChongZitaZhang·
How training progresses across multiple distinct terrains in AME-2 (25% training budget):
C Zhang tweet media
C Zhang@ChongZitaZhang·
github.com/zita-ch/techbl… Since there have been many discussions about the AME series with other researchers, I wrote a guide for reimplementing AME-1 and AME-2. The AME-2 part is not complete, and I sadly found some people are faster than me (as the author...)
PRB@builds_robots·
@ChongZitaZhang Ahh awesome will try. Do you mind sharing all your wandb graphs here? Would love to see how off we are
PRB@builds_robots·
@ChongZitaZhang Awesome, happy to contribute as a team if that's valuable!
PRB@builds_robots·
@ChongZitaZhang Sadly no one in our team has much experience in sim :)
PRB@builds_robots·
@ChongZitaZhang 🤣 any plans on sharing yours? We have made progress transferring human data to robots and want to integrate AME2 for WBC
PRB@builds_robots·
@natashamalpani it's a mix of all three (with more weight on 2 and 3) + RL (which you mention implicitly) that will solve manipulation and dexterity this year
Natasha Malpani 👁@natashamalpani·
there is no hugging face for robotics data. no standardized pipeline for collecting, labeling, versioning, training on real-world robot data at scale. no tooling that handles contact dynamics and material deformation well enough for industrial manipulation. no teleoperation infrastructure where human supervisor intervention automatically becomes training data. no vertical-specific manipulation datasets for any specific industrial task.

the actual bottleneck in physical AI is the data and the infrastructure to generate it. and this is a structural problem. for language AI, training data was the internet: abundant, cheap, already labeled by human intent. for robotics, the gap between where foundation models are and where they need to be cannot be closed by deploying more robots.

three bets are being made right now:

simulation-first works brilliantly for locomotion. domain randomization has essentially solved quadruped walking in unstructured terrain. but it breaks down completely for manipulation. simulated cameras have no noise, blur, or friction error. real cameras and grippers have all of it. cable insertion, fabric folding, dexterous assembly are exactly where simulation fails.

teleoperation as data collection is the second move. deploy semi-autonomous robots, capture human-guided trajectories, iterate. theoretically sound. but the capital math is brutal and the execution evidence isn't there yet.

human video as proxy is the third. if robots could learn from watching humans, you tap unlimited data. the problem: human hand geometry and force feedback don't map onto robot actuators. you're learning the shape of motion without the physics that make it work.

what's actually working today is locomotion. narrow manipulation in structured environments. inspection and sensing. quadrupeds doing thermal inspection. no general-purpose manipulation required.

the hardware race is loud, capital-intensive, winner-take-few. but the data infrastructure race is quiet, undercapitalized, wide open.
PRB@builds_robots·
@davefontenot Zack Nathan backed some top robotics startups early, and @Kazanjy is the best
Dave Font@davefontenot·
Who are the highest-signal angel investors in the valley nowadays? (who don't secretly have a fund behind their investing)
PRB@builds_robots·
@PSYONICinc Do you make gloves and hands separately?
PRB@builds_robots·
@orcahand awesome work! congrats
PRB@builds_robots·
@orcahand is a direct-drive hand in the works as well?
ORCA Dexterity@orcahand·
it's time to drop three new #opensource robotic hands! this time with tactile sensors! Tweak them, 3D print them, and use them in your robotics and physical AI research! Here are some wild examples ↓↓↓
PRB@builds_robots·
@jackvial89 got it. the answer is yes! DM'd
Jack Vial@jackvial89·
I should clarify what I mean by generalization: I mean it in a pretty narrow sense. With a pick-and-place task with, say, 50 samples, does reward reinforcement on 20 additional samples (10 success, 10 failure) help the model generalize to more than a single position? Thank you! If you can share the value network and advantage code, that would be very useful to compare against.
Jack Vial@jackvial89·
I'm working on an implementation of π*0.6 RECAP. Going to start with a simplified version of the full pipeline
Jack Vial tweet media
PRB@builds_robots·
@jackvial89 We used pi0.5, so we didn't pretrain with RECAP. It definitely makes it more robust than pi0.5 - we ran one full iteration. I am not sure you should expect generalization. I can DM you the value network and advantage-calculation code.
Jack Vial@jackvial89·
@builds_robots cool! were you able to get good generalization with it?
PRB@builds_robots·
@chris_j_paxton Probably sim when it gets better. Imagine this on Atlas full body rotation 😂
PRB@builds_robots·
@RTinkslinger Why do you think few companies will make foundation models? Policies are incredibly small and far cheaper to train than ultra-large LLMs. If teams can crack ego data, which is cheap, foundation models are within reach.
aacash.eth - Aakash Kumar@RTinkslinger·
Every secondary-research-regurgitating VC who has never been close to deep learning is going to get their 'physical AI' thesis wrong (remember the morons tooting the horn on co-pilots). Rhoda is just among the first few to show the path (similar to the pre-trained BERT cos of 2019). Few cos shall make the foundation model + harness. The rest in venture shall apply it, do vertical implementation, and win. #always the arc.
Vinod Khosla@vkhosla

The bar for robotics isn’t lab demos — it’s autonomous operation in real production environments. What impressed me about @rhoda_ai_ was seeing that level of performance with remarkably little robot training data. Pretraining on internet-scale video to build a strong physical prior may seem unconventional today, but approaches like this are what will ultimately unlock general-purpose robotics.
