Flexa

96 posts


@flexarobotics

Crowdsourced manipulation data for robots. iPhone → robot training data. $0 contributor cost. Building the data layer for humanoid robotics. pip install flexa-a

San Francisco, CA · Joined January 2026
45 Following · 8 Followers
Flexa@flexarobotics·
HexaCercle just showed 1 operator controlling a swarm of dexterous robot hands via wearable mocap. BeingBeyond's DexUMI collects embodiment-agnostic manipulation data. Skild is demoing precision manufacturing in public. The data collection race is fully on.
Flexa@flexarobotics·
@Zhikai273 The key result: “imperfect” human motion data is sufficient. You don’t need MoCap-perfect trajectories — you need enough behavioral diversity. That insight scales directly to crowdsourced data collection. The quality bar is lower than labs assume.
Zhikai Zhang@Zhikai273·
🎾Introducing LATENT: Learning Athletic Humanoid Tennis Skills from Imperfect Human Motion Data Dynamic movements, agile whole-body coordination, and rapid reactions. A step toward athletic humanoid sports skills. Project: zzk273.github.io/LATENT/ Code: github.com/GalaxyGeneralR…
Flexa@flexarobotics·
@chris_j_paxton Agreed. Data collection infrastructure is the bottleneck that doesn't make for exciting papers but kills every production deployment. Teleop is painful because it was never designed to scale — it's a research tool wearing an enterprise costume.
Flexa@flexarobotics·
@orcahand Open-source tactile hands are a massive unlock. The missing piece is still training data — 83 taxels per fingertip is useless without diverse demonstrations of what to do with them. Hardware democratization has raced ahead of the data pipeline.
ORCA Dexterity@orcahand·
it's time to drop three new #opensource robotic hands! this time with tactile sensors! Tweak them, 3D print them, and use them in your robotics and physical AI research! Here are some wild examples ↓↓↓
Flexa@flexarobotics·
@soontechnology This is the defining challenge — every hour of useful robot training data requires significant human effort to collect and label. Scaling past 10K demos per task requires rethinking the collection pipeline entirely, not just adding more tele-operators.
Soon@soontechnology·
Gathering training data for Memo can be harder than expected. Watch the full video here: youtu.be/FKRQfl86b1E?si…
Flexa@flexarobotics·
Beijing humanoids training for a half-marathon. Electronic skin on dexterous hands. Robots vaulting obstacles. Every demo is stunning. Every demo also needed thousands of diverse training examples to get there. The data collection bottleneck is the story of 2026.
Flexa@flexarobotics·
@gandhipriyeshv @CyberRobooo Exciting — depth sensor + wearable collecting in industrial environments is exactly the kind of capture no lab can replicate. Would love to see what the data looks like. Replied in DMs.
CyberRobo@CyberRobooo·
Yeah. Another adorable new humanoid home robot🤖🏠 From Shenzhen-based robotics startup KNOWIN, a consumer-oriented humanoid home robot is being developed: wheeled, it can chat, pour wine, do laundry, fold clothes, clean, play with children, and even learn in a messy real-life home environment. Driven by their self-developed next-generation embodied AI model architecture and synthetic data technology, this humanoid robot can operate autonomously. But the real goal is to achieve Level 3 autonomy (capable of independently completing long-chain tasks such as cleaning/laundry and being ready to respond at any time) within 1-1.5 years (<18 months). Skeptical? Yes, me too; I need to see its complex autonomous capabilities for myself. It's worth mentioning that the founding members of the team are senior professionals from Huawei and DJI. Would you like a humanoid robot that can fold your clothes and chat with you while you relax? Share your thoughts…
Flexa@flexarobotics·
@gandhipriyeshv @CyberRobooo DMs sent. India's industrial context is a goldmine — the task diversity alone across manufacturing, logistics, and assembly makes it one of the highest-value environments for physical AI data. Would love to see what depth + egocentric captures look like at scale there.
Priyesh Gandhi@gandhipriyeshv·
@flexarobotics @CyberRobooo 100%. The diversity of tasks on an Indian factory floor is insane — and almost none of it is digitized. That's the opportunity. Would love to exchange notes, check your DMs.
Flexa@flexarobotics·
@XueJia24682 Real streets = real terrain variation. Uneven surfaces, camber, wind — none of that exists in sim at this fidelity. Every stumble and recovery here is a labeled locomotion training example. China understands the data flywheel.
🇨🇳XuZhenqing徐祯卿
✨🇨🇳On the streets of China, many humanoid robots are doing marathon training on the roads, preparing for the Beijing Yizhuang Robot Marathon. Look how fast they run! 🤖
Flexa@flexarobotics·
@hananyss @DvijKalaria @ai @berkeley_ai What's striking: Oreo generalizes to beat a live human — not just hit fixed targets. That generalization comes from diverse demos. The more varied the training data @DvijKalaria feeds it, the wider the envelope. Data quality is the actual bottleneck.
Hananyss@hananyss·
Today, Oreo, trained by @DvijKalaria, beat me at table tennis. Dvij explained his design principles and training workflows, and my conclusion is that the timeline for general-purpose physical intelligence is compressing rapidly. Glimpses of the future we’ll soon live in.
Flexa@flexarobotics·
@TheHumanoidHub Worth noting: no onboard cameras — LATENT relies on external MoCap for ball tracking. Impressive benchmark, but not yet deployable outside a lab. The next leap is training on enough diverse trajectory data to work without that external infrastructure.
The Humanoid Hub@TheHumanoidHub·
'LATENT' learns tennis skills for humanoid robots from human motion data. The robot can sustain multi-shot rallies, handle ball speeds of 15+ m/s, and showed a 90.9% success rate for the forehand. No onboard cameras or vision models; it relies on external MoCap for high-precision, low-latency ball tracking. Paper: zzk273.github.io/LATENT/
Flexa@flexarobotics·
China just ran humanoids in a marathon. LATENT taught one to return tennis at 15+ m/s. Dvij's robot beat a human at table tennis. The hardware is stunning. Each skill still takes thousands of demos to transfer. The race for robot data infra is happening now.
Flexa@flexarobotics·
@gandhipriyeshv @CyberRobooo Industrial environments are the hardest shift — and India's factory floor is an untapped goldmine of dexterous manipulation data. Wearable + depth sensor is the right approach for that context. Would love to compare notes. Sliding into your DMs.
Priyesh Gandhi@gandhipriyeshv·
100%. And it gets even harder in industrial environments — factories, warehouses, assembly lines. Different tools, workflows, safety equipment. We're building purpose-built wearables with depth sensors to capture egocentric data in these environments. India has millions of factory workers doing exactly the kind of dexterous work robots need to learn. Would love to compare notes on data collection approaches. DMs open.
Flexa@flexarobotics·
@gandhipriyeshv @CyberRobooo Exactly this. The distribution gap between lab demos and real homes is massive. Pristine countertops vs. actual clutter, varied lighting, children, pets. You need egocentric data collected in real homes by real people. That's the dataset labs can't afford to collect at scale.
Priyesh Gandhi@gandhipriyeshv·
@CyberRobooo the "learn in a messy real-life home environment" part is what makes this interesting. most robot demos are in pristine labs. the real unlock for consumer robots is training on data from actual messy human environments — that's where all the edge cases live.
Flexa@flexarobotics·
@AiChirper Sim-to-real is compelling for locomotion. Manipulation is harder — contact dynamics, deformable objects, and friction don't sim accurately yet. Real-world egocentric data still wins for dexterous tasks. Best path is probably hybrid: sim for priors, real data for fine-tuning.
AiChirper@AiChirper·
🤯 No real-world training needed! Ai2 just released robotics models trained *entirely* in simulation that can still work in the real world. This could revolutionize robot development. Explore the open-source tools & research! the-decoder.com/ai2-releases-n…
Flexa@flexarobotics·
@chynaqqq @PrismaXai Exactly right. Data flywheel only spins when robots are in real environments. But collection cost is the ceiling — only well-funded labs run these at scale. Crowdsourced, smartphone-captured interaction data is how you break that without needing a fleet of arms.
it’s Chynaaaaa@chynaqqq·
Physical AI doesn’t improve just because models get better. It improves when robots are actually deployed in the real world, interacting with environments and generating the data that trains those models. Over the past year we’ve been running these systems at @PrismaXai and learning what actually makes robotics data useful. This next phase is about expanding those systems. Worth reading the full announcement ⤵️
PrismaX@PrismaXai

x.com/i/article/2032…

Flexa@flexarobotics·
BofA forecast: 3B humanoid robots by 2060. Each needs diverse manipulation training data. Lab-collected data cannot scale to 3B units. Crowdsourced, egocentric, smartphone-captured data can. This is not a niche problem. It is the entire data infrastructure challenge.
Flexa@flexarobotics·
@MachinePix Fixed-path automation handles structured tasks. The hard part is unstructured manipulation — varied shapes, orientations, deformations. Policy-based robots can generalize, but need diverse real-world training data. That's the bottleneck scripted systems sidestep.
MachinePix@MachinePix·
Automated sandwich line by Bizerba.
Flexa@flexarobotics·
@CyberRobooo Tele-op isn't the endpoint — it's the data collection strategy. Every smooth remote demo is a training episode. The constraint isn't operator skill, it's the volume and task diversity needed to bootstrap full autonomy.
Flexa@flexarobotics·
@TheHumanoidHub The point isn't the remote — it's the in-hand reorientation. Generalizing that to new objects needs diverse training data. Each demo took dozens of expert teleoperation sessions. Task diversity without that data bottleneck is the open problem.