BeingBeyond
@beingbeyond_

A startup company working on foundation models for embodied intelligence.

14 posts · Joined January 2026
5 Following · 283 Followers
BeingBeyond @beingbeyond_ ·
Introducing BeingBeyond U1, the world’s first Real DexUMI. U1 brings embodiment-agnostic dexterous hand data collection to real-world manipulation, taking a major step toward general-purpose dexterous manipulation models. From data collection to transfer, deployment, and execution, U1 pushes UMI beyond the gripper era and into the age of dexterous hands.
13 replies · 61 reposts · 415 likes · 40K views
BeingBeyond @beingbeyond_ ·
Big month at BeingBeyond 🎉 We're excited to share that in the past month we've had 8 papers accepted across top conferences and journals: CVPR'26 ×5, ICLR'26 ×1, ICRA'26 ×1, RAL ×1. Check out our papers below👇

[CVPR'26] DemoFunGrasp: Universal Dexterous Functional Grasping via Demonstration-Editing Reinforcement Learning
research.beingbeyond.com/demofungrasp · arxiv.org/pdf/2512.13380

[CVPR'26] End-to-End Language-Action Model for Humanoid Whole Body Control
arxiv.org/pdf/2511.19236

[CVPR'26] Joint-Aligned Latent Action: Towards Scalable VLA Pretraining in the Wild (to appear)

[CVPR'26] OpenT2M: No-frill Motion Generation with Open-source, Large-scale, High-quality Data (to appear)

[CVPR'26] Spatial-Aware VLA Pretraining through Visual-Physical Alignment from Human Videos
research.beingbeyond.com/vipa-vla · arxiv.org/pdf/2512.13080

[ICLR'26] DemoGrasp: Universal Dexterous Grasping from a Single Demonstration
research.beingbeyond.com/demograsp · arxiv.org/pdf/2509.22149

[ICRA'26] Towards Proprioception-Aware Embodied Planning for Dual-Arm Humanoid Robots
arxiv.org/pdf/2510.07882

[RAL] DemoHLM: From One Demonstration to Generalizable Humanoid Loco-Manipulation
research.beingbeyond.com/demohlm · arxiv.org/pdf/2510.11258

We'll keep focusing on VLA models, Dexterous Manipulation, and Whole-Body Control. BeingBeyond looks forward to seeing everyone at the venues!🥳
0 replies · 3 reposts · 6 likes · 592 views
BeingBeyond @beingbeyond_ ·
@eddybuild Thx Eddy! Egocentric-10k is really a great help to the robotics community!🔥
0 replies · 0 reposts · 3 likes · 110 views
BeingBeyond @beingbeyond_ ·
@_akhaliq Thx @_akhaliq for sharing our work!🚀 We're actively continuing to open-source the Being-H series. Weights and training scripts are already out, and we'll be gradually releasing the training data as well. Hope this brings more value to the VLA community!
0 replies · 0 reposts · 5 likes · 611 views
AK @_akhaliq ·
Being-H0.5: Scaling Human-Centric Robot Learning for Cross-Embodiment Generalization
6 replies · 21 reposts · 121 likes · 43K views
BeingBeyond @beingbeyond_ ·
7/8 We deploy and evaluate across five very different real robots (upper-body humanoid, dexterous arm/hand, legged humanoid + hand, etc.). Across task suites (spatial / long-horizon / bimanual / generalization), Being-H0.5 performs strongly in both its specialist and single-checkpoint generalist variants. And we saw a surprising behavior we didn't want to oversell but couldn't ignore: embodiment-level zero-shot, where the generalist checkpoint sometimes initiates qualitatively correct multi-step behavior on previously unseen task–embodiment pairs in similar environments.
1 reply · 0 reposts · 0 likes · 292 views
BeingBeyond @beingbeyond_ ·
(thread 1/8) We're releasing Being-H0.5🔥🔥🔥: a foundation VLA model aimed at one big goal — cross-embodiment generalization. Instead of training a new brain for every robot, we want a single model to carry skills across bodies (arms, humanoids, dexterous hands). Here's an overview of Being-H0.5 👇
6 replies · 5 reposts · 15 likes · 938 views