
First empirical evidence that VLA models scale with massive real-world robot data.

VLA foundation models promise robots that follow natural-language instructions and adapt to new tasks quickly, but the field has lacked comprehensive studies of how performance actually scales with real-world data. This new research introduces LingBot-VLA, a Vision-Language-Action foundation model trained on approximately 20,000 hours of real-world manipulation data collected across 9 dual-arm robot configurations. Scaling pre-training data from 3,000 hours to 20,000 hours consistently improves downstream success rates, with no signs of saturation. More data still helps.

The architecture uses a Mixture-of-Transformers design that couples a pre-trained VLM (Qwen2.5-VL) with an action expert through shared self-attention. This lets high-dimensional semantic priors guide action generation while avoiding cross-modal interference.

On the GM-100 benchmark, spanning 100 tasks across 3 robotic platforms with 22,500 evaluation trials, LingBot-VLA achieves a 17.30% success rate (SR) and a 35.41% progress score (PS), outperforming π0.5 (13.02% SR, 27.65% PS), GR00T N1.6 (7.59% SR, 15.99% PS), and WALL-OSS (4.05% SR, 10.35% PS). In simulation on RoboTwin 2.0, the model reaches an 88.56% success rate in clean scenes and 86.68% in randomized environments, beating π0.5 by 5.82% and 9.92% respectively.

Training efficiency matters for scaling: their optimized codebase achieves 261 samples per second per GPU on an 8-GPU setup, a 1.5-2.8× speedup over existing VLA codebases like StarVLA, OpenPI, and DexBotic. Data efficiency is equally impressive: with only 80 demonstrations per task, LingBot-VLA outperforms π0.5 trained on the full 130-demonstration set.

This is the first empirical demonstration that VLA performance continues to scale with more real-world robot data without saturating, providing a clear roadmap for building more capable robotic foundation models.
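The Mixture-of-Transformers coupling described above can be sketched roughly as follows: each stream (VLM and action expert) keeps its own projection and MLP weights, but their tokens are concatenated into a single sequence for one shared self-attention step, so action tokens can attend to the VLM's semantic features without sharing feed-forward parameters. This is a minimal illustrative sketch, not the paper's actual code; all class names, dimensions, and layer choices here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoTBlock(nn.Module):
    """Toy Mixture-of-Transformers block (illustrative, not LingBot-VLA's code):
    per-stream QKV/output/MLP weights, one shared attention over all tokens."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.dim, self.heads = dim, heads
        streams = ("vlm", "action")
        # Modality-specific parameters: each stream has its own norms,
        # QKV projection, output projection, and MLP.
        self.norm1 = nn.ModuleDict({s: nn.LayerNorm(dim) for s in streams})
        self.qkv = nn.ModuleDict({s: nn.Linear(dim, 3 * dim) for s in streams})
        self.proj = nn.ModuleDict({s: nn.Linear(dim, dim) for s in streams})
        self.norm2 = nn.ModuleDict({s: nn.LayerNorm(dim) for s in streams})
        self.mlp = nn.ModuleDict({
            s: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                             nn.Linear(4 * dim, dim))
            for s in streams
        })

    def forward(self, x_vlm, x_act):
        n_vlm = x_vlm.shape[1]
        # Project each stream with its own weights, then concatenate so
        # attention runs over the joint (VLM + action) token sequence.
        h = torch.cat([self.qkv["vlm"](self.norm1["vlm"](x_vlm)),
                       self.qkv["action"](self.norm1["action"](x_act))], dim=1)
        q, k, v = h.chunk(3, dim=-1)
        B, T, _ = q.shape
        split = lambda t: t.view(B, T, self.heads, self.dim // self.heads).transpose(1, 2)
        # Shared self-attention: action tokens can attend to VLM tokens here.
        attn = F.scaled_dot_product_attention(split(q), split(k), split(v))
        attn = attn.transpose(1, 2).reshape(B, T, self.dim)
        # Split back into streams and apply stream-specific output/MLP + residuals.
        y_vlm, y_act = attn[:, :n_vlm], attn[:, n_vlm:]
        x_vlm = x_vlm + self.proj["vlm"](y_vlm)
        x_act = x_act + self.proj["action"](y_act)
        x_vlm = x_vlm + self.mlp["vlm"](self.norm2["vlm"](x_vlm))
        x_act = x_act + self.mlp["action"](self.norm2["action"](x_act))
        return x_vlm, x_act

# Usage: 5 VLM tokens and 3 action tokens share one attention map.
block = MoTBlock(dim=64, heads=4)
out_vlm, out_act = block(torch.randn(2, 5, 64), torch.randn(2, 3, 64))
```

The design point this illustrates is why such a coupling can avoid cross-modal interference: gradients through the MLPs and projections stay within each modality's own parameters, while the shared attention is the only place where information mixes.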
Paper: arxiv.org/abs/2601.18692

Learn to build effective AI agents in our academy: dair-ai.thinkific.com







