

NadiaJiang

@jiangbonadia
🌟 GitHub Star | Head of Growth & Ecosystem @Robbyant_brain @Ant_OSS | Board Chair @KAIYUANSHE | Builder @ZenMuxAI @AnswerDev | Global Tech Speaker



⚡️New on ZenMux: LLaDA2.1-flash, a 100B diffusion LLM from @TheInclusionAI.
→ Error-correcting, editable generation
→ Speed Mode: ultra-fast inference
→ Quality Mode: competitive performance
→ RL tailored for 100B-scale dLLMs
🔗 zenmux.ai/inclusionai/ll…
🔗 huggingface.co/inclusionAI/LL…


Riding through a bustling Chinese New Year celebration — lion dances, dragon parades, fireworks lighting up the sky — this entire world is generated by @robbyant_brain's LingBot-World! 🐴🧨 Navigate freely with WASD in an AI-generated interactive world. This is the power of world models. Happy Year of the Horse to everyone! 🎆 #LingBotWorld #WorldModel #LunarNewYear #AI





🌍 Reality is expensive. Simulation is the shortcut. But what if the simulation could think, respond, and remember?
Today, we open-source LingBot-World, an interactive world model built on @Alibaba_Wan Wan2.2! 🔥
We’re pushing the limits of:
🔷 High-Fidelity Simulation & Precise Control
🔷 Long-Horizon Consistency & Memory
🔷 Modeling Physical & Game Worlds
It can generate nearly 10 minutes of controllable, physics-grounded simulation in real time — a digital training ground for embodied AI. 👇
#WorldModel #EmbodiedAI #OpenSource #Simulation #Robotics
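The "navigate freely with WASD" idea above boils down to an action-conditioned rollout: each user action updates the world state, and the model renders the next frame from that state. A minimal toy sketch of that loop, assuming a hypothetical `render` stand-in (this is not LingBot-World's actual API):

```python
# Toy action-conditioned rollout: WASD keys shift a 2D camera position, and a
# stand-in `render` function plays the role of the world model producing the
# next frame for each new state. Purely illustrative.
MOVES = {"w": (0, 1), "s": (0, -1), "a": (-1, 0), "d": (1, 0)}

def rollout(actions, render):
    x, y = 0, 0
    frames = []
    for key in actions:
        dx, dy = MOVES[key]
        x, y = x + dx, y + dy
        frames.append(render(x, y))  # world model: frame conditioned on state
    return (x, y), frames

# Usage: two steps forward, one step right.
pos, frames = rollout("wwd", lambda x, y: f"frame@({x},{y})")
print(pos)  # (1, 2)
```

In a real world model, `render` would be a learned video generator conditioned on the action history, which is where long-horizon consistency and memory become hard.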

🧠 What if one AI brain powers all robots?
Retraining for every new embodiment is the biggest scaling pain point in embodied AI — we’re fixing it.
Today, we open-source LingBot-VLA: a Vision-Language-Action model built on @Alibaba_Qwen Qwen-2.5-VL and pre-trained on 20,000 hours of real-world data across 9 distinct robot embodiments.
New SOTA for cross-embodiment generalization unlocked.
#EmbodiedAI #Robotics #VLA #OpenSource
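The "one brain, many robots" claim can be pictured as a shared policy that emits an abstract command, with thin per-embodiment adapters translating it into each robot's own action space. A hypothetical sketch (not LingBot-VLA's real interface; `shared_policy` and the adapters are invented stand-ins):

```python
# Toy cross-embodiment setup: one shared policy, per-robot action adapters.
def shared_policy(observation, instruction):
    # Stand-in for the VLM backbone: returns an abstract directional command.
    return {"dx": 1.0 if "right" in instruction else -1.0}

ADAPTERS = {
    "arm":    lambda a: {"joint_delta": [a["dx"], 0.0]},   # arm joint space
    "mobile": lambda a: {"wheel_velocity": a["dx"] * 0.5}, # wheeled base
}

def act(robot, observation, instruction):
    # The shared brain never changes; only the adapter differs per embodiment.
    return ADAPTERS[robot](shared_policy(observation, instruction))

print(act("mobile", None, "go right"))  # {'wheel_velocity': 0.5}
```

The design point: adding a tenth embodiment means adding one adapter, not retraining the backbone.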


🧬 Introducing LLaDA2.0, the first Discrete Diffusion LLM (dLLM) scaled to 100B! Featuring 16B (mini) and 100B (flash) MoE versions.
With 2.1× faster inference than AR models and superior performance on Code, Math, and Agentic tasks, we prove that at scale, Diffusion is not just feasible — it's stronger and faster. 🌊
#AI #LLaDA #Diffusion #OpenSource #dllm
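What makes a dLLM different from an autoregressive model is the decoding loop: instead of emitting one token left-to-right, it fills all masked positions in parallel, keeps the most confident guesses, and re-masks the rest for another pass. A toy sketch of that mask-predict idea (not LLaDA's actual algorithm; `predict` is a hypothetical stand-in for the model):

```python
# Toy mask-predict decoding, the general idea behind discrete diffusion LLMs.
MASK = "<mask>"

def toy_denoise_step(tokens, predict):
    # Guess every masked slot, keep the most confident half, re-mask the rest
    # (this re-masking is what allows error-correcting, editable generation).
    guesses = [(i, *predict(tokens, i)) for i, t in enumerate(tokens) if t == MASK]
    guesses.sort(key=lambda g: g[2], reverse=True)  # most confident first
    out = list(tokens)
    for i, tok, _conf in guesses[: max(1, len(guesses) // 2)]:
        out[i] = tok
    return out

def toy_generate(length, predict):
    tokens = [MASK] * length
    while MASK in tokens:
        tokens = toy_denoise_step(tokens, predict)
    return tokens

# Usage with a stand-in "model" that always knows the target sentence:
target = "diffusion models decode in parallel".split()
def dummy_predict(tokens, i):
    return target[i], 1.0 / (i + 1)  # (token, confidence)
print(toy_generate(len(target), dummy_predict))
```

Because many positions are committed per step, the number of model passes can be far below the sequence length, which is the intuition behind the faster-than-AR inference claim.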














🚀 Ling-1T — Trillion-Scale Efficient Reasoner
Introducing Ling-1T, the first flagship non-thinking model in the Ling 2.0 series — 1 trillion total parameters with ≈50B active per token, trained on 20T+ reasoning-dense tokens.
Highlights:
→ Evo-CoT curriculum + Linguistics-Unit RL for scalable reasoning
→ Strong efficiency–accuracy balance on complex reasoning tasks
→ Advanced visual understanding + front-end code generation via a Syntax–Function–Aesthetics reward
→ Emergent tool-use ability (≈70%) with minimal instruction tuning
→ FP8 mixed-precision + Ling Scaling Law → efficient trillion-scale training
Efficient Thinking · Precise Reasoning
Ling-1T extends the Pareto frontier of reasoning accuracy vs. cost — a new milestone in open-source trillion-scale intelligence.
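The "1T total, ≈50B active per token" pairing is standard sparse-MoE accounting: routing activates only a small slice of the weights for each token. Back-of-envelope math from the figures in the post:

```python
# Sparsity arithmetic for a sparse MoE model (approximate figures from the post).
total_params = 1_000e9   # ~1T total parameters
active_params = 50e9     # ~50B activated per token via MoE expert routing

active_fraction = active_params / total_params
print(f"≈{active_fraction:.0%} of weights touched per token")  # ≈5%
```

That ~5% activation rate is why per-token compute can stay close to a 50B dense model while total capacity is trillion-scale.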