InclusionAI

94 posts

InclusionAI

@TheInclusionAI

AI Lab at @AntGroup. We envision AGI as humanity's shared milestone. Our language models @AntLingAGI and LLaDA, embodied AI @robbyant_brain, and OSS projects such as AReaL.

Joined March 2025
18 Following · 1.1K Followers
InclusionAI retweeted
Ant Open Source @ant_oss ·
⚡️ 892 tokens/s — our 100B diffusion LLM, LLaDA2.1-flash, is now live on @ZenMuxAI! With Token Editing, LLaDA 2.1 goes from research breakthrough to production-ready speed. Diffusion models just got real. Try it via API or Chat 👇 zenmux.ai/inclusionai/ll… #LLaDA #ZenMux #AI #dLLM
ZenMux@ZenMuxAI

⚡️ New on ZenMux: LLaDA2.1-flash, a 100B diffusion LLM from @TheInclusionAI.
→ Error-correcting editable generation
→ Speed Mode: ultra-fast inference
→ Quality Mode: competitive performance
→ RL tailored for 100B-scale dLLM
🔗 zenmux.ai/inclusionai/ll…
🔗 huggingface.co/inclusionAI/LL…

9 replies · 59 reposts · 527 likes · 71.7K views
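For readers who want the API route mentioned above, here is a minimal sketch. It assumes ZenMux exposes an OpenAI-compatible chat-completions endpoint and that the model slug is "inclusionai/llada2.1-flash"; neither detail is confirmed by the post, so check zenmux.ai for the actual base URL and model id.

```python
# Minimal sketch of calling LLaDA2.1-flash through ZenMux.
# Assumptions (not from the post): an OpenAI-compatible endpoint at
# https://zenmux.ai/api/v1 and the model slug "inclusionai/llada2.1-flash".
from openai import OpenAI

client = OpenAI(
    base_url="https://zenmux.ai/api/v1",  # assumed gateway URL
    api_key="YOUR_ZENMUX_API_KEY",
)

resp = client.chat.completions.create(
    model="inclusionai/llada2.1-flash",  # assumed model slug
    messages=[{"role": "user", "content": "Write binary search in Python."}],
)
print(resp.choices[0].message.content)
```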
InclusionAI @TheInclusionAI ·
Latest updates on our AI Infra puzzle: AReaL v1.0 enables 🦞OpenClaw agents to self-evolve, with zero changes to the agent required.
More highlights💡:
🏗️ Built in one person-month using our AI Coding Toolkit
🧠 How: self-evolving data synthesis + async RLVR with data filtering
🤖 AI-Assisted Engineering
#openclaw #inclusionAI #RL #opensource
🔗 Explore AReaL v1.0: github.com/inclusionAI/AR…
Ant Open Source@ant_oss

🚀 AReaL v1.0 is here! Evolve your 🦞#OpenClaw agents (or any agent) with RL, with zero changes to the agent required. We add a transparent proxy that shadows your agent's base_url, capturing all agent–LLM interactions for RL training. Just swap the URL and go.
✅ One-click agentic RL for any existing agent, no modifications needed
✅ Full #opencode recipe: training code, data, infra, and models all open
✅ Archon Engine: 5D parallelism in pure PyTorch; uv sync and go (zero manual compilation)
✅ torch.compile by default: an instant 10% performance boost out of the box
📊 SOTA on tau2Bench: 73.0% pass@1 (Airline) / 98.3% (Telecom)
#opensource #inclusionAI #RL
📄 Paper: arxiv.org/abs/2601.22607
🐙 GitHub: github.com/inclusionAI/AR…
Train your OpenClaw agent: github.com/inclusionAI/AR…

0 replies · 1 repost · 8 likes · 815 views
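The "swap the URL" workflow above is easy to picture in code. Below is a minimal sketch, assuming the agent already talks to an OpenAI-compatible endpoint; the proxy address and environment variable are illustrative assumptions, not AReaL's documented interface (see the GitHub repo for the real setup).

```python
# Sketch of the "swap the URL" idea: the agent keeps calling an
# OpenAI-compatible endpoint, but base_url now points at AReaL's transparent
# proxy, which records every agent-LLM interaction for RL training.
# The proxy address and env var name below are hypothetical.
import os
from openai import OpenAI

# Before: the agent talks to the model server directly, e.g.
#   client = OpenAI(base_url="http://model-server:8000/v1", api_key="EMPTY")

# After: the same client, pointed at the proxy (address is a placeholder).
client = OpenAI(
    base_url=os.environ.get("AREAL_PROXY_URL", "http://localhost:9000/v1"),
    api_key="EMPTY",
)

# Agent code is unchanged; the proxy forwards the call and logs the
# request/response pair as a trajectory for RL.
resp = client.chat.completions.create(
    model="my-agent-model",  # placeholder model name
    messages=[{"role": "user", "content": "Book me a flight to Tokyo."}],
)
print(resp.choices[0].message.content)
```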
InclusionAI retweeted
Yi Wu @jxwuyi ·
AReaL v1.0 released: effortless #RL to make your #OpenClaw self-evolve 🚀
•🛠️ One-click agentic RL for any existing agent
•📈 Open-source SOTA on tau2-bench
•💎 Archon, a new PyTorch-native 5D-parallel engine
•🤖 A full #opencode recipe
GitHub: github.com/inclusionAI/AR…
7 replies · 40 reposts · 143 likes · 58.4K views
InclusionAI retweeted
Ant Ling @AntLingAGI ·
One cannot build a reasoning model without reinforcement learning; ASystem is here to help 🤗
Introducing the AReaL v1.0 stable release:
- One-click agentic RL to evolve your agents, like those in OpenClaw
- Archon, a new PyTorch-native 5D-parallel engine
- A full opencode recipe
3 replies · 16 reposts · 80 likes · 6.2K views
InclusionAI retweeted
LMSYS Org @lmsysorg ·
🚀 Day-0 support for Ling from @AntLingAGI is live in SGLang. This is a 1T-parameter flagship model (63B active), trained on 29T tokens with 1M context.
⚡ Hybrid linear attention: ultra-high throughput at massive context
🧠 Composite rewards: frontier-level reasoning with ¼ the tokens
🎯 Bidirectional RL + agent verification: stronger alignment
🤖 Native agentic RL: SOTA on BFCL-V4, ready for Claude Code/OpenCode
Model: huggingface.co/inclusionAI/Li…
Try it out with the command:
Ant Ling @AntLingAGI
(Quoted tweet: the Ling-2.5-1T announcement, retweeted in full below.)

1 reply · 9 reposts · 36 likes · 8.8K views
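The launch command is truncated in the post above. As a sketch, SGLang's standard server CLI plus an OpenAI-compatible client query would look roughly like this; the repo id "inclusionAI/Ling-2.5-1T" and the tensor-parallel degree are assumptions, so check the Hugging Face page for exact values.

```python
# Sketch: serve Ling with SGLang, then query the OpenAI-compatible API.
# Launch the server first (repo id and --tp value are assumptions):
#
#   python -m sglang.launch_server \
#       --model-path inclusionAI/Ling-2.5-1T \
#       --tp 8 --trust-remote-code
#
# SGLang serves on port 30000 by default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="inclusionAI/Ling-2.5-1T",  # assumed repo id
    messages=[{"role": "user", "content": "Explain hybrid linear attention briefly."}],
)
print(resp.choices[0].message.content)
```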
InclusionAI retweeted
Ant Ling @AntLingAGI ·
Introducing Ling-2.5-1T, the new flagship with 1T params (63B active), a 29T-token pre-training corpus, and 1M context.
- Efficiency: matches the reasoning of frontier thinking models using only 1/4 the tokens, via composite rewards.
- Alignment: bidirectional RL + agent-based verification for precise instruction following.
- Agentic: native agentic RL training; SOTA on BFCL-V4 and ready for Claude Code/OpenCode.
It leads in tool use & alignment, balancing speed with intelligence. ⚖️
23 replies · 55 reposts · 521 likes · 186.9K views
InclusionAI retweeted
Ant Ling @AntLingAGI ·
We connected our latest reasoning model Ring-1T-2.5 to @openclaw to see how it performs as a personal agent. The outcome was impressive: a fluid experience. Our engineers also tried several other characters; we kept the "proper" ling-claw 🦉 Check out the video!
Ant Ling@AntLingAGI

🚀 Unveiling Ring-1T-2.5, the first hybrid linear-architecture 1T thinking model.
- Efficient: hybrid linear breakthrough (10x lower memory)
- Gold Tier: IMO25 (35/42) & CMO25 (105/126)
- Agentic: natively with Claude Code & OpenClaw
- Open SOTA: IMOAnswerBench, GAIA2-search & more!

6 replies · 16 reposts · 188 likes · 18.1K views
InclusionAI @TheInclusionAI ·
Introducing Ring-1T-2.5, with high efficiency in planning and multi-step tool collaboration! 🎉 Try out the first open-source trillion-parameter reasoning model based on a hybrid linear attention architecture:
🤗 Hugging Face: huggingface.co/inclusionAI/Ri…
🤖 ModelScope: modelscope.cn/models/inclusi…
#inclusionAI #opensource #LRM
Ant Ling @AntLingAGI
(Quoted tweet: the Ring-1T-2.5 announcement quoted above.)

2 replies · 3 reposts · 53 likes · 3.3K views
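For those heading to the Hugging Face link, here is a sketch of what loading looks like with transformers. The repo id "inclusionAI/Ring-1T-2.5" is inferred from the truncated link, and a 1T-parameter MoE (even at 63B active) realistically needs a multi-GPU node; this shows the shape of the call, not a tested deployment.

```python
# Sketch of loading Ring-1T-2.5 with Hugging Face transformers.
# Assumptions: repo id inferred from the truncated link; custom
# hybrid-linear-attention code requires trust_remote_code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/Ring-1T-2.5"  # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",  # shard across available GPUs
)

msgs = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
inputs = tok.apply_chat_template(
    msgs, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```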
InclusionAI @TheInclusionAI ·
Meet the official version of Ming-flash-omni 2.0 ✨! Powered by the Ling-Flash-2.0 architecture (100B-A6B MoE), Ming-flash-omni 2.0 focuses on optimizing capabilities across the following key domains:
💡 Expert-level Multimodal Cognition
🎧 Immersive and Controllable Unified Acoustic Synthesis
✍️ High-Dynamic Controllable Image Generation and Manipulation
#opensource #AGI #LLM #inclusionAI
Ant Ling@AntLingAGI

One for all, and all for one 🧧 Introducing Ming-flash-omni-2.0: a specialist in every domain, unified as a capable generalist. A gift from Ling =)
- Unified Acoustic Synthesis: speech, audio, and music combined for unbounded creativity
- "Seeing" to "Knowing": moving beyond input to true deep semantic understanding
- Native Visual Fusion: seamless generation, editing, and segmentation

0 replies · 0 reposts · 4 likes · 409 views
InclusionAI @TheInclusionAI ·
🚀 We are proving diffusion models can challenge autoregressive dominance. Introducing LLaDA2.1! Two versions are released for next-level generation performance:
🔹 LLaDA2.1-Mini (16B): fast and efficient
🔹 LLaDA2.1-Flash (100B): maximum performance, achieving up to 892 tokens/sec on complex coding tasks
🌟 Highlights:
1️⃣ Error-Correcting Editable (ECE) engine
2️⃣ Dual-mode design: Speed Mode & Quality Mode
3️⃣ The first large-scale RL framework for a 100B-parameter diffusion model
🤗 Hugging Face: huggingface.co/collections/in…
📖 Technical report: github.com/inclusionAI/LL…
💻 GitHub: github.com/inclusionAI/LL…
#LLaDA #LLM #dLLM
Ant Open Source@ant_oss

What if an LLM could EDIT its own tokens in real time, not just generate them? 🤯 Introducing LLaDA2.1, a diffusion model that breaks from autoregressive dominance. It drafts fast, then fixes its own mistakes on the fly with Token-to-Token editing. The result? 892 tokens/sec on a 100B model. 🔥
⚡ 892 TPS on HumanEval+ (coding)
⚡ 801 TPS on BigCodeBench
🧠 Real-time self-correction via T2T editing
✅ @lmsysorg SGLang Day-0 support: production-ready now
A "non-consensus" architecture now challenging the mainstream. Open-sourced TODAY. 👇
#LLaDA #TokenEditing #OpenSource #LLM #dLLM

1 reply · 0 reposts · 6 likes · 425 views
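The Token-to-Token editing idea reads naturally as a decode loop: draft every position in parallel, then re-predict only the low-confidence ones so mistakes can be fixed rather than frozen. The toy below illustrates that control flow only; it is not LLaDA2.1's actual algorithm, and the mock "model" merely stands in for a diffusion LLM's per-position predictions.

```python
# Toy illustration of draft-then-edit decoding (NOT LLaDA2.1's real
# algorithm): decode all positions at once, then repeatedly rewrite the
# least-confident ones, so earlier mistakes stay editable instead of being
# frozen as in left-to-right autoregressive decoding.
import random

random.seed(0)
VOCAB = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "+"]
TARGET = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a"]

def toy_model(draft):
    """Stand-in for a diffusion LLM: (predicted token, confidence) per slot."""
    preds = []
    for got, want in zip(draft, TARGET):
        if got == want:
            preds.append((want, 0.95))  # correct slots: high confidence
        else:
            # noisy re-prediction: usually right, sometimes a random token
            guess = want if random.random() < 0.8 else random.choice(VOCAB)
            preds.append((guess, random.uniform(0.3, 0.7)))
    return preds

draft = [random.choice(VOCAB) for _ in TARGET]  # step 1: fast parallel draft
print("draft  :", " ".join(draft))

for step in range(8):                           # step 2: editing rounds
    preds = toy_model(draft)
    edits = [i for i, (_, conf) in enumerate(preds) if conf < 0.9]
    if not edits:
        break                                   # everything is high-confidence
    for i in edits:
        draft[i] = preds[i][0]                  # edit tokens in place
    print(f"round {step}:", " ".join(draft))
```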
InclusionAI @TheInclusionAI ·
🌍Today, we introduce LingBot-World, an interactive world model. From perception (LingBot-Depth) to action (LingBot-VLA) to imagination (LingBot-World), we are building the foundational stack for embodied intelligence. #inclusionAI #opensource #worldmodel #LingBot
Robbyant@robbyant_brain

🌍 Reality is expensive. Simulation is the shortcut. But what if the simulation could think, respond, and remember? Today, we open-source LingBot-World, an interactive world model built on @Alibaba_Wan Wan2.2! 🔥
We're pushing the limits of:
🔷 High-Fidelity Simulation & Precise Control
🔷 Long-Horizon Consistency & Memory
🔷 Modeling Physical & Game Worlds
It can generate nearly 10 minutes of controllable, physics-grounded simulation in real time. A digital training ground for embodied AI. 👇
#WorldModel #EmbodiedAI #OpenSource #Simulation #Robotics

0 replies · 2 reposts · 5 likes · 433 views
InclusionAI @TheInclusionAI ·
Here comes the latest release! 👋Meet LingBot-VLA, another foundational block for embodied intelligence. More to come this week! #inclusionAI #LingBotVLA #EmbodiedAI #OpenSource
Robbyant@robbyant_brain

🧠 What if one AI brain powers all robots? Retraining for every new embodiment is the biggest scaling pain in embodied AI—we’re fixing it. Today, we open-source LingBot-VLA: a Vision-Language-Action model built on @Alibaba_Qwen Qwen-2.5-VL and pre-trained on 20,000 hours of real-world data across 9 distinct robot embodiments. New SOTA for cross-embodiment generalization unlocked. #EmbodiedAI #Robotics #VLA #OpenSource

0 replies · 0 reposts · 4 likes · 229 views
InclusionAI @TheInclusionAI ·
🚀🚀🚀
slime@slime_framework

Ant AQ-Team @AQ_MedAI @TheInclusionAI and the SGLang RL Team @sgl_project just helped land Kimi-K2-Instruct RL on slime, fully wired up and running on 256× H20 141GB 🚀
Huge shout-out to @yngao016, @menlzy, @Yonah_x from the AQ Team and @Ji_Li_233, @Yefei_RL from the SGLang RL Team for making this happen ❤️
Kimi-K2-Instruct RL is live on slime. Kimi-K2-Thinking next? 👀
Details & configs in the PR: github.com/THUDM/slime/pu…

0 replies · 0 reposts · 9 likes · 2.5K views