InclusionAI
@TheInclusionAI
113 posts

AI lab @AntGroup. We envision AGI as humanity's shared milestone. Our work spans language models (@AntLingAGI, LLaDA), embodied AI (@robbyant_brain), and OSS projects such as AReaL.

Joined March 2025
18 Following · 1.4K Followers
InclusionAI retweeted
Ant Ling @AntLingAGI
One more thing: we welcome you to join the AntLing Discord community! 🤗🥳🥳🥳 discord.gg/jQtDsU5J6C
InclusionAI retweeted
ModelScope @ModelScope2022
So excited to announce Ling-2.6-1T is now live on ModelScope! 🔥 This 1T-parameter model is built for complex agent workflows, multi-step execution, and long-context understanding. It truly delivers in production.
📊 The benchmarks speak for themselves:
- AIME26 — leads all non-reasoning models
- SWE-bench Verified, TAU2-Bench, BFCL-V4, PinchBench — first-tier open source
- ~16M tokens on the full Artificial Analysis eval — same efficiency story as Ling-2.6-flash
Works with Claude Code, OpenClaw, OpenCode & CodeBuddy ✅ SGLang & vLLM ready · Open weights available now 🚀
Explore on ModelScope 👇 modelscope.cn/models/inclusi… modelscope.ai/models/inclusi…
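[Editor's note: since the post flags the weights as SGLang & vLLM ready, here is a minimal sketch of offline inference with vLLM. The model id is an assumption (the repo link above is truncated), and tensor_parallel_size is purely illustrative; a 1T-parameter MoE realistically needs a multi-node GPU cluster.]

from vllm import LLM, SamplingParams

# Model id assumed from the post; the actual ModelScope link is truncated above.
# tensor_parallel_size is illustrative only; size the cluster to the model.
llm = LLM(model="inclusionAI/Ling-2.6-1T",
          tensor_parallel_size=8,
          trust_remote_code=True)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Outline a multi-step plan to fix a failing CI pipeline."], params)
print(outputs[0].outputs[0].text)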
InclusionAI retweeted
ZenMux @ZenMuxAI
📢 Ling-2.6-1T from @TheInclusionAI is live on @ZenMuxAI 🔥
Ant Group's trillion-param MoE flagship, 50B active per token. Open-source SOTA on SWE-bench Verified, AIME & agent benchmarks — no thinking tokens needed ⚡️
🔗 zenmux.ai/inclusionai/li…
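[Editor's note: a sketch of calling the hosted model through ZenMux. Both the endpoint shape (an OpenAI-compatible /v1/chat/completions route) and the model slug are unverified assumptions, since the link above is truncated.]

import requests

resp = requests.post(
    "https://zenmux.ai/api/v1/chat/completions",   # assumed endpoint shape
    headers={"Authorization": "Bearer YOUR_ZENMUX_KEY"},
    json={
        "model": "inclusionai/ling-2.6-1t",        # slug assumed; link above is truncated
        "messages": [{"role": "user", "content": "Plan a fix for a flaky integration test."}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])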
InclusionAI retweeted
ModelScope @ModelScope2022
✨ Inclusion AI's LLaDA2.0-Uni is open source! A single MoE-based diffusion LLM that unifies visual understanding and image generation — natively, in one model. Download on ModelScope 👉 modelscope.ai/models/inclusi…
Built on a single Mask Token Prediction paradigm, LLaDA2.0-Uni handles:
🖼️ Text-to-image synthesis at 1024×1024, with the option to "think" before drawing
🔍 Visual question answering, captioning, and document understanding on par with dedicated VLMs
✏️ Instruction-driven image editing — single or multi-reference, faithful to original details
🎨 Interleaved text-image reasoning, opening the door to a new class of multimodal chains
Released under Apache 2.0 — paper, code, and weights all open.
📄 modelscope.ai/papers/2604.20…
🔗 github.com/inclusionAI/LL…
InclusionAI retweeted
DailyPapers @HuggingPapers
LLaDA2.0-Uni from Inclusion AI: a unified diffusion LLM for multimodal understanding and generation. Features an MoE backbone with a SigLIP-VQ tokenizer, enabling 8-step image generation and native interleaved reasoning.
InclusionAI retweeted
DailyPapers @HuggingPapers
DR-Venus: a 4B-parameter deep research agent trained on only 10K open data samples. Achieves frontier performance on edge-scale devices, outperforming prior 9B agents and narrowing the gap to 30B-class systems through agentic SFT and RL with information-gain rewards.
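[Editor's note: a rough intuition for the "information-gain rewards" mentioned above is to reward a research turn by how much it reduces the agent's uncertainty over candidate answers. This toy Python sketch illustrates that idea only; the exact IGPO formulation in the DR-Venus paper may differ.]

import math

def information_gain(p_before, p_after):
    # Entropy reduction of the agent's belief over candidate answers
    # after one tool call. Illustrative reward shape, not the paper's.
    h = lambda p: -sum(x * math.log(x) for x in p if x > 0)
    return h(p_before) - h(p_after)

# A search turn that sharpens the belief earns a positive reward:
print(information_gain([0.25, 0.25, 0.25, 0.25], [0.7, 0.1, 0.1, 0.1]))  # ≈ 0.45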
InclusionAI @TheInclusionAI
Meet Ling-2.6-1T, built for stable, high-speed execution in complex, real-world environments! 🥳 A massive leap with top-tier performance, and 🛠️ engineering-task-friendly. We're unlocking 1 week of free API access. Start testing now at @OpenRouter openrouter.ai/inclusionai/li… The open-source model is on the way! Keep an eye out — more details dropping soon. 👀 #inclusionAI #LLM
Ant Ling @AntLingAGI

🚀 Today, we are launching Ling-2.6-1T, a trillion-parameter flagship model designed for precise instruct task execution. By prioritizing a "Fast-Thinking" mechanism, it delivers SOTA intelligence with ultra-low token overhead, making token efficiency a first-class citizen.

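[Editor's note: to try the free week of API access, OpenRouter exposes its standard OpenAI-compatible endpoint at openrouter.ai/api/v1. A minimal sketch; the model slug is an assumption, since the link above is truncated.]

from openai import OpenAI

# Base URL is OpenRouter's standard OpenAI-compatible endpoint.
# The model slug is assumed; the openrouter.ai link above is truncated.
client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_KEY")
resp = client.chat.completions.create(
    model="inclusionai/ling-2.6-1t",
    messages=[{"role": "user", "content": "Draft a multi-step plan to migrate a monorepo CI."}],
)
print(resp.choices[0].message.content)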
InclusionAI @TheInclusionAI
Brand new release from our team! 🆕 DR-Venus — exploring new possibilities for edge-scale deep research with just 10K open data samples.
🛠️ Built with a two-stage recipe:
1️⃣ Agentic SFT: strict trajectory cleaning + long-horizon resampling for maximum data value
2️⃣ IGPO RL: turn-level rewards via information gain & format-aware regularization for stable long-horizon execution
Edge-ready GGUF format — deploy on your Mac today.
🐙 Code: github.com/inclusionAI/DR…
📄 Paper: arxiv.org/abs/2604.19859
🤗 Models: huggingface.co/collections/in…
#inclusionAI #deepresearch #AIAgent
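[Editor's note: since the release ships edge-ready GGUF weights, a local run on a Mac could go through llama-cpp-python. A sketch under assumptions: the GGUF filename below is hypothetical, as the Hugging Face collection link above is truncated.]

from llama_cpp import Llama

# Hypothetical local filename; download the actual GGUF from the collection above.
llm = Llama(model_path="DR-Venus-4B-Q4_K_M.gguf", n_ctx=8192)

out = llm("Research task: summarize recent work on diffusion LLMs.", max_tokens=512)
print(out["choices"][0]["text"])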
InclusionAI @TheInclusionAI
We're excited to introduce LLaDA2.0-Uni, the 💥 first unified multimodal model in the LLaDA2.0 series. Highlights:
🧠 One paradigm to rule them all – with unified block-wise mask token prediction, LLaDA2.0-Uni achieves top-tier performance across visual understanding, high-fidelity image generation, and single/multi-reference image editing.
⚡ Efficient inference – a novel decoding strategy in the dLLM backbone, together with an 8-step distilled diffusion decoder, enables highly efficient inference. (SGLang support coming soon for even faster inference.)
🔄 Interleaved, intelligent, infinite – unified discrete representations enable seamless interleaved generation and advanced interleaved reasoning capabilities.
Haoxing Chen @Chenhaoxing249

After two months of teamwork, we're excited to share our team's latest achievement — LLaDA2.0-Uni, InclusionAI's first multimodal LLaDA. A unified discrete diffusion LLM built for both understanding and generation across text and images.
Highlights:
● One paradigm for VQA, doc understanding, and image generation
● Efficient inference with a new decoding strategy + 8-step distilled decoder
● Interleaved text-image generation enabled by unified discrete representations (SGLang support soon)
🤗 Hugging Face: huggingface.co/inclusionAI/LL…
📷 ModelScope: modelscope.cn/models/inclusi…

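[Editor's note: for intuition on block-wise mask token prediction, the model fills a block of masked positions in parallel over a few steps, committing only its most confident predictions each step and leaving the rest masked. A toy decoding loop, assuming an HF-style model that returns logits; this illustrates the paradigm, not LLaDA2.0-Uni's actual decoder.]

import torch

def decode_block(model, tokens, in_block, mask_id, steps=8):
    # tokens: (T,) LongTensor; positions where in_block is True start as mask_id.
    for s in range(steps):
        masked = (tokens == mask_id) & in_block
        if not masked.any():
            break
        logits = model(tokens.unsqueeze(0)).logits[0]        # (T, V)
        conf, pred = logits.softmax(-1).max(-1)              # per-position confidence
        # Commit roughly an equal share of the remaining masks each step.
        k = max(1, int(masked.sum().item() / (steps - s)))
        cand = torch.where(masked, conf, torch.full_like(conf, -1.0))
        top = cand.topk(k).indices
        tokens[top] = pred[top]
    return tokens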
InclusionAI retweeted
Artificial Analysis @ArtificialAnlys
Ant Group's Ling 2.6 Flash scores 26 on the Artificial Analysis Intelligence Index, a 10-point jump from Ling-flash-2.0. It is one of the few recent open-weights releases focused on non-reasoning capabilities, and it offers a reasonable cost-to-intelligence ratio.
Ling 2.6 Flash is a non-reasoning model from Ant Group's @TheInclusionAI lab. Ant Group's model family comprises three series: Ling (non-reasoning), Ring (reasoning), and Ming (multimodal). Ling-flash-2.0 was the previous flash-tier non-reasoning model. Ling 2.6 Flash is expected to be open weights shortly after release, but as of today the weights have not been released on Hugging Face.
Key takeaways:
➤ At 104B total parameters with 7.4B active parameters, Ling 2.6 Flash (26) sits in intelligence near GPT-5.4 nano (Non-Reasoning, 24) and Gemma 4 26B A4B (Non-Reasoning, 27), both models with comparable active parameter counts. However, at 18 points behind GLM-5.1 (Non-Reasoning, 44), there remains a gap to frontier non-reasoning open-weights models.
➤ Ling 2.6 Flash is comparatively token efficient, using ~15M output tokens to run the Intelligence Index. This is comparable to Gemma 4 26B A4B (~14M) but a fraction of Qwen3.5 9B (~78M). Compared to models in the same intelligence tier, Ling 2.6 Flash represents a reasonable efficiency tradeoff, which has positive effects on cost when deployed on larger workloads. At a price of $0.1 / million input tokens and $0.3 / million output tokens, Ling 2.6 Flash costs only ~$23 to run the full Artificial Analysis Intelligence Index.
➤ Gains over Ling-flash-2.0 were driven mostly by improvements in agentic capabilities and instruction following. τ²-Bench jumped from 21% to 86% (+65 points), IFBench from 34% to 57% (+23 points), and GDPval-AA Elo from 425 to 783 (+84%). Conversely, GPQA Diamond fell from 66% to 59% (-7 points) and SciCode from 29% to 27% (-2 points).
➤ AA-Omniscience performance is at -66, with 15% accuracy and a 96% hallucination rate. This is consistent with the model's small 7.4B active parameter count: knowledge recall benefits from larger parameter counts, and sub-10B active-parameter models systematically underperform on this metric.
Additional model details:
➤ Architecture: MoE, 104B total parameters, 7.4B active parameters
➤ Context window: 262K tokens (doubled from 128K for Ling-flash-2.0)
➤ Pricing: $0.10 / $0.30 per 1M input/output tokens (via Novita API)
➤ License: weights not yet released
➤ Availability: third-party API through @novita_labs
InclusionAI @TheInclusionAI
Sharing our joint release with @ant_oss: the Q1 2026 Agentic AI Landscape and insights, built on open-source community activity & data, covering 50+ projects in Coding Agents, Personal Assistants, and Orchestration Frameworks — from OpenClaw & Claude-Mem to Aden Hive & Paperclip...
📈 4 trends we're witnessing:
1️⃣ From Context Engineering to #HarnessEngineering
2️⃣ Dev tools shifting from human-centric to agent-centric
3️⃣ Self-evolution as a design starting point
4️⃣ Coding Agents now cover the full lifecycle
In an era of FOMO, model iterations, product sprints, and funding rounds — openness, sharing, and collaboration remain the developer community's true north.
Full report 👉 inclusion-ai.org/blog/agentic-l…
#AgenticAI #OpenSource #inclusionAI
Ant Open Source @ant_oss

Since last year, we've been building something for developers: data-driven insight into what matters in the AI open-source development space. 🚀 Today we introduce the Q1 2026 Agentic AI Landscape with @TheInclusionAI — your ecosystem navigation map.
What we're releasing today:
✅ 50+ projects mapped — from OpenClaw & Claude-Mem to Aden Hive & Paperclip, covering Coding Agents, Personal Assistants, Orchestration Frameworks
✅ Community insights from 21K+ active developers — extreme power-law distribution; indie builders & startups dominate; <10% from big tech
More details 👉 inclusion-ai.org/blog/agentic-l…
#AgenticAI #OpenSource #inclusionAI

InclusionAI retweeted
Sayak Paul @RisingSayak
Introducing the first discrete diffusion pipeline for text in Diffusers -- LLaDA2 by @TheInclusionAI 🔥 It follows an MoE architecture w/ 16B total params. It is definitely not SOTA across the board, but hopefully it flips that soon. Check out the links below to learn more ⬇️
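[Editor's note: loading the pipeline through Diffusers' generic entry point should work once the weights are published. The repo id and the prompt-based call signature below are assumptions, since the post does not name the pipeline class.]

from diffusers import DiffusionPipeline

# Repo id assumed for illustration; check the links in the post for the real one.
pipe = DiffusionPipeline.from_pretrained("inclusionAI/LLaDA2-16B-MoE",
                                         trust_remote_code=True)
out = pipe(prompt="Explain discrete diffusion for text in one sentence.")
print(out)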
InclusionAI @TheInclusionAI
A new step on our way to pushing the boundaries of 3D vision, #EmbodiedAI, and #RoboticsAI. Introducing a MASSIVE new RGB-D dataset, 💥 released with 3 MILLION carefully annotated RGB-D pairs:
✅ 2M real-world captured samples
✅ 1M high-quality rendered samples
Try it out: huggingface.co/datasets/robby…
#inclusionAI #LingBot #opensource
Robbyant @robbyant_brain

🚀 Exciting news for the spatial perception community! 📷 For too long, the lack of large-scale, real-world depth datasets has been a major bottleneck. Today, we are open-sourcing the RGB-D dataset built for training our spatial perception model LingBot-Depth — and it's massive. 👇

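[Editor's note: streaming a few samples with the datasets library avoids downloading all 3M pairs up front. The dataset id below is a guess (the link above is truncated), and the field names are assumptions.]

from datasets import load_dataset

# Dataset id assumed; the huggingface.co/datasets link above is truncated.
ds = load_dataset("robbyant/LingBot-Depth-RGBD", split="train", streaming=True)
sample = next(iter(ds))
print(sample.keys())   # expect an RGB image plus an aligned depth map per pair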
InclusionAI retweeted
Ant Open Source @ant_oss
⚡️ 892 tokens/s — our 100B diffusion LLM, LLaDA2.1-flash, is now live on @ZenMuxAI! With Token Editing, LLaDA2.1 goes from research breakthrough to production-ready speed. Diffusion models just got real.
Try it via API or Chat 👇 zenmux.ai/inclusionai/ll…
#LLaDA #ZenMux #AI #dLLM
ZenMux @ZenMuxAI

⚡️ New on ZenMux: LLaDA2.1-flash, the 100B diffusion LLM from @TheInclusionAI.
→ Error-correcting editable generation
→ Speed Mode: ultra-fast inference
→ Quality Mode: competitive performance
→ RL tailored for 100B-scale dLLM
🔗 zenmux.ai/inclusionai/ll…
🔗 huggingface.co/inclusionAI/LL…

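[Editor's note: the "error-correcting editable generation" idea can be pictured as a decoding pass that re-masks already-committed tokens the model has become unconfident about, then re-predicts them in parallel. A toy sketch assuming an HF-style model that returns logits; this is an illustration, not LLaDA2.1's actual Token Editing algorithm.]

import torch

def edit_pass(model, tokens, editable, mask_id, tau=0.5):
    # Re-mask committed tokens whose current probability falls below tau...
    logits = model(tokens.unsqueeze(0)).logits[0]                     # (T, V)
    conf = logits.softmax(-1).gather(-1, tokens.unsqueeze(-1)).squeeze(-1)
    doubtful = editable & (tokens != mask_id) & (conf < tau)
    tokens = torch.where(doubtful, torch.full_like(tokens, mask_id), tokens)
    # ...then re-predict every masked position in one parallel step.
    logits = model(tokens.unsqueeze(0)).logits[0]
    return torch.where(tokens == mask_id, logits.argmax(-1), tokens)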