Ant Open Source

191 posts

@ant_oss

All things open source at Ant Group. We aim to bring high-caliber FinTech infrastructure OSS to the community.

Joined March 2024
93 Following · 1.6K Followers
Ant Open Source @ant_oss
@profcelsofontes @ZenMuxAI The model is open-source, but ZenMux is a commercial platform, so you may need to pay for access. The ZenMux platform offers two payment options: a subscription plan and a pay-as-you-go option.
Ant Open Source @ant_oss
⚡️ 892 tokens/s — our 100B diffusion LLM, LLaDA2.1-flash, is now live on @ZenMuxAI! With Token Editing, LLaDA 2.1 goes from research breakthrough to production-ready speed. Diffusion models just got real. Try it via API or Chat 👇 zenmux.ai/inclusionai/ll… #LLaDA #ZenMux #AI #dLLM
ZenMux @ZenMuxAI

⚡️ New on ZenMux: LLaDA2.1-flash, a 100B diffusion LLM from @TheInclusionAI
→ Error-correcting editable generation
→ Speed Mode: ultra-fast inference
→ Quality Mode: competitive performance
→ RL tailored for 100B-scale dLLM
🔗 zenmux.ai/inclusionai/ll…
🔗 huggingface.co/inclusionAI/LL…

Ant Open Source @ant_oss
🚀 AReaL v1.0 is here! Evolve your 🦞 #OpenClaw agents (or any agent) with RL — zero changes to the agent required. We add a transparent proxy that shadows your agent's base_url, capturing all agent–LLM interactions for RL training. Just swap the URL and go.
✅ One-click agentic RL for any existing agent — no modifications needed
✅ Full #opencode recipe: training code, data, infra, and models all open
✅ Archon Engine: 5D parallelism in pure PyTorch; uv sync and go (zero manual compilation)
✅ torch.compile by default: instant 10% performance boost out of the box
📊 SOTA on tau2Bench: 73.0% pass@1 (Airline) / 98.3% (Telecom)
#opensource #inclusionAI #RL
📄 Paper: arxiv.org/abs/2601.22607
🐙 GitHub: github.com/inclusionAI/AR…
Train your OpenClaw agent: github.com/inclusionAI/AR…
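The "just swap the URL" idea above can be sketched in miniature. This is a toy illustration of the transparent-proxy pattern, not AReaL's actual implementation: the agent calls what it believes is the LLM endpoint, while a shim in between forwards the call and records every request–response pair for later RL training. Here the "LLM" and "proxy" are plain Python callables (stand-ins for HTTP endpoints) to keep the sketch self-contained; all names are hypothetical.

```python
captured = []  # recorded (prompt, completion) pairs a trainer could consume

def llm(prompt: str) -> str:
    """Stand-in for the real model server behind the original base_url."""
    return f"answer to: {prompt}"

def proxy(prompt: str) -> str:
    """Shadows the LLM endpoint: forwards the call, records the exchange."""
    completion = llm(prompt)
    captured.append((prompt, completion))
    return completion

def agent(ask):
    """An unmodified agent: it simply calls whatever endpoint it is handed."""
    return ask("Book a flight to Tokyo.")

# "Swapping the base_url" amounts to handing the agent the proxy
# instead of the raw LLM -- the agent's own code never changes.
agent(proxy)
print(captured)
```

In a real deployment the swap would be a configuration change (pointing the agent's OpenAI-compatible `base_url` at the proxy's address) rather than passing a callable, but the agent-side invariance is the same.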
Ant Open Source @ant_oss
🔍 MLLMs are nearsighted: zooming in helps, but kills speed. What if we could teach them to "see" fine details without zooming? Introducing Region-to-Image Distillation (R2I): we train models to internalize "zooming".
🏆 ZwZ-8B achieves SOTA performance on fine-grained perception with zero tool calls.
📊 Plus ZoomBench: a benchmark built through human–AI collaboration, with a dual-view "zooming gap" metric.
Dive into our joint efforts:
📑 Paper: huggingface.co/papers/2602.11…
🔗 Code: github.com/inclusionAI/Zo…
⚙️ Model & Data: huggingface.co/collections/in…
#OpenSource #MLLM #LLMs #inclusionAI
Ant Open Source @ant_oss
Ring-1T-2.5 is released 📣🚀 with high efficiency in planning and multi-step tool collaboration. 🔧 Try it out and explore how this 1:7 MLA + Lightning linear-attention hybrid boosts reasoning speed and exploration! #OpenSource #LRM
🤗 Hugging Face: huggingface.co/inclusionAI/Ri…
📷 ModelScope: modelscope.cn/models/inclusi…
Ant Ling @AntLingAGI

🚀 Unveiling Ring-1T-2.5, the first hybrid linear-architecture 1T thinking model.
- Efficient: hybrid linear breakthrough (10x lower memory)
- Gold tier: IMO25 (35/42) & CMO25 (105/126)
- Agentic: natively with Claude Code & OpenClaw
- Open SOTA: IMOAnswerBench, GAIA2-search & more!

Ant Open Source retweeted
LMSYS Org @lmsysorg
Congrats to @ant_oss on releasing LLaDA 2.1, a 100B discrete diffusion LLM that breaks the speed–quality tradeoff. Day-0 support is live in SGLang!
⚡ Unified decoding: fast parallel generation & on-the-fly token correction
🎛️ User-controllable modes: ultra-fast decoding & high-fidelity reasoning
🧩 Mask-to-Token + Token-to-Token editing under one framework
🧠 Trained with large-scale block-level RL for SOTA efficiency and performance
Related PR: github.com/sgl-project/sg…
Ant Open Source @ant_oss

What if an LLM could EDIT its own tokens in real time, not just generate them? 🤯 Introducing LLaDA2.1 — a diffusion model that breaks from autoregressive dominance. It drafts fast, then fixes its own mistakes on the fly with Token-to-Token editing. The result? 892 tokens/sec on a 100B model. 🔥
⚡ 892 TPS on HumanEval+ (coding)
⚡ 801 TPS on BigCodeBench
🧠 Real-time self-correction via T2T editing
✅ @lmsysorg SGLang Day-0 support — production-ready now
A "non-consensus" architecture now challenging the mainstream. Open-sourced TODAY. 👇
#LLaDA #TokenEditing #OpenSource #LLM #dLLM

Ant Open Source @ant_oss
Ready to experience the future? We're releasing two versions:
🔹 LLaDA2.1-Mini (16B) — fast and efficient
🔹 LLaDA2.1-Flash (100B) — maximum performance
Both ready to revolutionize software development, content creation, and beyond. 🚀
🤗 Hugging Face: huggingface.co/collections/in…
📖 Technical Report: huggingface.co/papers/2602.08…
💻 GitHub: github.com/inclusionAI/LL…
The era of editable, self-correcting LLMs starts now. Join us in redefining what's possible. ✍️
Ant Open Source @ant_oss
The results? Mind-blowing. 🤯 LLaDA2.1-Flash (100B) hits a peak speed of 892 tokens/second on complex coding tasks. This isn't just incremental — it's a leap forward in generation efficiency. Across 33 rigorous benchmarks, LLaDA2.1 proves that diffusion models can challenge autoregressive dominance.
Ant Open Source @ant_oss
What if an LLM could EDIT its own tokens in real time, not just generate them? 🤯 Introducing LLaDA2.1 — a diffusion model that breaks from autoregressive dominance. It drafts fast, then fixes its own mistakes on the fly with Token-to-Token editing. The result? 892 tokens/sec on a 100B model. 🔥
⚡ 892 TPS on HumanEval+ (coding)
⚡ 801 TPS on BigCodeBench
🧠 Real-time self-correction via T2T editing
✅ @lmsysorg SGLang Day-0 support — production-ready now
A "non-consensus" architecture now challenging the mainstream. Open-sourced TODAY. 👇
#LLaDA #TokenEditing #OpenSource #LLM #dLLM
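The Mask-to-Token and Token-to-Token decoding described above can be illustrated with a toy loop. This is a conceptual sketch only, not LLaDA2.1's actual algorithm: a stand-in "model" noisily proposes a token and a confidence per position; blanks get filled from proposals (mask-to-token), and already-committed tokens that disagree with a high-confidence proposal get overwritten (token-to-token editing). All thresholds and probabilities here are made up for illustration.

```python
import random

random.seed(0)

MASK = "_"
target = "the quick brown fox".split()  # the sentence the toy "model" knows

def model(seq):
    """Stand-in for the diffusion LLM: per position, propose a token plus a
    confidence score. A correct proposal gets high confidence, a wrong one
    ("???") gets low confidence -- a crude stand-in for a real denoiser."""
    proposals = []
    for i in range(len(seq)):
        correct = random.random() < 0.7
        word = target[i] if correct else "???"
        conf = random.uniform(0.8, 1.0) if correct else random.uniform(0.0, 0.5)
        proposals.append((word, conf))
    return proposals

seq = [MASK] * len(target)
for step in range(50):
    for i, (word, conf) in enumerate(model(seq)):
        if seq[i] == MASK and conf > 0.2:
            seq[i] = word   # Mask-to-Token: commit a draft token quickly
        elif seq[i] != word and conf > 0.85:
            seq[i] = word   # Token-to-Token: edit a committed mistake

print(" ".join(seq))
```

Because editing can repair tokens committed too eagerly, the loop can afford an aggressive fill threshold (fast drafting) and still converge on the correct sequence, which is the draft-fast-then-fix intuition behind the speed figures quoted above.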