
Alibaba Group
@AlibabaGroup
Driven by passion and imagination, we are at the forefront in AI, Cloud, and e-commerce. Join us on this journey! 🚀✨




🚀 Introducing FlashQLA: high-performance linear attention kernels built on TileLang.
⚡ 2–3× forward speedup. 2× backward speedup.
💻 Purpose-built for agentic AI on your personal devices.

💡 Key insights:
1. Gate-driven automatic intra-card CP (context parallelism).
2. Hardware-friendly algebraic reformulation.
3. TileLang fused warp-specialized kernels.

FlashQLA boosts SM utilization via automatic intra-device CP. The gains are especially pronounced for TP setups, small models, and long-context workloads.

Instead of fusing the entire GDN flow into a single kernel, we split it into two kernels optimized for CP and backward efficiency. At large batch sizes this incurs extra memory I/O overhead vs. a fully fused approach, but it delivers better real-world performance on edge devices and long-context workloads.

The backward pass was the hardest part: we built a 16-stage warp-specialized pipeline under extremely tight on-chip memory constraints, ultimately achieving 2×+ kernel-level speedups.

We hope this is useful to the community! 🫶🫶

Learn more:
📖 Blog: qwen.ai/blog?id=flashq…
💻 Code: github.com/QwenLM/FlashQLA
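To see why a gated recurrence admits a hardware-friendly chunked reformulation, here is a minimal NumPy sketch of a toy scalar-gated linear-attention recurrence and its chunk-parallel equivalent. This is illustrative only, not the actual FlashQLA/GDN kernels; all function names and the scalar gate are simplifying assumptions.

```python
import numpy as np

def gla_recurrent(q, k, v, g):
    # Naive sequential form of a scalar-gated linear-attention recurrence:
    #   S_t = g_t * S_{t-1} + k_t v_t^T ;   o_t = S_t^T q_t
    T, d = q.shape
    S = np.zeros((d, v.shape[1]))
    out = np.empty((T, v.shape[1]))
    for t in range(T):
        S = g[t] * S + np.outer(k[t], v[t])
        out[t] = S.T @ q[t]
    return out

def gla_chunked(q, k, v, g, chunk=4):
    # Algebraically identical chunked form: each chunk is computed with
    # dense matmuls, and only the small decayed state S crosses chunk
    # boundaries -- the shape that makes splitting a long sequence across
    # SMs or devices (context parallelism) cheap.
    T, d = q.shape
    S = np.zeros((d, v.shape[1]))
    out = np.empty((T, v.shape[1]))
    for s in range(0, T, chunk):
        qc, kc, vc, gc = q[s:s+chunk], k[s:s+chunk], v[s:s+chunk], g[s:s+chunk]
        b = np.cumprod(gc)                      # b[t] = prod_{i<=t} g_i
        # carried state, decayed to each position in the chunk
        inter = b[:, None] * (qc @ S)
        # causal intra-chunk attention with pairwise decay b[t]/b[i]
        decay = np.tril(b[:, None] / b[None, :])
        intra = (decay * (qc @ kc.T)) @ vc
        out[s:s+chunk] = inter + intra
        # carry state forward: decay old state, absorb this chunk's kv terms
        S = b[-1] * S + ((b[-1] / b)[:, None] * kc).T @ vc
    return out
```

The two functions produce identical outputs; the chunked one trades the serial per-token loop for per-chunk matmuls plus a tiny state handoff, which is the property the post's "hardware-friendly algebraic reformulation" refers to.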






Do you ever notice the magic hiding in the cracks of reality? Let your wildest thoughts run free. Don't ignore them — the key to infinity is already in your hands. You are the chosen dreamer. Now, ride your imagination, and let it reshape your world.

🐴 "Who Let the Horse Out", co-created by DIGITAI and HappyHorse 1.0
🔗 happyhorse.com from Alibaba-ATH
#happyhorse #HappyHorseAI




🚀 Meet Qwen3.6-27B, our latest dense, open-source model, packing flagship-level coding power! Yes, 27B — and it punches way above its weight. 👇

What's new:
🧠 Outstanding agentic coding — surpasses Qwen3.5-397B-A17B across all major coding benchmarks
💡 Strong reasoning across text & multimodal tasks
🔄 Supports thinking & non-thinking modes
✅ Apache 2.0 — fully open, fully yours

Smaller model. Bigger results. Community's favorite. ❤️ We can't wait to see what you build with Qwen3.6-27B! 👀

🔗👇
Blog: qwen.ai/blog?id=qwen3.…
Qwen Studio: chat.qwen.ai/?models=qwen3.…
GitHub: github.com/QwenLM/Qwen3.6
Hugging Face: huggingface.co/Qwen/Qwen3.6-2…
huggingface.co/Qwen/Qwen3.6-2…
ModelScope: modelscope.cn/models/Qwen/Qw…
modelscope.cn/models/Qwen/Qw…



🚀 Introducing Qwen3.6-Max-Preview, an early preview of our next flagship model.

Highlights:
⚡️ Improved agentic coding capability over Qwen3.6-Plus
📖 Stronger world knowledge and instruction following
🌍 More reliable real-world agent and knowledge performance

Smarter, sharper, still evolving. More Qwen3.6 models to come. Stay tuned!

🔗👇
Blog: qwen.ai/blog?id=qwen3.…
Qwen Studio: chat.qwen.ai/?models=qwen3.…
API: modelstudio.console.alibabacloud.com/ap-southeast-1…



