
The release of DeepSeek V3 has drawn the attention of the whole AI community to large-scale MoE models. Concurrently, we have been building Qwen2.5-Max, a large MoE LLM pretrained on massive data and post-trained with curated SFT and RLHF recipes. It achieves competitive performance against top-tier models, and outperforms DeepSeek V3 on benchmarks like Arena-Hard, LiveBench, LiveCodeBench, and GPQA-Diamond.

📖 Blog: qwenlm.github.io/blog/qwen2.5-m…
💬 Qwen Chat: chat.qwenlm.ai (choose Qwen2.5-Max as the model)
⚙️ API: alibabacloud.com/help/en/model-… (check the code snippet in the blog; a rough sketch is appended below)
💻 HF Demo: huggingface.co/spaces/Qwen/Qw…

Going forward, we will not only continue scaling pretraining, but also invest in scaling RL. We hope that Qwen can explore the unknown in the near future! 🔥

💗 Thank you for your support over the past year. See you next year!
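For reference, here is a minimal sketch of querying Qwen2.5-Max through the OpenAI-compatible API mentioned above. The base URL and model name below are assumptions based on Alibaba Cloud Model Studio conventions; verify them against the exact snippet in the blog.

```python
# Minimal sketch: calling Qwen2.5-Max via the OpenAI-compatible endpoint.
# base_url and model name are assumptions -- check the blog's code snippet
# for the authoritative values.
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # API key from Alibaba Cloud Model Studio
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

completion = client.chat.completions.create(
    model="qwen-max-2025-01-25",  # assumed model name for Qwen2.5-Max
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Which number is larger, 9.11 or 9.8?"},
    ],
)
print(completion.choices[0].message.content)
```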