
MK
@MeetsKhalid
Founder & CEO of @OnDemandAI

Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is exactly the open model ecosystem we love to support. Note: Cursor accesses Kimi-k2.5 via @FireworksAI_HQ's hosted RL and inference platform as part of an authorized commercial partnership.



@OpenAI Told you, chat. Told you.


Qwen releases the Qwen 3.5 Medium Model Series: Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B. Amazing benchmark evals for those sizes! "More intelligence, less compute." And I love their take: "a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts."


Introducing M2.5, an open-source frontier model designed for real-world productivity.
- SOTA performance at coding (SWE-Bench Verified 80.2%), search (BrowseComp 76.3%), agentic tool-calling (BFCL 76.8%) & office work.
- Optimized for efficient execution: 37% faster at complex tasks.
- At $1 per hour with 100 tps, infinite scaling of long-horizon agents is now economically possible.
MiniMax Agent: agent.minimax.io
API: platform.minimax.io
CodingPlan: platform.minimax.io/subscribe/codi…

🚨 Pony Alpha on Arena is GLM-5 from @Zai_org. Test: a simple probe with a PoC token will reveal which tokenizer it is. The Chinese phrase "锅内倒入植物油烧热" ("Pour vegetable oil into the pot and heat it") is a known tokenizer collision, or "glitch string," specific to the GLM-4 / GLM-5 tokenizer.
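The fingerprinting idea above can be sketched in a few lines: a probe string that segments differently under different vocabularies reveals which tokenizer a model uses. The toy vocabularies below are illustrative assumptions, not the real GLM or Qwen merge tables (those require downloading the actual tokenizer files).

```python
def tokenize_greedy(text, vocab):
    """Greedy longest-match segmentation over a set of known tokens."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(text[i])  # fall back to a single character
            i += 1
    return tokens

PROBE = "锅内倒入植物油烧热"

# Hypothetical vocabularies: tokenizer A merged the whole cooking phrase
# into one (glitch-prone) token; tokenizer B only knows shorter pieces.
VOCAB_A = {"锅内倒入植物油烧热"}
VOCAB_B = {"锅内", "倒入", "植物油", "烧热"}

# The segmentations differ, so an observed token count fingerprints the model:
print(tokenize_greedy(PROBE, VOCAB_A))  # one token under A
print(tokenize_greedy(PROBE, VOCAB_B))  # four tokens under B
```

In practice you would load each candidate tokenizer (e.g. via Hugging Face `AutoTokenizer`) and compare how the same probe string encodes; a phrase that collapses to a single rare token in only one vocabulary is a strong fingerprint.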

Our teams have been building with a 2.5x-faster version of Claude Opus 4.6. We’re now making it available as an early experiment via Claude Code and our API.







