
Kole Sam
@Kole_Sam
AI Enthusiast | Product Manager
Los Angeles, CA · Joined September 2010
604 Following · 244 Followers

Just implemented Google’s TurboQuant in MLX and the results are wild!
Needle-in-a-haystack using Qwen3.5-35B-A3B across 8.5K, 32.7K, and 64.2K context lengths:
→ 6/6 exact match at every quant level
→ TurboQuant 2.5-bit: 4.9x smaller KV cache
→ TurboQuant 3.5-bit: 3.8x smaller KV cache
The best part: zero accuracy loss compared to the full-precision KV cache.
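For intuition on where those ratios come from: a minimal sketch of generic group-wise KV-cache quantization (one scale/offset per group of values), NOT the actual TurboQuant algorithm, whose details aren't in this post. The `kv_compression_ratio` helper is an assumption about how to account for per-group metadata; with fp16 scale/offset per 32 values it gives ~4.6x at 2.5 bits and ~3.6x at 3.5 bits, the same ballpark as the 4.9x/3.8x figures above (the exact numbers depend on packing and metadata overhead).

```python
import numpy as np

def quantize_groups(x, bits, group_size=32):
    # Group-wise uniform quantization: one scale and offset per group.
    # Generic illustration only -- not the TurboQuant scheme itself.
    flat = x.reshape(-1, group_size).astype(np.float32)
    lo = flat.min(axis=1, keepdims=True)
    hi = flat.max(axis=1, keepdims=True)
    scale = (hi - lo) / (2**bits - 1)
    scale = np.where(scale == 0, 1.0, scale)      # guard constant groups
    q = np.clip(np.round((flat - lo) / scale), 0, 2**bits - 1)
    return q.astype(np.uint8), scale, lo

def dequantize_groups(q, scale, lo, shape):
    # Reconstruct approximate fp32 values from codes + per-group metadata.
    return (q.astype(np.float32) * scale + lo).reshape(shape)

def kv_compression_ratio(bits, group_size=32, base_bits=16, meta_bits_per_group=32):
    # fp16 baseline vs packed codes plus fp16 scale/offset per group.
    # Fractional bit-widths (2.5, 3.5) are treated as average bits/value.
    return base_bits * group_size / (bits * group_size + meta_bits_per_group)

rng = np.random.default_rng(0)
kv = rng.standard_normal((4, 128, 64)).astype(np.float32)  # (heads, seq, head_dim)
q, s, z = quantize_groups(kv, bits=3)
rec = dequantize_groups(q, s, z, kv.shape)
print("max abs error:", float(np.abs(rec - kv).max()))
print("2.5-bit ratio:", round(kv_compression_ratio(2.5), 2))
print("3.5-bit ratio:", round(kv_compression_ratio(3.5), 2))
```

Fractional widths like 2.5-bit typically come from mixing integer bit-widths or vector quantization; the helper just averages them for the memory estimate.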

Google Research@GoogleResearch
Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI


