

🧠 Can agent memory scale without losing reasoning?

🔥 We’re excited to share our latest work, SimpleMem, a principled memory framework for LLM agents built around semantic lossless compression.

📉 30× fewer inference tokens
📈 +26.4% avg. F1 (vs. Mem0)
⚡ 50.2% faster retrieval (vs. Mem0)

Instead of storing raw interaction history 🗂️ or relying on costly iterative reasoning loops 🔁, SimpleMem treats memory as a structured, evolving representation whose primary objective is 🎯 maximizing information density per token.

📄 Paper: arxiv.org/abs/2601.02553
🔗 Code: github.com/aiming-lab/Sim…
📦 Website: aiming-lab.github.io/SimpleMem-Page/

Nice work @JiaqiLiu835914, Yaofeng Su, @richardxp888, @lillianwei423, and a great collab. w/ @cihangxie, Zeyu Zheng, @dingmyu





