Zixi Chen

6 posts

Zixi Chen
@chenzixi23

Joined August 2025
11 Following · 7 Followers
Zixi Chen retweeted
Ke Yang @EmpathYang
Big PlugMem update 🧠 A plug-and-play memory module for LLM agents — turns raw trajectories into a knowledge graph your agent actually reasons over.
🎉 Accepted to ICML 2026
🔌 Drop it into OpenClaw 🦞, Claude Code, and other agent runtimes
🔍 Visualize memory · test retrieval · replay sessions
🥇 SOTA backbone on LongMemEval & HotpotQA — general enough to build on
Paper: arxiv.org/abs/2603.03296
Code: github.com/TIMAN-group/Pl…
#ICML2026 #LLM #Agents
5 replies · 15 reposts · 47 likes · 401.8K views
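The tweet above describes turning raw agent trajectories into a knowledge graph the agent can reason over. Here is a minimal sketch of that idea, assuming (this is not the real PlugMem API; class and method names are hypothetical) that trajectory steps have already been distilled into (subject, relation, object) triples and that retrieval is a simple lookup of an entity's outgoing edges.

```python
# Minimal sketch of a plug-in graph memory for an LLM agent.
# Hypothetical interface, not the actual PlugMem implementation.
from collections import defaultdict

class GraphMemory:
    def __init__(self):
        # adjacency list: subject -> list of (relation, object) edges
        self.edges = defaultdict(list)

    def ingest(self, triples):
        """Fold trajectory-derived triples into the knowledge graph."""
        for subj, rel, obj in triples:
            if (rel, obj) not in self.edges[subj]:  # de-duplicate facts
                self.edges[subj].append((rel, obj))

    def retrieve(self, entity):
        """Return stored facts about an entity as plain-text statements."""
        return [f"{entity} {rel} {obj}" for rel, obj in self.edges[entity]]

mem = GraphMemory()
mem.ingest([
    ("user", "prefers", "window seat"),
    ("user", "booked", "flight UA101"),
    ("flight UA101", "departs", "08:30"),
])
print(mem.retrieve("user"))
# ['user prefers window seat', 'user booked flight UA101']
```

A real system would extract triples with an LLM and rank retrieved facts by relevance to the current decision; the sketch only shows the graph-shaped storage and lookup.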
Zixi Chen retweeted
机器之心 JIQIZHIXIN @jiqizhixin
Could LLM agents finally gain truly adaptable, long-term memory? University of Illinois Urbana-Champaign, @Tsinghua_Uni, and Microsoft Research team up for a breakthrough!

They introduce PlugMem, a universal plugin memory module that attaches to any LLM agent. Instead of storing raw experiences, PlugMem organizes memories into a knowledge-centric graph, focusing on abstract, decision-relevant information for efficient recall and reasoning.

PlugMem consistently outperforms current task-agnostic memory systems and even specialized, task-specific designs across conversational QA, multi-hop knowledge retrieval, and complex web agent tasks.

PlugMem: A Task-Agnostic Plugin Memory Module for LLM Agents
Paper: arxiv.org/abs/2603.03296
Our report: mp.weixin.qq.com/s/KJP8te3fufBJ…
📬 #PapersAccepted by Jiqizhixin
2 replies · 4 reposts · 28 likes · 1.8K views
Zixi Chen retweeted
Microsoft Research @MSFTResearch
PlugMem transforms AI agents’ interaction histories into structured, reusable knowledge. It integrates with any agent, supports diverse tasks and memory types, and maximizes decision quality while significantly reducing memory token use: msft.it/6017Qc9vv
2 replies · 34 reposts · 39 likes · 9K views
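The Microsoft Research post above claims PlugMem significantly reduces memory token use. A toy comparison can illustrate why distilling history into knowledge saves tokens (the trajectory text, the facts, and the whitespace token proxy here are all illustrative assumptions, not the paper's data or tokenizer): replaying a raw trajectory into the prompt costs far more than injecting only the decision-relevant facts.

```python
# Toy illustration of memory token savings from distilled knowledge.
# Assumed example data and a crude whitespace tokenizer; not from the paper.
raw_trajectory = " ".join(
    f"step {i}: agent clicked link {i}, page loaded, nothing relevant found"
    for i in range(50)
)
distilled_facts = "checkout requires login; coupon field is on the payment page"

def count_tokens(text):
    # crude proxy for a real tokenizer: whitespace-separated tokens
    return len(text.split())

print(count_tokens(raw_trajectory))   # hundreds of tokens of raw history
print(count_tokens(distilled_facts))  # a handful of decision-relevant tokens
```

The point is structural, not numerical: raw histories grow linearly with trajectory length, while a knowledge store only grows with the number of distinct, reusable facts.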
Zixi Chen retweeted
Ke Yang @EmpathYang
📰New preprint: How can we build a task-agnostic plug-and-play memory module for LLM agents that supports multiple memory types?

We present PlugMem🔌🧠, a plugin memory module that works across tasks by turning heterogeneous experience into knowledge. Evaluated unchanged on long-term dialogue🗣️, multi-hop QA🕵️, and web agents🕸️🤖, PlugMem improves performance while using far fewer memory tokens.

📜Paper: empathyang.github.io/files/PlugMem.…
🔨Code: github.com/TIMAN-group/Pl…
13 replies · 64 reposts · 169 likes · 12.2K views
Zixi Chen retweeted
Ke Yang @EmpathYang
We’ve been building a task-agnostic memory module for LLM agents — PlugMem. While running experiments across long-horizon QA, multi-hop retrieval, and web agents, we found several unexpected patterns about how memory actually helps (or hurts) decision-making.

Code: github.com/TIMAN-group/Pl…

Work with an amazing team: @chenzixi23, @XuanHe21, @JizeJiang, the deep learning group @MSFTResearch, @dmguiuc, and @TIMANUIUC.

Thread ↓
7 replies · 6 reposts · 10 likes · 664 views