Jize Jiang

6 posts


@JizeJiang

CS PhD @ UIUC

Joined January 2014
72 Following · 24 Followers
Jize Jiang @JizeJiang
Big thanks to everyone on the team and our mentors 🌟 I’m thrilled that PlugMem has been accepted to ICML 2026. This is a big milestone for our work on memory for evolving agents. What excites me just as much is that we are turning PlugMem into something people can actually build with: a truly plug-and-play memory module that works across real agent runtimes and is interpretable through visualization interfaces. Making research accessible is part of pushing the frontier 🎆
Ke Yang @EmpathYang

Big PlugMem update 🧠 A plug-and-play memory module for LLM agents — turns raw trajectories into a knowledge graph your agent actually reasons over.
🎉 Accepted to ICML 2026
🔌 Drop it into OpenClaw 🦞, Claude Code, and other agent runtimes
🔍 Visualize memory · test retrieval · replay sessions
🥇 SOTA backbone on LongMemEval & HotpotQA — general enough to build on
Paper: arxiv.org/abs/2603.03296
Code: github.com/TIMAN-group/Pl…
#ICML2026 #LLM #Agents
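The "trajectories → knowledge graph" idea in the announcement above can be sketched in a few lines. Everything here (the `GraphMemory` class, the triple format, the example facts) is a hypothetical illustration of the general technique, not PlugMem's actual interface:

```python
from collections import defaultdict

# Hypothetical sketch: store facts extracted from agent trajectories
# as (subject, relation, object) triples, then retrieve the subgraph
# reachable from a query entity within a bounded number of hops.

class GraphMemory:
    def __init__(self):
        # subject -> list of (relation, object) edges
        self.edges = defaultdict(list)

    def ingest(self, triples):
        """Add extracted (subject, relation, object) triples to the graph."""
        for subj, rel, obj in triples:
            self.edges[subj].append((rel, obj))

    def retrieve(self, entity, max_hops=2):
        """Collect facts reachable from `entity` within `max_hops` hops."""
        facts, frontier = [], {entity}
        for _ in range(max_hops):
            next_frontier = set()
            for subj in frontier:
                for rel, obj in self.edges.get(subj, []):
                    facts.append((subj, rel, obj))
                    next_frontier.add(obj)
            frontier = next_frontier
        return facts

mem = GraphMemory()
mem.ingest([("user", "prefers", "window seat"),
            ("window seat", "available_on", "flight UA-102")])
print(mem.retrieve("user"))
```

The multi-hop retrieval is what lets an agent "reason over" memory rather than just look facts up: querying `"user"` surfaces not only the directly stored preference but also the second-hop fact connected through `"window seat"`.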

Jize Jiang retweeted
Microsoft Research @MSFTResearch
PlugMem transforms AI agents’ interaction histories into structured, reusable knowledge. It integrates with any agent, supports diverse tasks and memory types, and maximizes decision quality while significantly reducing memory token use: msft.it/6017Qc9vv
Jize Jiang retweeted
Ke Yang @EmpathYang
📰 New preprint: How can we build a task-agnostic plug-and-play memory module for LLM agents that supports multiple memory types?
We present PlugMem 🔌🧠, a plugin memory module that works across tasks by turning heterogeneous experience into knowledge. Evaluated unchanged on long-term dialogue 🗣️, multi-hop QA 🕵️, and web agents 🕸️🤖, PlugMem improves performance while using far fewer memory tokens.
📜 Paper: empathyang.github.io/files/PlugMem.…
🔨 Code: github.com/TIMAN-group/Pl…
Jize Jiang retweeted
Ke Yang @EmpathYang
We’ve been building a task-agnostic memory module for LLM agents — PlugMem. While running experiments across long-horizon QA, multi-hop retrieval, and web agents, we found several unexpected patterns about how memory actually helps (or hurts) decision-making.
Code: github.com/TIMAN-group/Pl…
Work with an amazing team: @chenzixi23, @XuanHe21, @JizeJiang, the deep learning group @MSFTResearch, @dmguiuc, and @TIMANUIUC.
Thread ↓
Jize Jiang retweeted
Mingyuan Wu ✈️ NeurIPS 2025 @MingyuanWu4
Can VLMs learn to reason better by drawing on the brilliant thoughts of others?
🔥 Our recent work on vision-language model reasoning, through carefully designed multimodal memory and retrieval, has been accepted to the Main Conference of #EMNLP2025.
💡 Inspired by case-based reasoning, we introduce Cache of Thought, a dynamic memory that stores high-quality answers and thoughts from master VLMs. This cache serves as guidance for apprentice VLMs, helping them generate stronger responses to similar multimodal queries.
🍕 Enjoy the best of accuracy and efficiency with a joint inference framework that integrates VLMs of different sizes.
📝 Paper: arxiv.org/abs/2502.20587
This work was created in long-term collaboration with amazing friends. We brainstormed the idea together, split the implementation, sought advice and feedback from our advisors, and spent late nights writing before the deadline. We like to call this style “research with friends.” We hope to keep it alive until the very end of our PhD journeys, so that we too can keep learning to reason better by drawing on the brilliant thoughts of others.
Jize Jiang @JizeJiang
Excited to introduce VTool-R1! We’ve trained VLMs to “think visually” using RL, blending Python-based 🖼️ visual edits with 💡 textual Chain-of-Thought reasoning. Our trained Qwen2.5-VL-32B surpasses GPT-4o on ChartQA & TableVQA, and even the compact Qwen2.5-VL-7B significantly narrows the gap.
💭 Web: vtool-r1.github.io
📝 Paper: arxiv.org/abs/2505.19255
🌟 Code: github.com/VTool-R1/VTool…
This is work done collaboratively with some amazing friends. Special thanks to @BowenJin13 for advice and @XingyuFu2 for data and tool sets. Thanks also to all the open-source infrastructure we built on. 🙏