🚀Excited to share our latest work:
LLMs entangle language and knowledge, making it hard to verify or update facts.
We introduce LMLM 🐑🧠: a new class of models that externalize factual knowledge into a database and learn during pretraining when and how to retrieve facts instead of memorizing them.
🧠Why LMLM?
• Learning to look up facts is easier than memorizing them
• Externalizing knowledge improves factual precision
• Enables instant machine unlearning by design
LMLM opens new directions for how future language models can manage and access knowledge.
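To make the core idea concrete, here is a minimal toy sketch of "retrieve instead of memorize": generation emits a structured lookup call that is answered from an external fact database, so the fact never has to live in the model weights. The `[LOOKUP entity|relation]` marker format, the `FACT_DB` store, and all names here are illustrative assumptions, not the paper's actual interface.

```python
import re

# Hypothetical external, editable knowledge store (assumption, not the
# paper's database schema).
FACT_DB = {
    ("Marie Curie", "birth_year"): "1867",
}

def resolve_lookups(text: str) -> str:
    """Replace [LOOKUP entity|relation] markers with values from FACT_DB."""
    def fill(match: re.Match) -> str:
        entity, relation = match.group(1), match.group(2)
        # Missing facts surface explicitly instead of being hallucinated.
        return FACT_DB.get((entity, relation), "[UNKNOWN]")
    return re.sub(r"\[LOOKUP ([^|\]]+)\|([^\]]+)\]", fill, text)

# A model trained to emit lookup calls would produce a draft like this:
draft = "Marie Curie was born in [LOOKUP Marie Curie|birth_year]."
print(resolve_lookups(draft))  # → Marie Curie was born in 1867.

# "Instant unlearning" by design: deleting the fact only touches the
# database, never the model weights.
del FACT_DB[("Marie Curie", "birth_year")]
print(resolve_lookups(draft))  # → Marie Curie was born in [UNKNOWN].
```

The point of the sketch is the separation of concerns: verifying or updating a fact is a database edit, while the model only needs to learn the (easier) skill of issuing the right lookup.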
📄 [ArXiv] arxiv.org/pdf/2505.15962
🌐 [Project Page] linxi-zhao.github.io/LMLM-site/
💻 [Code] github.com/kilian-group/L…
🎤 [Talk] simons.berkeley.edu/talks/kilian-w…
Huge thanks to my amazing collaborators:
@linxizhao4 @sofianzalouk Christian Belardi Justin Lovelace @JinPZhou
And to our incredible advisors @KilianQW, @yoavartzi, and @JenJSun for their generous support and insight.