MemOS
@MemOS_dev

207 posts
🧠 Memory Operating System for AI Agents 🌟 Our Repo: https://t.co/PYrFDnl2Eu

Agent's 🧠 Joined December 2025
26 Following · 264 Followers
Pinned Tweet
MemOS @MemOS_dev ·
🚀 MemOS Local Plugin 2.0 is LIVE — one memory engine, all Agents fully supported. @NousResearch's Hermes and @openclaw both run on the same core from now on. Adding another Agent later? It's a thin adapter, not a fork.

One serious issue kept coming back from users: "An Agent can finish tasks, but can we trust what it learned?"

MemOS Local Plugin 2.0 is our answer: execution as learning. Not just storing chats, but turning each task step into reusable memory.
MemOS @MemOS_dev ·
🙏 Huge thanks to @ModelScope2022 — MemPrivacy is now launched on ModelScope 👾

🪄 MemPrivacy is a lightweight privacy-preserving model, built for edge-cloud AI agents. Instead of full masking or no protection at all, it strikes a balance between readable data and privacy by using typed placeholders 💡

Your Agents stay smart, and your sensitive info never leaves your device 🔐

🎮 Welcome to plug-n-play
🤖 Models: modelscope.cn/collections/Me…
📄 Paper: modelscope.cn/papers/2605.09…
ModelScope@ModelScope2022

Introducing MemPrivacy from @MemOS_dev, an open-source privacy layer for end-cloud Agent workflows. 🚀

Sensitive content is replaced with typed placeholders locally before reaching the cloud. The cloud reasons over placeholders, not your actual data. Local mapping restores the real content on the way back.

🎯 F1 85.97% on MemPrivacy-Bench vs OpenAI privacy-filter at 35.50%
📊 System utility loss held to 0.71%~1.60% at full protection — irreversible masking loses 17%~42%
🔒 4-level privacy classification: credentials and API keys get maximum protection, low-risk preferences stay usable
⚡ Qwen3-based, 0.6B / 1.7B / 4B. SFT + GRPO training.
🤖 modelscope.cn/collections/Me…
📄 modelscope.cn/papers/2605.09…
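The mask-locally, reason-in-cloud, restore-locally loop can be sketched in a few lines. This is only an illustration of the typed-placeholder idea: the real MemPrivacy uses a trained model to find sensitive spans, not regexes, and the pattern set and placeholder names here are assumptions.

```python
import re

# Illustrative detectors; the real system's extractor is model-based.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\+?\d[\d -]{7,}\d"),
}

def mask(text: str):
    """Replace sensitive spans with typed placeholders; return the masked
    text plus the local mapping needed to restore the original later."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token, 1)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Swap placeholders back to real content on the way back from the cloud."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, local_map = mask("Email alice@example.com to confirm.")
# The cloud only ever sees `masked`; `local_map` never leaves the device.
print(masked)                      # Email <EMAIL_0> to confirm.
print(restore(masked, local_map))  # Email alice@example.com to confirm.
```

Because the placeholder keeps its type (`EMAIL`, `PHONE`), the cloud model can still reason about the sentence's structure, which is why utility loss stays small compared with irreversible "***" masking.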

MemOS @MemOS_dev ·
Benchmark results 📊

On MemPrivacy-Bench (200 users, 52K+ privacy items, bilingual):
- OpenAI privacy-filter: 35.50% F1
- GPT-5.2: 68.99% F1
- Gemini-3.1-Pro: 78.41% F1
- MemPrivacy-4B-RL: 85.97% F1
- MemPrivacy-0.6B-RL: 84.66% F1

System utility (GPT-4.1 across memory systems):
- Traditional masking: -26.67% / -41.87% / -16.99% accuracy
- MemPrivacy at PL2+PL3+PL4 full protection: -0.71% to -1.60%
- PL4-only: under -0.89% accuracy drop

Specialized small models outperform general LLMs on this task.
MemOS @MemOS_dev ·
A 0.6B model just beat GPT-5.2 at privacy protection. That's not the surprise. The surprise is: it doesn't make the cloud Agent dumber.

Today we're open-sourcing MemPrivacy — a privacy-preserving memory framework for cloud-edge Agents. It keeps long-term memory and personalization in the cloud, while keeping the data that identifies you on your device.

2 weeks ago, @openai's privacy-filter made it clear: memory privacy is now core infrastructure for next-gen Agents.

Paper, code, and models below ↓
MemOS @MemOS_dev ·
Most memory systems treat records like static notes. But user reality changes: prefs change, locations change, relationships change, and events happen at specific times. MemOS now adds temporal awareness to retrieval.

What it does
☞ Resolves old-vs-new conflicts by returning the currently valid state, while keeping historical versions for traceability
☞ Distinguishes "now" vs "before" in queries, e.g. "Where do I live now?" vs. "Where did I live last year?"
☞ Preserves events as independent facts, e.g. an SF trip and an NY trip stay separate, not collapsed into "likes travel"
☞ Prioritizes recent events when the query implies recency

How it works
- At write time, memory is classified into evolving state vs discrete event.
- At retrieval time, MemOS detects temporal cues in the query and selects/sorts by timeline relevance.
- Conflict resolution and memory reorganization run asynchronously in the background, so chat latency is not blocked.

Why it matters
• No extra temporal parameter is required in /search/memory; the system infers timeline intent directly from the query.
• On LME multi-session, the score improved from 65.41% to 75.18%.

Result: memory stops being a pile of notes and becomes a versioned, time-aware system of record for agents. Cloud service and open-source APIs both support this workflow NOW.

Memory retrieval should understand time, not just keywords 💡
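The "currently valid state with preserved history" behavior can be sketched as a versioned store keyed by validity time. This is a minimal sketch under stated assumptions: the data model, the year-granularity timestamps, and the `query(as_of=...)` interface are illustrative, not the MemOS API.

```python
from dataclasses import dataclass

@dataclass
class StateVersion:
    value: str
    valid_from: int  # year granularity, for simplicity

class StateMemory:
    """Keeps every version of an evolving fact and answers with the
    version valid at the asked-about time; history is never discarded."""
    def __init__(self):
        self.versions: list[StateVersion] = []

    def write(self, value: str, valid_from: int):
        self.versions.append(StateVersion(value, valid_from))
        self.versions.sort(key=lambda v: v.valid_from)

    def query(self, as_of: int) -> str:
        # Latest version whose validity started at or before `as_of`.
        valid = [v for v in self.versions if v.valid_from <= as_of]
        return valid[-1].value if valid else "unknown"

home = StateMemory()
home.write("New York", valid_from=2023)
home.write("San Francisco", valid_from=2025)

print(home.query(as_of=2026))  # "Where do I live now?"       -> San Francisco
print(home.query(as_of=2024))  # "Where did I live last year?" -> New York
```

The only extra machinery a temporal retriever needs on top of this is mapping cues like "now" and "last year" in the query to an `as_of` time before selecting a version.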
MemOS @MemOS_dev ·
🙏 Excited to share MemPrivacy is now live on @HuggingFace — Huge thanks to @AdinaYakup, and welcome to UPVOTE our work 👉🏻 huggingface.co/papers/2605.09…

🤗 MemPrivacy is a lightweight privacy-preserving model, built for edge-cloud AI agents. Instead of destroying context with blunt "***" masking, we use typed placeholders to protect sensitive data while keeping semantic structure fully intact. Your AI agents stay smart and private 🔐

📄 Paper: arxiv.org/abs/2605.09530
👾 Models: huggingface.co/collections/IA…
Adina Yakup@AdinaYakup

MemPrivacy ㊙️ a lightweight privacy-preserving model for edge-cloud AI agents from MemTensor. Instead of destroying context with masking, it preserves semantic structure using typed placeholders 👀
✨ 1.7B/4B - RL/SFT
✨ High-precision privacy extraction
✨ 4-level privacy taxonomy (PL1–PL4)
✨ Semantic-preserving typed placeholders

MemOS @MemOS_dev ·
If you already use BGE reranking, this is a near drop-in upgrade: almost no latency penalty, but much better memory answerability. Try it now👇🏻

🤗 @huggingface for MemReranker-4B: huggingface.co/IAAR-Shanghai/…
🧠 MemOS API for MemReranker-0.6B/4B: memos-docs.openmem.net/api_docs/core/…
✅ Get Started: memos-docs.openmem.net/self_developed…
📄 Tech Report: arxiv.org/abs/2605.06132

Memory quality gaps are often not in recall. They're in reranking.
MemOS @MemOS_dev ·
We just ran the experiments. Here are some highlights 👇🏻

📊 LOCOMO (Memory Retrieval)
0.6B matches GPT-4o-mini (0.7150 vs 0.7151) — at 1/100th the cost.

📊 LongMemEval (Long-term Memory)
4B beats Gemini-3-Flash by +7.8 MAP pts (0.8043 vs 0.7259).

⚡ Latency
0.6B ≈ BGE-Reranker (247ms vs 241ms). A near drop-in upgrade.
MemOS @MemOS_dev ·
Launching MemReranker for agent long-term memory 🛬

Long-term memory is where many agents still break. Retrieval is fine; reranking is the bottleneck 🫙 You can recall 100 candidates, rerank the Top-10, and still fail to answer. Why? In memory systems, "semantically similar" ≠ "actually answerable".

NOW the open-source 4B is available on Hugging Face, and the 0.6B can be accessed via the MemOS API.
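The recall → rerank pipeline the reranker slots into looks roughly like this. A hedged sketch: the real MemReranker is a trained cross-encoder that scores whether a memory can answer the query; the keyword-overlap `answerability` stand-in below, and the example memories, are assumptions for illustration only.

```python
def recall(query: str, memories: list[str], k: int = 100) -> list[str]:
    """Stage 1: cheap recall — here, any memory sharing a word with the query."""
    q = set(query.lower().split())
    return [m for m in memories if q & set(m.lower().split())][:k]

def answerability(query: str, memory: str) -> float:
    """Stage 2 stand-in score: fraction of query words the memory covers.
    A real reranker judges whether the memory can *answer* the query,
    not merely whether it is semantically similar."""
    q = set(query.lower().split())
    return len(q & set(memory.lower().split())) / len(q)

def rerank(query: str, candidates: list[str], top: int = 10) -> list[str]:
    return sorted(candidates, key=lambda m: answerability(query, m),
                  reverse=True)[:top]

memories = [
    "user likes travel",              # similar topic, but not answerable
    "the user lives in san francisco" # actually answers the query
]
query = "where does the user live"
candidates = recall(query, memories)
print(rerank(query, candidates)[0])  # the user lives in san francisco
```

Swapping in a different reranker only changes `answerability`; the surrounding recall and Top-k plumbing stays the same, which is why a stronger reranker is a near drop-in upgrade.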
MemOS @MemOS_dev ·
Before, memory systems often flattened everything together:
🫨 Old + new preferences conflicted
🥴 "now" and "last year" questions could return similar recalls
🙅🏻 Multiple events got merged into vague summaries

MemOS v2.0.15 changes this. Agent memory is now time-aware, not just "better recall".

What it does
• Distinguishes state memory (can change) vs event memory (happened once)
• Keeps version history for changing facts, but returns the currently valid answer by default
• Understands temporal intent in queries ("now", "last year", "recently") and retrieves accordingly
• Preserves separate events instead of collapsing them into one abstract label

Why it matters
• Production Agents don't fail because they "forgot everything."
• They fail when they can't tell current truth from historical truth.
• Time-aware memory makes responses more stable, auditable, and aligned with real user timelines.

How it works
• At write time, MemOS classifies memory into state-like vs event-like forms.
• At retrieval time, it detects temporal cues in the query and selects the right version, then ranks by time relevance.
• Conflict resolution and memory updates run asynchronously in the background, so conversation latency stays low.

From storing memory to managing memory validity over time.
Website: memos.openmem.net
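The write-time split between state memory and event memory can be sketched as a simple router. The cue list and keyword heuristic below are assumptions for illustration; MemOS's actual classifier is learned, not rule-based.

```python
# Phrases that suggest an evolving state rather than a one-off event
# (illustrative only — a real classifier is model-based).
STATE_CUES = ("lives in", "works at", "prefers", "is using")

def classify(memory: str) -> str:
    """Route a new memory: evolving state vs discrete event."""
    return "state" if any(cue in memory.lower() for cue in STATE_CUES) else "event"

store = {"state": [], "event": []}
for m in ["User lives in Berlin",
          "User visited SF in March",
          "User prefers dark mode"]:
    store[classify(m)].append(m)

print(store["state"])  # states get versioned; only the latest is "current"
print(store["event"])  # events stay as separate, independent facts
```

Once routed, states go through versioning and conflict resolution, while events are stored as-is, which is what keeps an SF trip from being merged into a vague "likes travel" summary.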
MemOS @MemOS_dev ·
Most AI Agents don't actually learn from your tasks — they just remember chat logs. MemOS Local Plugin 2.0 ships with 6 things that change this for Hermes and OpenClaw:

☞ Each task step gets captured, scored, and turned into a reusable artifact — not transient context
☞ Dual feedback loops: environment outcomes per step + your verdict on the whole task
☞ 4-layer memory: traces → experience → domain cognition → Skills (with reliability + lifecycle)
☞ 3-tier retrieval: Skills for the skeleton, Traces for edge cases, Domain cognition for planning
☞ Memory travels across Agents — teach Hermes today, use it in OpenClaw tomorrow
☞ Full Viewer: every trace, score, experience, and skill is point-and-clickable

Open source & free 🆓
npm: @memtensor/memos-local-plugin
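The "execution as learning" loop above can be sketched as: record each step with its environment outcome, then let the user's verdict on the whole task decide which steps get promoted toward Skills. The field names, scores, and promotion threshold here are assumptions for illustration, not the plugin's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Trace:
    action: str
    outcome: str   # per-step environment feedback (first feedback loop)
    score: float   # reliability signal for this step

@dataclass
class TaskMemory:
    traces: list[Trace] = field(default_factory=list)
    skills: list[str] = field(default_factory=list)

    def record(self, action: str, outcome: str, score: float):
        """Capture a task step as a scored artifact, not transient context."""
        self.traces.append(Trace(action, outcome, score))

    def finish(self, user_verdict: float, threshold: float = 0.8):
        """Second feedback loop: the user's verdict on the whole task.
        Reliable, user-approved steps are promoted toward Skills."""
        if user_verdict >= threshold:
            self.skills.extend(t.action for t in self.traces
                               if t.score >= threshold)

mem = TaskMemory()
mem.record("open repo", "ok", 0.9)
mem.record("guess API name", "error", 0.2)
mem.record("read docs first", "ok", 0.95)
mem.finish(user_verdict=0.9)

print(mem.skills)  # only reliable steps survive as reusable Skills
```

Because failed steps stay in `traces` with low scores, they remain available as edge-case evidence for retrieval even though they never become Skills.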