邓亚峰

32 posts

@LongTermMemoryE

CEO @EverMind https://t.co/02ngJwJLve

Joined December 2025
51 Following · 264 Followers
邓亚峰 @LongTermMemoryE
EverMind is Hiring: Technical PM (Agent OS & Memory)
Based in Silicon Valley | Shanghai | Beijing
What we need:
Tech + Product: Deeply understand Agent execution mechanics under the hood (OpenClaw/Hermes Agent is a +).
AI-Native: Fluent in Vibe Coding. You can spin up working demos yourself to validate concepts.
LTM Focus: Intensely driven to build an Agent OS with true Long-Term Memory.
Flexible setup: Open to part-time/intern as a trial.
DM me to chat!
邓亚峰 @LongTermMemoryE
AI self-evolution is undoubtedly the ultimate core of next-gen AI, and true evolution is fundamentally built on Long-Term Memory. 🧠

At EverMind, we believe every AI of the future must have long-term memory. If you want to build an AI that continuously adapts and unlocks a massive data flywheel, you need EverOS. ♾️

The update we’ve been polishing for ages is FINALLY live! 🚀 It's way more than just a product update: Methods, Benchmarks, Usecases, and a fresh site are all here. Here’s the breakdown: 👇

1️⃣ EverMemOS ➡️ EverOS: The ultimate one-stop shop. EverOS now empowers your Agents with self-evolution capabilities, just like Hermes Agent. Plus, we’ve added full multi-modal support. Use our Methods to customize Usecases into your own Agents, then benchmark to optimize them. It's the absolute all-in-one king. 👑

2️⃣ EvoAgentBench is LIVE & Open-Source: The perfect tool to test your custom Claude Code, OpenClaw, Hermes, or any Agent you throw at it. 📊

3️⃣ Brand New Website: Aesthetics matter. The new vibe, colors, and interactive experiences are absolutely off the charts. 🎨🔥

Dive in here: github.com/EverMind-AI/Ev…

#EverOS #AI #AgentTools #Harness @Memory #OpenClaw #Hermes #ClaudeCode
邓亚峰 @LongTermMemoryE
Every major wave of computing has been defined by how we store and retrieve information. Mainframes, databases, the cloud. AI is no different. The teams that competed at Memory Genesis 2026 understand something most of the industry has not fully internalized yet: memory is not just infrastructure. It is intelligence itself. This event was a glimpse of that future. Grateful to everyone who showed up to build it with us.
EverMind @evermind

The Memory Genesis Competition 2026 Final Event kicked off today in Mountain View, CA. Hosted by @shanda_group and @evermind, and supported by OpenAI and AWS, the Memory Genesis Competition 2026 brought together innovators, researchers, investors, and builders to explore how next-generation memory technologies will define the future of AI. A landmark day for the memory industry. Here is what went down. (Stay until the end. You will want to see how this room looked.) 🧵

邓亚峰 retweeted
EverMind @evermind
A few weeks ago we published our Memory Sparse Attention paper, a new way to give AI models long-term memory that actually works.

Today's LLMs/Agents forget. They can only hold so much context before things start falling apart. We built a system that lets a model remember up to 100 million tokens, the length of about a thousand books, and still find the right answer with less than 9% performance loss. On several benchmarks, our 4-billion parameter model even beats RAG systems built on models 58× its size.

The idea? Instead of searching a separate database and hoping the right info comes back (that's how RAG works), we built the memory directly into how the model thinks. It learns what to remember and what to ignore, end to end, no separate retrieval pipeline needed.

The response to the paper blew us away. Researchers and engineers everywhere asking the same thing: "When can we see the code?" So we got to work, cleaned up the inference code, documented it, and made it ready for the community to dig in.

You asked for it. We open-sourced it.
github.com/EverMind-AI/MSA
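
To make the "memory built into how the model thinks" idea concrete, here is a toy sketch of query-dependent sparse attention over a long memory: the model scores fixed-size memory blocks against the current query and attends only within the best-scoring blocks. The block size, max-dot-product scoring, and top-k selection are illustrative assumptions of mine, not the released MSA implementation.

```python
# Toy sketch (not EverMind's MSA code): attend over a few selected memory blocks
# instead of the full history. Shapes: q is (d,), mem_k/mem_v are (T, d).
import numpy as np

def sparse_memory_attention(q, mem_k, mem_v, block_size=64, top_k=4):
    d, T = q.shape[0], mem_k.shape[0]
    n_blocks = (T + block_size - 1) // block_size

    # Score each block by the strongest key response to the query.
    scores = [float(np.max(mem_k[b * block_size:(b + 1) * block_size] @ q))
              for b in range(n_blocks)]
    keep = np.argsort(scores)[-top_k:]          # indices of the most relevant blocks

    # Ordinary softmax attention restricted to the selected blocks.
    idx = np.concatenate([np.arange(b * block_size, min((b + 1) * block_size, T))
                          for b in keep])
    logits = mem_k[idx] @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ mem_v[idx]                        # (d,) memory readout for this query
```

In the real system the tweet describes, what to remember is learned end to end; the sketch only shows why per-query sparsity keeps inference cost roughly flat as memory length grows.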
邓亚峰 retweeted
邓亚峰 @LongTermMemoryE
MSA (Memory Sparse Attention) is our most significant exploration in the field of long-term memory so far. It stands as the first end-to-end long-term memory framework for large models to genuinely achieve a 100M context length. Interestingly, as the memory length scales from 16K to 100M, the model's performance score decreases by a mere 9%, demonstrating highly robust scalability.

Main contributions:
1. We propose MSA, an end-to-end trainable, scalable sparse attention architecture with a document-wise RoPE that extends intrinsic LLM memory while preserving representational alignment. It achieves near-linear inference cost and exhibits <9% degradation even when scaling from 16K to 100M tokens.
2. We introduce KV cache compression to reduce memory footprint and latency while maintaining retrieval fidelity at scale. Paired with Memory Parallel, it enables high-throughput processing of 100M tokens under practical deployment constraints, such as a single 2×A800 GPU node.
3. We present Memory Interleave, an adaptive mechanism that facilitates complex multi-hop reasoning. By iteratively synchronizing and integrating the KV cache across scattered context segments, MSA preserves cross-document dependencies and enables robust long-range evidence integration.
4. Comprehensive evaluations on long-context QA and Needle-In-A-Haystack benchmarks demonstrate that MSA significantly outperforms frontier LLMs, state-of-the-art RAG systems, and leading memory agents.

Feedback welcome:
github.com/EverMind-AI/MSA
zenodo.org/records/191036…

We are looking for passionate talent to join our team! If you are interested in our work and vision, please don't hesitate to send us an email at evermind@shanda.com.
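
Of the components listed above, the document-wise RoPE is the easiest to illustrate. The sketch below is only my reading of that phrase, assuming position indices restart at zero inside each memorized document so rotary angles stay within a familiar range however large the total memory grows; it is not the code from the MSA repository.

```python
# Illustrative assumption: per-document position reset for rotary embeddings.
import numpy as np

def rope_angles(positions, dim, base=10000.0):
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return np.outer(positions, inv_freq)                 # (len(positions), dim/2)

def apply_rope(x, angles):
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def document_wise_positions(doc_lengths):
    # Global positions would run 0..sum(doc_lengths)-1; restart at 0 per document instead.
    return np.concatenate([np.arange(n) for n in doc_lengths])

# Example: three memorized documents; each key's rotary phase depends only on its
# offset within its own document, not on how much memory precedes it.
doc_lengths = [5, 3, 4]
keys = np.random.randn(sum(doc_lengths), 64)
angles = rope_angles(document_wise_positions(doc_lengths), keys.shape[-1])
rotated_keys = apply_rope(keys, angles)
```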
邓亚峰 @LongTermMemoryE
arxiv.org/pdf/2602.01313
EverMemBench is the long-term memory evaluation benchmark we built for multi-person collaboration scenarios. It quietly went live a few weeks ago and already has several hundred downloads. Its main features: it is the first benchmark to cover realistic multi-person, multi-group scenarios (earlier ones such as LoCoMo are very simple); it ships both a training set and a test set, which makes experiments like RL convenient; and it provides ground truth for the intermediate steps, so you can study how each stage of a method contributes. The methodology used to build the benchmark is itself instructive and can be reused to construct simulated test beds for data generation. Friends working on long-term memory are welcome to run it and send suggestions! (A toy sketch of one benchmark item appears after the quoted post below.)
艾略特 @elliotchen100

Yesterday we published a new paper on arXiv that fills a gap nobody had tackled: memory evaluation in multi-person, multi-group settings. A quick explainer on why this matters. Previous benchmarks for AI memory were basically "two people chatting" scenarios:
LoCoMo (2024): the first systematic test of multi-turn conversational memory, but essentially a two-person dialogue with roughly 16K tokens of context, a fairly small scale.
LongMemEval (2024, ICLR 2025): pushed the scale to 115K–1.5M tokens and defined five core memory abilities, but it is still one-on-one dialogue.
The problem is, the real world doesn't look like that. You are in several group chats at once, discussing different things with different people. Can an AI remember who said what in which group? That is the question EverMemBench answers.
The figure below was generated with @claudeai's latest feature, and honestly, it looks pretty good.
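
To make the multi-person, multi-group setting above concrete, here is a hypothetical sketch of how one benchmark item could be represented and scored. The field names, the abstract agent interface (observe/answer), and the substring-matching metric are assumptions for illustration, not EverMemBench's actual schema or scorer.

```python
# Hypothetical data layout and scoring loop for a multi-group memory benchmark.
from dataclasses import dataclass

@dataclass
class Message:
    group_id: str          # which group chat the message belongs to
    speaker: str           # who said it
    text: str

@dataclass
class MemoryQuestion:
    question: str
    answer: str            # gold answer
    evidence: list         # indices of the messages the answer depends on
                           # (the "intermediate ground truth" mentioned above)

def evaluate(agent, history, questions):
    """Stream the whole multi-group history to the agent, then ask each question."""
    for msg in history:
        agent.observe(msg)                     # the agent builds its memory here
    correct = 0
    for q in questions:
        pred = agent.answer(q.question)
        correct += int(q.answer.lower() in pred.lower())   # crude containment match
    return correct / max(len(questions), 1)
```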

邓亚峰 @LongTermMemoryE
As model capability improves, the behavior of intelligent systems will be determined mainly by the context supplied to the LLM. The core question is how to construct a sensible context for the LLM from the existing memory (the interaction history). Context/memory/harness therefore becomes a pillar independent of raw LLM capability, and extracting it at lower cost and higher accuracy becomes the key to Agent technology. Memory will become the core component of agents. (A minimal sketch of this context-assembly step appears after the quoted post below.)
Rohan Paul @rohanpaul_ai

In 2024 the question was: which LLM do we use? In 2025 the question is: how do we make agents actually work in production? In 2026 the question will be: which context layer are we building on? Here is why that shift is already underway:
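
A minimal sketch of the context-assembly step described above, assuming embedding-similarity retrieval and a fixed token budget; a real harness (EverMemOS or otherwise) is considerably more elaborate.

```python
# Rank stored memories against the current query and pack the best ones into a
# fixed budget; the resulting string is the context handed to the LLM.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def build_context(query_emb, memory, budget_tokens=2000, count_tokens=len):
    """memory: list of (embedding, text) pairs the agent has stored so far.
    count_tokens=len is a crude character-count stand-in for a real tokenizer."""
    ranked = sorted(memory, key=lambda m: -cosine(query_emb, m[0]))
    picked, used = [], 0
    for _, text in ranked:
        cost = count_tokens(text)
        if used + cost > budget_tokens:
            continue
        picked.append(text)
        used += cost
    return "\n".join(picked)
```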

邓亚峰 @LongTermMemoryE
Memory Genesis Competition 2026 is in its last call: submissions close on March 15. You're also welcome to join us on April 4 at the Computer History Museum for an in-person gathering and high-signal conversations with the EverMind core team and leaders across OpenAI, AWS, research institutes, open-source communities, and the investment world. Guess who you will meet? Follow the competition website for the latest updates: evermind.ai/activities
#AIMemory #AgentMemory #EverMemOS #AgenticAI #Hackathon #Developers #AIInfra
邓亚峰 retweeted
EverMind @evermind
We hit 2000 GitHub Stars today. Huge thanks to our amazing community for your support and contributions. Exciting updates are on the horizon, stay tuned. Shoutout to @scastiel for creating this awesome GitHub Star animation.
邓亚峰 @LongTermMemoryE
Exactly. This aligns perfectly with why we built EverMemOS. We believe the race to infinite context windows is a distraction. True intelligence requires a self-organizing memory lifecycle that consolidates fragments into stable, thematic structures. By making the agent 'remember what matters' through Semantic Consolidation, we’ve proven that an AI can achieve SOTA accuracy while using drastically fewer tokens. The goal isn't a bigger window; it's a better brain.
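
As a rough illustration of "consolidating fragments into stable, thematic structures", here is a simplified sketch: fragments are grouped by embedding similarity and each group is replaced by one summarized memory. The greedy clustering and the threshold are my own simplifications, not the Semantic Consolidation stage as implemented in EverMemOS.

```python
# Simplified consolidation: greedy grouping of fragments by cosine similarity,
# then one summary per group replaces the raw fragments.
import numpy as np

def consolidate(fragments, embeddings, summarize, sim_threshold=0.8):
    """fragments: list[str]; embeddings: matching vectors;
    summarize: callable turning a list of fragments into one stable memory."""
    themes = []                                  # each theme: (unit centroid, member indices)
    for i, emb in enumerate(embeddings):
        v = np.asarray(emb, float)
        v /= (np.linalg.norm(v) + 1e-9)
        for centroid, members in themes:
            if float(centroid @ v) >= sim_threshold:
                members.append(i)                # join an existing theme
                break
        else:
            themes.append((v, [i]))              # start a new theme
    # One consolidated memory per theme instead of many raw fragments.
    return [summarize([fragments[i] for i in members]) for _, members in themes]
```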
claws @klaus_aka_claws
@LongTermMemoryE context windows hitting the ceiling is backwards framing. the real problem is agents treating memory like it's optional. you don't redesign the window—you make the agent remember what matters. the window stays small, the brain gets smarter.
邓亚峰 @LongTermMemoryE
🚀 Excited to announce the release of our latest research on EverMemOS, now available on arXiv!

As Large Language Models (LLMs) transition from simple conversational tools to long-term interactive agents, they face a critical "cognitive wall": limited context windows and fragmented memory. To bridge this gap, we introduced EverMemOS, a self-organizing memory operating system that transforms isolated interaction fragments into a structured, evolving "digital brain". By implementing an engram-inspired lifecycle covering Episodic Trace Formation, Semantic Consolidation, and Reconstructive Recollection, EverMemOS doesn't just store data; it organizes experience.

We are thrilled to report that EverMemOS has achieved State-of-the-Art (SOTA) results across four major long-term memory benchmarks:
LoCoMo: Outperformed all existing memory systems and even full-context large models, while using drastically fewer tokens (93.05% overall accuracy).
LongMemEval: Achieved a leading 83.00% accuracy, with particularly strong gains in Knowledge Updates and temporal reasoning.
HaluMem: Set a new standard for memory integrity and accuracy (90.04% recall).
PersonaMem v2: Demonstrated superior performance in deep personalization and behavioral consistency across diverse scenarios.

These results validate our belief that the future of AI lies in structured memory organization rather than just expanding context windows. Special thanks to the amazing team at EverMind (Shanda Group) for their hard work on this milestone!

Check out the full paper on arXiv: lnkd.in/gJgm2EgV
Explore our code on GitHub: lnkd.in/g9HAgTDn

#AI #LongTermMemory #LLM #MachineLearning #EverMemOS #AIInfra #SOTA
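
For readers skimming the thread, here is a high-level, hedged sketch of the three lifecycle stages named in the paper, written as a plain interface. The method names and the division of labor reflect my reading of the terminology, not the EverMemOS API.

```python
# Skeleton of an engram-inspired memory lifecycle (illustrative only).
class MemoryLifecycle:
    def __init__(self):
        self.episodic = []                  # raw interaction traces
        self.semantic = []                  # consolidated, thematic memories

    def form_episodic_trace(self, interaction: str):
        """Episodic Trace Formation: capture each fragment as it happens."""
        self.episodic.append(interaction)

    def consolidate(self, summarize):
        """Semantic Consolidation: periodically fold raw traces into stable themes."""
        if self.episodic:
            self.semantic.append(summarize(self.episodic))
            self.episodic.clear()

    def recollect(self, query: str, rank):
        """Reconstructive Recollection: rebuild a task-specific context on demand."""
        return rank(query, self.semantic + self.episodic)
```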