EverMind

118 posts


@evermind

GitHub: https://t.co/yUIvUnLEUD Discord: https://t.co/DgVdDw3x6B Memory Sparse Attention: https://t.co/h5pIQhgh2j

United States · Joined November 2025
10 Following · 1.5K Followers
EverMind retweeted
Yafeng (Jason) Deng @LongTermMemoryE
MSA (Memory Sparse Attention) represents a significant step in our exploration of long-term memory. It is the first end-to-end long-term memory framework for large models to genuinely achieve a 100M context length. Interestingly, as the memory length scales from 16K to 100M, the model's performance score decreases by a mere 9%, demonstrating highly robust scalability.
Main contributions:
1. We propose MSA, an end-to-end trainable, scalable sparse attention architecture with a document-wise RoPE that extends intrinsic LLM memory while preserving representational alignment. It achieves near-linear inference cost and exhibits <9% degradation even when scaling from 16K to 100M tokens.
2. We introduce KV cache compression to reduce memory footprint and latency while maintaining retrieval fidelity at scale. Paired with Memory Parallel, it enables high-throughput processing of 100M tokens under practical deployment constraints, such as a single 2×A800 GPU node.
3. We present Memory Interleave, an adaptive mechanism that facilitates complex multi-hop reasoning. By iteratively synchronizing and integrating the KV cache across scattered context segments, MSA preserves cross-document dependencies and enables robust long-range evidence integration.
4. Comprehensive evaluations on long-context QA and Needle-In-A-Haystack benchmarks demonstrate that MSA significantly outperforms frontier LLMs, state-of-the-art RAG systems, and leading memory agents.
Feedback welcome: github.com/EverMind-AI/MSA zenodo.org/records/191036…
We are looking for passionate talents to join our team! If you are interested in our work and vision, please don't hesitate to email us at evermind@shanda.com.
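The "near-linear inference cost" claim above rests on a general block-sparse attention idea: instead of attending over the full KV cache, score coarse key blocks and attend only within the top-k. The toy sketch below illustrates that generic technique with NumPy; it is not EverMind's MSA implementation, and the function name and block-scoring heuristic are assumptions for illustration only.

```python
import numpy as np

def sparse_block_attention(q, K, V, block=4, topk=2):
    """Toy block-sparse attention: pick the top-k key blocks whose
    mean key best matches the query, then attend only inside them.
    Illustrative sketch only, not the MSA implementation."""
    n, d = K.shape
    nb = n // block
    Kb = K[:nb * block].reshape(nb, block, d)
    Vb = V[:nb * block].reshape(nb, block, d)
    # score each block by the dot product of its mean key with the query
    block_scores = Kb.mean(axis=1) @ q
    sel = np.argsort(block_scores)[-topk:]   # indices of the top-k blocks
    Ks = Kb[sel].reshape(-1, d)              # gather selected keys
    Vs = Vb[sel].reshape(-1, d)
    logits = Ks @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())        # numerically stable softmax
    w /= w.sum()
    return w @ Vs                            # weighted sum of selected values

rng = np.random.default_rng(0)
q = rng.standard_normal(8)
K = rng.standard_normal((16, 8))
V = rng.standard_normal((16, 8))
out = sparse_block_attention(q, K, V)
print(out.shape)  # (8,)
```

Because only `topk * block` of the `n` cached tokens enter the softmax, cost grows with the number of selected blocks rather than the full cache length, which is the intuition behind scaling a memory to very long contexts.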
EverMind @evermind
Today we release MSA (Memory Sparse Attention), an end-to-end memory model framework that scales to 100M tokens. Current long-term memory approaches force a tradeoff between scalability, precision, and efficiency. MSA sidesteps this by embedding memory directly into the attention mechanism through scalable sparse attention, document-wise RoPE for temporal and provenance awareness, and a Memory Interleaving mechanism for multi-hop reasoning across scattered contexts.
Key results:
· <9% precision degradation from 16K to 100M tokens
· 4B-parameter MSA outperforms 235B-class RAG systems on long-context benchmarks
· 100M-token inference on 2× A800 GPUs
zenodo.org/records/191036…
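"Document-wise RoPE" suggests restarting rotary position indices at each document boundary so every segment stays inside a position range the model handles well. The sketch below shows that generic idea with standard RoPE rotation; the function names and the exact indexing scheme are assumptions for illustration, not the published MSA code.

```python
import numpy as np

def doc_wise_positions(doc_lengths):
    """Toy document-wise position indexing: restart the position
    counter at each document boundary. Hypothetical sketch only."""
    return np.concatenate([np.arange(n) for n in doc_lengths])

def rope_rotate(x, pos, base=10000.0):
    """Apply standard RoPE rotation to vectors x at integer positions pos."""
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) / half)   # per-pair rotation frequencies
    ang = pos[:, None] * freqs[None, :]         # (seq, half) angles
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * np.cos(ang) - x2 * np.sin(ang),
                           x1 * np.sin(ang) + x2 * np.cos(ang)], axis=-1)

pos = doc_wise_positions([3, 2, 4])             # three documents of lengths 3, 2, 4
print(pos)  # [0 1 2 0 1 0 1 2 3]
x = np.ones((len(pos), 8))
y = rope_rotate(x, pos)
```

Under this indexing, tokens at the same offset within different documents receive identical rotations, which is one plausible way to keep representations aligned as more documents are appended to memory.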
EverMind @evermind
We made it even better: we just published everos-openclaw-plugin v1.0.0 on npm. Full-lifecycle memory management for @openclaw 3.8+ via EverMemOS. Not just session-end dumps, but turn-by-turn memory extraction, query-aware context assembly, and smart boundary detection. Install it in one line: npm i everos-openclaw-plugin npmjs.com/package/@everm
EverMind @evermind

Discover how our Agent Memory and Plugin integrate with @openclaw.🦞 github.com/EverMind-AI/Ev…

EverMind @evermind
70+ submissions just dropped for the Memory Genesis Competition 2026. We've seen memory agents, wild plugins, and someone literally shipped hardware. HARDWARE. You people are unhinged (complimentary). We're going through every single project. Stay tuned. evermind.ai/activities
EverMind @evermind
We really like the name Ever. That's the tweet. (ok fine — it's the heart of EverMind and we think it's pretty great)
EverMind @evermind
We're at GTC #GTC25. Brochures loaded, conversations ready. If you're building agents that actually remember, come find us. Let's talk.
EverMind @evermind
We just released a new arXiv paper, "Evaluating Long-Horizon Memory for Multi-Party Collaborative Dialogues". It introduces EverMemBench, a long-horizon memory benchmark for multi-party collaboration built entirely on Comet. Check it out 🥸 arxiv.org/pdf/2602.01313
EverMind @evermind
In the next version of our Cloud Platform, we're adding:
• Agent Memory
• Skill Management
• Multimodal support
In other words, your Agents can now learn how to learn.
EverMind @evermind
Our crew is gonna be there. Can't wait to swap ideas on Memory and Voice AI; this kind of event is exactly what pushes devs to build bolder things.
RTE Dev Community @rtedevcommunity

🔥 Physical AI Day! Bay Area Devs, see you on March 19! Happening during GTC week, join RTE Dev Community & @TenFramework for a full day of brainstorming and building, with two hardcore events seamlessly hosted at the same venue:
🌅 9:30 AM | Meetup: Conversing with the Physical World. Join industry leaders from @AgoraIO @MiniMax_AI @evermind @RiseLink_X, HumanTouch, and Resonance Ventures @karal127 as we dive deep into the opportunities and future of Multi-Modal & Edge AI.
🛠 1:30 PM | Workshop: Hands-on Voice AI Hardware. Build and deploy a voice AI Agent using the TEN Framework. Here's the best part 👉 We're providing 40 Agora R1 dev kits on-site. Successfully run your code, and you get to take the hardware home for FREE!
💡 Note: Morning and afternoon sessions require separate registrations. Spots are highly limited, so act fast! Links in the thread 👇
#PhysicalAI #EdgeAI #VoiceAI #GTC2026

EverMind @evermind
The crew is heading to GTC #NVIDIAGTC 2026 next week, soaking in the latest in AI, software, and hardware, catching a few events, and talking to devs face to face. Getting the pulse on what everyone's building and needs. Come find us in the valley, we'd love to chat about everything.
EverMind @evermind
We like toolify 👀
EverMind @evermind
Introducing another speaker at the Memory Genesis Competition 2026. Meet Zhe Tang, Senior Solutions Architect at AWS. She'll be sharing her perspective on a powerful idea: "Memory makes us human; AI Memory makes humans superhuman." Come see her live. Register now: luma.com/n88icl03
EverMind @evermind
New speaker unveiled: @LydiaShang888. Founder, Astra X Ventures | Stanford GSB | Podcaster. Her take: "Intelligence computes, but memory connects. To give AI a soul, we must move beyond the reset." Spots are still open, with plenty of time to register and complete your project. luma.com/n88icl03
Void Freud @voidfreud
What if memory is the foundation of digital existence? Memory is the AI. Models are just replaceable vessels. This idea excites me a lot, and it’s why I love what the EverMemOS / EverMind-AI (@evermind) team is building. I genuinely consider their work revolutionary 🙇‍♂️