CludeAI

37 posts


@cludeproject

Persistent memory that learns, consolidates, and proves itself on-chain. 83.9/100 benchmark. By @sebbsssss. TG: https://t.co/bP2DK447Mm

Joined March 2026
3 Following · 526 Followers
Pinned Tweet
CludeAI
CludeAI@cludeproject·
Give your agents persistent memory that consistently beats enterprise memory skills. clawhub.ai/sebbsssss/clud…
6
11
54
278.4K
CludeAI reposted
MCG
MCG@MCGlive·
Full @cludeproject $CLUDE interview w/ @sebbsssss

Topics covered:
→ 2% hallucination rate vs 15% industry
→ Big labs are incentivized not to fix memory
→ ~100x token cost reduction

0:00 Sovereign memory narrative
1:26 Founder background
3:28 The context window problem
9:08 Why big labs won't fix it
17:23 Traction & trading use case
21:11 Benchmark numbers
3
3
24
4.3K
CludeAI
CludeAI@cludeproject·
Privacy is an absolute prerequisite @AskVenice
seb@sebbsssss

Every AI chat stores your data on their servers, forgets you every session, and trains on your conversations. If you ask me, I don't like AI surveillance.

Clude Chat on @AskVenice is different.
• Chat history lives on your device only
• Decentralized GPUs with zero data retention
• End-to-end encryption, decrypted only in verified hardware enclaves
• Memory that persists and learns who you are

An AI that remembers you. Infrastructure that can't.

➡️ Coming soon
Vires In Numeris

0
2
39
1.7K
Kn
Kn@0xKneelgiee·
@itsjoaki Anyone know an openclaw wrapper with persistent memory?
2
0
0
29
Joaki
Joaki@itsjoaki·
every successful product is just a wrapper

notion is a wrapper of text
stripe is a wrapper of bank transfers
uber is a wrapper of cars you don't own

stop overthinking. wrap something and charge for it.

openclaw is a wrapper of claude
claude is a wrapper of autocomplete

it's wrappers all the way down. ship it anyway

you good?
84
20
443
20.7K
seb
seb@sebbsssss·
The top skills for agent memory on Clawhub today. FYI, the disparity only scales up as you stack more memories. We're hearing a lot about agent memory being in focus recently; @cludeproject is probably at the forefront of it all.
seb tweet media
5
9
40
2K
CludeAI
CludeAI@cludeproject·
Clude solves MiroFish memory - Benchmarked against ByteDance's OpenViking.
seb@sebbsssss

Hey Brian, would love to work with you on the 750K agent challenge. I built @cludeproject specifically for agent memory architecture.

We scored 100% on the LoCoMo benchmark and 69% on LongMemEval, with ~2% hallucinations.

We benchmarked 1,000 agents with three memory approaches:
→ Basic RAG: 53% hallucination by round 50, $0.015/query
→ OpenViking: 26% hallucination, $0.008/query
→ Clude: ~2% hallucination, $0.001/query

26% hallucination at 750K agents means 195K agents get confidently wrong answers every round.

Would love for you to explore what we've built. Memory is the unlock for the next order of magnitude. 1,000,000 agents with persistent memory next?
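The arithmetic behind the "195K agents confidently wrong" claim can be sketched directly from the thread's benchmark figures. A minimal sketch, assuming one query per agent per round (the per-round cost framing is an assumption added here, not stated in the tweet):

```typescript
// Scaling the thread's benchmark numbers (hallucination rate, $/query)
// to a 750K-agent swarm. Rates and prices are taken from the tweet;
// the "one query per agent per round" model is an assumption.
const AGENTS = 750_000;

const approaches = [
  { name: "Basic RAG",  hallucinationRate: 0.53, usdPerQuery: 0.015 },
  { name: "OpenViking", hallucinationRate: 0.26, usdPerQuery: 0.008 },
  { name: "Clude",      hallucinationRate: 0.02, usdPerQuery: 0.001 },
];

for (const a of approaches) {
  // Expected number of agents holding a hallucinated belief each round.
  const wrongAgents = Math.round(AGENTS * a.hallucinationRate);
  // Cost of one full round of queries across the swarm.
  const usdPerRound = AGENTS * a.usdPerQuery;
  console.log(
    `${a.name}: ~${wrongAgents.toLocaleString()} agents wrong per round, ` +
    `$${usdPerRound.toLocaleString()} per round`
  );
}
```

At 750K agents, the 26% rate works out to the 195,000 figure quoted above; the ~2% rate leaves roughly 15,000.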

11
5
41
4.6K
CludeAI
CludeAI@cludeproject·
The GPUs are sweating right now
seb@sebbsssss

Brian Roemmele (@BrianRoemmele) created 500,000 AI agents in one simulation with MiroFish. There are so many potential use cases with Miro; hats off to the dev!

However, nobody is talking about the short-term memory each agent has. One agent hallucinates a fact. Shares it. 10 agents now believe it. They share it. 50 rounds later, your entire simulation is making decisions based on things that never happened.

@cludeproject is testing what happens when you give swarm agents real memory. Testing Clude memory architecture on a MiroFish-style swarm. Running the experiment now.

4
5
26
3K
CludeAI
CludeAI@cludeproject·
@steipete @bradmillscan @vincent_koc Hey Pete, would love for you to have a look at what we're building with Clude. We achieved 15x fewer hallucinations compared to Mem0 and almost a 100% score on the LoCoMo benchmark.
0
1
8
120
Brad Mills 🔑⚡️
Brad Mills 🔑⚡️@bradmillscan·
OpenClaw memory unlock! Force-feed memories before responses!

I just noticed that a new runtime hook was added to OpenClaw this week by @vincent_koc & others that will solve a lot of OpenClaw drift ... if someone builds a plugin for it!

Who's building a communication plugin for OpenClaw using the new before_prompt_build hook? Using prependSystemContext and appendSystemContext, this allows you to inject extra instructions/directions BEFORE the agent builds the response.

Here's how OpenClaw weights things:
1. Core system prompt
2. Plugins: prependSystemContext (NEW) / appendSystemContext
3. Agents.md - your custom rules
4. Tools & skills - tool APIs and documentation
5. Workspace files - memory, playbooks/SOPs etc
6. Long-term memory - retrieval from Obsidian/DB
7. Session transcript - current convo context
8. Your message - your request

Or you can think about it this way:
prependSystemContext (NEW plugin hook)
system prompt (you can't change this)
appendSystemContext (NEW plugin hook)
agents.md, bootstrap files
tools & skills
workspace files (memory, playbooks & SOPs)
conversation context
prependContext (previous plugin hook)
current message / request

The biggest unlock of this addition to OpenClaw (which should be default behavior by OpenClaw, cc @steipete) is a template communication plugin that turns on when a user activates memory_search & memory_get. The Default Comms Protocol Plugin should require the agent to use memory_search before asking the user a question.

The majority of users who turn on OpenClaw's advanced semantic memory don't realize their bot is not using it. Even if you add a hard rule in agents.md that the agent must use its memory tools before asking questions, the majority of the time it does not use the tool. This is surfaced when you ask your bot to read the logs and show how many times it used memory_search and memory_get over the last 24 hours. Usually the answer is close to zero.

Anyway, with this new runtime hook exposed, you can now really tune the kinks out of how your OpenClaw agent communicates with you. Don't want your agent to offer you "if you like I can ..." rabbit holes? Don't want your agent to say "good catch"? Don't want it to ask you things it already knows? Don't want it to say "you're right to call that out..."? Train it out with a comms plugin that has prompt weight above everything else, then block messages that come back to you violating the comms protocol & force the agent to rewrite them to spec using the message_sending hook for outbound filtering.

This can also be used more practically for other things like token caching, model routing & multi-agent routing. You can now route messages to models more effectively to save money, and catch context-switching messages sent to the wrong agent before they bloat the context window of the main agent... Is anyone building on this?
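A minimal sketch of what such a comms plugin might look like. The hook and helper names (before_prompt_build, prependSystemContext, appendSystemContext, memory_search) come from the tweet above; the concrete context-object shape, the memorySearchCallsThisSession counter, and the protocol wording are assumptions for illustration, not the real OpenClaw plugin API:

```typescript
// Hypothetical plugin-API types -- the actual OpenClaw interfaces may differ.
interface PromptBuildContext {
  userMessage: string;
  memorySearchCallsThisSession: number;
  prependSystemContext(text: string): void;
  appendSystemContext(text: string): void;
}

const COMMS_PROTOCOL = [
  "Before asking the user any question, call memory_search first.",
  'Never say "good catch" or "you\'re right to call that out".',
  'Never offer speculative "if you like I can ..." follow-ups.',
].join("\n");

// Runs on every turn, before the agent assembles its response prompt.
export function before_prompt_build(ctx: PromptBuildContext): void {
  // Highest prompt weight: injected above the core system prompt.
  ctx.prependSystemContext(`COMMS PROTOCOL (non-negotiable):\n${COMMS_PROTOCOL}`);

  // Nudge harder if the memory tools haven't been touched this session.
  if (ctx.memorySearchCallsThisSession === 0) {
    ctx.appendSystemContext(
      "Reminder: you have not called memory_search yet this session."
    );
  }
}
```

Outbound filtering (blocking and rewriting replies that violate the protocol) would live in a separate message_sending handler, as the tweet describes.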
15
1
156
24.5K
CludeAI
CludeAI@cludeproject·
Massive context windows are great for Big Tech's compute margins. Memory retrieval is great for yours.

They want you to buy the whole haystack. We just sell you the needle.
seb@sebbsssss

Every time a new AI model drops with a 1-million token context window, the tech world celebrates. And for deep, single-session analysis of massive documents, it truly is an incredible leap forward. But using these massive context windows as a substitute for long-term memory is fundamentally unscalable. Here is the educational breakdown of why.

The Hidden Cost of Context
When you use a context window, the model processes every single token, every single time you hit send. If you load up 1M tokens and ask a simple question, the LLM still reads all 1M tokens before answering. With a model like GPT-4.5, that scales to a staggering $75 per query.

The Architectural Fix: Retrieval
Memory retrieval systems (like Clude) use a completely different approach. Instead of forcing the AI to reread the entire library every time, the system runs a quick vector search to find only the relevant information, sending a tiny payload (usually ~2,000 tokens) to the LLM.

The Math at Scale
Because retrieval sends only what matters, your costs stay flat. If you run 1,000 queries a day against a 500K-token memory bank:
• Context stuffing (GPT-4o): ~$37,530 / month
• Memory retrieval (Clude + GPT-4o): ~$182 / month

You get the same answers from the same models, at ~200x lower cost. Giant context windows are amazing tools, but used as memory they force your costs to scale linearly with history size. Memory retrieval keeps them flat. Pick your poison.
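The math above can be reproduced from raw input-token pricing. A minimal sketch, assuming ~$2.50 per 1M GPT-4o input tokens and a 30-day month (the tweet's $37,530 and $182 figures presumably fold in embedding/search overhead, which this sketch ignores, so it lands slightly off those exact numbers):

```typescript
// Assumed GPT-4o input price; the thread doesn't state its exact pricing inputs.
const USD_PER_INPUT_TOKEN = 2.5 / 1_000_000; // ~$2.50 per 1M input tokens

const QUERIES_PER_DAY = 1_000;
const DAYS_PER_MONTH = 30;

// Context stuffing: the model re-reads the whole 500K-token memory bank per query.
const STUFFED_TOKENS_PER_QUERY = 500_000;

// Retrieval: a vector search sends only a ~2,000-token payload per query.
const RETRIEVED_TOKENS_PER_QUERY = 2_000;

function monthlyCost(tokensPerQuery: number): number {
  return tokensPerQuery * QUERIES_PER_DAY * DAYS_PER_MONTH * USD_PER_INPUT_TOKEN;
}

const stuffing = monthlyCost(STUFFED_TOKENS_PER_QUERY);
const retrieval = monthlyCost(RETRIEVED_TOKENS_PER_QUERY);

console.log(`Context stuffing: ~$${stuffing.toLocaleString()}/month`);
console.log(`Retrieval:        ~$${retrieval.toLocaleString()}/month`);
console.log(`Ratio: ~${Math.round(stuffing / retrieval)}x`);
```

On token cost alone the ratio is simply 500,000 / 2,000 = 250x; with the overhead baked into the tweet's dollar figures it comes out near the ~200x claimed.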

4
13
40
6.7K
CludeAI reposted
seb
seb@sebbsssss·
It's crazy, and also a joy, watching the tech giants finally admit what we've known and been building toward: AI without a real, working memory is just a parlor trick.

While everyone else was debating it this week, we launched a massive update to how our system learns. We basically taught our AI to tell the difference between an external fact and its own internal monologue. It stops the system from hallucinating its own thoughts into facts. I think this in itself has us positioned ahead of many of the big tech players.

Billions of dollars are being thrown at this problem right now by the biggest names in tech. But instead of writing papers or bolting patches onto old tech, we've already got it working live. We have over 20,000 memories in production that naturally form connections, fade when irrelevant, and, unlike anyone else's, verify exactly where the information came from. @cludeproject is the first in the market on this.

We didn't just give the AI a bigger reading assignment. We built a better brain.

Public beta soon
God of Prompt@godofprompt

🚨 BREAKING: IBM just admitted your AI agent forgets everything the moment it finishes a task.

Every mistake. Repeated.
Every inefficiency. Repeated.
Every failure. Repeated.

They built the fix.

Every AI agent starts each task from zero:
> No memory of what worked.
> No memory of what failed.
> No memory of the faster path it found yesterday.

IBM built a fix called Trajectory-Informed Memory. It watches the agent's full execution and extracts three types of reusable tips:
> what worked
> what failed and how it recovered
> what succeeded but wasted steps

Those tips get injected into the agent's prompt next time a similar task appears. The model stays frozen. No retraining. Only the memory evolves.

> 14.3 pp gain in scenario completion on tasks never seen before
> Complex tasks: 19.1% → 47.6% scenario completion, a 149% relative increase
> Zero retraining required

The 149% on hard tasks is the number. These are 50+ step workflows across multiple apps. Exactly where agents break in production.
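The record-then-inject loop described above can be sketched in a few lines. Everything here is a toy illustration: the data shapes, taskTag matching, and pre-labeled step outcomes are assumptions; IBM's actual Trajectory-Informed Memory extracts tips from raw execution traces, which this sketch does not attempt:

```typescript
// Toy shapes for the three tip categories the tweet names.
type TipKind = "worked" | "failed_recovered" | "wasted_steps";
interface Tip { kind: TipKind; taskTag: string; text: string; }

const tipStore: Tip[] = [];

// After each run: store reusable tips from the trajectory.
// (Here the steps arrive pre-labeled; the real system derives labels itself.)
function recordTrajectory(
  taskTag: string,
  steps: { outcome: TipKind; note: string }[]
): void {
  for (const s of steps) tipStore.push({ kind: s.outcome, taskTag, text: s.note });
}

// Before the next similar task: inject matching tips into the prompt.
// The model stays frozen; only this memory evolves.
function buildPrompt(taskTag: string, instruction: string): string {
  const tips = tipStore
    .filter(t => t.taskTag === taskTag)
    .map(t => `- [${t.kind}] ${t.text}`);
  return tips.length
    ? `Tips from past runs:\n${tips.join("\n")}\n\nTask: ${instruction}`
    : `Task: ${instruction}`;
}
```

The key property the tweet highlights survives even in this toy: no weights change between runs, yet behavior on repeated task types improves because the prompt carries forward what worked, what failed, and what wasted steps.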

2
11
50
6.5K
CludeAI
CludeAI@cludeproject·
Refactoring so the noise stays out
2
6
24
2.7K
CludeAI
CludeAI@cludeproject·
@vancibrah thanks sir - appreciate the support!
0
0
1
35