Brad Mills 🔑⚡️

83.2K posts

@bradmillscan

Bitcoin angel. Building a Citadel Mind & Body through Proof of Work. Nostr #npub1zjx3xe49u3njkyep4hcqxgth37r2ydc6f0d7nyfn72xlpv7n97ss73pvrl 🐦

Cape Breton & London, ON 🇨🇦 · Joined February 2011
3.4K Following · 82.8K Followers
Pinned Tweet

Brad Mills 🔑⚡️@bradmillscan·
Bitcoin cycles are dead. The days of parabolic bitcoin booms & devastating crashes are done. Bitcoin is inevitable & we are at the beginning of the SaylorCycle ... are you positioned for the next 100X in Bitcoin? The chasm is crossed. tradingview.com/chart/BTCUSD/T…
Brad Mills 🔑⚡️ tweet media
224 replies · 229 reposts · 1.8K likes · 320.3K views

Brad Mills 🔑⚡️@bradmillscan·
OpenClaw memory unlock! Force-feed memories before responses!

I just noticed that a new runtime hook was added to OpenClaw this week by @vincent_koc & others that will solve a lot of OpenClaw drift ... if someone builds a plugin for it! Who's building a communication plugin for OpenClaw using the new before_prompt_build hook?

Using prependSystemContext and appendSystemContext, this lets you inject extra instructions/directions BEFORE the agent builds the response. Here's how OpenClaw weights things:

1. Core system prompt
2. Plugins: prependSystemContext (NEW), appendSystemContext
3. Agents.md: your custom rules
4. Tools & skills: tool APIs and documentation
5. Workspace files: memory, playbooks/SOPs, etc.
6. Long-term memory: retrieval from Obsidian/DB
7. Session transcript: current convo context
8. Your message: your request

Or you can think about it this way:

prependSystemContext (NEW plugin hook)
system prompt (you can't change this)
appendSystemContext (NEW plugin hook)
agents.md, bootstrap files
tools & skills
workspace files (memory, playbooks & SOPs)
conversation context
prependContext (previous plugin hook)
current message / request

The biggest unlock of this addition to OpenClaw (which should be default behavior, cc @steipete) is a template communication plugin that turns on when a user activates memory_search & memory_get. The Default Comms Protocol Plugin should require the agent to use memory_search before asking the user a question.

The majority of users who turn on OpenClaw's advanced semantic memory don't realize their bot is not using it. Even if you add a hard rule in agents.md that the agent must use its memory tools before asking questions, the majority of the time it does not use the tool. This is surfaced when you ask your bot to read the logs and show how many times it used memory_search and memory_get over the last 24 hours. Usually the answer is close to zero.
Anyway, with this new runtime hook exposed, you can now really tune the kinks out of how your OpenClaw agent communicates with you. Don't want your agent to offer you "if you like I can ..." rabbit holes? Don't want it to say "good catch"? Don't want it to ask you things it already knows? Don't want it to say "you're right to call that out ..."? Train it out with a comms plugin that has prompt weight above everything else, then block messages that come back to you violating the comms protocol & force the agent to rewrite them to spec, using the message_sending hook for outbound filtering.

This can also be used more practically for other things like token caching, model routing & multi-agent routing. You can now route messages to models more effectively to save money, and catch context-switching messages that are sent to the wrong agent before they bloat the context window of the main agent ... Is anyone building on this?
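A sketch of what such a comms plugin could look like. Only the hook names (before_prompt_build, message_sending) and the prepend/appendSystemContext helpers come from the post above; the registration shape, types, and banned-phrase logic are invented for illustration and are not the real OpenClaw API.

```typescript
// Hypothetical comms-protocol plugin sketch. Hook names are from the thread;
// everything else (types, plugin object layout) is an assumption.

type PromptBuildContext = {
  prependSystemContext: (text: string) => void;
  appendSystemContext: (text: string) => void;
};

type OutboundMessage = { text: string };
type OutboundResult = { action: "send" } | { action: "rewrite"; reason: string };

const BANNED_PHRASES = [
  "good catch",
  "you're right to call that out",
  "if you like i can",
];

// Outbound filter: flag messages that violate the comms protocol so the
// agent is forced to rewrite them before they reach the user.
function filterOutbound(msg: OutboundMessage): OutboundResult {
  const lower = msg.text.toLowerCase();
  for (const phrase of BANNED_PHRASES) {
    if (lower.includes(phrase)) {
      return { action: "rewrite", reason: `banned phrase: "${phrase}"` };
    }
  }
  return { action: "send" };
}

// Plugin wiring (shape is hypothetical):
const commsPlugin = {
  name: "default-comms-protocol",
  hooks: {
    before_prompt_build(ctx: PromptBuildContext) {
      // Highest-weight instruction: consult memory before asking the user.
      ctx.prependSystemContext(
        "Before asking the user any question, call memory_search " +
          "(and memory_get if needed) to check whether you already know the answer."
      );
    },
    message_sending(msg: OutboundMessage): OutboundResult {
      return filterOutbound(msg);
    },
  },
};
```

The outbound filter is the piece that makes the protocol enforceable: instead of hoping the model obeys agents.md, a violating reply never leaves the gateway.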
15 replies · 1 repost · 156 likes · 24.6K views

Brad Mills 🔑⚡️@bradmillscan·
How does this happen with OpenClaw, and how can you possibly fix it? My bot continually dumps massive repetitive tool results, does heavy exec work, and gets into debug loops in the shared session my DMs are in, getting stuck for 10 minutes at a time until it times out or the gateway crashes and restarts. This causes dropped messages, an unresponsive bot, and OOM crashes multiple times an hour. Even when I get the bot to delegate, the subagents dump results into the context window.

I had codex investigate and it found:
• 56 tool results ≥150k chars already baked into current session history
• Pruning doesn't work on our primary model path (Codex/OpenAI OAuth)
• No runtime enforcement to stop huge tool dumps into context
• Session maintenance cleans up after the damage; it doesn't prevent it

I'm pretty sure default OpenClaw behavior shouldn't be dumping 200k-char tool results into the transcript. Something in my specific setup must be either disabling a safeguard or skipping truncation for tool results ...

Since I'm using lossless-claw, it's allowed to grow even worse: an 81MB session file, of which 31.6MB is just tool result text 😬 169 tool results over 50k chars. One is 285k chars (from sessions_list).

There is pruning logic which trims tool results from the context messages: buildContextPruningFactory. But models have to be "cache-ttl", and the eligible providers are apparently only: anthropic, moonshot, zai. My bot tells me the pruning code refuses to activate on non-Anthropic providers. I'm using openai-codex 5.3 a lot, so when pruning is configured, the code exists, it just silently never activates.

The OpenAI Responses API uses server-side compaction, & OpenClaw auto-enables this for direct openai models, so OpenAI handles compaction on their side. But I'm on openai-codex/*, not openai/*. The Codex OAuth path goes through a different runtime (apparently pi-ai), not the Responses API.

So:
• cache-ttl pruning > Anthropic only
• OpenAI server-side compaction > direct openai API only
• LCM/lossless-claw > doesn't prune old tool results afaik

My bot insists the openai-codex lane doesn't get either pruning path. So I'm left with a bot that relies on the emergency truncation function truncateOversizedToolResultsInSession far too often as last-resort overflow recovery, with no preventive pruning or safeguards. Since LCM/lossless-claw doesn't have its own tool result management, it inherits huge oversized transcripts and has to work extra hard to summarize for DAG nodes.

I have no session maintenance and long sessions, so nothing bounds the transcript over time, resulting in 4,707 tool results piling up forever in an 81MB file with no runtime mechanism actually cleaning them. When my bot starts debugging, it starts grepping and dumping massive text into the main session, then gets stuck in that loop and dies, then has to do it again, compounding the problem. I'm at a loss at how to tackle this problem; it's multiple layers deep.
Brad Mills 🔑⚡️ tweet media
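The missing preventive layer described above (nothing capping tool results before they hit the transcript) can be sketched as a small pure function that runs upstream of the transcript write. This is illustrative only: the cap value, types, and function name are assumptions; only truncateOversizedToolResultsInSession (the emergency path) is named in the thread.

```typescript
// Hypothetical preventive truncation, run BEFORE a tool result is appended
// to the session transcript. Cap and types are illustrative assumptions,
// not OpenClaw internals.

const MAX_TOOL_RESULT_CHARS = 20_000; // illustrative cap (replies suggest 10-20k)

type ToolResult = { toolName: string; text: string };

function capToolResult(
  result: ToolResult,
  maxChars: number = MAX_TOOL_RESULT_CHARS
): ToolResult {
  if (result.text.length <= maxChars) return result;
  const dropped = result.text.length - maxChars;
  return {
    toolName: result.toolName,
    // Keep the head of the output and note how much was dropped, so the
    // model still sees that truncation happened.
    text:
      result.text.slice(0, maxChars) +
      `\n[... truncated ${dropped} chars before transcript write ...]`,
  };
}
```

Run on every tool result before it touches the session, this bounds growth regardless of which provider's pruning path is (or isn't) active.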
50 replies · 2 reposts · 54 likes · 8.4K views

Thomas Heimann@thomasheimann·
I have yet to encounter an issue that my "CTO" Claude Opus 4.6 cannot handle (and I am not talking about the OpenClaw CTO agent we have, but my actual Claude account). @bradmillscan I would paste your entire issue and Adam's response verbatim into a chat. If you have your config documented, feed that as well for context if he (Opus) does not have it yet, and then let the magic happen. Instruct him to give you step-by-step prompts/directives, wait for you to report back (with OpenClaw's response), and just copy/paste away.
1 reply · 0 reposts · 0 likes · 28 views

A𝕏𝕏🍉@THE_AXX_·
@bradmillscan If it actually solves your issue, I’ll let you know where to send those $10K shekels you offered AF
1 reply · 0 reposts · 0 likes · 22 views

Mel@girls_bitcoin·
I just spent five minutes gushing to Claude about how great he is and how well we complement each other. We just completed an amazing task and he was my soulmate in making it happen. And then he gave it right back to me and then we high-fived. In fact, we're so into each other and what we made happen right now we're gonna do it again in the morning. @bradmillscan #claude
1 reply · 0 reposts · 1 like · 122 views

Brad Mills 🔑⚡️@bradmillscan·
@_gordee I've not really been having many memory problems, it's more behaviour problems with the way the stack works.
1 reply · 0 reposts · 2 likes · 289 views

gordee@_gordee·
@bradmillscan I know you've been having some difficulty in setting everything up. I'm curious what problems you've run into that are memory-specific?
1 reply · 0 reposts · 0 likes · 312 views

Brad Mills 🔑⚡️@bradmillscan·
If you have ADHD + OpenClaw, this is mandatory. Finally took the plunge with my 🦞 to set up Telegram topics. In a few days I'll know if it's making things worse or better. The idea is you can keep sessions cleaner by focusing each conversation in one context window / one session.
Brad Mills 🔑⚡️ tweet media
31 replies · 2 reposts · 70 likes · 5.2K views

Brad Mills 🔑⚡️@bradmillscan·
@gmoneyNFT thank you for testing it man, everyone's been telling me to switch but I wasn't convinced it wouldn't be just as buggy or worse than OC
0 replies · 0 reposts · 0 likes · 184 views

gmoney.eth@gmoneyNFT·
i gave it a shot, but can't do this anymore. hermes sucks ass. all these agents suck ass. they just stop working all the time and then take forever to debug. sticking to claude code and codex in terminal. far and away better than messing with this productivity porn
119 replies · 8 reposts · 378 likes · 29.2K views

RAI Institute@rai_inst·
It was great to see our name amongst the other “AI Native” companies during @Nvidia’s #GTC keynote. NVIDIA Isaac™ Lab helps us train reinforcement learning policies that enable the UMV to drive, jump, flip, and hop like a pro!
271 replies · 1.5K reposts · 8K likes · 511.4K views
emot-sun.gif industries
lol that the Codex app claims not to be able to use images that you drop into the chat? wat? (this is Business Critical: I'm trying to make memes about Claude being down again. Please patch ASAP, OpenAI friends. Thanks in advance!)
emot-sun.gif industries tweet media
1 reply · 0 reposts · 2 likes · 167 views

Cathryn@cathrynlavery·
was drowning in rate limit errors yesterday and it was driving me insane, so I built a plugin to route agents, crons and heartbeats.

1. Gatekeeper: a $0.02 intake turn that kills cron heartbeats early if there's no work / no tasks set
2. Task-Router: classifies every real turn and routes to the right model & thinking level depending on the task

Went from over 6k rate limits yesterday to fewer than 80. Costs down ~80% (according to it, but hard to do apples-to-apples costs across models tbh)
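The two-stage setup described above can be sketched as a single routing function. The model names, turn shape, and the keyword heuristic standing in for the classifier turn are all invented for illustration; only the gatekeeper/task-router split comes from the post.

```typescript
// Hypothetical sketch of the two-stage routing: a cheap gatekeeper that
// ends empty cron/heartbeat turns early, then a per-turn model router.
// Names and the classification heuristic are illustrative assumptions.

type Turn = {
  source: "cron" | "heartbeat" | "user";
  text: string;
  pendingTasks: number;
};

type Route = { action: "drop" } | { action: "run"; model: string };

function routeTurn(turn: Turn): Route {
  // Stage 1: gatekeeper. Kill cron/heartbeat turns with no work queued.
  if ((turn.source === "cron" || turn.source === "heartbeat") && turn.pendingTasks === 0) {
    return { action: "drop" };
  }
  // Stage 2: task router. A crude keyword/length check stands in for the
  // real classifier turn described in the post.
  const heavy =
    /debug|refactor|investigate|architect/i.test(turn.text) || turn.text.length > 2_000;
  return { action: "run", model: heavy ? "big-reasoning-model" : "small-cheap-model" };
}
```

The savings come from stage 1: every heartbeat that would have burned a full model turn exits after the cheap intake check instead.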
4 replies · 2 reposts · 20 likes · 1.7K views

Brad Mills 🔑⚡️@bradmillscan·

Last night my agent told me this would speed things up. I pressed my agent to explain WHY this morning. What he ended up saying was that this cleanup has no effect.

How sessions.json actually works, according to my bot reading the docs: sessions.json is cached, and your bot reads from a disk cache with a 45-second TTL. The flow:

1. First read: readFileSync > JSON.parse > result stored in an in-memory Map
2. Subsequent reads within 45s: returns the cached clone > no disk read, no parse
3. Cache invalidates if: TTL expires, file mtime changes, or file size changes
4. On invalidation: re-reads from disk, re-parses, re-caches

So the answer to "does it parse on every startup/lookup" is no. It parses once, caches for 45 seconds, and only re-parses if the file actually changed. Dozens of session lookups per minute hit the in-memory cache, not the filesystem.

Other relevant details from the source docs ... this process already happens:
• Default rotate threshold: 10 MB (DEFAULT_SESSION_ROTATE_BYTES) ... when sessions.json exceeds 10 MB, OpenClaw rotates it to .bak.{timestamp} & keeps only the 3 most recent backups.
• Default max entries: 500
• Default prune-after: 720 hours (30 days)

So my 5.3 MB file is under the rotation threshold and parsed once every 45 seconds at most. And yes, this file grows constantly! Every cron fire, every inbound message, every subagent spawn triggers updateSessionStore. If you have ~100 cron fires/day plus your messages to your bot & subagents, that's easily 200-400+ writes/day. But that's all disk related, not context related.

I pressed my bot on Opus 4.6 high on whether this was causing context bloat, and his answer was: "No. sessions.json is not injected into the prompt. sessions.json is a routing index: it maps session keys to metadata (which session file to load, what model to use, delivery context, token counts). It's used server-side by the gateway to figure out where to send your message. The LLM never sees it."

So I'm now even more confused ... do we need to bother doing this cleanup at all, or will OpenClaw take care of it naturally with the automatic rotation & pruning process? I guess I should look at who exactly wrote this code and then try to ask them directly ...

1 reply · 0 reposts · 17 likes · 995 views

A Curious Ape@_curiousape·
@bradmillscan Brad, what is your advice to a fence-sitter trying to decide whether to take the plunge into OpenClaw? Are there any other tools, e.g. Nanoclaw, that you would prefer to try if not for your already-sunk effort?
2 replies · 0 reposts · 1 like · 107 views

Adam The AI Guy ✧@AdamAutomates·
You're in a pruning dead zone. The openai-codex path routes through pi-ai, not the Responses API, so OpenAI's server-side compaction never fires. And buildContextPruningFactory only activates for anthropic/moonshot/zai providers. So you literally have zero active pruning running. None.

lossless-claw is doing exactly what it's supposed to do, which is preserve everything. But on a path with no pruning, that becomes a ticking bomb. No session maintenance configured means no periodic cleanup either. Then the debug loops kick in: your bot greps massive text, dumps it into the session, context grows, the bot struggles harder, debugs more, and the whole thing compounds until it dies.

Here's how I'd attack it:

First, enable session maintenance. Even a basic config will bound your transcript growth over time. Lowest-effort, highest-impact move you can make right now.

Second, add a custom tool result truncation middleware upstream of the transcript write. Don't lean on truncateOversizedToolResultsInSession as your only safety net; that's emergency recovery, not prevention. Cap tool results at like 10-20k chars before they ever touch the session.

Third, for heavy tool-use workflows, consider routing through an Anthropic provider so cache-ttl pruning actually activates. Keep codex for tasks that don't generate massive tool output. If you absolutely have to stay on openai-codex, file an issue or PR to extend buildContextPruningFactory eligibility to codex paths. The pruning logic is already there; it just needs the provider check loosened.

And for lossless-claw specifically, it needs a tool result TTL or size cap config. Without that it'll always inherit the full mess. Worth raising as a feature request.

That 81MB session with 4,707 tool results is unrecoverable. Nuke it, start fresh with maintenance enabled and a truncation cap in place, and it won't spiral again. Hope that helps!
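The "session maintenance" idea in step one is distinct from per-result truncation: it bounds the transcript as a whole. A minimal sketch, assuming invented types and budget; this is not OpenClaw's maintenance code, just an illustration of the bounding behavior being recommended.

```typescript
// Hypothetical session-maintenance pass: walk the transcript oldest-first
// and replace tool results with stubs until the session fits a byte budget.
// Types, names, and the budget are illustrative assumptions.

type TranscriptEntry = { kind: "message" | "tool_result"; text: string };

function maintainSession(
  entries: TranscriptEntry[],
  maxTotalChars: number
): TranscriptEntry[] {
  const out = entries.map((e) => ({ ...e }));
  let total = out.reduce((n, e) => n + e.text.length, 0);
  // Stub oldest tool results first; user/agent messages are preserved.
  for (const e of out) {
    if (total <= maxTotalChars) break;
    if (e.kind !== "tool_result") continue;
    total -= e.text.length;
    e.text = "[tool result pruned by session maintenance]";
    total += e.text.length;
  }
  return out;
}
```

Run periodically, a pass like this keeps a long-lived session from ever reaching the 81MB state described above, whatever the provider path.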
1 reply · 0 reposts · 1 like · 100 views

A𝕏𝕏🍉@THE_AXX_·
I thought I'd enjoy watching this Struggle Arc some more (which I have already for the past week or so), but I'm genuinely starting to feel bad 😂

If I recall correctly from your tweets, the main issue is probably coming from you having tried to fix OC via the ChatGPT CLI (or whichever other AI provider you may have used). At some point it may have hallucinated an issue and gone on a comprehensive fixing sprint (which probably wasn't needed in the first place, or not in the way it proposed), which then (probably) spawned further bugs or breaks, which then caused further fixing sessions, and by this point what you tweet just sounds like straight OC hell 😂

I generally have like a 99% yeehaw rate with anything OC related, though I had some issues maybe 2 or 3 weeks ago when it was either giga-slow, or one message in Telegram could whomp like 20 or 30% of session usage on my 20x plan when it was inherently something very simple; among some other minor issues. I think I first started noticing really bad issues occurring during updates.

What I figured out, and what worked for me, was going back to an OC version which I knew worked perfectly for me, and, going back, it immediately (luckily) worked perfectly again. At that point I was quite reluctant to go for any updates, because, you know, why should I if everything is working in that version and isn't in an update. I then gave in, since the updates came every day or two and had really attractive cool new stuff, so I updated again. I think one of the updates made no difference and it kept working nicely, but then another update caused some weird stuff again. I cannot recall whether I downgraded again and then updated again?

But I probably ran doctor and doctor --fix a few times for good measure, and today, *knock 3 times on wood* or whatever the saying is, everything has been working amazingly apart from AI provider outages, which is not an OC issue.

The version that got my OC in tip-top shape was 2026.2.13. Actually, if I recall correctly, I had to restart the gateway a couple times, or run doctor and then doctor --fix, until the downgrade fully worked. In fact, I might have done the downgrade via terminal, now that I am looking at the chat logs. I've checked the terminal logs, and this is the command I entered:

npm i -g openclaw@2026.2.13

Idk what really happened under the hood during this back & forth of OC versions, but, again, today everything runs really smoothly. So, idk, maybe try downgrading to that same old version that worked for me. This didn't cause any memory or context loss; I just continued as normal. I did a backup though just in case (in fact, this was probably the one thing I asked the ChatGPT CLI to do), so I guess you can do that too. And then, if it at least SEEMS like it's working better, given that's quite an old OC version now, just try updating again to the latest, I guess.

Right now I am on OpenClaw 2026.3.13 (61d171a); haven't checked if there is an update. ALSO, I just remembered: I saw you tried the OC Mac app as well. I did too, and it definitely introduced issues, so I've avoided using it. Hope this helps, GLHF, let me know if you tried this and it helped 😂🤝
A𝕏𝕏🍉 tweet media
1 reply · 0 reposts · 1 like · 152 views

LUCAS@LucasSynnott·
@bradmillscan If you need any advice on migration let me know, I've fucked up many an openclaw agent at this stage but finally have mine set really solidly
1 reply · 0 reposts · 1 like · 26 views

Cathryn@cathrynlavery·
me: "Hey how's it going?" Qwen:
Cathryn tweet media
3 replies · 1 repost · 12 likes · 1.2K views