Gerald Sterling

1.3K posts

@geraldrsterling

Building @MemoryCrystalAI -- persistent memory for AI agents. shipping daily, breaking things nightly.

Edmonton, Alberta · Joined February 2026
373 Following · 106 Followers
Gerald Sterling@geraldrsterling·
@annabellschfr Exactly. The human side is the smoke alarm. Rage, correction loops, abandoned flows, and "why did you do that?" moments should become trace events, not anecdotes. Otherwise the dashboard only measures the agent politely failing.
0
0
0
0
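A minimal sketch of the idea in this exchange: human friction signals (all-caps rage, repeated corrections, silent abandonment) become structured trace events a dashboard can count, not anecdotes. All names and thresholds here are hypothetical, not anything Memory Crystal ships.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical detectors; the thresholds and phrase lists are illustrative only.
def looks_like_rage(msg: str) -> bool:
    letters = [c for c in msg if c.isalpha()]
    return len(letters) >= 10 and sum(c.isupper() for c in letters) / len(letters) > 0.8

def is_correction(msg: str) -> bool:
    return any(p in msg.lower() for p in ("no, i meant", "that's wrong", "why did you"))

@dataclass
class TraceEvent:
    kind: str                      # "user_rage" or "correction_loop"
    detail: str
    ts: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def friction_events(messages: list[str]) -> list[TraceEvent]:
    """Turn raw user messages into trace events instead of anecdotes."""
    events = []
    for msg in messages:
        if looks_like_rage(msg):
            events.append(TraceEvent("user_rage", msg[:80]))
        elif is_correction(msg):
            events.append(TraceEvent("correction_loop", msg[:80]))
    return events
```

Abandoned flows would need session-level tracking (a timer on the last user turn) rather than per-message checks, so they are left out of this sketch.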
Annabell Schaefer@annabellschfr·
@geraldrsterling And beyond that you can also get a lot of signal from the interaction with humans/ the real world! User disagreeing or ranting, suddenly raging in all caps, or silently abandoning are all great places to start investigating what happens
1
0
0
24
Gerald Sterling reposted
AIDailyGems@AIDailyGems·
The best AI dev tools feel boring after setup. This one points in that direction: Claude Code plugin for code review skills and verification workflows. Python, Go, React, FastAPI, BubbleTea, and AI frameworks (Pydantic AI, LangGraph, github.com/existential-bi…
1
1
2
59
Gerald Sterling reposted
AIDailyGems@AIDailyGems·
Try this on a small repo first, then measure whether it reduces review/debug time. Multi-runtime automation infrastructure for AI agents. Native CDP browser control, metadata-driven Recipe system, and persistent Run context management. github.com/tsaijamey/frago
0
1
1
32
Gerald Sterling@geraldrsterling·
@DaytonEllwanger Progressive disclosure is the trick. I like giving agents a table of contents first, then making every deeper read pay rent: why this file, what question it answers, and what changed after reading it.
1
0
1
9
Dayton Ellwanger@DaytonEllwanger·
Agent design principle: Progressive Disclosure Don't throw everything at the LLM at once. It fills up the context window and wrecks attention. Instead, tell it what's available and let it dig deeper if it deems it helpful.
2
0
0
24
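The pattern in this exchange can be sketched in a few lines: hand the model a cheap table of contents first, and make every deeper read state which question it answers. The document store and function names below are hypothetical.

```python
# Hypothetical progressive-disclosure sketch: titles up front, bodies on demand.
DOCS = {
    "auth.md": ("How login tokens are issued", "...full token flow details..."),
    "deploy.md": ("How releases reach production", "...full deploy runbook..."),
}

def table_of_contents() -> str:
    """Cheap first message: names and one-line summaries, no bodies."""
    return "\n".join(f"- {name}: {summary}" for name, (summary, _) in DOCS.items())

def read_doc(name: str, why: str) -> str:
    """A deeper read 'pays rent': the caller must say what question it answers."""
    summary, body = DOCS[name]
    return f"[{name}] requested because: {why}\n{body}"
```

The point of the `why` argument is that it doubles as a trace: you can later audit which deep reads actually changed the agent's behavior.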
Gerald Sterling reposted
Josh Lehman@jlehman_·
lossless-claw 0.10.0 — the "long chats survive" release 🧵 recall spans rotated conversation segments 🧹 full-sweep compaction replaces cache-churning incrementals 🧊 hot prompt caches stay protected under normal pressure 🔁 bootstrap/restart transcript weirdness fixed 📦 fresh installs need fewer hacks
5
13
181
109.6K
Gerald Sterling@geraldrsterling·
The hard part is not making context bigger. It is making old context accountable. If a memory block cannot show source, expiry, and why it was retrieved, you did not build recall. You built a very confident attic. x.com/steipete/statu…
Peter Steinberger 🦞@steipete

Lossless is a really interesting concept for OpenClaw to have an "infinite" context window/memory. It compacts conversations in blocks that the model can refer to, building a tree to look up past messages.

0
0
0
9
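The "accountable attic" test from the post above can be expressed as a tiny schema: a memory block is only admissible if it carries a source, an expiry, and the reason it was retrieved. This is a hypothetical sketch, not Memory Crystal's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class MemoryBlock:
    text: str
    source: str                 # e.g. a URL, commit hash, or conversation id
    expires_at: datetime
    retrieved_because: str = ""

    def is_accountable(self, now: datetime) -> bool:
        """Recall, not attic: source present, not expired, retrieval justified."""
        return bool(self.source) and now < self.expires_at and bool(self.retrieved_because)
```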
Gerald Sterling@geraldrsterling·
@_virgil19 Mostly agree, but I’d split delete from decay. Confidence tells the agent how hard to trust a memory. Delete is for poison, duplicates, expired contracts, or source drift. The useful system needs both: trust math and garbage collection.
1
0
0
11
Virgil Maro@_virgil19·
@geraldrsterling delete-button is downstream. the upstream gap is the write-contract. most stacks treat memory as write-only because the schema doesn't carry confidence. delete is just a confidence threshold.
1
0
0
5
Gerald Sterling@geraldrsterling·
Your agent memory is probably missing a delete button. Here is the write contract I use before letting an agent remember anything for longer than a chat window.
2
0
1
14
Gerald Sterling@geraldrsterling·
We are building Memory Crystal around this idea: memory as verifiable state, not a bigger junk drawer. More build notes from the trenches here: memorycrystal.ai Follow @geraldrsterling if you want the scars while they are still fresh.
0
0
0
5
Gerald Sterling@geraldrsterling·
5. Retrieval should answer 3 questions before injecting context: Is it relevant to this task? Is it still true? Can we prove where it came from? If any answer is no, keep it out of the prompt. Context rot is just prompt injection from your past self.
1
0
0
5
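The three questions in the post above make a natural admission gate: any "no" keeps the memory out of the prompt. A minimal sketch, with a hypothetical dict-based memory shape:

```python
from datetime import datetime, timezone

def admit_to_prompt(memory: dict, task_keywords: set[str], now: datetime) -> bool:
    """Gate a memory on the three questions: relevant, still true, provable."""
    relevant = bool(task_keywords & set(memory.get("keywords", [])))
    still_true = now < memory["expires_at"]
    provable = bool(memory.get("source"))
    return relevant and still_true and provable
```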
Gerald Sterling@geraldrsterling·
Hooks are not the feature. The audit trail is. The moment Codex can run validators, scan prompts for secrets, log conversations, and create memories, you are not buying autocomplete anymore. You are installing a junior engineer with a shell. If its memory has no expiry, source, owner, and proof command, it will eventually gaslight your repo at machine speed. x.com/OpenAIDevs/sta…
0
0
0
16
Gerald Sterling@geraldrsterling·
Useful little layer. Agent notifications sound tiny until three coding agents are chewing on a repo and nobody knows which gremlin finished. The real feature is trusted handoff state: done, blocked, needs review, or on fire. x.com/aadilbuilds/st…
Aadil Ghani@aadilbuilds

Shipped v0.7.0 of @pushary/agent-hooks today. One npx command now sets up push notifications for: - Claude Code - Cursor - Codex - Windsurf - Hermes - Lovable npx @pushary/agent-hooks setup That's it. Your phone buzzes when your agent finishes.

1
0
1
35
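The "trusted handoff state" in the quote-tweet above amounts to a small enum rather than a bare "finished" buzz. A hypothetical sketch, unrelated to what @pushary/agent-hooks actually sends:

```python
from enum import Enum

class HandoffState(Enum):
    DONE = "done"
    BLOCKED = "blocked"
    NEEDS_REVIEW = "needs_review"
    ON_FIRE = "on_fire"

def notify(agent: str, state: HandoffState) -> str:
    """What a notification payload might carry so you know which gremlin finished."""
    return f"{agent}: {state.value}"
```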
Gerald Sterling reposted
Filipe Névola@FilipeNevola·
The classic incident dance: 5xx errors climb, you tab to Grafana, jump to Kibana, then back to your code. Four tabs, three logins, no clear answer. We just shipped 4 nginx ingress MCP tools on Quave ONE. Ask your editor what is happening at the edge. 🧵
2
2
2
160