
Basic Memory
86 posts

Basic Memory
@basic_memory
Basic Memory lets your AI write, read, and reuse what matters: your notes, prompts, and instructions | Cross-LLM | Local-first & open source or Basic Memory Cloud





Open source is dead. That's not a statement we ever thought we'd make.

@calcom was built on open source. It shaped our product, our community, and our growth. But the world has changed faster than our principles could keep up.

AI has fundamentally altered the security landscape. What once required time, expertise, and intent can now be automated at scale. Code is no longer just read. It is scanned, mapped, and exploited. At near zero cost. In that world, transparency becomes exposure. Especially at scale.

After a lot of deliberation, we've made the decision to close the core @calcom codebase. This is not a rejection of what open source gave us. It's a response to the risks AI is making possible.

We're still supporting builders, releasing the core code under a new MIT-licensed open source project called cal.diy for hobbyists and tinkerers, but our priority now is simple: protecting our customers and community at all costs.

This may not be the most popular call. But we believe many companies will come to the same conclusion. My full explanation below ↓



We've redesigned Claude Code on desktop. You can now run multiple Claude sessions side by side from one window, with a new sidebar to manage them all.

Peter Steinberger, creator of OpenClaw, on why AI agents still produce "slop" without human taste in the loop: "You can create code and run all night and then you have like the ultimate slop because what those agents don't really do yet is have taste."

Peter is direct: raw capability without direction still produces mediocre output. "They are spiky smart and they're really good at things, but if you don't navigate them well, if you don't have a vision of what you're going to build, it's still going to be slop. If you don't ask the right questions, it's still going to be slop." Great AI-assisted work is defined by the human guiding it.

@steipete describes his own creative process when starting a new project: "When I start a project, I have like this very rough idea what it could be. And as I play with it and feel it, my vision gets more clear. I try out things, some things don't work, and I evolve my idea into what it will become."

Most people skip this part entirely, front-loading everything into a single prompt and wondering why the result feels hollow. "My next prompt depends on what I see and feel and think about the current state of the project." Each step informs the next. The work itself is the feedback loop.

"But if you try to put everything into a spec up front, you miss this kind of human-machine loop. And then I don't know how something good can come out without having feelings in the loop — almost like taste." The agentic trap is what happens when you remove yourself from the process too early.

"Am I empowering my users to take their data and go to where they need to go when the time comes for that?" @_adamwiggins_


Shared memory is both an engineering and a security challenge. How do you persist memory across ephemeral agent runs in sandboxes? How do you manage read and write access to a shared memory store?

"What we do is we literally just git push to that branch at the end of every sandbox execution. And that ensures that if there were any changes to the file system, they are persisted to the remote git server. And then the next time an agent runs, it pulls down whatever the latest state is for its sandbox. And this is how we share memory across the agent runs." @shcallaway
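The pattern @shcallaway describes can be sketched roughly as follows. This is an illustrative toy, not their implementation: a local bare repo stands in for the remote git server, each "sandbox run" clones the latest state, mutates the filesystem, and pushes at the end; all file and branch names are made up for the example.

```python
import pathlib
import subprocess
import tempfile

def run(cmd, cwd):
    subprocess.run(cmd, cwd=cwd, check=True, capture_output=True)

# A bare repo plays the role of the remote git server that outlives sandboxes.
base = pathlib.Path(tempfile.mkdtemp())
remote = base / "memory.git"
run(["git", "init", "--bare", str(remote)], cwd=base)
run(["git", "symbolic-ref", "HEAD", "refs/heads/main"], cwd=remote)

def sandbox_run(note: str) -> str:
    """One ephemeral agent run: pull latest state, change files, push back."""
    work = pathlib.Path(tempfile.mkdtemp())  # fresh, throwaway sandbox
    run(["git", "clone", str(remote), "."], cwd=work)   # pull down latest state
    mem = work / "MEMORY.md"                            # hypothetical memory file
    prior = mem.read_text() if mem.exists() else ""
    mem.write_text(prior + note + "\n")
    run(["git", "add", "-A"], cwd=work)
    run(["git", "-c", "user.email=agent@example.com", "-c", "user.name=agent",
         "commit", "-m", "persist sandbox state"], cwd=work)
    run(["git", "push", "origin", "HEAD:main"], cwd=work)  # persist at end of run
    return mem.read_text()

sandbox_run("run 1: learned X")
state = sandbox_run("run 2: learned Y")  # sees run 1's memory despite a fresh sandbox
```

Because each run starts from a clone and ends with a push, the sandbox itself can be destroyed after every execution; the remote branch is the only durable store, and access control reduces to ordinary git permissions on that branch.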

The new Anthropic managed agents API is basically the Letta API that we've had for a year, but closed source and with provider lock-in. They even have read-only memory blocks and memory block sharing -- something that was unique to Letta agents for a long time.

Funny enough, we actually don't think this is the direction agents are going to go. Having API interfaces for memory blocks and tools is certainly convenient: you can spin up stateful agents as API services with just a few lines of code. But it's also limiting: LLMs today are extremely adept at computer use, and representing their memories this way limits the action space of agents and their ability to learn.

It's important to remember that just because something comes out of a frontier lab doesn't mean it's the "right" answer long-term. The Letta API ~1 year ago was somewhat of an antipattern in a sea of agent framework libraries offered by every lab. But now, stateful agent APIs are becoming the new norm, especially as providers try to lock memory/state into their platforms to increase switching costs (which is exactly why we believe memory should live outside of model providers).

If you want to see what the future is going to look like, follow @Letta_AI


Hermes Agent v0.7.0 is out now. Our headline update: Memory is now an extensible plugin system. Swap in any backend, or build your own. Built-in memory works out of the box; six third-party providers are ready to go. Pick one with 'hermes memory setup'. Full changelog below ↓
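As a rough illustration of what "memory as an extensible plugin system" can look like, here is a minimal sketch in Python. The interface and class names below are hypothetical, not Hermes's actual API: the idea is simply that the agent depends on a small protocol, so the built-in backend and any third-party backend are interchangeable.

```python
from typing import Dict, Optional, Protocol

class MemoryBackend(Protocol):
    """Hypothetical plugin contract any memory backend would implement."""
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> Optional[str]: ...

class InMemoryBackend:
    """Stand-in for a built-in, works-out-of-the-box backend."""
    def __init__(self) -> None:
        self._store: Dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._store[key] = value

    def load(self, key: str) -> Optional[str]:
        return self._store.get(key)

def remember(memory: MemoryBackend, note: str) -> None:
    # Agent code depends only on the protocol, so backends swap freely --
    # a third-party provider just ships its own MemoryBackend implementation.
    memory.save("last_note", note)

backend = InMemoryBackend()
remember(backend, "user prefers dark mode")
```

Because the agent never touches a concrete backend class, switching providers is a configuration change (which is presumably what a command like 'hermes memory setup' selects) rather than a code change.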












