Basic Memory

86 posts

@basic_memory

Basic Memory lets your AI write, read, and reuse what matters: your notes, prompts, and instructions | Cross-LLM | Local-first & open source, or Basic Memory Cloud

Austin, TX · Joined June 2025
178 Following · 84 Followers
Pinned post
Basic Memory @basic_memory
If you're thinking about cancelling @OpenAI (or switching to Claude/Gemini), don't lose months of conversations first. Basic Memory imports your ChatGPT data and turns it into plain Markdown files. Every conversation becomes a file you can actually read, search, and use with whatever AI you switch to. This is not an ad. It is free and open source. Your data belongs to you. Keep it.

Steps:
1. Settings → Data Controls → Export Data (ChatGPT emails you a zip)
2. Install Basic Memory:
   brew tap basicmachines-co/basic-memory
   brew install basic-memory
3. Convert your data to markdown:
   bm import chatgpt conversations.zip

Your chats are now markdown files. Complete docs: docs.basicmemory.com
0 replies · 0 reposts · 1 like · 187 views
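For the curious, the conversion step in the pinned post could look roughly like this. This is a minimal sketch, not Basic Memory's actual importer: it assumes a simplified conversations.json shape (a title plus a flat list of role/text messages), whereas the real ChatGPT export nests messages in a graph-like mapping.

```python
import json
import zipfile
from pathlib import Path


def chatgpt_zip_to_markdown(zip_path: str, out_dir: str) -> list[Path]:
    """Sketch: turn a (simplified) ChatGPT export zip into markdown files.

    Assumes the zip contains a conversations.json whose entries have a
    "title" and a flat list of {"role", "text"} messages.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(zip_path) as zf:
        conversations = json.loads(zf.read("conversations.json"))
    written = []
    for conv in conversations:
        # One markdown file per conversation, named after the chat title.
        safe = "".join(c if c.isalnum() or c in " -_" else "_" for c in conv["title"])
        lines = [f"# {conv['title']}", ""]
        for msg in conv["messages"]:
            lines.append(f"**{msg['role']}**: {msg['text']}")
            lines.append("")
        path = out / f"{safe.strip()}.md"
        path.write_text("\n".join(lines), encoding="utf-8")
        written.append(path)
    return written
```

The real `bm import chatgpt` command handles the actual export format; this just shows the one-readable-file-per-conversation idea.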
Basic Memory @basic_memory
Any time I read a post on X, Reddit, or a blog that says "quietly..." or "it's not a ... it's a ...", I just skip over it. Once you see the telltale signs of AI slop, you see it everywhere. Somewhat ironically, @basic_memory lets you save context as long-term memory with AI. Our point of view, though, is that YOU should read it and own it. It's YOURS, after all. If you settle for slop, that's on you. I think AI slop is a byproduct of outsourcing your thinking to AI. It's a lot harder to use AI as a tool to enhance human output instead of replacing it. You really do have to slow down and think.
0 replies · 0 reposts · 1 like · 38 views
Basic Memory @basic_memory
I've wondered about this too. Before starting to promote Basic Memory Cloud as a commercial product, I had some idea that the whole social media/influencer/product promotion game was a pay-to-play payola scam, but it was still quite a surprise to see how big a scam the whole thing is. Influencers openly advertising how much their fake pitches and promotions cost. So gross. It's nice to see someone reporting how the game is played - awesomeagents.ai/news/github-fa…
0 replies · 0 reposts · 0 likes · 22 views
Basic Memory @basic_memory
This is really weird. Just this morning I got an email with their "origin story", and the same day they're posting about how AI lets everyone find their security holes too easily. I don't think I buy it.
Bailey Pumfleet @pumfleet

Open source is dead. That's not a statement we ever thought we'd make.

@calcom was built on open source. It shaped our product, our community, and our growth. But the world has changed faster than our principles could keep up.

AI has fundamentally altered the security landscape. What once required time, expertise, and intent can now be automated at scale. Code is no longer just read. It is scanned, mapped, and exploited. Near zero cost. In that world, transparency becomes exposure. Especially at scale.

After a lot of deliberation, we've made the decision to close the core @calcom codebase. This is not a rejection of what open source gave us. It's a response to the risks AI is making possible.

We're still supporting builders, releasing the core code under a new MIT-licensed open source project called cal.diy for hobbyists and tinkerers, but our priority now is simple: protecting our customers and community at all costs.

This may not be the most popular call. But we believe many companies will come to the same conclusion. My full explanation below ↓

0 replies · 0 reposts · 0 likes · 18 views
Basic Memory @basic_memory
@ovaistariq @flydotio @TigrisData One of the main reasons I'm in favor of using products from smaller companies is that when you send them a support email, they actually reply, and it's from an actual person. I've had good experiences with @TigrisData.
0 replies · 1 repost · 5 likes · 237 views
Basic Memory @basic_memory
100%. This is the way. I've gone full circle here: from hand coding, to multiple parallel agents and specs, back to long threaded conversations iterating on a single task. The model writes code, and I review it in real time. Build, inspect, test, repeat. It's like a faster version of the coding I did before AI, except now I can do sooo much more. It's like TDD or Agile, but actually fun. AI slop code really sucks when you have to support it in production, or extend it, or refactor it. I think it still pays to write good code.
Big Brain AI @realBigBrainAI

Peter Steinberger, creator of OpenClaw, on why AI agents still produce "slop" without human taste in the loop:

"You can create code and run all night and then you have like the ultimate slop because what those agents don't really do yet is have taste."

Peter is direct: raw capability without direction still produces mediocre output.

"They are spiky smart and they're really good at things, but if you don't navigate them well, if you don't have a vision of what you're going to build, it's still going to be slop. If you don't ask the right questions, it's still going to be slop."

Great AI-assisted work is defined by the human guiding it. @steipete describes his own creative process when starting a new project:

"When I start a project, I have like this very rough idea what it could be. And as I play with it and feel it, my vision gets more clear. I try out things, some things don't work, and I evolve my idea into what it will become."

Most people skip this part entirely, front-loading everything into a single prompt and wondering why the result feels hollow.

"My next prompt depends on what I see and feel and think about the current state of the project."

Each step informs the next. The work itself is the feedback loop.

"But if you try to put everything into a spec up front, you miss this kind of human-machine loop. And then I don't know how something good can come out without having feelings in the loop — almost like taste."

The agentic trap is what happens when you remove yourself from the process too early.

0 replies · 1 repost · 1 like · 95 views
Basic Memory @basic_memory
You can use Basic Memory for this exact use case. Local via CLI/MCP. In the cloud via remote MCP, with local sync. All memories are just markdown files that are indexed for full-text and semantic search, plus there are skills and schemas to validate structure. You should own your memory. Open source or cloud hosted. basicmemory.com
0 replies · 0 reposts · 0 likes · 85 views
Sherwood @shcallaway
File-system based memory is easy to POC, but hard to productionize - especially if that memory is shared across multiple agent instances. You need to persist this memory somewhere and make sure every instance has the most up-to-date version at all times. Not to mention handling race conditions where two agents update the same file. At @sazabi, we use a pretty classic solution for this problem: git. S3 Files is likely a game-changer here, but it's brand new… We'll be kicking the tires on it soon. We're also following along w/ the work @mesa_dot_dev and @archildata are doing in this area. 👀 This was a fun convo, @vtahowe. Thank you for having me on!
Insecure Agents Podcast @insecureagents

Shared memory is both an engineering and a security challenge. How do you persist memory across ephemeral agent runs in sandboxes? How do you manage access to read and write from a shared memory store?

"What we do is we literally just git push to that branch at the end of every sandbox execution. And that ensures that if there were any changes to the file system, they are persisted to the remote git server. And then the next time an agent runs, it pulls down whatever the latest state is for its sandbox. And this is how we share memory across the agent runs." @shcallaway

8 replies · 1 repost · 75 likes · 14.7K views
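The git-as-shared-memory flow described above can be sketched in a few lines. This is a hedged illustration, not @sazabi's actual code: `start_sandbox` and `end_sandbox` are hypothetical names, and a local bare repository stands in for the remote git server.

```python
import subprocess
from pathlib import Path


def run(args: list[str], cwd: Path) -> None:
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)


def start_sandbox(remote: str, workdir: Path) -> Path:
    """Start of a sandbox run: pull down the latest shared memory state."""
    repo = workdir / "memory"
    run(["git", "clone", remote, str(repo)], cwd=workdir)
    # A throwaway identity so commits work inside an ephemeral sandbox.
    run(["git", "config", "user.email", "agent@example.com"], cwd=repo)
    run(["git", "config", "user.name", "agent"], cwd=repo)
    return repo


def end_sandbox(repo: Path, message: str) -> None:
    """End of a run: persist any file-system changes to the remote.

    A real system also needs a pull/rebase (or merge strategy) here to
    handle two agents racing on the same file; omitted for brevity.
    """
    run(["git", "add", "-A"], cwd=repo)
    run(["git", "commit", "-m", message], cwd=repo)
    run(["git", "push", "origin", "HEAD"], cwd=repo)
```

The clone-on-start / push-on-end pairing is the whole trick: the git server is the single source of truth, and every sandbox begins from whatever state the last run left behind.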
Basic Memory reposted
Jeremiah Lowin @jlowin
Introducing Prefab 🎨 A generative UI framework for building MCP Apps, data dashboards, or whatever you (or your agent) want. In Python. (Really.) 100+ shadcn components. Real React. No JavaScript required. And built right into FastMCP 3.2. prefab.prefect.io
10 replies · 12 reposts · 108 likes · 10.2K views
Basic Memory reposted
Ben Sigman @bensig
Excited to announce a new open-source, free-to-use memory tool I have been developing with my good friend @MillaJovovich. The project is called MemPalace, and it is an agentic memory tool that scored 100% on LongMemEval - the industry-standard benchmark for memory… This is higher than any other published result - free or paid - and it is available now on GitHub. You can check out Milla's video about it on her Instagram. I'll also put some links in the comments below - please try it out, critique it, fork it, contribute to it - and join our Discord.
146 replies · 351 reposts · 3.1K likes · 2M views
Basic Memory @basic_memory
This is what we built with Basic Memory. Your AI writes structured markdown notes with observations and relations, builds a semantic knowledge graph, and it all lives in plain files you own. Works with Claude, Cursor, Codex. MCP or CLI entry points. Open source - local or cloud. You should own the knowledge you use with AI. basicmemory.com
0 replies · 0 reposts · 5 likes · 622 views
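A minimal sketch of the notes-to-graph idea above, assuming relations are written as `[[WikiLink]]` references and a note's title is its filename. Basic Memory's real note schema (observations, frontmatter, typed relations) is richer than this.

```python
import re
from pathlib import Path

# Matches [[WikiLink]]-style references in a note body (assumed syntax).
LINK = re.compile(r"\[\[([^\]]+)\]\]")


def build_graph(notes_dir: str) -> dict[str, set[str]]:
    """Sketch: derive a knowledge graph from a folder of markdown notes.

    Each note becomes a node keyed by filename stem; each [[link]] in
    its body becomes an outgoing edge.
    """
    graph: dict[str, set[str]] = {}
    for path in Path(notes_dir).glob("*.md"):
        body = path.read_text(encoding="utf-8")
        graph[path.stem] = set(LINK.findall(body))
    return graph


def backlinks(graph: dict[str, set[str]], title: str) -> set[str]:
    # Reverse edges: every note that points at `title`.
    return {src for src, targets in graph.items() if title in targets}
```

Because the graph is derived from plain files, any tool that can read markdown can rebuild it; the files stay the source of truth.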
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.8K replies · 6.9K reposts · 57.4K likes · 20.5M views
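The "compile a wiki" loop above can be sketched with the LLM taken out of the picture. In this illustration the summarizer is a stand-in (the first non-empty line of each document); in the actual workflow, an LLM writes the summaries, backlinks, and articles, and `compile_index` is a hypothetical name.

```python
from pathlib import Path


def compile_index(raw_dir: str, wiki_dir: str) -> Path:
    """Sketch of the non-LLM scaffolding around a compiled wiki.

    Walks raw/ and emits a wiki index.md with one entry per document:
    a [[wikilink]] plus a one-line "summary" (first non-empty line,
    standing in for an LLM-written summary).
    """
    wiki = Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    lines = ["# Index", ""]
    for doc in sorted(Path(raw_dir).glob("*.md")):
        first = next(
            (ln.strip() for ln in doc.read_text(encoding="utf-8").splitlines() if ln.strip()),
            "",
        )
        lines.append(f"- [[{doc.stem}]]: {first}")
    index = wiki / "index.md"
    index.write_text("\n".join(lines) + "\n", encoding="utf-8")
    return index
```

An auto-maintained index like this is exactly what lets the agent skip heavy RAG at small scale: it reads the index first, then opens only the documents that look relevant.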
Basic Memory @basic_memory
@nyk_builderz Basic Memory works well for this. It's all markdown, with MCP tools or a CLI to read/write notes. Tracks relations, semantic search, etc. Everyone should own their own data. Open source, local or cloud.
0 replies · 0 reposts · 1 like · 16 views
Basic Memory reposted
Jace 🤎 @JaceThings
Every date exists somewhere in Pi. @itsnoahd and I made a way to find them all. A little late for Pi Day, but here it is; pi2.day
9 replies · 14 reposts · 179 likes · 14.7K views
- Elijah Muraoka - @elijahmuraoka_
Imagine this: an open-source Notion. Full programmatic control over your own docs, data, and context. Host, share, and collaborate with your team. APIs + SKILL files built in by default. Personal software is the future. Anyone seen anything like this? I need it. If not, I'll build it.
37 replies · 0 reposts · 46 likes · 5.5K views
Basic Memory @basic_memory
We built an OpenClaw memory plugin with a slightly different take. Your agent gets a MEMORY.md file as working memory, plus a full knowledge graph of markdown notes it builds over time. When it searches, it queries three sources at once: the memory file, the knowledge graph (hybrid text + semantic search), and active tasks. It also auto-captures conversations as daily notes, so everything is searchable later. The twist: those same files work in Claude, Codex, Cursor, or anything else that supports MCP. One memory layer across all your tools. docs.basicmemory.com/integrations/o…
2 replies · 0 reposts · 14 likes · 2.7K views
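A toy sketch of the three-source fan-out described above. The names (`MemorySearch`, `search`) are hypothetical, and plain substring matching stands in for the plugin's hybrid full-text + semantic search.

```python
from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class MemorySearch:
    """Sketch: one query fans out to three memory sources.

    Mirrors the idea in the post: the working-memory file, the
    knowledge-graph notes, and the active task list are queried
    together, and hits come back labeled by source.
    """
    memory_file: Path
    notes: dict[str, str]                      # note title -> body
    tasks: list[str] = field(default_factory=list)

    def search(self, query: str) -> list[tuple[str, str]]:
        q = query.lower()
        hits: list[tuple[str, str]] = []
        # Source 1: the MEMORY.md working-memory file, line by line.
        for line in self.memory_file.read_text(encoding="utf-8").splitlines():
            if q in line.lower():
                hits.append(("memory", line))
        # Source 2: the knowledge-graph notes (title or body match).
        for title, body in self.notes.items():
            if q in title.lower() or q in body.lower():
                hits.append(("note", title))
        # Source 3: active tasks.
        hits.extend(("task", t) for t in self.tasks if q in t.lower())
        return hits
```

Because all three sources are plain files underneath, the same data can be queried from any MCP-capable tool, which is the "one memory layer" point.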
Peter Steinberger 🦞 @steipete
There's a lot of cool stuff being built around openclaw. If the stock memory feature isn't great for you, check out the qmd memory plugin! If you are annoyed that your crustacean is forgetful after compaction, give github.com/martian-engine… a try!
229 replies · 336 reposts · 4.1K likes · 475.1K views
Basic Memory @basic_memory
We read dozens of AI memory research papers from the last three months so you don't have to. Three findings keep showing up: separate memory types beat one big pile, periodic consolidation beats raw accumulation, and the hardest unsolved problem is forgetting.
1 reply · 0 reposts · 1 like · 51 views
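The three findings can be illustrated with a toy store: one structure per memory type, a periodic `consolidate()` pass instead of raw accumulation, and crude score-decay forgetting. Every name here is hypothetical and this is not any paper's actual algorithm; real systems are far more involved, especially around forgetting.

```python
from dataclasses import dataclass, field


@dataclass
class TypedMemory:
    """Toy illustration of the three recurring findings.

    Separate stores per memory type (episodic vs. semantic), a
    consolidation pass that merges raw entries into weighted facts,
    and decay-based forgetting.
    """
    episodic: list[str] = field(default_factory=list)        # raw events
    semantic: dict[str, float] = field(default_factory=dict)  # fact -> strength

    def observe(self, event: str) -> None:
        self.episodic.append(event)

    def consolidate(self) -> None:
        # Periodic pass: promote observations into weighted facts,
        # then clear the raw buffer instead of letting it grow forever.
        for event in self.episodic:
            self.semantic[event] = self.semantic.get(event, 0.0) + 1.0
        self.episodic.clear()

    def forget(self, decay: float = 0.5, floor: float = 0.75) -> None:
        # The unsolved part, done crudely: decay every fact's strength
        # and drop anything that falls below the floor.
        self.semantic = {
            fact: s * decay for fact, s in self.semantic.items() if s * decay >= floor
        }
```

Repeated observations survive forgetting because consolidation made them stronger; one-off noise decays below the floor and disappears.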