
Daniel Strickland
@DanielStricklnd
Product builder in consumer media ⚡️ Founder @scener 💻📱 Previous startup exit to RealNetworks ⌨️ Director of Product at T-Mobile for Ads and Apps

Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less need to share the specific code/app; you just share the idea, and then the other person's agent customizes and builds it for their specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki, guide you on how to use it, etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion, which is cool.


LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure (a sketch of this loop follows below). The wiki includes summaries of all the data in raw/ and backlinks, and then it categorizes the data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data in the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents (see the indexing sketch below), and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images (sketched below), all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries, so my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), and find interesting connections for new article candidates, to incrementally clean up the wiki and enhance its overall data integrity (a mechanical version of one such check is sketched below). The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki (sketched below), which I use directly (in a web UI) but more often hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning, to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
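A minimal sketch of the "compile" loop from the Data ingest step, under stated assumptions: the raw/ source directory and wiki/notes/ output layout are my guesses at the structure, and `ask_llm` is a hypothetical stand-in for whatever client or agent you use (here it shells out to a CLI named `llm`, but swap in your own).

```python
# compile_wiki.py -- incrementally "compile" raw/ sources into wiki articles.
# A sketch only: ask_llm is a hypothetical stand-in, and the
# raw/ -> wiki/notes/ layout is an assumption about the setup.
import subprocess
from pathlib import Path

RAW, WIKI = Path("raw"), Path("wiki")

def ask_llm(prompt: str) -> str:
    # Stand-in LLM call; replace with your own API client or agent.
    return subprocess.run(["llm", prompt], capture_output=True,
                          text=True, check=True).stdout

def compile_source(src: Path) -> None:
    out = WIKI / "notes" / f"{src.stem}.md"
    if out.exists():  # incremental: skip already-compiled sources
        return
    prompt = (
        "Summarize this source as a wiki article in Markdown. "
        "Link related concepts with [[wikilinks]].\n\n"
        + src.read_text(encoding="utf-8")
    )
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(ask_llm(prompt), encoding="utf-8")

if __name__ == "__main__":
    for src in sorted(RAW.rglob("*.md")):
        compile_source(src)
```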
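For the auto-maintained index files mentioned in the Q&A step, even a mechanical version illustrates the shape. A sketch that regenerates a top-level index.md, using each article's first non-heading line as a crude stand-in for the LLM-written summaries (the wiki/ root and the index.md name are assumptions):

```python
# index_wiki.py -- regenerate a simple index.md over the wiki.
from pathlib import Path

WIKI = Path("wiki")  # assumed root of the compiled wiki

def first_line_summary(md_path: Path) -> str:
    # Crude stand-in: in the described setup the LLM writes real summaries.
    for line in md_path.read_text(encoding="utf-8").splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            return stripped[:120]
    return "(empty)"

def build_index() -> str:
    entries = []
    for md in sorted(WIKI.rglob("*.md")):
        if md.name == "index.md":
            continue
        rel = md.relative_to(WIKI).with_suffix("").as_posix()
        entries.append(f"- [[{rel}]]: {first_line_summary(md)}")
    return "# Index\n\n" + "\n".join(entries) + "\n"

if __name__ == "__main__":
    (WIKI / "index.md").write_text(build_index(), encoding="utf-8")
    print("wrote", WIKI / "index.md")
```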
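One example of the Output step: rendering a matplotlib image and filing it back into the wiki where Obsidian can display it. A sketch with an assumed wiki/assets/ location and a stand-in metric (article counts per top-level section), not the original's actual plots:

```python
# plot_wiki_stats.py -- render an "output" artifact back into the wiki
# as an image Obsidian can embed. Metric and paths are assumptions.
from collections import Counter
from pathlib import Path
import matplotlib
matplotlib.use("Agg")  # headless: we only write a file
import matplotlib.pyplot as plt

WIKI = Path("wiki")

# Count articles per top-level wiki section (files in subdirectories).
counts = Counter(md.relative_to(WIKI).parts[0]
                 for md in WIKI.rglob("*.md")
                 if len(md.relative_to(WIKI).parts) > 1)

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(list(counts.keys()), list(counts.values()))
ax.set_ylabel("articles")
ax.set_title("Articles per wiki section")
fig.tight_layout()

out = WIKI / "assets" / "wiki_stats.png"
out.parent.mkdir(parents=True, exist_ok=True)
fig.savefig(out, dpi=150)
print("wrote", out)
```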
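The health checks described under Linting are LLM-driven, but some integrity checks are purely mechanical. A sketch that flags [[wikilinks]] pointing at articles that don't exist, assuming Obsidian-style links and the same wiki/ root:

```python
# lint_links.py -- one mechanical "health check": flag broken [[wikilinks]].
# A sketch only; the wiki/ root and Obsidian-style links are assumptions.
import re
from pathlib import Path

WIKI = Path("wiki")
# Capture the target of [[Target]], [[Target|alias]], or [[Target#section]].
LINK = re.compile(r"\[\[([^\]|#]+)")

def lint() -> None:
    # Known targets: every article, addressable by stem or relative path.
    targets = set()
    for md in WIKI.rglob("*.md"):
        targets.add(md.stem)
        targets.add(md.relative_to(WIKI).with_suffix("").as_posix())
    for md in WIKI.rglob("*.md"):
        for match in LINK.finditer(md.read_text(encoding="utf-8")):
            target = match.group(1).strip()
            if target not in targets:
                print(f"{md}: broken link [[{target}]]")

if __name__ == "__main__":
    lint()
```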
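And a sketch of the "small and naive search engine" from Extra tools: plain TF-IDF scoring over the .md files, exposed as a CLI so an agent can invoke it as a tool. The scoring scheme is my own assumption, and the web UI mentioned in the original is omitted.

```python
# search_wiki.py -- naive TF-IDF search over the wiki, as a CLI tool.
import math
import re
import sys
from collections import Counter
from pathlib import Path

WIKI = Path("wiki")  # assumed wiki root

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def search(query: str, k: int = 5) -> list[tuple[float, Path]]:
    docs = {p: Counter(tokenize(p.read_text(encoding="utf-8")))
            for p in WIKI.rglob("*.md")}
    n = len(docs)
    df = Counter()  # document frequency per term, for IDF weighting
    for counts in docs.values():
        df.update(counts.keys())
    scores = []
    for path, counts in docs.items():
        score = sum(counts[t] * math.log(n / df[t])
                    for t in tokenize(query) if t in counts)
        if score > 0:
            scores.append((score, path))
    return sorted(scores, reverse=True)[:k]

if __name__ == "__main__":
    for score, path in search(" ".join(sys.argv[1:])):
        print(f"{score:8.2f}  {path}")
```

Usage: `python search_wiki.py attention scaling laws` prints the top-scoring articles with their scores, which is the form an agent can consume over the CLI.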


I left xAI & built something that moves with me as fast as the tech around me. the future is model agnostic. your context should travel with you. sign up for exclusive early access to ouroboros, the personalization layer. ouro.chat

This may be a dumb question but I'll ask it here anyway: I can't find a good way for my various AI chats to automatically sync their conversation history into a structured knowledge base, so that as I update chats from time to time and refine context, the knowledge base automatically grows with the new info.





✨ YES! I did actually buy an entire house in cash with @RevolutApp. Lots of step-by-step payments of like EUR 200,000 each, because I was scared they wouldn't arrive. But it all arrived fine, 100% via IBAN instant transfers! Super safe! IBAN transfers are irreversible and super fast, usually seconds. I did have to use my lawyer as an escrow and wire it to her, because the seller's Portuguese lawyer INSISTED on using a bank check, which was SUPER unsafe: losing that check meant anyone could cash it (it wasn't earmarked to the seller's name at all). Their lawyer insisted on the bank check because she didn't trust IBAN bank transfers ("they're unsafe"), lmfao, 100% bullshit. Anyway, I couldn't get out of that or the whole house purchase wouldn't happen; my lawyer was amazing and saved me in the process by acting as an intermediary for the money. We had to walk that check from the lawyer's bank to the notary on the street, and I brought pepper spray in case we got robbed (can't carry a gun here innit). I was convinced someone would snatch that check and cash it! But it went fine. So YES, kinda, I did buy an entire house with Revolut!



















