
mide
@itsmide_eth
co-founder & dev at @builders_garden | me https://t.co/4qrVCu61kJ 😇






Today @stripe, @visa and @ZodiaCustody by Standard Chartered have joined as validators on Tempo. These organizations collectively process trillions of dollars in payments every year across nearly every country in the world. As some of the largest institutions in global payments, they maintain infrastructure at massive scale and have deep experience with the reliability and uptime standards that payments demand, and they bring that same technical excellence to validating the Tempo network. We'll continue expanding the validator set with additional partners on our path to permissionless validation.

second brain update #2 🧠

when @karpathy dropped his LLM knowledge bases, i was like "wait, i'm literally doing this"

his LLM wiki approach is brilliant: raw sources go in, the LLM compiles and maintains the entire wiki. you never touch it.

but here's the thing - i don't know if that's what i really want. i'm used to manually touching and adding content to my second brain, and i started using the PARA (Projects, Areas, Resources, Archive) method, so every note has a place based on how actionable it is. plus i added Maps of Content to link everything together.

i'm still figuring out whether having everything organized by AI actually matches my needs. the risk of errors compounding scares me a little bit. so i'm trying a hybrid version for now.

i took karpathy's method and plugged it inside my Resources folder. each topic has a raw/ subfolder for source material, and the wiki pages sit right next to it. so i'm trying to get the best of both (rough sketch of the layout below):

📂 PARA for the human layer - projects, areas, daily notes
🤖 LLM wiki for the knowledge layer - inside Resources (using my old notes as raw)

the difference? karpathy's wiki is fully LLM-owned. mine is human-first with AI augmentation.

i'll test both side by side and decide which one works best for me. for now, the key points are:

- if you want to manually curate your knowledge, connect it to your actual projects and goals, and keep the human in the loop - PARA gives you that structure
- not everything needs to be a wiki. sometimes you need a second brain, not just a knowledge base
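a minimal sketch of what that hybrid layout looks like as a script - the folder and topic names here are just placeholders, not a fixed convention:

```python
# scaffold the hybrid PARA + LLM-wiki layout described above.
# folder/topic names are illustrative assumptions.
from pathlib import Path

PARA = ["1-projects", "2-areas", "3-resources", "4-archive"]

def scaffold(vault: Path, topics: list[str]) -> None:
    """Create the PARA skeleton, with a raw/ subfolder per topic under Resources."""
    for folder in PARA:
        (vault / folder).mkdir(parents=True, exist_ok=True)
    resources = vault / "3-resources"
    for topic in topics:
        # source material goes in raw/, wiki pages sit right next to it
        (resources / topic / "raw").mkdir(parents=True, exist_ok=True)
        (resources / topic / "overview.md").touch(exist_ok=True)

if __name__ == "__main__":
    scaffold(Path("vault"), ["llm-agents", "payments"])
```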

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
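A minimal sketch of the per-source half of that compile step, assuming the `llm` CLI as a stand-in for whatever model or agent you actually use; the paths, the prompt, and the one-page-per-source simplification are illustrative, not the author's actual tooling:

```python
# incrementally "compile" sources in raw/ into wiki pages next to them.
# ask_llm() shells out to the `llm` CLI purely as a stand-in; a real setup
# would let an agent restructure and cross-link the whole wiki, not just
# summarize one file at a time.
import subprocess
from pathlib import Path

def ask_llm(prompt: str) -> str:
    # swap in whatever model, API, or agent CLI you use
    result = subprocess.run(["llm", prompt], capture_output=True, text=True, check=True)
    return result.stdout

def compile_wiki(topic_dir: Path) -> None:
    """Summarize each new source in raw/ into a wiki page beside it."""
    for source in sorted((topic_dir / "raw").glob("*.md")):
        page = topic_dir / f"{source.stem}-notes.md"
        if page.exists():
            continue  # incremental: only compile sources without a page yet
        prompt = (
            "Compile the source below into a wiki article with [[backlinks]] "
            "to related concepts:\n\n" + source.read_text(encoding="utf-8")
        )
        page.write_text(ask_llm(prompt), encoding="utf-8")

if __name__ == "__main__":
    compile_wiki(Path("wiki/llm-agents"))  # hypothetical topic directory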


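And a minimal sketch in the spirit of the naive search engine mentioned under "Extra tools"; the term-count ranking and CLI interface here are assumptions, not the author's implementation:

```python
# naive keyword search over the wiki's .md files, usable directly or
# handed to an LLM via CLI as a tool for larger queries.
import sys
from pathlib import Path

def search(wiki: Path, query: str, top_k: int = 5) -> list[tuple[int, Path]]:
    """Rank wiki pages by raw occurrence counts of the query terms."""
    terms = query.lower().split()
    hits = []
    for page in wiki.rglob("*.md"):
        text = page.read_text(encoding="utf-8", errors="ignore").lower()
        score = sum(text.count(term) for term in terms)
        if score:
            hits.append((score, page))
    return sorted(hits, reverse=True)[:top_k]

if __name__ == "__main__":
    # usage: python wiki_search.py <wiki-dir> <query terms...>
    for score, page in search(Path(sys.argv[1]), " ".join(sys.argv[2:])):
        print(f"{score:4d}  {page}")
```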







Our biggest open-source repos are getting overwhelmed by AI slop, which literally makes GitHub unusable (~a new pull request every 3 minutes). Fun new challenges in an agentic world!






