mide

418 posts

@itsmide_eth

co-founder & dev at @builders_garden | me https://t.co/4qrVCu61kJ 😇

Venice · Joined August 2021
432 Following · 256 Followers
mide retweeted
Rosario Borgesi
Rosario Borgesi@rosarioborgesi·
Finally made it to Rome for Farcon!
[3 images attached]
7 replies · 3 reposts · 87 likes · 1.5K views
arman
arman@skvrah·
hiring two engineers to join me and @MaxPintchouk on stablecoin + on-chain infra at @slashapp
small team, big surface area:
- $3B+ annualized stablecoin volume
- global USD accounts used in 130+ countries
- cards that let businesses spend globally
- an expanding suite of global products
you'll own systems end-to-end - the infrastructure that moves billions of dollars a year
DMs are open
22 replies · 14 reposts · 199 likes · 25.5K views
mide
mide@itsmide_eth·
@casoxbt I was waiting for this, it looks like a real cagnazz
0 replies · 0 reposts · 1 like · 59 views
contract.dev
contract.dev@contractdotdev·
We're hiring at contract.dev. We're building the testnet devs actually want: a private, real-time copy of mainnet, with built-in devtools.
Open roles:
– web3 engineer (smart contract exp)
– full-stack engineer (TypeScript, Go)
– product designer
$2000–$2500/month. remote.
We're passionate about the future of smart contracts.
DMs open | Reply below | Tag talented people👇
[image attached]
307 replies · 43 reposts · 901 likes · 69.6K views
mide
mide@itsmide_eth·
i wanted to dig deeper into Fastify, so I built a Node.js template designed for AI tools to scaffold projects fast - correct folder structure, Biome, TypeBox schemas, Drizzle ORM, Docker, and tests out of the box. next time you need a Node.js project, pass this to your AI: github.com/mmatteo23/node…
0 replies · 0 reposts · 1 like · 147 views
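As a rough illustration of what a TypeBox-schema'd Fastify route in a template like this tends to look like (a minimal sketch, not taken from the linked repo; the /users route and its fields are made up):

```ts
import Fastify from "fastify";
import { Type } from "@sinclair/typebox";
import type { TypeBoxTypeProvider } from "@fastify/type-provider-typebox";

const app = Fastify({ logger: true }).withTypeProvider<TypeBoxTypeProvider>();

// The TypeBox schema doubles as runtime validation and the static type
// of request.body, so the route handler stays fully typed.
const CreateUserBody = Type.Object({
  name: Type.String({ minLength: 1 }),
  email: Type.String(),
});

app.post(
  "/users",
  { schema: { body: CreateUserBody } },
  async (request, reply) => {
    // request.body is typed as { name: string; email: string } here
    const { name, email } = request.body;
    // a real template would hand this off to a Drizzle repository layer
    return reply.code(201).send({ name, email });
  },
);

app.listen({ port: 3000 }).catch((err) => {
  app.log.error(err);
  process.exit(1);
});
```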
mide
mide@itsmide_eth·
HUGE.
Tempo@tempo

Today @stripe, @visa and @ZodiaCustody by Standard Chartered have joined as validators on Tempo. These organizations collectively process trillions of dollars in payments every year across nearly every country in the world. As some of the largest institutions in global payments, they maintain infrastructure at massive scale and have deep experience with the reliability and uptime standards that payments demand, bringing that same technical excellence to validate the Tempo network. We'll continue expanding the validator set with additional partners on our path to permissionless validation.

0 replies · 0 reposts · 1 like · 68 views
mide
mide@itsmide_eth·
"cool, but what is this useful for?" that's what a friend said when i showed him my AI second brain second brain update #3 🧠 fair question. everyone's posting about AI second brains this week but nobody talks about what they actually do for you in practice here's what mine already does after one week: 📝 content engine this post series? fully powered by the vault. voice guide, content calendar, draft tracking - it's all connected. i'm not starting from scratch every time i write 🎯 job hunt companies, roles, skills, all linked together. when i research an opportunity, i instantly see which skills i already have, which i need, and which companies match. no more spreadsheet chaos 🧩 knowledge compounding when karpathy dropped his LLM wiki post, i already had it captured and processed in my vault. my day 2 post was better because the knowledge was already there, ready to connect the real answer to "what is this useful for?" is: it makes your thinking compound instead of evaporate - every note you add connects to what's already there - every question you ask gets smarter because the context grows that's the difference between a second brain and a fancy folder of markdown files
mide@itsmide_eth

second brain update #2 🧠
when @karpathy dropped his LLM knowledge bases, i was like "wait, i'm literally doing this"
his LLM wiki approach is brilliant: raw sources go in, the LLM compiles and maintains the entire wiki. you never touch it.
but here's the thing - i don't know if it's what i really want
i'm used to manually touching and adding content to my second brain, and i started using the PARA (Projects, Areas, Resources, Archive) method, so every note has a place based on how actionable it is. plus I added Maps of Content to link everything together.
i'm still figuring out whether having everything organized by AI actually matches my needs. the risk of errors compounding scares me a little bit. so i'm trying a hybrid version for now.
i actually took karpathy's method and plugged it inside my Resources folder. each topic has a raw/ subfolder for source material, and the wiki pages sit right next to it
so i'm trying to get the best of both:
📂 PARA for the human layer - projects, areas, daily notes
🤖 LLM wiki for the knowledge layer - inside Resources (using my old notes as raw)
the difference? karpathy's wiki is fully LLM-owned. mine is human-first with AI augmentation
i'll test both side by side and decide which is best for me. for now, the key points are:
- if you want to manually curate your knowledge, connect it to your actual projects and goals, and keep the human in the loop - PARA gives you that structure
- not everything needs to be a wiki. sometimes you need a second brain, not just a knowledge base

0 replies · 0 reposts · 3 likes · 52 views
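As a concrete (and entirely hypothetical) picture of the hybrid layout described in update #2 above, sketched as a list of paths; the folder and file names are illustrative, not the author's actual vault:

```ts
// PARA at the top level, with a karpathy-style raw/ subfolder plus
// LLM-maintained wiki pages nested inside each Resources topic.
const vaultLayout = [
  "Projects/",
  "Areas/",
  "Resources/second-brains/raw/llm-knowledge-bases.md", // source material
  "Resources/second-brains/llm-wiki-overview.md",       // wiki page, kept next to raw/
  "Archive/",
  "Daily Notes/YYYY-MM-DD.md",
] as const;
```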
mide retweeted
limone.eth 🍋
limone.eth 🍋@limone_eth·
why i'm so excited about /farcon-rome 2026 🇮🇹 tldr; @farcaster_xyz is where i met the best builders, founders, and creators i know. it pushed me to learn more, build better, and think bigger now i want to bring that same energy to europe, and especially to Italy, where with @urbeEth we’re connecting the local scene with the global ecosystem 25 tickets still up for grabs if you donate $100+ to our crowdfund before friday to lock yours in 👇
3 replies · 7 reposts · 53 likes · 1.5K views
mide
mide@itsmide_eth·
i'm working on something similar right now. i agree with you that Karpathy's wiki doesn't work well for the work context. i'm trying to use my old second brain structure with LLMs, but i still need to figure out the best solution. i will definitely try this x.com/itsmide_eth/st…
0 replies · 0 reposts · 0 likes · 600 views
Avi Chawla
Avi Chawla@_avichawla·
The next step after Karpathy's wiki idea:
Karpathy's LLM Wiki compiles raw sources into a persistent md wiki with backlinks and cross-references. The LLM reads papers, extracts concepts, writes encyclopedia-style articles, and maintains an index. The knowledge is compiled once and kept current, so the LLM never re-derives context from scratch at query time.
This works because research is mostly about concepts and their relationships, which are relatively stable. But this pattern breaks when you apply it to actual work, where context evolves across conversations constantly, like deadlines, plans, meetings, etc. A compiled wiki would have a page about the project but it wouldn't track ground truth effectively.
Tracking this requires a different data structure altogether: not a wiki of summaries, but a knowledge graph of typed entities where people, decisions, commitments, and deadlines are separate nodes linked across conversations.
Rowboat is an open-source implementation of exactly this, built on top of the same Markdown-and-Obsidian foundation that Karpathy uses, but extended into work context. It ingests conversations from Gmail, Granola, and Fireflies, and instead of writing a summary page per topic, it extracts each decision, commitment, and deadline as its own md file with backlinks to the people and projects involved.
That's structurally different from a wiki, because a wiki page about "Project X" gives you a summary of what was discussed. A knowledge graph gives you every decision made, who made it, what was promised, when it was promised, and whether anything has shifted since.
It also runs background agents on a schedule, so something like a daily briefing gets assembled automatically from whatever shifted in your graph overnight. You control what runs and what gets written back into the vault. You bring your own model through Ollama, LM Studio, or any hosted API, and everything is stored as plain Markdown you can open in Obsidian, edit, or delete.
Repo: github.com/rowboatlabs/ro…
TL;DR: Karpathy's LLM Wiki compiles research into a persistent Markdown wiki. It works well for concepts and their relationships but breaks down for real work where the context evolves over time. Rowboat builds a knowledge graph instead of a wiki, extracts typed entities with backlinks, and runs background agents that act on that accumulated context. Open-source, local-first, bring your own model. Karpathy nailed the foundation. The next layer is here.
[image attached]
Andrej Karpathy@karpathy

LLM Knowledge Bases
Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:
Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.
IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).
Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.
Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.
Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.
Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.
Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.
TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

48 replies · 136 reposts · 1.2K likes · 213.4K views
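To make the "knowledge graph of typed entities" idea above concrete, here is a rough TypeScript sketch; the node types and fields are hypothetical, not Rowboat's actual schema:

```ts
// Hypothetical node types for a work-context knowledge graph.
// Each node would live as its own Markdown file with backlinks;
// these types only describe the structured fields such a file might carry.
interface BaseNode {
  id: string;          // e.g. a wikilink target like "decisions/switch-database"
  updatedAt: string;   // ISO timestamp, so briefings can ask "what changed overnight?"
  backlinks: string[]; // ids of related people, projects, or conversations
}

interface Person extends BaseNode {
  kind: "person";
  name: string;
}

interface Decision extends BaseNode {
  kind: "decision";
  summary: string;
  decidedBy: string; // Person id
}

interface Commitment extends BaseNode {
  kind: "commitment";
  description: string;
  owner: string;   // who promised it
  dueBy?: string;  // deadline, if one was stated
  status: "open" | "done" | "slipped";
}

type GraphNode = Person | Decision | Commitment;

// A "daily briefing" agent then becomes a plain query over typed nodes,
// instead of a re-summarization of whole wiki pages.
const briefing = (nodes: GraphNode[], since: string): GraphNode[] =>
  nodes.filter((n) => n.updatedAt >= since);
```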
caso
caso@casoxbt·
@itsmide_eth @karpathy I like the key points - otherwise if you throw everything there it just becomes a filtered version of the internet
1 reply · 0 reposts · 1 like · 37 views
mide
mide@itsmide_eth·
second brain update #2 🧠
when @karpathy dropped his LLM knowledge bases, i was like "wait, i'm literally doing this"
his LLM wiki approach is brilliant: raw sources go in, the LLM compiles and maintains the entire wiki. you never touch it.
but here's the thing - i don't know if it's what i really want
i'm used to manually touching and adding content to my second brain, and i started using the PARA (Projects, Areas, Resources, Archive) method, so every note has a place based on how actionable it is. plus I added Maps of Content to link everything together.
i'm still figuring out whether having everything organized by AI actually matches my needs. the risk of errors compounding scares me a little bit. so i'm trying a hybrid version for now.
i actually took karpathy's method and plugged it inside my Resources folder. each topic has a raw/ subfolder for source material, and the wiki pages sit right next to it
so i'm trying to get the best of both:
📂 PARA for the human layer - projects, areas, daily notes
🤖 LLM wiki for the knowledge layer - inside Resources (using my old notes as raw)
the difference? karpathy's wiki is fully LLM-owned. mine is human-first with AI augmentation
i'll test both side by side and decide which is best for me. for now, the key points are:
- if you want to manually curate your knowledge, connect it to your actual projects and goals, and keep the human in the loop - PARA gives you that structure
- not everything needs to be a wiki. sometimes you need a second brain, not just a knowledge base
Andrej Karpathy@karpathy

(quoted post: "LLM Knowledge Bases", reproduced in full above)

1 reply · 1 repost · 4 likes · 248 views
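A minimal sketch of the raw/-to-wiki "compile" step described in the quoted post, assuming a Node.js script; askLLM is a hypothetical stand-in for whatever model or CLI you actually use, and the paths and prompt are made up:

```ts
import { readdir, readFile, writeFile } from "node:fs/promises";
import { join } from "node:path";

// Hypothetical helper: wire this up to your LLM of choice
// (a local model via Ollama, a CLI agent, or a hosted API).
async function askLLM(prompt: string): Promise<string> {
  throw new Error(`not implemented; prompt was ${prompt.length} chars`);
}

// One incremental pass: for each source note in raw/, ask the model
// to write or refresh the matching wiki page that sits next to it.
async function compileWiki(vault: string): Promise<void> {
  const rawDir = join(vault, "raw");
  const wikiDir = join(vault, "wiki");

  for (const file of await readdir(rawDir)) {
    if (!file.endsWith(".md")) continue;
    const source = await readFile(join(rawDir, file), "utf8");
    const page = await askLLM(
      "Summarize this source into an encyclopedia-style wiki article " +
        "and add [[backlinks]] to related concepts:\n\n" +
        source,
    );
    await writeFile(join(wikiDir, file), page);
  }
}
```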
mide
mide@itsmide_eth·
@casoxbt FIRST YEAR, FIRST YEAR, FIRST YEAR
1 reply · 0 reposts · 1 like · 27 views
caso
caso@casoxbt·
this is literally the best thing I saw on this app this week. mogwarts should be a full series x.com/DripwartsSchoo…
3 replies · 0 reposts · 6 likes · 202 views
mide
mide@itsmide_eth·
i'm obsessed with second brains 🧠 always had one manually on Notion, but I wanted to migrate to Obsidian for the graph view and community plugins. now with AI, the migration is painless. Obsidian uses plain markdown, so AI tools work perfectly with it and open up unlimited possibilities. i finally found the time to take a step back, and I did it. i'll share the updates here in the next days, small spoiler below 👀
[image attached]
0 replies · 0 reposts · 4 likes · 75 views
mide retweeted
caso
caso@casoxbt·
ok yeah so I like vibe coding and I think all developers are fucked and also designers and also security engineers and also gaming studios and also yeah so I like vibe coding what do you wanna order
[image attached]
1 reply · 2 reposts · 5 likes · 261 views
mide retweeted
caso
caso@casoxbt·
you're wasting 40% of your LLM tokens every time you copy-paste JSON 💀🥀
braces, quotes, repeated keys - none of it means anything to the model
introducing pastoon
it silently converts every JSON you copy to TOON format before you paste
npm i -g pastoon
34 replies · 5 reposts · 63 likes · 2K views
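As a rough illustration of why this saves tokens (not the actual TOON spec and not pastoon's code, just a sketch of the idea): a uniform array of objects can be written with the keys stated once instead of repeated per element.

```ts
// Illustrative only: collapse a uniform array of flat objects into one
// field header plus one comma-separated row per item, instead of repeating
// keys, braces and quotes for every element as JSON does.
type FlatRecord = Record<string, string | number | boolean | null>;

function toCompactTable(name: string, rows: FlatRecord[]): string {
  if (rows.length === 0) return `${name}[0]:`;
  const fields = Object.keys(rows[0]);
  const header = `${name}[${rows.length}]{${fields.join(",")}}:`;
  const body = rows
    .map((r) => fields.map((f) => String(r[f])).join(","))
    .join("\n  ");
  return `${header}\n  ${body}`;
}

// Example: the same three records as JSON repeat "id", "name" and "active"
// for every element; here the keys appear only once in the header line.
console.log(
  toCompactTable("users", [
    { id: 1, name: "ada", active: true },
    { id: 2, name: "bob", active: false },
    { id: 3, name: "eve", active: true },
  ]),
);
```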