

TEACHMEDEFI
@TEACHMEDEFI
🎙️ Weekly interviews and insights from founders & experts on the latest trends in tech, AI, crypto & web3




490 open browser tabs. Most people just forget about them. I turned them into a searchable knowledge base in 20 minutes.

@itsolelehmann wrote about @karpathy's viral LLM knowledge base post today and ended it with "someone please build this."

Karpathy's idea was to take everything you're interested in (articles, papers, tweets, videos, etc.), dump it into one folder, point your AI at it, and let it read, organize, and index everything. Then you can query your entire personal library with natural language. Ask it to connect ideas across sources, surface things you forgot you saved, or summarize everything you've got on a topic. And the AI maintains it all as you add new stuff.

I had 490 open Safari tabs. Basically an unorganized dump of links to read later. Which I never end up reading later anyway :)

So I sent Karpathy's post to my AI agent, and after some back & forth it actually built this in one short session.

I pasted all 490 URLs. I asked Pulse (my AI agent) how to do this easily, and it showed me how to bulk copy links in Safari on your phone with 2 clicks.

Out of the 490, it filtered to 251 worth saving: 206 X posts, 21 articles, 15 tools, 6 GitHub repos, 3 research papers. The rest was skipped automatically. All categorized, frontmatter written, saved to the right folder. Done in under 20 minutes!

It pulled metadata on every X link. We ran into a few bugs, which got fixed after some back and forth: most of the "tweets" weren't tweets. They were full X articles sitting inside empty post wrappers, invisible unless you know where to look in the API response. 158 of the 206 posts turned out to be complete long-form articles that were missed. It went back and re-extracted them all.

What the system does now (image attached):

I drop any URL into the chat. It gets filed, tagged, indexed. Zero effort on my end. The ongoing maintenance is basically free.

I query it naturally. "That David Deutsch article about raising children" -> finds it.
"Connect new ideas and frequent tips regarding SEO across my knowledge base" -> synthesizes it across 30+ sources. "Something about subscription pricing in a newsletter I read a while back" -> semantic search gets it from a half-memory.

Now and then, it surfaces relevant material proactively without me asking.

The attached image was also generated by the agent itself, showing how the full flow works.

The rabbit hole honestly doesn't end. You can do so much now, simply with an idea + giving an AI agent the right tools + iterating on the system together. It's wild. We've never had such high leverage on human creativity.

As usual, technology keeps shifting the arena from the perspiration phase to the inspiration phase. We don't have to do the arduous tasks. "1% inspiration and 99% perspiration" is a misleading idea of how progress happens: the perspiration phase can be automated. I think creativity & understanding will increasingly become differentiators. And it has never been easier to build both: by using AI.

(Sidenote: I already have a personal AI agent running 24/7 on a server, with access to Obsidian notes and QMD for semantic search. So the plumbing for this use case was already in place. But this setup is too much to get into for this post… will get into it another time!)
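The ingest step described above (drop a URL, classify it, write frontmatter, file it in the right folder) can be sketched roughly like this. This is a minimal illustration, not the author's actual setup: the category patterns, folder names, and functions are all hypothetical stand-ins for what the agent's LLM-based triage does.

```python
import re
from datetime import date
from pathlib import Path

# Hypothetical URL-pattern rules -- a crude stand-in for the agent's triage.
CATEGORY_PATTERNS = {
    "x-posts": r"(x\.com|twitter\.com)/\w+/status/",
    "github-repos": r"github\.com/[\w.-]+/[\w.-]+",
    "research-papers": r"arxiv\.org|doi\.org",
}

def classify(url: str) -> str:
    """Return a folder name for the URL, defaulting to 'articles'."""
    for category, pattern in CATEGORY_PATTERNS.items():
        if re.search(pattern, url):
            return category
    return "articles"

def file_url(url: str, title: str, tags: list[str], vault: Path) -> Path:
    """Write a markdown note with YAML frontmatter into the right folder."""
    folder = vault / classify(url)
    folder.mkdir(parents=True, exist_ok=True)
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    note = folder / f"{slug}.md"
    frontmatter = (
        "---\n"
        f"title: {title}\n"
        f"url: {url}\n"
        f"tags: [{', '.join(tags)}]\n"
        f"saved: {date.today().isoformat()}\n"
        "---\n"
    )
    note.write_text(frontmatter + f"\n[{title}]({url})\n")
    return note
```

In the real setup an LLM decides the category, title, and tags from the fetched page content; regex rules here just keep the sketch self-contained.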
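The "half-memory" queries work because notes are embedded as vectors and ranked by similarity to the query. A toy sketch of that ranking step, using a deliberately simple bag-of-words stand-in for a real embedding model (the author's actual setup uses QMD for this; nothing below is that tool's API):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' -- placeholder for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, notes: dict[str, str], top_k: int = 3) -> list[str]:
    """Rank stored note titles by similarity to a natural-language query."""
    q = embed(query)
    ranked = sorted(notes, key=lambda title: cosine(q, embed(notes[title])), reverse=True)
    return ranked[:top_k]
```

With real embeddings, a vague query like "subscription pricing in a newsletter" lands near the right note even with no exact word overlap; the bag-of-words version only matches shared words, which is the part a proper model improves on.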






Braille becomes Braille Studio. To our past and current partners, thank you. To our future collaborators, we can’t wait to work with you. Onwards to simplifying how people interact with technology. Check out our new website: braille.wtf

Without privacy, AI becomes surveillance.

Throwback to November, when I got to join a panel on AI, Privacy & Web3 in London at the @MidnightNtwrk Summit. An amazing venue to be discussing tech: The Old Royal Naval College is a UNESCO World Heritage Site and a historic technology & science hub, home of the Prime Meridian and Greenwich Mean Time (GMT)!

I shared some thoughts on how Web3 can help AI go beyond blind trust. AI is obviously super useful, and increasingly so in many contexts. But it's also a black box. We should be able to rely on credible guarantees that our data is treated confidentially and that we are receiving the exact service we expect.

Legitimate questions that come up for any person or company using AI: What happens to my data? Is it kept private? Who says so? How can I verify that? Is the output actually coming from the AI model I expect to be using? …or from an AI model at all? How do I know for sure?

This is what we are working on at @OODA_AI_

Full video linked in comments. Thanks for having us @MidnightNtwrk




"Privacy and compliance aren't opposites. Architecture decides the outcome." - Howard Wu, Founder, @AleoHQ


The person who “has everything” doesn’t have this: 🍾 A smart Champagne bottle 🔗 On-chain provenance 🧠 Bacchus AI built in 🎁 A shot at a $50k Alpha Cellar This week only. news.dvinlabs.com/the-smartest-g…




