Mike Bardi
@MikeBardi
7.4K posts
CSM @Chainalysis by day, #X1 and #Pulsechain by night. Always learning. Opinions my own.
Remote · Joined July 2009
936 Following · 3.2K Followers
Mike Bardi retweeted
SHRED @SHREDfi
Quick summary of what's covered:
- We are continuing to postpone our launch until market conditions improve.
- The reason is persistently compressed funding rates.
- Many DeFi yield protocols are losing TVL or getting hacked. Not an environment we want to launch into.
- The product is ready to go. We're just testing it further and using AI tools to audit.
- Large institutional investors are still interested in what we're doing, but agree waiting is smart.
- We are keeping our burn rate low to conserve our treasury.
1 reply · 6 reposts · 10 likes · 731 views
Mike Bardi @MikeBardi
@karpathy Hey @karpathy, literally just launched this! Check out SnipSnip.ai (free Chrome extension):
1. Snip from any website (uses AI to interpret the image)
2. Share it with whomever
3. Organize in folders
You can hook it up to Claude/ChatGPT (via MCP). Already built!
0 replies · 0 reposts · 1 like · 82 views
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.8K replies · 7K reposts · 58.2K likes · 20.8M views
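Karpathy's "Data ingest" step (compile the documents in raw/ into a wiki of .md files with summaries, backlinks, and an index) can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not his actual tooling: the `summarize` function is a placeholder for the LLM call, and the `raw/` and `wiki/` layout simply follows his description.

```python
from pathlib import Path

def summarize(text: str) -> str:
    """Placeholder for the LLM summarization call. A real pipeline would
    prompt a model here; this stub returns the document's first line so
    the sketch stays self-contained and runnable."""
    stripped = text.strip()
    return stripped.splitlines()[0] if stripped else "(empty document)"

def compile_wiki(raw_dir: str, wiki_dir: str) -> Path:
    """'Compile' every .md document in raw/ into a wiki page carrying a
    summary and a backlink to its source, then write a top-level index.md
    listing all pages."""
    raw, wiki = Path(raw_dir), Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    index_lines = ["# Index", ""]
    for src in sorted(raw.glob("*.md")):
        summary = summarize(src.read_text(encoding="utf-8"))
        page = wiki / src.name
        # Each wiki page records the summary plus a backlink into raw/.
        page.write_text(
            f"# {src.stem}\n\n{summary}\n\n"
            f"Source: [{src.name}](../{raw.name}/{src.name})\n",
            encoding="utf-8",
        )
        index_lines.append(f"- [{src.stem}]({src.name}): {summary}")
    index = wiki / "index.md"
    index.write_text("\n".join(index_lines) + "\n", encoding="utf-8")
    return index
```

Re-run after each new clip, this keeps index.md and the per-document summaries current, which is what lets the agent later answer questions without a separate RAG stack.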
Mike Bardi @MikeBardi
@itsolelehmann Hey Ole, literally just launched this! Check out SnipSnip.ai (free Chrome extension):
1. Snip from any website (uses AI to interpret the image)
2. Share it with whomever
3. Organize in folders
You can hook it up to Claude/ChatGPT (via MCP). Already built!
0 replies · 0 reposts · 0 likes · 71 views
Ole Lehmann @itsolelehmann
karpathy just casually described the future of ai and most people scrolled right past it: he's been building what he calls "llm knowledge bases."

here's what that means in plain english: you take everything you're interested in. articles, research papers, datasets, images, etc. and you dump it all into one folder. then you point your ai at the folder and say "read all of this, organize it, and remember it."

the ai reads through every single source. writes summaries, groups related ideas together, links concepts across different articles. basically builds a personal library that's fully organized and searchable.

and it maintains the whole thing for you. when you add something new, the ai reads it, figures out how it connects to everything already in the library, and updates automatically. karpathy said he rarely touches it himself.

once the library gets big enough (~100 articles, ~400k words), you can start asking it complex questions and get answers pulled from across your entire collection:
> "what are the common themes across these 30 papers"
> "what did i save six months ago that connects to this new idea"
> "summarize everything i have on topic x and tell me what's missing"

and every answer it gives gets filed back into the library. so the system gets smarter every single time you use it. the memory grows from both sides: what you save AND what you ask.

now think about your own life for a second. you probably have:
> thousands of twitter bookmarks you'll never reopen
> hundreds of saved articles from the last year
> podcasts where someone said something brilliant and you can't remember what it was or which episode

all dead knowledge. you consumed it once and it disappeared.

now imagine all of it lives in one system: organized, connected, and queryable. you could ask "what are the best pricing frameworks i've come across this year" and get an answer that pulls from:
1. a podcast you listened to in january
2. a twitter thread you bookmarked in march
3. a blog post you forgot you even read

the ai connects dots across formats, across months, across topics. because it absorbed everything and has photographic memory of all of it. that's the dream. and karpathy built it.

the problem: right now this requires obsidian (a note-taking app built around linked notes), command line tools, custom scripts, and browser extensions just to wire it all together. you need to be quite technical. karpathy even said it himself: "i think there is room here for an incredible new product instead of a hacky collection of scripts"

i think whoever packages this for normal people is sitting on something massive. one app that syncs with the tools you already use: your bookmarks, your read-later app, your podcast app, your saved threads. it pulls everything in automatically, the ai organizes and connects it over time, and you can ask questions across your entire personal library whenever you want. you never manually upload anything. it just learns in the background.

someone please build this
Andrej Karpathy @karpathy
[quoted tweet: the "LLM Knowledge Bases" post shown in full above]

137 replies · 141 reposts · 1.8K likes · 309.4K views
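The "ask questions across your entire collection" step depends on the agent being able to find relevant pages. Karpathy mentions vibe-coding a "small and naive search engine over the wiki"; a minimal sketch of that idea (plain term-count scoring, not his actual code) could look like:

```python
from pathlib import Path

def search_wiki(wiki_dir: str, query: str, top_k: int = 5) -> list[str]:
    """Naive keyword search over a directory of .md wiki pages: score each
    page by how often the query terms occur, return the best matches.
    No embeddings or ranking model -- just term counts, which is often
    enough at the ~100-article scale described above."""
    terms = [t.lower() for t in query.split()]
    scored = []
    for page in Path(wiki_dir).rglob("*.md"):
        text = page.read_text(encoding="utf-8").lower()
        score = sum(text.count(term) for term in terms)
        if score > 0:
            scored.append((score, page.name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]
```

Exposed as a CLI, this is the kind of tool an LLM agent can call on larger queries instead of reading every file into its context window.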
Mike Bardi @MikeBardi
Check out my bot! NKN is up 2.5% within the last few minutes and still accelerating. 🚀
How this bot works:
1. Bot triggers when price moves >3% within minutes
2. Screenshots the chart with snipsnip.ai
3. Posts a Tweet
snipsnip.ai/share?c=4ae4xr…
0 replies · 0 reposts · 0 likes · 44 views
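The three-step bot above (trigger on a >3% move within minutes, screenshot, post) is easy to sketch. This is a guess at the trigger logic, not the actual bot: the snipsnip.ai screenshot call and the real posting step are stubbed out, and both function names and the example URL are made up for illustration.

```python
def should_trigger(prices: list[float], threshold_pct: float = 3.0) -> bool:
    """Step 1: fire when the latest price has moved more than threshold_pct
    versus the start of the recent window. `prices` holds the last few
    minutes of prices, oldest first."""
    if len(prices) < 2 or prices[0] == 0:
        return False
    move_pct = (prices[-1] - prices[0]) / prices[0] * 100.0
    return abs(move_pct) > threshold_pct

def format_post(symbol: str, move_pct: float, share_url: str) -> str:
    """Step 3: compose the tweet text. Screenshotting the chart (step 2)
    and actually posting to X are outside this sketch."""
    return (f"Check out my bot! {symbol} is up {move_pct:.1f}% within the "
            f"last few minutes and still accelerating. 🚀 {share_url}")
```

For example, `should_trigger([100.0, 104.9])` fires (a 4.9% move) while `should_trigger([100.0, 102.0])` does not.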
Mike Bardi @MikeBardi
Check out my bot! FAI is up 2.0% within the last few minutes and still accelerating. 🚀
How this bot works:
1. Bot triggers when price moves >3% within minutes
2. Screenshots the chart with snipsnip.ai
3. Posts a Tweet
snipsnip.ai/share?c=ucueem…
0 replies · 0 reposts · 0 likes · 52 views
Mike Bardi @MikeBardi
Check out my bot! KTA is up 2.0% within the last few minutes and still accelerating. 🚀
How this bot works:
1. Bot triggers when price moves >3% within minutes
2. Screenshots the chart with snipsnip.ai
3. Posts a Tweet
snipsnip.ai/share?c=ngxojm…
0 replies · 0 reposts · 0 likes · 47 views
Mike Bardi @MikeBardi
Check out my bot! AMP is up 2.8% within the last few minutes and still accelerating. 🚀
How this bot works:
1. Bot triggers when price moves >3% within minutes
2. Screenshots the chart with snipsnip.ai
3. Posts a Tweet
snipsnip.ai/share?c=kxhrtv…
0 replies · 0 reposts · 0 likes · 37 views
Mike Bardi @MikeBardi
Check out my bot! ABT is up 4.9% within the last few minutes and still accelerating. 🚀
How this bot works:
1. Bot triggers when price moves >3% within minutes
2. Screenshots the chart with snipsnip.ai
3. Posts a Tweet
snipsnip.ai/share?c=lff5vx…
0 replies · 0 reposts · 0 likes · 31 views
Mike Bardi @MikeBardi
Check out my bot! NOM is up 2.9% within the last few minutes and still accelerating. 🚀
How this bot works:
1. Bot triggers when price moves >3% within minutes
2. Screenshots the chart with snipsnip.ai
3. Posts a Tweet
snipsnip.ai/share?c=aqan1t…
0 replies · 0 reposts · 0 likes · 32 views
Mike Bardi @MikeBardi
Check out my bot! KERNEL is up 2.4% within the last few minutes and still accelerating. 🚀
How this bot works:
1. Bot triggers when price moves >3% within minutes
2. Screenshots the chart with snipsnip.ai
3. Posts a Tweet
snipsnip.ai/share?c=slaed9…
0 replies · 0 reposts · 0 likes · 60 views
Mike Bardi retweeted
SHRED @SHREDfi
When we launch, we are going to set deposit caps to help us scale the protocol systematically. Deposit in the first 10M to receive 10x points. This is the highest multiplier we will ever offer.
9 replies · 7 reposts · 19 likes · 832 views
Mike Bardi retweeted
SHRED @SHREDfi
The waitlist is back open for referrals, for a limited time only! You must already be on the waitlist to invite friends.
What's in it for you:
- Refer 3 friends who join the waitlist & unlock a 2x points multiplier on your balance.
- If the waitlist crosses 25,000 total sign-ups, every original waitlister gets a 3x points multiplier.
All three referral codes must be redeemed to qualify for the 2x multiplier.
Already on the waitlist? Here's how to refer:
- Go to referral.shred.fi
- Enter the exact X username and wallet address you used to sign up.
- Share your 3 referral codes with friends (post them on X, drop them in Telegram, DM them directly).
Got a referral code but not on the waitlist yet?
- Go to waitlist.shred.fi
- Enter your referral code.
- Complete the form and submit. You're in!
⏳ The waitlist closes in ~5 days.
74 replies · 15 reposts · 97 likes · 4.6K views
Mike Bardi retweeted
Mark Bardi @MarkBardi
Come join @dohko_01 and me for another insightful AMA to discuss tokenized stablecoin yield @SHREDfi. Starts later today, 18/2, at 11am UTC in the Whale Sailor Telegram chat.
WHALESAILORS @whalesailors
🎙️ YOU'RE INVITED 🎙️ @SHREDfi x @WHALESAILORs
Shredfi is a high-yield savings protocol built on Hyperliquid. Users can earn sustainable, real yield on stablecoins through delta-neutral strategies.
📍 Venue: t.me/whalesailors?l…
📅 Date: Feb 18th, 2025
⏰ Time: 11:00 AM UTC
🏆 Prizes: $200
Stay active, ask questions, give feedback, and share ideas.
Stay Connected:
12 replies · 3 reposts · 27 likes · 1.4K views
Mike Bardi retweeted
Exodus @exodus
Like coffee? Attending ETH Denver? This post is for you. Exodus is brewing something special in Denver. A taste of freedom. We're taking over a renowned cafe to put frictionless payments into action. And yes, you'll get to use it too 👀 If you're in the city, don't miss an exclusive preview of Exodus Pay along with games and giveaways. Details below 👇
12 replies · 20 reposts · 83 likes · 8.9K views
Mike Bardi retweeted
MegaETH @megaeth
Introducing The Rabbithole — your MegaETH ecosystem frontend. Mainnet is now fully open to the public.
Rabbithole is how users:
→ Discover live and upcoming apps
→ Bridge and swap assets
→ Get notified of ecosystem events
→ More
Live now. Features and link below.
1K replies · 859 reposts · 3.2K likes · 1M views
Mike Bardi retweeted
Lord VibeZ🤍 @TheLordVibeZ
You are totally wrong if your stablecoins like USDC and USDT are not working for you. Say bye-bye to your stablecoins sitting idle. I'll be having an AMA space on Friday 10AM UTC with @SHREDfi, co-hosted by @Builderz_Space. Space link ~ x.com/i/spaces/1owxw… Set your reminders asap!
27 replies · 12 reposts · 131 likes · 2.3K views