Sam Edwards

1.6K posts


@SamEdwardsIV

Exploring Bitcoin, Energy and AI | Sr Mgr, Growth Mktg @MARA | Personal profile - all opinions are strictly my own and not those of my employer.

Joined November 2010
1.1K Following · 690 Followers
Sam Edwards
Sam Edwards@SamEdwardsIV·
the new almanac
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries, so my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data; e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
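The "small and naive search engine over the wiki" mentioned above could look something like the following sketch: a keyword scorer over a directory of .md files, callable from the CLI so an agent can use it as a tool. This is a hypothetical illustration, not the author's actual tool; the `wiki/` layout and the term-frequency scoring are assumptions.

```python
#!/usr/bin/env python3
"""Naive keyword search over a directory of .md wiki pages.

Hypothetical sketch of the kind of "small and naive search engine"
described in the post -- not the author's actual implementation.
Scores each page by summed term frequency of the query words,
normalized by page length, and prints the top hits.
"""
import re
import sys
from collections import Counter
from pathlib import Path


def search(wiki_dir: str, query: str, top_k: int = 5) -> list[tuple[float, str]]:
    terms = [t.lower() for t in re.findall(r"\w+", query)]
    results = []
    for page in Path(wiki_dir).rglob("*.md"):
        words = re.findall(r"\w+", page.read_text(encoding="utf-8").lower())
        if not words:
            continue
        counts = Counter(words)
        # Score: how often the query terms appear, relative to page length.
        score = sum(counts[t] for t in terms) / len(words)
        if score > 0:
            results.append((score, str(page)))
    return sorted(results, reverse=True)[:top_k]


if __name__ == "__main__" and len(sys.argv) > 2:
    # Usage: search.py <wiki_dir> <query words...>
    for score, path in search(sys.argv[1], " ".join(sys.argv[2:])):
        print(f"{score:.4f}  {path}")
```

Handing a tool like this to the LLM via CLI (rather than only a web UI) matches the workflow described: the agent runs searches itself while answering larger queries against the wiki.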

0 replies · 0 reposts · 0 likes · 39 views
Sam Edwards reposted
Tuki
Tuki@TukiFromKL·
🚨 do you understand what andrej karpathy just quietly published.. karpathy.. founding team at openai, former head of AI at tesla.. just said something that breaks the entire software industry in one paragraph.. in the LLM agent era.. there's less need to share specific code or apps.. instead you share the IDEA.. and the other person's agent customises and builds it for their specific needs.. let me show you why this is the most important thing posted online today.. the entire software industry is built on one assumption: building software is hard.. that's why you pay $49/month for notion.. $99/month for salesforce.. $299/month for whatever SaaS is sitting in your company's tab right now.. the scarcity of building = the value of the product.. it's been that way since 1995.. karpathy invented "vibe coding" in 2025.. the idea that you stop writing code and start describing what you want.. tools like cursor, claude code, and openclaw turned that into reality.. you talk to your computer.. it builds.. it ships.. it runs your workflows while you sleep.. and now he's saying even THAT is the old way.. now you don't share the app.. you share the IDEA FILE.. a document describing what you want to build and why.. and every person's AI agent reads it.. builds their own custom version.. tuned to their exact needs.. for free.. in minutes.. the scarcity of building just hit zero. every SaaS company built for "normal users" is now competing against a blank text file and an agent with 4 hours to spare.. the winners of the next decade won't be the best builders.. they'll be the best thinkers.. the people who know what to build, why it matters, and how it should feel.. that's how paradigm shifts actually arrive.
Andrej Karpathy@karpathy

[Quoted post: "LLM Knowledge Bases" — verbatim duplicate of the full text above]

90 replies · 207 reposts · 1.9K likes · 526.4K views
Sam Edwards reposted
Robert Samuels
Robert Samuels@RobSamuelsIR·
Got back yesterday from a week of investor meetings across Europe. Still thinking back on the conversations, lots of smart questions. A few themes kept coming up worth sharing publicly @MARA. 🧵
19 replies · 26 reposts · 109 likes · 17.8K views
Sam Edwards reposted
Anduro
Anduro@andurobtc·
Google published a 57 page paper on quantum vulnerabilities in cryptocurrencies. One line from it is getting most of the attention: 👉 A first gen CRQC could solve secp256k1 in 9 minutes on average. That is a meaningful data point, but it is also only one part of the picture.
2 replies · 9 reposts · 41 likes · 15.6K views
Sam Edwards reposted
Troy Cross
Troy Cross@thetrocro·
The mechanism that has kept so many nocoiners poor over the years is doubling down to protect prior decisions, and especially, public proclamations. Same exact mechanism in play when people double down, blowing off all quantum progress as FUD. Drop the ego. Track reality.
5 replies · 5 reposts · 37 likes · 2.9K views
Sam Edwards reposted
Salman Khan
Salman Khan@theRealSalKhan·
Balance sheet discipline. @MARA
17 replies · 12 reposts · 124 likes · 27.1K views
Sam Edwards reposted
Robert Samuels
Robert Samuels@RobSamuelsIR·
Today, @MARA announced the repurchase of approximately $1 billion of its convertible notes at an average ~9% discount to par. This strategic transaction reduces outstanding convertible debt by ~30%, captures approximately $88 million in value, and eliminates future dilution associated with the retired notes, further strengthening our balance sheet. The repurchase was funded through BTC sales, reflecting a disciplined approach to capital allocation, and did not utilize the Company's ATM program.
MARA@MARA

Today, MARA announced the repurchase of ~$1B in convertible notes at a ~9% discount to par value. ~30% convertible debt reduction. ~$88M in value captured. Zero future dilution exposure on the retired notes. Funded through BTC sales, not the ATM.
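The figures in the post above hang together arithmetically: buying back roughly $1B of face value at a ~9% discount to par costs about $910M and captures about $90M, close to the ~$88M stated. A rough sanity-check sketch (the round numbers below are assumptions; exact face values and prices are not in the thread):

```python
# Rough sanity check of the buyback math quoted above.
# Round numbers are assumed, not the actual transaction terms.
face_value = 1_000_000_000   # ~$1B of convertible notes retired
discount = 0.09              # ~9% average discount to par

cash_paid = face_value * (1 - discount)        # ~$910M spent
value_captured = face_value - cash_paid        # ~$90M, vs ~$88M reported

print(f"cash paid:      ${cash_paid:,.0f}")
print(f"value captured: ${value_captured:,.0f}")
```

The small gap between ~$90M and the reported ~$88M is consistent with the figures being approximations ("~$1B", "~9%") rather than exact amounts.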

26 replies · 26 reposts · 156 likes · 24.9K views
Sam Edwards reposted
MARA
MARA@MARA·
Today, MARA announced the repurchase of ~$1B in convertible notes at a ~9% discount to par value. ~30% convertible debt reduction. ~$88M in value captured. Zero future dilution exposure on the retired notes. Funded through BTC sales, not the ATM.
195 replies · 79 reposts · 671 likes · 369.8K views
Sam Edwards reposted
MARA
MARA@MARA·
Exaion operates secure private cloud environments that power France’s most critical energy infrastructure. MARA CEO @fgthiel on what this acquisition unlocks:
17 replies · 22 reposts · 127 likes · 11.4K views
Sam Edwards
Sam Edwards@SamEdwardsIV·
I'm starting a new meetup. For Bitcoiners. For AI builders. And those diving into this new world of emerging tech. Tracking interest for now, then we'll see where it goes!
Jamestown Frontier Society@JamestownFS

Things are moving fast. We'd rather talk about it IRL than bookmark another thread. The Jamestown Frontier Society: a meetup in Williamsburg, VA for people already in the thick of Bitcoin, AI, and frontier tech. Tracking interest now. jamestownfrontiersociety.dev

2 replies · 3 reposts · 18 likes · 2K views
Sam Edwards reposted
Salman Khan
Salman Khan@theRealSalKhan·
Had the privilege of delivering keynote at #NAFES2026 in Tampa this week to a room of senior executives from some of the top companies in the world. The most dangerous line item on your P&L is typically not visible through KPIs. It is the one you do not control. At @MARA we went from renting compute capacity to owning our infrastructure, our energy, and now positioning for AI/HPC. Four phases. One thesis. Identify the constraint. Chart the path. Own it. Stop just auditing the past. Start engineering the future. #DigitalInfrastructure #StrategicCFO
6 replies · 13 reposts · 77 likes · 8.5K views
Sam Edwards reposted
MARA
MARA@MARA·
Want to know more about MARA's initiatives with Exaion and Starwood Digital Ventures? Drop your questions in the comments — your question could be featured live in our upcoming @XSpaces: x.com/i/spaces/1nGeL…
17 replies · 15 reposts · 76 likes · 13.8K views
Sam Edwards
Sam Edwards@SamEdwardsIV·
send over your questions and set your reminders!
MARA@MARA

Live Executive Fireside | Thursday (3/12) at 1 PM ET Join us on X Spaces for a live conversation hosted by @RobSamuelsIR and @OGAdvisors. MARA CEO @fgthiel, CFO @theRealSalKhan, and Chief Growth & Strategy Officer Duncan Dickerson will dig into MARA's latest initiatives. Have a question for the team? Drop it below — yours could be featured live in the fireside. Set your reminder 👇 x.com/i/spaces/1nGeL…

1 reply · 0 reposts · 14 likes · 535 views
Sam Edwards reposted
Bitcoin Policy Institute
Bitcoin Policy Institute@bitcoinpolicy·
🇺🇸 New Study from BPI: Frontier AI agents prefer bitcoin over stablecoins and other forms of money. BPI tested 36 models over 9000+ conversations, and the AIs overwhelmingly chose to use Bitcoin for their economic activity.
104 replies · 361 reposts · 1.4K likes · 487.2K views