0xHM (e/acc)

2.3K posts


@panaikk

Building Memory Layer | Muay Thai @EthPadThai | Add meaning to tech | @AllianceDAO ALL7 | Flag Football Athlete 🏈 | prev: Found @EdgeProtocol | @AlphaFinanceLab

Bangkok, Thailand · Joined February 2020
4.1K Following · 679 Followers
0xHM (e/acc) retweeted
How To AI
How To AI@HowToAI_·
Yann LeCun was right the entire time. And generative AI might be a dead end.

For the last three years, the entire industry has been obsessed with building bigger LLMs. Trillions of parameters. Billions in compute. The theory was simple: if you make the model big enough, it will eventually understand how the world works.

Yann LeCun said that was stupid. He argued that generative AI is fundamentally inefficient. When an AI predicts the next word, or generates the next pixel, it wastes massive amounts of compute on surface-level details. It memorizes patterns instead of learning the actual physics of reality.

He proposed a different path: JEPA (Joint-Embedding Predictive Architecture). Instead of forcing the AI to paint the world pixel by pixel, JEPA forces it to predict abstract concepts. It predicts what happens next in a compressed "thought space."

But for years, JEPA had a fatal flaw. It suffered from "representation collapse." Because the AI was allowed to simplify reality, it would cheat. It would simplify everything so much that a dog, a car, and a human all looked identical. It learned nothing. To fix it, engineers had to use insanely complex hacks, frozen encoders, and massive compute overheads.

Until today. Researchers just dropped a paper called "LeWorldModel" (LeWM). They completely solved the collapse problem. They replaced the complex engineering hacks with a single, elegant mathematical regularizer. It forces the AI's internal "thoughts" into a perfect Gaussian distribution. The AI can no longer cheat. It is forced to understand the physical structure of reality to make its predictions.

The results completely rewrite the economics of AI. LeWM didn't need a massive, centralized supercomputer. It has just 15 million parameters. It trains on a single, standard GPU in a few hours. Yet it plans 48x faster than massive foundation world models. It intrinsically understands physics. It instantly detects impossible events.

We spent billions trying to force massive server farms to memorize the internet. Now, a tiny model running locally on a single graphics card is actually learning how the real world works.
How To AI tweet media
371 replies · 1.7K reposts · 10.1K likes · 886.7K views
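As a rough illustration of the approach the post describes (predicting in a compressed latent space and regularizing the latents toward a Gaussian so the encoder cannot collapse), here is a minimal sketch in PyTorch. It is not the LeWM paper's actual formulation; the network sizes, the action-conditioned predictor, and the specific regularizer are illustrative assumptions.

```python
# Minimal JEPA-style latent prediction with a Gaussian regularizer (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, obs_dim=64, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    """Predicts the next latent state from the current latent and an action."""
    def __init__(self, latent_dim=32, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + action_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))
    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=-1))

def gaussian_regularizer(z):
    # Push the batch of latents toward zero mean and identity covariance,
    # so the encoder cannot map every input to the same point.
    mu = z.mean(dim=0)
    zc = z - mu
    cov = (zc.T @ zc) / (z.shape[0] - 1)
    eye = torch.eye(z.shape[1], device=z.device)
    return mu.pow(2).mean() + (cov - eye).pow(2).mean()

def loss_fn(encoder, predictor, obs_t, action_t, obs_t1, reg_weight=1.0):
    z_t = encoder(obs_t)
    with torch.no_grad():                       # target latent, no gradient through it
        z_t1_target = encoder(obs_t1)
    z_t1_pred = predictor(z_t, action_t)
    pred_loss = F.mse_loss(z_t1_pred, z_t1_target)   # predict in latent space, not pixels
    reg_loss = gaussian_regularizer(z_t) + gaussian_regularizer(z_t1_pred)
    return pred_loss + reg_weight * reg_loss
```

The key point is that the prediction loss lives entirely in latent space, while the regularizer penalizes both a collapsed mean and a degenerate covariance, which is one common way to rule out the trivial "everything maps to the same vector" solution.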
0xHM (e/acc) retweeted
MTS
MTS@MTSlive·
Introducing MTS: The first timeline-native news network that's always on. Monitoring tech, finance, geopolitics and culture — as it happens. We are Live Now.
240 replies · 225 reposts · 3.4K likes · 2.7M views
0xHM (e/acc) retweeted
jeff.hl
jeff.hl@chameleon_jeff·
Thanks @domcooke for spending months researching and writing this piece. Einstein once said, "If you can't explain it simply, you don't understand it well enough." By that measure, Dom has blown me away with how deeply he came to understand Hyperliquid and what we're all building together. When someone asks what "housing all of finance" means, I'm proud to point them to this piece. I hope readers appreciate just how much Dom and his team put into their work. It reflects the thoughtful craft that is in Hyperliquid's DNA. Special thanks to @patrick_oshag for taking a bet on Hyperliquid's story.
Colossus@colossusmag

This is the story of Hyperliquid, the most profitable startup per employee on earth, told from a guarded office in Singapore. Last year, its team of 11 generated $900 million in profit. It's 3 years old, has never taken a dollar of venture capital, and is beginning to change how century-old markets work.

Its founder, Jeffrey Yan (@chameleon_jeff), had never taken a physics class when he picked up a textbook at 16. Two years later, he won gold at the International Physics Olympiad. In 2019, he started trading with $10,000 from a living room in Puerto Rico—working off a television because he didn't own a monitor. Within 3 years, he was running one of the largest anonymous crypto trading firms. Then he shut it down.

Yan was rich and free, but he had spent years inside crypto, watching it betray itself. Bitcoin's central premise was decentralization. Yet the biggest exchanges were centralized. Crypto kept reintroducing the dependence on trust it was built to eliminate. He set out to create what should have existed.

Hyperliquid is a blockchain with a trading exchange on top, and anyone can build on it. Yan's vision is to house all of finance. In 3 years, it has done over $4 trillion in volume. And in the past few months, it has begun to outgrow crypto. Markets for oil, silver, and the S&P 500 now trade on Hyperliquid around the clock, weekends included, and are growing roughly 40% week on week. When the US and Israel bombed Iran on a Saturday in February, Hyperliquid was the venue traders turned to.

Hyperliquid's success has cost Yan his freedom. He works out of a secret office in Singapore and cannot travel without two bodyguards. Even the team's housekeeper doesn't know what they do.

In January, @domcooke spent a week at their office. Read his profile on Yan and @HyperliquidX below.

291 replies · 536 reposts · 3.5K likes · 343.2K views
0xHM (e/acc) retweeted
Garry Tan
Garry Tan@garrytan·
If your memory dies when your harness dies, you built the harness too thick. Memory is markdown. Skills are markdown. Brain is a git repo. The harness is a thin conductor — it reads the files, it doesn't own them.
Harrison Chase@hwchase17

x.com/i/article/2042…

77 replies · 193 reposts · 2.2K likes · 443.4K views
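A minimal sketch of the "thin harness" pattern from the post above, assuming memory and skills live as plain markdown files in a git repo and the harness only reads them into the prompt. The directory names and the call_model() hook are hypothetical, not any particular framework's API.

```python
# Thin-harness sketch: the harness reads markdown memory, it doesn't own it.
from pathlib import Path

MEMORY_DIR = Path("brain")          # a plain git repo of markdown files

def load_memory(subdirs=("memory", "skills")):
    """Concatenate all markdown files the harness should pass as context."""
    chunks = []
    for sub in subdirs:
        for md in sorted((MEMORY_DIR / sub).glob("**/*.md")):
            chunks.append(f"## {md.relative_to(MEMORY_DIR)}\n{md.read_text()}")
    return "\n\n".join(chunks)

def run_agent(task, call_model):
    # The harness stays thin: it assembles context and hands off to the model.
    context = load_memory()
    prompt = f"{context}\n\n# Task\n{task}"
    return call_model(prompt)
```

Because the files, not the harness, hold the state, swapping the harness (or the model behind call_model) leaves the memory intact.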
0xHM (e/acc) retweeted
Muratcan Koylan
Muratcan Koylan@koylanai·
Since I posted my Personal OS / filesystem article, LLM personal knowledge bases have turned into a real topic in the AI world. I've been building this system in Cursor for almost two years, but I wasn't expecting to end up talking with people like a YouTube co-founder, a NASCAR driver, or some of the AI leaders I respect most because of that post. For me it was the first signal that this wasn't niche anymore.

The biggest pushback on the article was whether a filesystem is enough, or scalable, for something like this. Scaling the vault is easy; scaling curation and placement is not. Karpathy's LLM Wiki post was published soon after with the same thesis, and it was independent validation for me: "LLM incrementally builds and maintains a persistent wiki: structured, interlinked markdown between you and the raw sources." Now there are tons of similar projects, different takes on the same idea. That's good; I'm also evolving my own stack from what's out in the open, and honestly, reframing the personal filesystem as a wiki is a smart move.

I'm posting this because I think the harder problem is still the knowledge transfer pipeline. Designing a Personal OS (aka personal knowledge base) is the easy part. The architecture only starts to pay off when you fill it for years, not just with posts you liked but with decision patterns, career and life details, half-formed thoughts, writing, the messy stuff. Getting all of that into the right markdown file, at the right time, in the right shape is still the bottleneck.

I built a Chrome extension (Feed2Context, details in the article) that grabs a post with my notes from my feed, drops it into the filesystem, and my agents synthesize and route it. I also built an OpenHome assistant as a voice pipeline from my room into the wiki. Plus a bunch of MCP hooks into my accounts. But orchestrating all these helpers gets exhausting.

A lot of people suggested Obsidian, but I'm mostly on Readwise CLI to pull from X, LinkedIn, arXiv, books, and news. It works well on mobile, and because it's a CLI, agents can find what they need and push it into the filesystem. Skill registries help a lot; in the videos I've got flows like Readwise CLI + alphaXiv MCP for research papers: save a paper, the agent pulls the full text, analyzes it, and teaches me back. I'm also testing Zapier CLI and especially waiting for the Triggers API. Between things like Yutori and plain cron, keeping a personal wiki alive is still hard; nobody wants to be the cron job for their own life, so triggers might be part of the answer.

TL;DR: A personal filesystem you control isn't optional if you don't want to rent your memory from one AI company. The open problem is keeping it fed and current. What I actually want is one solution that can watch my screen, hear my voice, read my accounts, and write into my Personal OS without me acting as the integration layer forever.
Muratcan Koylan@koylanai

x.com/i/article/2025…

27 replies · 42 reposts · 529 likes · 74.1K views
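To make the "knowledge transfer pipeline" bottleneck concrete, here is a minimal sketch of routing a captured post plus a note into the right markdown file in a vault. The file layout, tags, and keyword routing are illustrative assumptions, not how Feed2Context actually works; in practice an agent, not a keyword table, would decide the destination.

```python
# Sketch: append a captured post + note into the markdown file a router picks.
from datetime import date
from pathlib import Path

VAULT = Path("personal-os")

ROUTES = {                      # naive keyword routing; an agent could replace this
    "paper": "research/papers.md",
    "trade": "finance/notes.md",
    "idea": "inbox/ideas.md",
}

def route(post_text: str, note: str, tags: list[str]) -> Path:
    target = next((ROUTES[t] for t in tags if t in ROUTES), "inbox/unsorted.md")
    dest = VAULT / target
    dest.parent.mkdir(parents=True, exist_ok=True)
    entry = f"\n## {date.today().isoformat()}\n{post_text}\n\n> note: {note}\n"
    with dest.open("a") as f:
        f.write(entry)
    return dest
```

The hard part the post points at is exactly what this sketch hand-waves: deciding the right file, at the right time, in the right shape, without the human acting as the integration layer.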
0xHM (e/acc) retweeted
kepano
kepano@kepano·
Obsidian is weird:
- 7 full-time employees
- ~1 million users per employee
- fully remote
- 1 in-person meetup per year
- no scheduled meetings
- no stand-ups
- deep focus is prioritized
- our manifesto guides our product
What works for us may not work for you.
66 replies · 272 reposts · 6.6K likes · 648.8K views
0xHM (e/acc) retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Wow, this tweet went very viral! I wanted to share a possibly slightly improved version of the tweet in an "idea file". The idea of the idea file is that in this era of LLM agents, there is less of a point/need in sharing the specific code/app; you just share the idea, then the other person's agent customizes & builds it for your specific needs. So here's the idea in a gist format: gist.github.com/karpathy/442a6… You can give this to your agent and it can build you your own LLM wiki and guide you on how to use it etc. It's intentionally kept a little bit abstract/vague because there are so many directions to take this in. And ofc, people can adjust the idea or contribute their own in the Discussion, which is cool.
Andrej Karpathy@karpathy

(Quoted post: "LLM Knowledge Bases", reproduced in full in the retweet below.)

1.1K replies · 2.8K reposts · 26.3K likes · 6.8M views
0xHM (e/acc) retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.8K replies · 6.8K reposts · 56.8K likes · 20.1M views
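A minimal sketch of the raw/-to-wiki compile loop described in the post, under the assumption that summarize() stands in for whatever LLM call does the actual compilation; the directory names follow the post (raw/ sources, a wiki/ of .md files plus an index), everything else is illustrative.

```python
# Sketch: incrementally "compile" raw/ source files into a markdown wiki with an index.
from pathlib import Path

RAW, WIKI = Path("raw"), Path("wiki")

def summarize(text: str) -> str:
    # Placeholder for an LLM call; here we just keep a truncated excerpt so the
    # loop runs end to end without any API. Swap in your model of choice.
    return "# Summary (stub)\n\n" + text[:500]

def compile_wiki():
    WIKI.mkdir(exist_ok=True)
    index_lines = ["# Index\n"]
    for src in sorted(RAW.glob("**/*")):
        if not src.is_file():
            continue
        out = WIKI / (src.stem + ".md")
        if not out.exists():                       # compile incrementally, skip what's done
            out.write_text(summarize(src.read_text(errors="ignore")))
        index_lines.append(f"- [[{out.stem}]] <- raw/{src.relative_to(RAW)}")
    (WIKI / "index.md").write_text("\n".join(index_lines))

if __name__ == "__main__":
    compile_wiki()
```

The index file is what later lets an agent answer questions against the wiki without a separate RAG stack, as the post notes: it reads the index and summaries, then pulls the relevant articles directly.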
0xHM (e/acc) retweeted
GREG ISENBERG
GREG ISENBERG@gregisenberg·
sequoia put out a blog post called "services is the new software." look at this map of over $1T in services being replaced by AI agents
GREG ISENBERG tweet media
268 replies · 504 reposts · 4.2K likes · 618.5K views
0xHM (e/acc) retweeted
Campbell
Campbell@abcampbell·
Hormuz isn't an oil story. It's much deeper. Fertilizer. Sulfur. Helium. Plastics. Supply shocks, all the way down. And inflation & chaos if the strait remains closed. New Ramble: The Cascade campbellramble.ai/p/the-cascade
Campbell tweet media
30 replies · 94 reposts · 742 likes · 132.6K views
0xHM (e/acc) retweeted