Uri Pomerantz

2.3K posts

@uripomerantz

AI-focused CEO and investor. I run a fintech startup and invest in leading private AI companies via AI Applied Ventures (https://t.co/ihhITKH6yZ).

Silicon Valley · Joined February 2009
1.9K Following · 5.4K Followers
Pinned Tweet
Uri Pomerantz
Uri Pomerantz@uripomerantz·
"In the end, enjoying life's experiences is the only rational thing to do... You're sitting on a planet spinning around in the middle of absolutely nowhere. You're floating in empty space in a universe that goes on forever. If you're going to be here, enjoy the experience." -MS
Uri Pomerantz reposted
Sam Altman
Sam Altman@sama·
I wrote this early this morning and I wasn't sure if I would actually publish it, but here it is: blog.samaltman.com/2279512
Uri Pomerantz
Uri Pomerantz@uripomerantz·
Claude Code's entire codebase recently leaked. I've spent time analyzing it and thinking about my own agent setup in @openclaw and other platforms. The question I keep coming back to is: what are the most important things I can learn from Claude Code's design, and where should I be applying them? I believe there are 3 core places to focus.

1. Nailing sub-agents

The core of Claude Code's setup is its heavy reliance on sub-agents for everything. As I think about it, I continue to see AI harnesses increasingly mirroring how the human brain operates: one main executive function (our conscious mind, or the main agent loop that's always active) that manages and delegates to many specialized subsystems (sub-agents).

So what does this mean practically? Get to know how to spawn sub-agents in OpenClaw, and use them for everything. Familiarize yourself with the /subagents command. Consider installing and playing around with plugins like mission control that help you manage processes here. Similar to how our brain has specialty subsystems, if you're looking to optimize your agent setup: choose the right model for each sub-agent (to both do the job best and save money), set up soul.md and other files for each agent, and set timeouts for long-running tasks.

2. Your 80/20 in time/impact is optimizing agents.md, soul.md, and memory.md

One fascinating thing I learned from the Claude Code source code was the fundamental importance of the claude.md file. It's included in the context window and sent for inference with every single request. What's the learning for OpenClaw and similar platforms? Become really good friends with, and optimize, OpenClaw's equivalents: agents.md, soul.md, and memory.md. OpenClaw bundles these into most of your requests, so make sure you understand how they work, keep them updated, and keep them small.

3. Memory is key

The last fascinating piece of Claude Code's setup is how memory is handled. In computer science terms, it's handled through pointers. The memory file stores very concise, high-level details of what it remembers, but Claude Code only "double clicks" into the actual memory when needed. This allows the system to manage memory efficiently (given limited context windows), save money, and perform better, with less junk sent over on each inference request.

3 fun ways to start, if you're ready to roll up your sleeves:

(1) A fun and high-impact way to start is to install QMD (open source), a memory management framework written by Tobi Lütke (@tobi), the CEO of @Shopify. (It's awesome to see billionaire execs still actively hacking and contributing to the open-source ecosystem.) Take a look here: github.com/tobi/qmd

(2) You might also consider the Lossless Context Management plugin, which helps OpenClaw persist important instructions in a SQL database, so you can avoid what happened recently to Meta's Director of Alignment, whose agent started deleting emails after compaction wiped an instruction from context. Take a look here: github.com/martian-engine…

(3) Lastly, someone recently ported over the up-and-coming Claude Code "Auto Dream" feature, which helps OpenClaw run nightly scans to "dream" (look at logs, extract key memories, consolidate knowledge). Take a look here: github.com/LeoYeAI/opencl…

Have fun!
Uri Pomerantz reposted
Andrej Karpathy
Andrej Karpathy@karpathy·
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/ and backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data in the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data. E.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning, to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
Uri Pomerantz reposted
klöss
klöss@kloss_xyz·
let me explain what Karpathy just shared

he's spending way less time using AI to write code and more time using it to build personal knowledge bases

the full breakdown:

→ he dumps raw sources (articles, papers, repos, datasets, images) into a folder, then has an LLM organize them into a wiki… a collection of markdown files with summaries, links between related ideas, and concept articles that connect everything together

→ he uses Obsidian as his frontend. he views raw data, the organized wiki, and visualizations all in one place. the LLM writes and maintains the entire wiki; he rarely touches it directly

→ once the wiki gets big enough (~100 articles, ~400K words on one recent research topic)… he just asks the LLM questions against it. no RAG (complex retrieval system) needed. the LLM maintains its own index files and reads what it needs

→ outputs aren't just text. he has the LLM render markdown files, slide decks, charts, and images… then files the outputs back into the wiki so every question he asks makes the knowledge base smarter

→ he runs "health checks" where the LLM finds inconsistent data, fills gaps using web search, and suggests new connections and articles. the wiki cleans and improves itself over time

→ he even vibe coded a search engine over his wiki that he uses directly in a browser or hands off to an LLM as a tool for bigger questions

→ his next step: training a custom model on his own research so it knows the material in its weights… not just in the context window

most people use AI to get answers. Karpathy is using AI to build his own 'Jarvis' via compounding knowledge systems that get smarter the more he uses them

the difference between asking ChatGPT or Claude a question and having a personal research engine that grows with every session is the gap most people haven't crossed yet

and this is where it gets really powerful: not replacing your thinking, but organizing everything you've ever learned into something you can query or create with forever

if you've been using CLAUDE.md and context files in Claude Code… this is that same idea at a much bigger scale

if you're doing any kind of AI work or deep learning on a new topic right now… this workflow is worth studying closely. you'll want to adopt it yourself

this is one of AI's brightest minds after all. we're all better off listening to him.
Andrej Karpathy@karpathy
[quoted post: "LLM Knowledge Bases", identical to the Karpathy post above]
Uri Pomerantz reposted
Chubby♨️
Chubby♨️@kimmonismus·
Holy: OpenAI says it's reallocating compute and talent toward its next generation of models, echoing the internal shift that preceded GPT-3. The focus now is on "automated researchers" and agent-based systems that can execute complex tasks end-to-end. It's a signal that the lab sees these systems as the next major step beyond current AI tools. That's why they sacrificed Sora. For automated researchers.
Uri Pomerantz
Uri Pomerantz@uripomerantz·
I've been thinking a lot about the recent Claude Code full source leak, and specifically, what it means for strategic moats in the era of AI, and who and what drives them.

First off: I see great product management as a moat in the era of AI. Why?

Clearly part (and likely the bulk) of the moat comes from leading-edge AI research (e.g. more intelligence per dollar, delivered as a utility). This is not connected to the harness (e.g. Claude Code) itself. But beyond that, I believe a significant part of the moat comes from how the harness is designed, and specifically, how and why it becomes the most beloved product for users.

The obvious gut reaction to the source code leak is: "Every other major AI lab, as well as open source projects, will be studying and learning from this." Which is both obvious and inevitable. I'm sure everyone relevant at @OpenAI, @GoogleAI, Moonshot AI/Kimi, @Meta, and others is having a field day studying their competitor's design patterns in depth.

However, if you look at the features that make Claude Code great and loved, it's clearly not just based on Claude Code being the most effective harness for coding. And it's not clear who is copying whom. If you trust third-party benchmarks like Terminal-Bench to provide a rough approximation of which coding harnesses are most powerful, Claude Code doesn't come in first, or even round out the top 5, based on coding performance. It's currently in 39th place. For performing the set of tasks in the current iteration of Terminal-Bench, you'd be better off using one of 38 other options, like the open source ForgeCode harness with Opus 4.6 or GPT-5.4, which both score 81.8%, compared to Claude Code with the same model at 58%.

Yet for many developers, Claude Code is truly beloved. Why? Because of the product management moat. It is product management that drives the features enabling Claude Code to be loved and adopted by users.

This is driven by a few things:

First, great product management for AI harnesses is focused not just on task quality (that is the core function of the AI research teams), but on the kinds of features that make the tool smarter, easier to use, and better at persisting your context over time. This is everything from the way memory compaction works, to planned features like having the product analyze your codebase even while you're not actively using it, to finding better ways to store your preferences so the tool genuinely understands you over time. All of this builds a connection to the product: a product you feel bonded with and love using. It also goes far beyond what a benchmark like Terminal-Bench is measuring. Great product management for a harness (like Claude Code) will focus on what your user experience is like days, weeks, and many months into using the tool: how it will grow, adapt, learn with you, and better fit your needs. What it remembers. How it adapts. It's a matter of taste, preference, and love, not just task completion. And as state-of-the-art models continue to improve, far beyond our functional needs for most tasks, this becomes even stronger over time.

Second, I think we'll increasingly see the harness adapted to multiple verticals. We're seeing this already, from vertical-specific harnesses for healthcare, accounting, product management itself, financial analysis, and other domains. What's interesting is that even within coding, companies like Cursor have built their own engines (off of an open source base) that outperform Claude Code combined with a more powerful LLM. The harness matters a ton, and at least at this stage, post-training with great industry- and use-case-specific data wins over general models. Whether this holds in the longer term will be fascinating to watch.

But for now, and I believe forever, the fundamentals hold. They're driven by two things: (1) Build something users truly love; this is the heart of the product management discipline. (2) Craft something of high quality; this is the ongoing state-of-the-art model performance game, driven by AI researchers.

In the short term, the gap between leading coding harnesses will narrow. The leak will accelerate that. In the long term, the gap will widen again, led by organizations with great product management, fantastic model development (or post-training data), and world-class execution.

AI is changing everything. Except the fundamentals of building products people love.
Uri Pomerantz reposted
Alex Finn
Alex Finn@AlexFinn·
Unreal. The entire Claude Code source code just leaked. It reveals EVERY secret Anthropic has in store for Claude. I went through all 600,000 lines of code. Here's every juicy detail you need to know about how Claude Code is built and what's coming next:
Uri Pomerantz reposted
Jason Walls
Jason Walls@walls_jason1·
Yesterday Mark Cuban reposted my work, DM'd me, and told me to keep telling my story. So here it is.

I'm a Master Electrician. IBEW Local 369. 15 years pulling wire in Kentucky. Zero coding background. I didn't go to Stanford. I went to trade school.

Every week I'd show up to a home where someone just bought a Tesla or a Rivian. And every time, someone had already told them they needed a $3,000-$5,000 panel upgrade to install a charger. 70% of the time? They didn't need it. The math is in the NEC, Section 220.82. Load calculations. But nobody was doing them for homeowners. Electricians upsell. Dealers don't know. And the homeowner just pays.

I got angry enough to build something about it. I found @claudeai. No coding experience. I just started talking to it like I'd explain a job to an apprentice. "Here's how load calcs work. Here's the NEC code. Now help me build a tool that does this."

6 months later, @ChargeRight is live. Real software. Stripe payments. PDF reports. NEC 220.82 calculations automated. $12.99 instead of a $500 truck roll.

I'm still pulling wire. I still take service calls. I wake up at 5:05 AM for work. But something shifted. Yesterday @vivilinsv published my story as Claude Builder Spotlight #1. Mark Cuban saw it. The Claude community showed up. And for the first time, I felt like this thing I built in my kitchen might actually matter.

I'm not a tech founder. I'm a dad who wants to coach little league and be home for dinner. I just happened to build something that helps people.

If you're in the trades and thinking about using AI: do it. The barrier isn't technical skill. It's believing you're allowed to try.

EVchargeright.com
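The load calculation Jason describes comes from NEC 220.82, the "optional method" for dwellings. The shape of the math can be sketched in a few lines; this is a simplified illustration with hypothetical inputs, not ChargeRight's actual code and not a substitute for the code book or a licensed electrician.

```python
def optional_method_amps(sq_ft, fixed_appliance_va, hvac_va,
                         ev_charger_va=0, volts=240):
    """Simplified NEC 220.82-style load calculation for a dwelling.

    General loads: 3 VA per sq ft of lighting, two 1500 VA
    small-appliance circuits plus a 1500 VA laundry circuit, fixed
    appliances, and the EV charger. The first 10 kVA of general load
    counts at 100% and the remainder at 40%; heating/cooling is then
    added at 100%. (The real rule has more cases; this is the gist.)
    """
    general_va = 3 * sq_ft + 3 * 1500 + fixed_appliance_va + ev_charger_va
    demand_va = min(general_va, 10_000) + 0.4 * max(general_va - 10_000, 0)
    return (demand_va + hvac_va) / volts


# Hypothetical example: a 2000 sq ft home with 8 kVA of fixed
# appliances, 5 kVA of A/C, and a 9.6 kVA (40 A at 240 V) EV charger.
amps = optional_method_amps(2000, 8000, 5000, ev_charger_va=9600)
```

The point of the demand factors is exactly the one in the story: the computed load often lands well under a standard 200 A service even with a charger added, which is why so many quoted panel upgrades turn out to be unnecessary.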
Uri Pomerantz reposted
Alex Hollings
Alex Hollings@AlexHollings52·
This lady ate a surface-to-air missile, sustained a 12-14 G rocket-propelled ejection, likely hit the ground at 15+ miles per hour… And then she popped up smiling at the locals who came to help her. Tell me again that women don’t belong in combat.
OSINTtechnical@Osinttechnical

Footage of Kuwaiti locals approaching one of the shot down American pilots this morning. “Are you OK? Thank you for helping us.”

Uri Pomerantz reposted
Dhairya
Dhairya@dkare1009·
BREAKING: MIT just mass-released their AI library for free. (Links included.) I went through these and honestly... this is better than most paid courses I've seen.

Here's the full list of books:

Foundations
1. Foundations of Machine Learning - core algorithms explained; theory meets practice. lnkd.in/gytjT5HC
2. Understanding Deep Learning - neural networks demystified; visual explanations included. lnkd.in/dgcB68Qt
3. Machine Learning Systems - production-ready architecture; system design principles. lnkd.in/dkiGZisg

Advanced Techniques
4. Algorithms for ML - computational thinking simplified; decision-making frameworks. algorithmsbook.com
5. Deep Learning - the definitive textbook; covers everything deeply. lnkd.in/g2efT6DK

Reinforcement Learning
6. RL Basics (Sutton & Barto) - the classic; agent training fundamentals. lnkd.in/guxqxcZZ
7. Distributional RL - beyond expected rewards; advanced theory. lnkd.in/d4eNP-pe
8. Multi-Agent Systems - agents working together; coordination and competition. marl-book.com
9. Long Game AI - strategic agent design; future-focused thinking. lnkd.in/g-WtzvwX

Ethics & Probability
10. Fairness in ML - bias detection; responsible AI practices. fairmlbook.org
11. Probabilistic ML (Part 1) - lnkd.in/g-isbdjj
12. Probabilistic ML (Part 2) - lnkd.in/gJE9fy4w

Full list: lnkd.in/gkuXuexa

Most people pay thousands for bootcamps that teach half of this. Bookmark it. Start anywhere. Just start.

Repost for others. Follow for more insights on AI Agents.
Uri Pomerantz reposted
Miad Maleki
Miad Maleki@miadmaleki·
1/15 President Trump is the first American president to come to the rescue of the Iranian people and stand against this tyranny. That takes courage and historic vision. What comes next can't be worse than the Islamic Republic. Believe me, I've lived there, and then witnessed firsthand while serving in the U.S. government how this regime destroyed this country, murdered, suppressed, and forced out tens of millions of generations of talented Iranians. As a transition away from the brutal regime is taking shape, here is a key issue that needs quick attention: Targeted, phased, and strategic sanctions relief will be essential to empower a democratic transitional government in Iran, one that can deliver stability, national unity, and territorial integrity.
Uri Pomerantz reposted
Chamath Palihapitiya
Chamath Palihapitiya@chamath·
Here is Peter Thiel’s email to Zuck and Andreessen in Jan-2020 predicting socialism. Tl;dr too much student debt and lack of affordable housing keeps young people with negative capital for too long. And without a stake in the capitalist system, they will turn against it.
Uri Pomerantz reposted
Nicholas Fabiano, MD
Nicholas Fabiano, MD@NTFabiano·
A sense of purpose in life significantly lowers the risk of cognitive impairment in later life. Even in those with a genetic risk for dementia.
Uri Pomerantz reposted
Alex Prompter
Alex Prompter@alex_prompter·
Holy shit. MIT just built an AI that can rewrite its own code to get smarter 🤯

It's called SEAL (Self-Adapting Language Models). Instead of humans fine-tuning it, SEAL reads new info, rewrites it in its own words, and runs gradient updates on itself, literally performing self-directed learning.

The results?
✅ +40% boost in factual recall
✅ Outperforms GPT-4.1 using data it generated *itself*
✅ Learns new tasks without any human in the loop

LLMs that finetune themselves are no longer sci-fi. We just entered the age of self-evolving models.

Paper: jyopari.github.io/posts/seal
Uri Pomerantz reposted
Michael Dell 🇺🇸
Michael Dell 🇺🇸@MichaelDell·
Blessed are the peacemakers
Uri Pomerantz reposted
Ilya Sutskever
Ilya Sutskever@ilyasut·
truly the greatest day ever🎗️
Uri Pomerantz reposted
Shaun Maguire
Shaun Maguire@shaunmmaguire·
As the hostages come home, I have tears in my eyes As the War winds down, I'm beaming with pride For the last decade or so it has oftentimes felt like the American empire is in decline But today it feels like America is rising again, and lifting its friends along with it 🇺🇸