Peter Somerville
@PeterSomerville
700 posts

Veteran leader driving technology innovation at places like @AntithesisHQ @CaparraAI @CarlsonMBA @Yale @USMC

Washington, DC · Joined April 2009
1.1K Following · 16K Followers
Pinned Tweet
Peter Somerville @PeterSomerville
"The world reveals itself to those who travel on foot." - Werner Herzog
0 replies · 1 repost · 4 likes · 727 views
Peter Somerville retweeted
SpaceX @SpaceX
Full duration and full thrust 33-engine static fire with Super Heavy V3
2K replies · 5.3K reposts · 32.8K likes · 33.7M views
Peter Somerville retweeted
Claude @claudeai
We’ve agreed to a partnership with @SpaceX that will substantially increase our compute capacity. This, along with our other recent compute deals, means that we’ve been able to increase our usage limits for Claude Code and the Claude API.
4.7K replies · 12.1K reposts · 130.8K likes · 23.4M views
Peter Somerville retweeted
Jarvis @jarvis_best
Minnesota is a problem. Let’s talk about it. When people think of Minnesota they probably think of the great film FARGO in which the average resident is portrayed as either a low class low IQ drunk or a serial killer. You know what? Fargo was being NICE. The real Minnesota is far worse. It was worse back then and it’s WAY worse now.
[image attached]
159 replies · 170 reposts · 1.9K likes · 466K views
Peter Somerville retweeted
Cairo Smith @cairoasmith
There's a common misconception that Brutalist buildings were unpainted, but thanks to microscopic analysis of the exteriors we can now recreate what they looked like in their prime.
[image attached]
443 replies · 3.4K reposts · 39.7K likes · 7.4M views
Peter Somerville retweeted
Woody P, Rugged As Fuck®️ @woodypanama
Image has been released of the Special Forces soldier who bet on ops.
[image attached]
352 replies · 708 reposts · 7.3K likes · 367.3K views
Peter Somerville retweeted
SpaceX @SpaceX
Three years since the first flight of Starship, the next generation is here. New ship. New booster. New engines. New pad and new test site. SpaceX engineers are working to solve one of the most difficult engineering challenges in history: developing a fully, rapidly reusable rocket
1.7K replies · 5.9K reposts · 31.5K likes · 6.2M views
Peter Somerville retweeted
Meredith Thornburgh @MCMCD_
Something Jocko said on a podcast I was listening to c. winter 2020-2021 changed my life—he was recounting how someone once asked him “what he says to himself” to get himself to do all the crazy disciplined stuff he does (up before 4am working out every morning, etc) and he was like that is the EXACT wrong question, you need to get out of the mind and into the body, you need to learn how to move the body by just going around the mind, let it scream and protest while you drag yourself out of bed, you cannot be held hostage by having to get the mind on board before you do anything
Alex Olshonsky @oloal:
Heard this in AA years before I realized it was wu wei: “It's easier to act your way into new ways of thinking than it is to think your way into new ways of acting.”
57 replies · 603 reposts · 9.2K likes · 857.2K views
Peter Somerville retweeted
Shashank Joshi @shashj
Experimenting with OpenAI's new model. A hydrologically accurate cut-away of the Strait of Hormuz, drawn by Richard Scarry, drawing on current AIS data.
[image attached]
89 replies · 298 reposts · 2.1K likes · 351.4K views
Peter Somerville retweeted
Antithesis @AntithesisHQ
BugBash is starting in an hour, but if you're not coming, you can still join the fun with Hegel. New libraries for C++ and TypeScript just dropped.
[image attached]
2 replies · 2 reposts · 8 likes · 722 views
Peter Somerville retweeted
Office of the Secretary of the Air Force
In consultation with @SecWar, we will EXTEND the A-10 “Warthog” platform to 2030. This preserves combat power as the Defense Industrial Base works to increase combat aircraft production. Thank you to @POTUS for your unwavering support of our warfighters and quick, decisive leadership as we equip our force. More to come.
[image attached]
1.2K replies · 3K reposts · 19.5K likes · 2.6M views
Peter Somerville retweeted
Husk @huskirl
Idk what to type here rn
515 replies · 1.4K reposts · 30.2K likes · 1.5M views
Peter Somerville retweeted
Montreal Expos @Montreal_Expos
The Montreal Expos are exiting the baseball space. During Q2 and Q3 2026, we will transition to acquiring high-performance GPU assets. This is all part of our long-term vision to become a fully integrated GPU-as-a-Service (GPUaaS) and AI-native cloud solutions provider.
156 replies · 887 reposts · 14K likes · 734.1K views
Peter Somerville retweeted
A_Horrible_Glory @AHorribleGlory
@aelfred_D "I yield to no man in sympathy for the gallant men under my command; but I am obliged to sweat them tonight, so that I may save their bananas tomorrow."
[image attached]
2 replies · 5 reposts · 85 likes · 2.4K views
Peter Somerville @PeterSomerville
@anna_y_zhang I do nearly all my LLM chats these days in a Claude Code instance within my Obsidian Vault. In theory Claude is cataloguing our conversations in a way that both humans and agents can refer back to later; in practice it's spotty...
0 replies · 0 reposts · 0 likes · 40 views
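The cataloguing Peter describes is easy to sketch. A minimal, hypothetical Python version might append one dated entry per session to a markdown catalogue the agent can grep later; the vault path, the Conversations/ folder, and the entry format are all assumptions for illustration, not his actual setup:

```python
# Hypothetical sketch: append a dated entry per chat session to a markdown
# catalogue inside an Obsidian vault, so humans and agents can grep it later.
# The vault path and file layout are assumptions, not Peter's actual setup.
from datetime import date
from pathlib import Path

VAULT = Path.home() / "ObsidianVault"           # assumed vault location
CATALOG = VAULT / "Conversations" / "index.md"  # assumed catalogue file

def log_session(topic: str, summary: str) -> None:
    """Record one conversation under today's date with a one-line summary."""
    CATALOG.parent.mkdir(parents=True, exist_ok=True)
    entry = f"- {date.today().isoformat()} **{topic}**: {summary}\n"
    with CATALOG.open("a", encoding="utf-8") as f:
        f.write(entry)

log_session("vault tooling", "Discussed cataloguing LLM chats as markdown notes.")
```

The "spotty in practice" part is exactly the gap: nothing guarantees the agent calls anything like this every session.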
Anna Z @anna_y_zhang
Karpathy just described a problem we've been wrestling with for 6 months. We killed our product twice getting to the real version of it. His system starts with manually capturing articles and research.

But there's a version of this problem nobody talks about: people are already generating massive amounts of thinking in AI tools every day - and that data is scattered and invisible. The reasoning evaporates after every session.

Every reply here is a power user building their own system. I respect that. But we keep arriving at the same conclusion: the solution can't be primitives. We need a system that has opinions about your context so you don't have to. Writing about the journey soon.
Andrej Karpathy @karpathy:
LLM Knowledge Bases [quoted tweet; full text appears in the retweet below]
2 replies · 2 reposts · 4 likes · 192 views
Peter Somerville retweeted
Andrej Karpathy @karpathy
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web ui), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually, it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
2.9K replies · 7K reposts · 58.4K likes · 20.9M views
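The compile step in Karpathy's post is concrete enough to sketch. Here is a minimal Python version under stated assumptions: the llm() helper is a hypothetical placeholder for whatever model API or CLI you actually use, the raw/ and wiki/ names follow his directory layout, and the wiki/index.md convention mirrors the "auto-maintained index files" he mentions.

```python
# Minimal sketch of the "compile raw/ into a wiki" step described above.
# llm() is a hypothetical placeholder -- wire it to your model API or CLI.
from pathlib import Path

RAW = Path("raw")    # source documents (clipped articles, papers, notes)
WIKI = Path("wiki")  # LLM-maintained markdown wiki

def llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an LLM and return its reply."""
    raise NotImplementedError("connect this to your model of choice")

def compile_wiki() -> None:
    WIKI.mkdir(exist_ok=True)
    index_lines = []
    for doc in sorted(RAW.glob("*.md")):
        # Ask the model to turn each raw document into a wiki article
        # with backlinks to related concepts.
        article = llm(
            "Rewrite this document as a wiki article in markdown, adding "
            "[[backlinks]] to related concepts:\n\n" + doc.read_text()
        )
        (WIKI / doc.name).write_text(article)
        # Keep a one-line-per-document index so later Q&A passes can
        # navigate the wiki without reading every file.
        title = article.splitlines()[0].lstrip("# ") if article else doc.stem
        index_lines.append(f"- [[{doc.stem}]]: {title}")
    (WIKI / "index.md").write_text("\n".join(index_lines) + "\n")

if __name__ == "__main__":
    compile_wiki()
```

The naive search engine he mentions would sit on top of the same wiki/ directory, exposed both as a web UI and as a CLI tool the agent can call during larger queries.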