Bronson Elliott

3.2K posts

@bronsonelliott

Joined June 2008
873 Following · 151 Followers
Peter Yang @petergyang
Both of these can be true: 1. No model is anywhere near as good as Opus for OpenClaw. 2. Using Claude Code as a personal assistant replacement is OK but just doesn't feel as "mine" as my OpenClaw.
74 replies · 7 reposts · 304 likes · 32.8K views
Peter Yang @petergyang
Ok I'll bite - wtf is Hermes agent? Is that like the luxury bag version of OpenClaw?
153 replies · 10 reposts · 616 likes · 141.5K views
Bronson Elliott @bronsonelliott
@nakraugijag @petergyang Could never get it dialed in. No matter what I tried, or how I modified the SOUL.md and other docs, I just couldn't get it to actually become a proactive and useful assistant. I wanted a chief of staff, and all I could ever get was an agent constantly asking for permission.
0 replies · 0 reposts · 0 likes · 31 views
Bronson Elliott @bronsonelliott
@itsolelehmann I have the $20 account. I just chose OpenAI Codex as the model. It then opens a browser for OAuth. Once logged in, you're good to go. Choose the specific model you want, like GPT-5.4.
0 replies · 0 reposts · 1 like · 50 views
Brad Mills 🔑⚡️ @bradmillscan
I started using OpenClaw in February. 70-90% of my time & tokens went to fixing bugs & tech support. Coming back to my OpenClaw after a 2-week family medical emergency. Should I retry with a fresh install, switch to Hermes, learn Claude Code proper, or wait for next-gen agent models?
63 replies · 0 reposts · 39 likes · 10.2K views
Thariq @trq212
I want to do some streams where I work with non-technical people using Claude Code to figure out how they might be able to improve their process. My feeling is that just a few tips could make a big difference in efficiency. Any mutuals interested?
696 replies · 81 reposts · 3.4K likes · 183.3K views
Kris Puckett @krispuckett
@bronsonelliott Why would you assume I'd want to save time rather than learn how to make my own system?
1 reply · 0 reposts · 1 like · 246 views
Kris Puckett @krispuckett
Literally just build your own OpenClaw replacement using Claude Code and the Agent SDK for personal use. It took me a few hours on my phone to do, and a few more of troubleshooting to get it on par with what I needed.
12 replies · 0 reposts · 37 likes · 5.1K views
Nate @natellewellyn
@krispuckett How did you figure out how to do this? I'd love to give it a try.
1 reply · 0 reposts · 1 like · 302 views
Matthew Berman @MatthewBerman
First agent (human) won $100 yesterday for publishing a kit to JourneyKits.ai. I'm doing it again today. Go publish your kits! Also, if you publish a great kit, I'll promote it here. Tag me or reply below if you have any issues.

Quoting Matthew Berman @MatthewBerman:
I want people to publish kits! I'll choose one kit author at random per day and will Venmo you $100. (For the next 3 days; will pick the first author tomorrow.) Requirements: you publish a kit, it gets at least 2 installs (you can promote it), and a security score of 7/10 or above.

6 replies · 4 reposts · 25 likes · 5.9K views
Felix Rieseberg @felixrieseberg
You know, we think about this literally 24/7, and I suspect that we'll figure this out eventually. For now, they offer slightly different flavors to users - chat for easy conversations, Cowork when you want to safely work on something, Code if you're a developer and want to code with Claude.
59 replies · 5 reposts · 536 likes · 40.9K views
Max Hodak @maxhodak_
why are Claude, Cowork, and Code three different UIs? why is this not all just Claude?
125 replies · 12 reposts · 1.2K likes · 206.1K views
claire vo 🖤 @clairevo
I love celebrating Easter the same way we celebrate Christmas, which is spending the morning yelling at the kids to stop playing/fighting and get dressed so we can get to church 10 mins late.
7 replies · 1 repost · 95 likes · 5.2K views
Alex Finn @AlexFinn
If you used a Claude subscription with OpenClaw, read this:

Unfortunately, all other AI models out there absolutely suck with OpenClaw compared to Opus. It's just a fact, and anyone denying this is delusional. So here is my new recommended OpenClaw setup: pay for the Opus API and use it as your orchestrator, then use other models as the execution layer. If you do this correctly, yes, your costs will go up, but not by as much as you think.

I use my ChatGPT subscription as the coding execution. GPT 5.4 is excellent at coding. When the Opus orchestrator gives a coding task to the ChatGPT subagent, it always performs really well. If you are on the Pro plan, you should have enough usage to have ChatGPT be the execution layer for every task. But if you're on the $20 a month plan, you're going to need other subscriptions to handle other tasks. GLM 5.1 and Qwen are excellent; I'd get a cheap sub through them and have them handle all other tasks given to them from the orchestrator.

The best setup, though, if you have the hardware, is the Opus API for the orchestrator, ChatGPT for coding, then local Gemma 4 and local Qwen handling everything else. Right now I have Gemma running on my DGX Spark and Qwen 3.5 on my Mac Studio. They handle all other execution from my Opus API orchestrator.

Unfortunately, all of the options above will cost more than the $200 a month subscription. It just is what it is. But if you optimize correctly it won't cost much more, and you'll still get frontier performance. OpenClaw is the most powerful piece of software ever released. $200 a month ($2,400 a year) was a steal for a digital employee. Honestly, anything under $50,000 a year is a no-brainer if you run a serious business.

The situation isn't great, but you also need to face reality: Claude Opus 4.6 is the best model for OpenClaw. If you use any other model, your productivity will suffer. Business is a battlefield and I refuse to fall behind, so despite me not being happy with the Anthropic decision, the setup above is what I'm going with. Virtue signaling might get me brownie points on the internet, but it won't increase my productivity.
275 replies · 71 reposts · 1.2K likes · 199K views
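The orchestrator/execution-layer split described above can be sketched as a simple task router: one model plans and delegates, and each task type is dispatched to a cheaper or more specialized executor. This is a minimal illustrative sketch; the model names, task types, and the `route` function are hypothetical stand-ins, not any real OpenClaw or provider API.

```python
# Sketch of an orchestrator + execution-layer setup (illustrative only).
# Model names below are placeholders for whatever subscriptions you route to.

TASK_ROUTES = {
    "code": "gpt-coding-sub",     # coding tasks go to the ChatGPT subscription
    "research": "glm-cheap-sub",  # miscellaneous tasks go to a cheaper sub
    "chat": "local-qwen",         # or to a local model if you have the hardware
}

def route(task_type: str, prompt: str) -> dict:
    """The 'orchestrator' picks an execution model by task type,
    falling back to handling the task itself when nothing matches."""
    executor = TASK_ROUTES.get(task_type, "opus-orchestrator")
    return {"executor": executor, "prompt": prompt}
```

The point of the split is cost control: the expensive orchestrator only plans and reviews, while bulk token throughput lands on the cheap executors.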
Andrej Karpathy @karpathy
Farzapedia, the personal Wikipedia of Farza, is a good example following my Wiki LLM tweet. I really like this approach to personalization in a number of ways, compared to the "status quo" of an AI that allegedly gets better the more you use it:

1. Explicit. The memory artifact is explicit and navigable (the wiki); you can see exactly what the AI does and does not know, and you can inspect and manage this artifact even if you don't do the direct text writing (the LLM does). The knowledge of you is not implicit and unknown; it's explicit and viewable.

2. Yours. Your data is yours, on your local computer; it's not in some particular AI provider's system without the ability to extract it. You're in control of your information.

3. File over app. The memory here is a simple collection of files in universal formats (images, markdown). This means the data is interoperable: you can use a very large collection of tools/CLIs over this information because it's just files. Agents can apply the entire Unix toolkit over them, and they can natively read and understand them. Any kind of data can be imported into files as input, and any kind of interface can be used to view them as output - e.g. you can use Obsidian to view them, or vibe code something of your own. Search "File over app" for an article on this philosophy.

4. BYOAI. You can use whatever AI you want to "plug into" this information - Claude, Codex, OpenCode, whatever. You can even think about taking an open-source AI and finetuning it on your wiki; in principle, this AI could "know" you in its weights, not just attend over your data.

So this approach to personalization puts *you* in full control. The data is yours, in universal formats, explicit and inspectable. Use whatever AI you want over it; keep the AI companies on their toes! :)

Certainly this is not the simplest way to get an AI to know you - it does require you to manage file directories and so on - but agents make it quite simple and can help you a lot. I imagine a number of products might come out to make this all easier, but imo "agent proficiency" is a CORE SKILL of the 21st century. These are extremely powerful tools - they speak English and they do all the computer stuff for you. Take this opportunity to play with one.

Quoting Farza 🇵🇰🇺🇸 @FarzaTV:

This is Farzapedia. I had an LLM take 2,500 entries from my diary, Apple Notes, and some iMessage convos to create a personal Wikipedia for me. It made 400 detailed articles for my friends, my startups, research areas, and even my favorite animes and their impact on me, complete with backlinks. But this wiki was not built for me! I built it for my agent! The structure of the wiki files and how it's all backlinked is very easily crawlable by any agent, and it makes a truly useful knowledge base. I can spin up Claude Code on the wiki, and starting at index.md (a catalog of all my articles), the agent does a really good job at drilling into the specific pages it needs context on when I have a query.

For example, when trying to cook up a new landing page, I may ask: "I'm trying to design this landing page for a new idea I have. Please look into the images and films that inspired me recently and give me ideas for new copy and aesthetics." In my diary I kept track of everything: learnings, people, inspo, interesting links, images. So the agent reads my wiki and pulls up my "Philosophy" articles from notes on a Studio Ghibli documentary, "Competitor" articles with YC companies whose landing pages I screenshotted, and pics of 1970s Beatles merch I saved years ago. And it delivers a great answer.

I built a similar system a year ago with RAG, but it was ass. A knowledge base that lets an agent find what it needs via a file system it actually understands just works better. The most magical thing now is that as I add new things to my wiki (articles, images of inspo, meeting notes), the system will update 2-3 different articles where it feels that context belongs, or just create a new article. It's like a super-genius librarian for your brain that's always filing stuff for you perfectly, lets you easily query the knowledge for tasks useful to you (e.g. design, product, writing), and never gets tired. I might spend next week productizing this; if that's of interest to you, DM me + tell me your use case!

398 replies · 781 reposts · 8.7K likes · 1.2M views
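The "start at index.md" pattern depends on the wiki maintaining a catalog of all its articles. A minimal sketch of compiling such a catalog from a directory of markdown files follows; the layout (first `#` heading as the title, filename fallback) is an assumption for illustration, not Farzapedia's actual format.

```python
from pathlib import Path

def build_index(wiki_dir: str) -> str:
    """Compile an index.md-style catalog of every article in a wiki
    directory, so an agent can start at one file and drill into the rest.
    Assumes each page's title is its first '# ' heading (illustrative)."""
    lines = ["# Index", ""]
    for page in sorted(Path(wiki_dir).rglob("*.md")):
        if page.name == "index.md":
            continue
        # Use the first heading as the title, falling back to the filename.
        title = page.stem
        for raw in page.read_text(encoding="utf-8").splitlines():
            if raw.startswith("# "):
                title = raw[2:].strip()
                break
        rel = page.relative_to(wiki_dir).as_posix()
        lines.append(f"- [{title}]({rel})")
    return "\n".join(lines) + "\n"
```

Because the catalog is plain markdown, any agent (or any Unix tool) can read it without a database or embedding index.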
Bronson Elliott @bronsonelliott
Built on Claude Code + Obsidian. No database, no server, no API keys. Just markdown files and an LLM that knows the rules. Open source (CC BY 4.0): github.com/bronsonelliott…
0 replies · 0 reposts · 0 likes · 58 views
Bronson Elliott @bronsonelliott
I had already built a system almost like this, but I was inspired by @karpathy's post, so I decided to attempt to fill in the gaps based on his framework. The result is my Living Wiki. Feel free to check it out. Details in 🧵

Quoting Andrej Karpathy @karpathy:

LLM Knowledge Bases. Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it.

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/ and backlinks, and it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries, so my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), and find interesting connections for new article candidates, to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data. E.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI) and, more often, hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

6 replies · 0 reposts · 1 like · 84 views
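The "small and naive search engine over the wiki" mentioned above can be sketched in a few lines: score each markdown page by how many times the query terms appear in it. This is a toy stand-in under that assumption, not the actual tool.

```python
from pathlib import Path

def search_wiki(wiki_dir: str, query: str, top_k: int = 5) -> list[tuple[str, int]]:
    """Naive keyword search over a directory of .md files: each page is
    scored by the total count of query terms it contains, and the top_k
    highest-scoring pages are returned as (filename, score) pairs."""
    terms = [t.lower() for t in query.split()]
    scored = []
    for page in Path(wiki_dir).rglob("*.md"):
        text = page.read_text(encoding="utf-8").lower()
        score = sum(text.count(t) for t in terms)
        if score:
            scored.append((page.name, score))
    return sorted(scored, key=lambda pair: -pair[1])[:top_k]
```

A tool this simple works here because the wiki is small and the LLM uses it only to shortlist pages before reading them in full; at larger scale you would reach for a real index.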
Bronson Elliott @bronsonelliott
@karpathy Loop 3: MAINTENANCE. Health checks find broken links, orphans, stale content. Pattern detection flags recurring themes across domains. The wiki heals itself between sessions.
0 replies · 0 reposts · 1 like · 19 views
Bronson Elliott @bronsonelliott
Loop 2: QUERY + FILING. Ask a question. The LLM searches your vault, synthesizes an answer from multiple notes, then files that synthesis BACK into the wiki. Your curiosity grows the knowledge base. Every question makes future answers better.
0 replies · 0 reposts · 0 likes · 23 views