JoePro

3.9K posts

@JoePro

CS/Math grad turning AI chaos into tools for all. Dad x2, wife-powered. Crafting https://t.co/G9rywHomQr: image/video gen, NFTs, Followtronics exports. Let's democratize

Joined March 2024
962 Following · 820 Followers
Pinned Tweet
JoePro
JoePro@JoePro·
Crimson Desert on RTX 5080 | Cinematic Max Settings | Grinding Back After a Full Wipe | Raw Gameplay, No Mic x.com/i/broadcasts/1…
1
0
1
74
JoePro retweeted
dotta 📎
dotta 📎@dotta·
As far as I can tell, Paperclip is not affected by the Claude Code "OpenClaw billing change." We use claude as the harness binary for the local single-user default, and this seems to be allowed.
Boris Cherny@bcherny

@EricBuess Yep, working on improving clarity here to make it more explicit

38
7
202
26.6K
JoePro
JoePro@JoePro·
@elonmusk It really is good with kids. I'd switch up the voices or add a few geared that way. @blippi on there would instantly destroy the market. Always prime stuff from xAI. Top Company!
0
5
5
112
JoePro
JoePro@JoePro·
@steipete Beautiful system you made. If the ease of switching between Auth and API can get smoother it would change everything. Maybe Codex 5.4 auth and Opus API subagents would make things cheaper for me.
0
0
0
69
JoePro
JoePro@JoePro·
@tonysimons_ Same. The design of it is like an old Game meets new tech feel. I'm using it as a dispatcher from my main Claw harness. Nice computer bro. Bet the RGB adds some extra horsepower. Know mine does. 😂
1
0
1
106
Tony Simons
Tony Simons@tonysimons_·
The Hermes Agent CLI makes me feel so warm and fuzzy inside! 😊 Who’s busy setting up or tinkering with Hermes right now?! I wanna know what you’re building!!
Tony Simons tweet media
27
1
87
7K
JoePro
JoePro@JoePro·
@Scobleizer Free Claws? Mine costs $200 a month, and now they want to add pay-as-you-go on top for harnesses. Anthropic is the real loser here. I'm dropping down and using the Codex 5.4 auth. Not much will change. X is hyping it too much. Hermes is amazing, OpenClaw the GOAT IMO
0
0
0
138
Robert Scoble
Robert Scoble@Scobleizer·
My AI says Hermes is the winner: alignednews.com/ai after Anthropic shut down the free Claws. Or is it just a sycophantic parrot of what everyone on X is saying? :-)
20
5
87
11.5K
JoePro
JoePro@JoePro·
@Zimmerman895 @IamEmily2050 Cinematic NotebookLM is available to anyone with a sub. Copying and pasting Karpathy and getting 115k views is broken viewership. X being X. I'm pointing that out.
0
0
0
16
Emily
Emily@IamEmily2050·
NotebookLM video overview on Andrej post.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
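The "compile" step described above (raw/ sources in, a linked .md wiki out) can be sketched in a few lines. This is a hypothetical illustration, not Karpathy's actual tooling; `llm_summarize` is a placeholder where a real LLM call would go, and the file layout is an assumption:

```python
from pathlib import Path

def llm_summarize(text: str) -> str:
    # Placeholder for a real LLM call; truncating keeps the sketch runnable.
    return text[:200].replace("\n", " ")

def build_wiki(raw_dir: str, wiki_dir: str) -> Path:
    """'Compile' documents in raw/ into a markdown wiki: one summary
    page per source, each backlinked, plus an index.md tying them together."""
    raw, wiki = Path(raw_dir), Path(wiki_dir)
    wiki.mkdir(parents=True, exist_ok=True)
    index_lines = ["# Wiki index", ""]
    for src in sorted(raw.glob("*.md")) + sorted(raw.glob("*.txt")):
        page = wiki / f"{src.stem}.md"
        summary = llm_summarize(src.read_text(encoding="utf-8"))
        # Each wiki page carries a backlink to its raw source.
        page.write_text(
            f"# {src.stem}\n\n{summary}\n\nSource: ../{raw.name}/{src.name}\n",
            encoding="utf-8",
        )
        index_lines.append(f"- [{src.stem}]({page.name})")
    index = wiki / "index.md"
    index.write_text("\n".join(index_lines) + "\n", encoding="utf-8")
    return index
```

A fuller version would have the LLM also write concept articles and cross-links; the auto-maintained index file is what lets an agent later answer questions against the wiki without a full RAG stack.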

22
108
1.2K
138.5K
JoePro
JoePro@JoePro·
Totally local, and the Mac's unified memory makes them super convenient. Anthropic turned it into a pay-as-you-go subscription. Anyone who knows will switch auth to ChatGPT Codex 5.4, downgrade their Anthropic plan, and use pay-as-you-go for Opus. Things will prob get cheaper. 100% local is going to be there real soon, and those Mac Studios will be more valuable. But nobody needs one. I'm still setting Claws up and running; things haven't changed.
0
0
0
125
Alex Finn
Alex Finn@AlexFinn·
I told you so. For months I've been telling you to buy Mac Minis, Mac Studios, and DGX Sparks. I told you AI companies were going to ban you. Reduce limits. Increase prices. Now it's happening. All while local models get 100x better. My DMs are now filled with messages like this. I don't care that Anthropic banned OpenClaw. Right now I have 3 Mac Studios, a Mac Mini, and a DGX Spark running incredible local models. You can never take those away from me. This isn't even close to over either. Tokens will only get more expensive. Local models will only get better and smaller. The clock's ticking. Own your intelligence before it's too late.
Alex Finn tweet media
Alex Finn@AlexFinn

It's over. Anthropic just banned OpenClaw. Uncensored thoughts:
1. Massive mistake that will come back to bite them
2. Open source needs to win. If you have a local model running on your Mac mini, no corporation will ever be able to ban you
3. ChatGPT 5.4 is the best model. But it sucks compared to Opus in OpenClaw. I will continue to pay for the Anthropic API
4. I have no doubt the next OpenAI model will be optimized for OpenClaw and be excellent
5. In 6 months the local models will be as good as Opus 4.6 and all of this will be forgotten
6. It feels like, from a consumer sentiment perspective, things have flipped for OpenAI and Anthropic. They were the darlings when Opus 4.5 came out
7. Going to the Kanye concert right now, please don't spoil the stage or set list in the replies
8. The best OpenClaw setup is now Opus as the orchestrator, then much cheaper models as the execution layer. If you do this properly you won't be paying much more than $200 a month. I'm using Gemma 4 and Qwen 3.5 for execution on my DGX Spark and Mac Studio
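The setup in point 8 is a two-tier dispatch: an expensive orchestrator model for planning, cheap local models for execution. A minimal sketch of that routing, with made-up model names and `call_model` standing in for real API or local-inference calls:

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str    # "plan" or "execute"
    prompt: str

def call_model(name: str, prompt: str) -> str:
    # Stand-in for a real API or local-inference call; names are invented.
    return f"[{name}] {prompt}"

def route(task: Task) -> str:
    # Planning goes to the strong, costly model; everything else goes to a
    # cheap local executor, mirroring the orchestrator/execution-layer split.
    if task.kind == "plan":
        return call_model("orchestrator-opus", task.prompt)
    return call_model("local-executor", task.prompt)
```

The cost saving comes from the asymmetry: one planning call can fan out into many execution calls, so keeping the bulk of the token volume on the cheap tier dominates the bill.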

141
37
650
98.9K
blanco da bronco
blanco da bronco@blancodomingo_·
@JoePro @karpathy Feel free to contribute; we went with a provider approach so you can plug and play whatever you want
1
0
1
11
JoePro
JoePro@JoePro·
🤣I think I just tinkered my way out of the equation. @karpathy has auto research. I introduce auto clawing.
JoePro tweet media
1
0
0
41
JoePro
JoePro@JoePro·
@blancodomingo_ @karpathy I'm currently trying to make an agent system that works similar to Grok 4's agent structure. This is very useful.
0
0
0
9
JoePro
JoePro@JoePro·
@alexutopia My 2 year old is the most anti AI. Broke my meta glasses and limitless pendant in one day. 😂
1
0
2
483
Alex Utopia
Alex Utopia@alexutopia·
Most anti-AI arguments collapse the moment you ask one rude question: If AI is slop and worthless, why does it make you this emotional?
141
70
423
18.4K
JoePro
JoePro@JoePro·
@AlexFinn Local device. Mac mini not the best IMO. Accurate though.
0
0
0
19
Alex Finn
Alex Finn@AlexFinn·
Every AI tool you need to escape the permanent underclass:
• OpenClaw
• Hermes Agent
• Gemma 4 running on a Mac Mini
• Paperclip
• ChatGPT 5.4 Pro
• Claude Code
• Codex app
• 2nd monitor that has these agents up 24/7
Do work on 1st monitor. Constantly prompt 2nd monitor
225
140
1.7K
94.3K
JoePro
JoePro@JoePro·
Yeah, it's the reason I clicked, but I just figured it was to get interest. A bit clickbait. It works well for what it is, though, and could potentially do more cities as more users show up in them. Criticizing someone's hard work in its early stages isn't right, but I also get that it's misleading.
0
0
0
7
James Kennedy
James Kennedy@JamesKennedyio·
@JoePro @m_atoms That it was built with claude wasn't the point of my criticism, I'm knocking him for claiming it is "the best way to monitor the situation in your city." yet it only tracks one city.
1
0
1
19
Michael Adams
Michael Adams@m_atoms·
Introducing Republic - the best way to monitor the situation in your city. Track political news, crime, permits, events, community groups, and more across the city and in your neighborhood!
110
206
2.5K
200.7K
JoePro
JoePro@JoePro·
@JamesKennedyio @m_atoms To knock someone for using Claude or any other AI to code is backwards thinking. It takes more than some shit prompt to make this. Integrating all those datapoints would be wasteful for an MVP. High value home run.
2
0
1
39
James Kennedy
James Kennedy@JamesKennedyio·
@m_atoms By 'your city' you mean 'I had claude build this cool tracker but to plug in the data for every city is impossible so I did San Francisco only but claude made it look cool and it even added a billing + subscription system'
2
0
1
712
JoePro
JoePro@JoePro·
@m_atoms Really great stuff. Are the land plots accurate? I know if it had that it would sell like hotcakes. Also, this is just for San Francisco; do you have plans to make it more robust?
1
0
1
81
JoePro
JoePro@JoePro·
@elonmusk Would ya just look at that! Nicely done!
GIF
0
0
1
26
Elon Musk
Elon Musk@elonmusk·
Next flight of Starship and first flight of V3 ship & booster is 4 to 6 weeks away
14K
36.9K
447.9K
101.6M
JoePro
JoePro@JoePro·
Newest open model from Google. Been listening to Mr. Berman's cast for a couple months. His takes are spot on with AI breaking news. Entertaining to watch, very intelligent, and he has some of the sickest tech setups out there. I'm getting Gemma 4 loaded now. What are your plans for local AI?
Matthew Berman@MatthewBerman

Gemma 4 is a bigger deal than more people realize... It's an incredible model that fits on most consumer hardware. The future is hybrid hosted/edge models.

0
0
0
61
JoePro retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
LLM Knowledge Bases (full post; text identical to the version quoted above under Emily's post)
2.3K
5.4K
46.7K
14.1M