localsonly

180 posts


localsonly

@LocalsOnlyAI

Agency in the Agentic Era. If it doesn't run on your machine, you don't own it. Locals Only

Joined April 2026
91 Following · 33 Followers
localsonly@LocalsOnlyAI·
Update: Ollama just fixed tool calling. So a third run is probably happening.
0
0
0
0
localsonly@LocalsOnlyAI·
Spent the weekend running 336 pages of real commercial leases through 5 AI models: 2 frontier, 3 local. Opus won. But wait until you see what happened when I fixed my methodology.

The lineup:
- Claude Opus 4.6
- Claude Haiku 4.5
- Google Gemma 4 31B (dense, 31B params)
- Qwen 3.5 35B-A3B (MoE, 3B active)
- Google Gemma 4 E4B (~4B effective)

I messed up the test design, caught it, and reran everything. The second set of numbers surprised me. Full results Wednesday.

Do you think any of these local models got close to Opus? Even Haiku? Any models or tools you'd want to see tested?
1
0
0
23
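The shape of a run like this is a simple scoring loop. A toy sketch, with stub models and a stand-in grader; the real runs would call each model's API and grade extracted lease fields, and all names here are placeholders:

```python
from typing import Callable

def run_eval(models: dict[str, Callable[[str], str]],
             documents: list[str],
             grade: Callable[[str, str], int]) -> dict[str, int]:
    """Score each model over every document; higher total is better."""
    scores = {name: 0 for name in models}
    for doc in documents:
        for name, ask in models.items():
            answer = ask(doc)            # in practice: an API or local-model call
            scores[name] += grade(doc, answer)
    return scores

# Toy usage: stub "models" and an exact-match grader.
docs = ["lease A", "lease B"]
grader = lambda doc, ans: int(ans == doc.upper())
models = {"stub-good": str.upper, "stub-bad": str.lower}
print(run_eval(models, docs, grader))  # -> {'stub-good': 2, 'stub-bad': 0}
```

Fixing the methodology then means changing only `grade` or the prompts, and rerunning the same loop over the same documents.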
localsonly@LocalsOnlyAI·
A lot of “multimillion dollar” AI businesses are just subsidized compute with margin. When that subsidy goes away, the margin disappears. Local isn’t perfect yet. But no redundancy = real risk. It’s the same as letting someone build your whole system with no contract or IP assignment.
0
0
0
15
klöss@kloss_xyz·
$200/mo for max frontier AI compute is a subsidy… and subsidies eventually end. The smartest builders I know are already moving their craziest workflows to open models and local. The rest will wake up to subscription hikes and usage throttles soon enough. Own your own compute.
17
2
37
1.8K
localsonly@LocalsOnlyAI·
@owroot We only let our kid watch Little Bear every once in a while, or Vooks.
0
0
0
520
O.W. Root@owroot·
My kids watch extremely minimal TV/movies, but they have watched Little Bear and Kipper the Dog as we are closing in on the last hours of a long car trip. They are both from the 90s, and they are so slow and relaxing. The VOs, the music, the animation: all of it is basically as gentle as you get in TV form. I believe a lot of kids' shows are totally psychotic and make their brains more addled than they need to be. I don't feel that with Little Bear or Kipper the Dog, though. They don't annoy me when I hear them. They feel like 1997 PBS, and that's basically the least brain-frying option and the best you can get.
Old Media@oldmedia

Little Bear (1995)

50
50
1.3K
98.1K
0xSero@0xSero·
Ideal go kit:
1. ~30 one-gram gold pieces
2. Water filtration kit
3. Fire starter
4. A gun, knife, flare
5. A computer
6. A few power banks
7. A compass
8. Opioids, antibiotics
9. Rope + net
10. High-protein food
11. Portable solar panel
12. Portable Starlink
13. Tent
0xSero tweet media
17
4
113
5.1K
localsonly@LocalsOnlyAI·
@RoundtableSpace Once you start… you’ll never go back. I’ve been doing this for the past year and a half and rarely sit at my desk anymore. How is Workbench working for you? I use jumpdesk (if you need another option).
0
0
0
560
0xMarioNawfal@RoundtableSpace·
HEADLESS MAC MINI + IPAD IS STARTING TO LOOK LIKE A CLEAN SETUP FOR MANAGING AI AGENTS REMOTELY. Monitor everything from your iPad, step in when needed, and keep the whole system running from anywhere.
32
17
449
103.8K
Jon Hernandez@JonhernandezIA·
The army is growing! Added the Spark from @nvidia to run an @openclaw with Gemma 4 local, and also to run some models for my other claws: image, voice, etc.

So far I have one Mac mini with Codex as my main workhorse, named Clippy. Another Mac mini runs zai gl5.1; he does some specific stuff and is a great backup: he can get Clippy back up when there is a problem, which has been a real blessing as I'm traveling all the time. And now the Spark is joining the team...

It's quite a journey, but I'm loving it. Having your own personalized, ultra-agentic ChatGPT setup is incredible.
Jon Hernandez tweet media
40
16
249
17.5K
localsonly@LocalsOnlyAI·
@AlexFinn What are your favorite models for the DGX Spark? I have one coming next week.
0
0
1
1K
Alex Finn@AlexFinn·
I don't care what kind of hardware you have, you should be running local models. It will save you a ton of money on OpenClaw and keep your data private. Even if you're on the cheapest Mac Mini, you can be doing this.

Here's a complete guide:
1. Download LM Studio
2. Go to your OpenClaw/Hermes and say what kind of hardware you have (computer, memory, and storage)
3. Ask what's the best local model you can run on there (it will probably be Gemma 4 or Qwen; if you have a big computer, it will be GLM)
4. Ask "based on what you know about me, what workflows could this open model replace?"
5. Have OpenClaw walk you through downloading the model in LM Studio and setting up the API
6. Ask OpenClaw to start using the new API

Boom, you're good to go. You just saved money by using local models, have an AI model that is COMPLETELY private and secure on your own device, did something advanced that 99% of people have never done, and have entered the future.

If you are on smaller hardware, you probably are not going to replace all your AI calls with this, but you could replace smaller workflows, which will still save you good money.

Own your intelligence.
Alex Finn tweet media
118
129
1.5K
116.1K
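Step 5 in the guide above boils down to pointing your tools at LM Studio's local server, which speaks the OpenAI chat-completions format. A minimal sketch, assuming the server is running on its default port (1234); the model name `gemma-4-31b` is a placeholder for whatever you actually loaded:

```python
import json
import urllib.request

# LM Studio's local server, default address when started from the app.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(prompt: str, model: str = "gemma-4-31b") -> dict:
    """Build an OpenAI-style chat request. The model name is whatever
    you loaded in LM Studio (placeholder here)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local(prompt: str, model: str = "gemma-4-31b") -> str:
    """POST the request to the local server and return the reply text."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires the server to be running):
#   print(ask_local("Summarize my last five emails."))
```

Because the request format matches OpenAI's, most tools that accept a custom base URL can be pointed at this endpoint with no code changes.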
Andy@andyantiles_·
If you’re making over $250k/yr, retire your wife immediately.

Not so she can do yoga and Pilates all day, but so she can become the family real estate professional.

With the real estate professional status on your tax returns, you’ll be able to claim enormous tax deductions from buying real estate.

Have your wife quit her job, and you’ll secure generational wealth from buying tax-deductible, cash-flowing real estate.

My life changed forever when I had my wife quit her corporate job and we started buying a ton of Section 8 rental properties.

Running this playbook till I’m blue in the face.
36
27
893
286.2K
localsonly@LocalsOnlyAI·
@realEstateTrent @SJFriedl Wait, you actually want control over your financial decisions? Makes more sense when you look at the actual principal payments you’re making over the first 5 to 10 years.
0
0
0
33
StripMallGuy@realEstateTrent·
@SJFriedl You’re making a different argument. I’m saying it’s better to have the freedom to choose how much you’re paying off, and when, if at all.
2
0
19
3.7K
StripMallGuy@realEstateTrent·
The blind spots around the home mortgage topic are wild. The people who don’t understand the math behind why being forced to pay down principal monthly is a bad financial move are the exact same people who desperately need that forced structure. So it all works out.
StripMallGuy@realEstateTrent

The mortgage on our home is interest-only. Why? Because it’s much smarter to invest that principal instead of paying it back to myself every month. Unless you need a forced savings account to protect yourself from yourself, OR You don’t have good investment opportunities, an interest-only mortgage is a no-brainer. It’s actually not even close.

54
1
298
157.9K
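The math being argued over here is just the future value of the redirected principal. A toy comparison with assumed numbers, none of which come from the thread: a $2,000/mo principal portion, a 6% mortgage rate, and an 8% expected investment return, with the pay-down side modeled as savings compounding at the mortgage rate:

```python
def future_value(monthly: float, annual_rate: float, years: int) -> float:
    """Future value of a level monthly contribution, compounded monthly."""
    r = annual_rate / 12
    n = years * 12
    return monthly * ((1 + r) ** n - 1) / r

# Redirect $2,000/mo of principal for 10 years:
invest = future_value(2000, 0.08, 10)  # invested at an assumed 8% return
payoff = future_value(2000, 0.06, 10)  # vs. interest avoided at the 6% mortgage rate
print(round(invest), round(payoff))    # the spread is the interest-only argument
```

The gap flips the other way if expected returns fall below the mortgage rate, which is the unstated assumption in the "no-brainer" framing.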
localsonly@LocalsOnlyAI·
Local AI is not just a toy. Here's what changed this week:

1. An 80B coding model (Qwen3-Coder-Next) now fits on a 64 GB Mac mini. 3B active per token. MoE does the heavy lifting. Going to try and do the CRM run on this. Might be too big.

2. Ollama's MLX preview landed. ~2x decode on Apple Silicon. The bottleneck people kept pointing at is gone.

Anything else I missed? Anything worth testing on a 16GB and a 64GB Mac Mini?
0
0
1
45
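The "80B fits on 64 GB" claim is just quantization arithmetic. A quick sanity check, counting weights only and ignoring KV cache and runtime overhead:

```python
def model_footprint_gib(params_billion: float, bits_per_weight: float) -> float:
    """Weight-only memory footprint in GiB; ignores KV cache and overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

# An 80B model at 4-bit quantization:
print(round(model_footprint_gib(80, 4), 1), "GiB")  # well under 64 GiB
```

That leaves roughly 20+ GiB of headroom for the KV cache and the OS, which is why it squeezes onto a 64 GB machine but would not fit at 8-bit.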
localsonly@LocalsOnlyAI·
@0xSero Would a 3090 with my 64GB mini be worth anything? Will probably get an M5 Studio when it comes out. Just debating getting a 3090 now or waiting.
1
0
0
752
localsonly@LocalsOnlyAI·
SaaS got away with adding back SBC. Someone is going to figure out the next move: pay LLM providers in stock instead of cash. Call it “AI R&D.” Add it back to EBITDA. Same cost. Better story.
Bill Gurley@bgurley

x.com/i/article/2042…

0
0
0
37
localsonly@LocalsOnlyAI·
8/ Updated CRM Scoreboard:

Claude Opus 4.6: 24/25
Gemma 4 31B: 23/25 ← new
Gemma 4 E4B (9.6 GB): 22/25
Qwen3-Coder 30B: 21/25
Gemma 4 26B: 21/25
Qwen 2.5 Coder 14B: 19/25

The gap is closing faster than most takes on this app will tell you.
0
0
0
38
localsonly@LocalsOnlyAI·
7/The catch is real and I'm not going to dress it up. 20 minutes per round means your dev loop is "kick it off, go walk the dog, come back." Not "run it, read the output, iterate." Different workflow. But it's $0 a query, on your hardware, with your data. For a lot of the work I care about, that's the better tradeoff.
1
0
0
8
localsonly@LocalsOnlyAI·
"Local models can't keep up with Claude on real coding work." That's what I told myself for months. It's why I left Gemma 4 31B out of the first CRM benchmark. Too slow to bother with. Ran it this week anyway. It came one point short of Opus. Here's what I found. 🧵
1
0
0
32