localsonly
175 posts

localsonly
@LocalsOnlyAI
Agency in the Agentic Era. If it doesn't run on your machine, you don't own it. Locals Only
Joined April 2026
88 Following 33 Followers

@RoundtableSpace Once you start… you’ll never go back. I’ve been doing this for the past year and a half and rarely sit at my desk anymore.
How is work bench? I use jumpdesk (if you need another option)

@LocalsOnlyAI @JonhernandezIA @nvidia @openclaw Bro leave room for ventilation, those things need space to breathe.

The army is growing!
Added the spark from @nvidia to run an @openclaw with Gemma 4 locally, and also to run some models for my other claws: image, voice, etc.
So far I have
One Mac mini with codex as my main workhorse, named clippy.
Another Mac mini with zai gl5.1; he handles some specific stuff and is a great backup. When clippy has a problem, he can get clippy back up, which has been a real blessing since I'm traveling all the time.
And now the spark adding to the team...
It's quite a journey but loving it... Having your own personalized ultra agentic chatgpt setup is incredible


@AlexFinn What are your favorite models for DGX Spark? I have one coming next week.

I don't care what kind of hardware you have, you should be running local models
It will save you a ton of money on OpenClaw and keep your data private
Even if you're on the cheapest Mac Mini you can be doing this
Here's a complete guide:
1. Download LMStudio
2. Go to your OpenClaw/Hermes and say what kind of hardware you have (computer, memory, and storage)
3. Ask what's the best local model you can run on it (probably Gemma 4 or Qwen; if you have a big machine, it will be GLM)
4. Ask 'based on what you know about me, what workflows could this open model replace?'
5. Have OpenClaw walk you through downloading the model in LM Studio and setting up the API
6. Ask OpenClaw to start using the new API
Boom you're good to go.
You just saved money by using local models, have an AI model that is COMPLETELY private and secure on your own device, did something advanced that 99% of people have never done, and have entered the future.
If you are on smaller hardware you probably are not going to replace all your AI calls with this, but you could replace smaller workflows, which will still save you good money
Own your intelligence.
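For step 5-6, the key fact is that LM Studio's local server speaks the OpenAI-compatible chat API (by default at http://localhost:1234/v1), so anything that can call OpenAI can call your local model instead. A minimal sketch, assuming you've already downloaded a model and started the server in LM Studio; the model name "gemma-4" is a placeholder for whatever you loaded:

```python
import json
import urllib.request

# LM Studio serves an OpenAI-compatible API; localhost:1234 is its default port.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload for the local server."""
    return {
        "model": model,  # placeholder name; use whatever you loaded in LM Studio
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask_local(model: str, prompt: str) -> str:
    """Send the prompt to the local server and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        LMSTUDIO_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Usage (needs the LM Studio server running):
# reply = ask_local("gemma-4", "Summarize yesterday's notes in three bullets.")
```

Point your agent at that same base URL as its API endpoint: no API key, no per-token bill, and nothing leaves your machine.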


@andyantiles_ What if I’m already in real estate… and she is too?

If you’re making over $250k/yr
Retire your wife immediately
Not so she can do yoga and Pilates all day
But so she can become the family real estate professional
With the real estate professional status on your tax returns
You’ll be able to claim enormous tax deductions from buying real estate
Have your wife quit her job
And you’ll secure generational wealth from buying tax deductible, cash flowing real estate
My life changed forever when I had my wife quit her corporate job and we started buying a ton of section 8 rental properties
Running this playbook till I’m blue in the face

@realEstateTrent @SJFriedl Wait, you actually want control over your financial decisions?
Makes more sense when you look at the actual principal payments you’re making over the first 5 to 10 years.

@SJFriedl You’re making a different argument. I’m saying it’s better to have the freedom to choose how much you’re paying off, and when, if at all.

The blind spots around the home mortgage topic are wild.
The people who don’t understand the math behind why being forced to pay down principal monthly is a bad financial move are the exact same people who desperately need that forced structure.
So it all works out.
StripMallGuy@realEstateTrent
The mortgage on our home is interest-only. Why? Because it’s much smarter to invest that principal instead of paying it back to myself every month. Unless you need a forced savings account to protect yourself from yourself, OR You don’t have good investment opportunities, an interest-only mortgage is a no-brainer. It’s actually not even close.

Local AI is not just a toy. Here's what changed this week:
1. An 80B coding model (Qwen3-Coder-Next) now fits on a 64 GB Mac mini. 3B active per token. MoE does the heavy lifting. Going to try and do the CRM run on this. Might be too big.
2. Ollama's MLX preview landed. ~2x decode on Apple Silicon. The bottleneck people kept pointing at is gone.
Anything else I missed? Anything worth testing on a 16GB and 64GB Mac Mini?
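The reason an 80B model fits on a 64 GB machine comes down to quantization arithmetic. A back-of-envelope sketch, assuming 4-bit quantized weights (~0.5 bytes per parameter; the exact quant and KV-cache overhead will shift the numbers):

```python
def model_memory_gb(total_params_billions: float, bits_per_weight: float) -> float:
    """Rough weight footprint in GB: billions of params times bytes per param."""
    return total_params_billions * (bits_per_weight / 8)

# 80B total parameters at 4-bit quantization: ~40 GB of weights.
weights = model_memory_gb(80, 4)
print(f"{weights:.0f} GB of weights")  # → 40 GB of weights

# Headroom left for KV cache and the OS on a 64 GB machine: tight but workable.
print(f"{64 - weights:.0f} GB headroom")  # → 24 GB headroom
```

The 3B-active-per-token MoE routing is what keeps decode speed usable: you pay memory for all 80B parameters, but compute for only a small expert subset per token.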

@0xSero Would a 3090 with my 64GB mini be worth anything?
Will probably get a M5 Studio when it comes out. Just debating on getting a 3090 now or waiting.

If you have a $5k budget I’d recommend this: a 3090 and a Mac Studio with 256GB
Alex Ziskind@digitalix
new video. i plugged a 5090 into a mac mini.

SaaS got away with adding back SBC.
Someone is going to figure out the next move:
Pay LLM providers in stock instead of cash.
Call it “AI R&D.”
Add it back to EBITDA.
Same cost. Better story.
Bill Gurley@bgurley

7/The catch is real and I'm not going to dress it up.
20 minutes per round means your dev loop is "kick it off, go walk the dog, come back." Not "run it, read the output, iterate."
Different workflow. But it's $0 a query, on your hardware, with your data. For a lot of the work I care about, that's the better tradeoff.

@AndrewYang But AI companies are expanding physical office space?
Anthropic and OpenAI are now over 1M square feet, with that space committed for 10+ years.
They either:
1.) anticipate headcount growth
Or
2.) are horrible at planning for the future

What happens when millions of white-collar jobs disappear? blog.andrewyang.com/p/the-end-of-t…

@RandBusiness I’m working on building tests for running real business tasks on local models and consumer machines (Mac mini, DGX Spark, and Mac Studio when it arrives). I want to see what can actually be offloaded in a reliable way.
If you have any workflows or tests I should do, let me know!
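One way those tests could be structured: since reliability (not a single good run) is the question, run each workflow several times and score pass rate plus latency. A hedged sketch; `run_task` and `check_output` are placeholders for a real call to a local model endpoint and a real correctness check:

```python
import time

def reliability_score(run_task, check_output, trials: int = 10) -> dict:
    """Run a workflow repeatedly; report pass rate and average latency.

    run_task: callable returning the model's output (stand-in for a real
    local-model call). check_output: callable that judges one output.
    """
    passes, elapsed = 0, 0.0
    for _ in range(trials):
        start = time.perf_counter()
        out = run_task()
        elapsed += time.perf_counter() - start
        if check_output(out):
            passes += 1
    return {"pass_rate": passes / trials, "avg_seconds": elapsed / trials}

# Example with a stubbed task; a real test would hit the local endpoint instead.
result = reliability_score(
    run_task=lambda: "invoice total: $1,240",
    check_output=lambda out: "$1,240" in out,
    trials=5,
)
print(result["pass_rate"])  # → 1.0
```

A workflow that passes 10/10 on a Mac mini is offloadable; one that passes 7/10 isn't, no matter how good the best run looks.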

I believe business owners need to see real examples of how other owners are actually using AI in their business.
Then literally just steal it and implement it in your own business
Right now in Scalepath, this is what we’re doing.
We have an “AI-Use-cases” channel in our Slack. And ~every 2 weeks we’re doing live demonstrations where members show off exactly what they’ve built.
Last week a member demoed his vibe coded Service Titan replacement. He uses it every day in his business.
We’re doing as much of this as possible. Being early and “ahead” on AI will drive a ton of value to our members, so we want everyone to share what they’re building. The whole community will prosper.
Charles, will you let us demo your stuff next?
Charles Miller@BigDemoPrez

