Pinned Tweet
Naved Merchant
543 posts

Naved Merchant
@navedmer
Sr. SDE. Post tech memes and opinions. The future is local AI
San Francisco · Joined November 2021
245 Following · 83 Followers
Naved Merchant reposted

(1/3) Building with Kiro 👻
Every week we’re spotlighting what people are shipping, starting with our Kiroween hackathon winners.
Shout-out to our incredible judges:
@ania_kubow @svpino @ssennettau @ErikCH @rstephensme @brianjbeach @navedmer @_Jay_Raval_ @JasonTAndersen @mrsanfran2 @PhillShaffer
First up 👇
spr.ly/6015B65DG5

@kylekemper Modern front-load washers are great. GEs have a built-in venting system that prevents mold. Never had an issue in 4 years.

@sawyerhood Kiro has very good spec based development, try it out

i'm late to the party but my man really cooked with this one
Thariq@trq212
my favorite way to use Claude Code to build large features is spec-based: start with a minimal spec or prompt and ask Claude to interview you using the AskUserQuestionTool, then make a new session to execute the spec

@0xSero Speeds look great! What hardware are you running it on?

Qwen3-235B is the most intelligent and ergonomic model that can run in 192 GB of VRAM with full context.
This is incredibly strange to me, since I typically dislike Qwen models due to their extensive refusals.
It says something about having higher ACTIVE parameters.
• Score: 32/35 (91.4%)
• ~60 TPS generation, 1.5k TPS prefill

@loganthorneloe I am a huge proponent of local language models, but I disagree that they can be used for coding. They're not there yet. It's worth using the best model out there for coding (Claude Opus 4.5), but you can use local models for things like search and other general-purpose tasks

My hypothesis was right.
Two weeks ago I dropped $4000 on a maxed-out MacBook to test if local coding models could replace $100+/mo cloud subscriptions.
After weeks of real development work, here's what you need to know:
- Small models are shockingly capable. I'm talking 90%+ of development work can be handled by local models. Even 7B-parameter models punch way above their weight. You don't need to spend $4,000 on a 128 GB MacBook Pro like I did; even 32 or 64 GB machines can run great models.
- The real constraint is tooling. While serving local models is easy, connecting those models to coding tools reliably was difficult. I spent a lot of time tinkering to get them to work.
- Local models provide benefits beyond cost. They fit many more applications (think security- and privacy-focused ones), provide greater flexibility, and are more reliable: there's no downtime, and performance never randomly degrades.
So is better hardware worth it over a subscription?
Yes, but here's the catch:
If you're spending $100/mo+ on Cursor or Claude subscriptions, the investment is worth it. Local models will only get better and smaller from here on out.
However, Google offers a lot of free quota across its AI coding products. The hardware purchase becomes much more difficult to justify if the alternative is free coding tools instead of pricey subscriptions.
My approach going forward will be this: Use local models as my workhorse. Use the free cloud offerings for the 10% of cases where you need better performance.
I documented my entire local AI coding setup. I decided to use the Qwen3 models, serve them with MLX, and use Qwen Code CLI as my coding tool.
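For anyone wanting to try the same stack, a minimal sketch of that setup might look like this (the exact model name, port, and environment variable names here are assumptions on my part; check the mlx-lm and Qwen Code docs for your versions):

```shell
# Serve a Qwen3 model locally via mlx-lm's OpenAI-compatible server
# (assumes: pip install mlx-lm; the model name below is illustrative)
mlx_lm.server --model mlx-community/Qwen3-30B-A3B-4bit --port 8080

# In another terminal, point Qwen Code CLI at the local endpoint
# (env variable names assumed from Qwen Code's OpenAI-compatible config)
export OPENAI_BASE_URL="http://localhost:8080/v1"
export OPENAI_API_KEY="local"   # any non-empty string works for a local server
export OPENAI_MODEL="mlx-community/Qwen3-30B-A3B-4bit"
qwen
```

The point of this shape is that anything speaking the OpenAI chat-completions protocol can sit behind that base URL, so you can swap the served model without touching the coding tool.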
Link in bio for the complete guide.
Logan Thorneloe@loganthorneloe
I've got a MacBook w/128 GB of RAM coming today. My hypothesis: My money is better spent paying for greater hardware and running local coding models than paying a $100+/mo subscription. Follow for details of my setup and to see the results!

@Teknium I don't know what it is, but Claude is just perfectly in tune for me for coding. It always does exactly what is expected, nothing more, nothing less

When I say I am a claude stan for coding, I mean it
Teknium (e/λ)@Teknium
I've never switched from Claude despite all the gpt-5xx glazing. They had delayed putting Claudes on this for the last ~6 months for unexplained reasons, but the vibes didn't lie

