Naved Merchant

543 posts

@navedmer

Sr. SDE. Post tech memes and opinions. The future is local AI

San Francisco · Joined November 2021
245 Following · 83 Followers
Pinned Tweet
Naved Merchant @navedmer
I built a way to access your local AI models from anywhere using WebRTC - no port forwarding, no cloud providers, completely private. Here's how and why 👇
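The tweet doesn't show the mechanism, but the core idea of relaying prompts over a WebRTC data channel can be sketched as a tiny JSON message protocol. The channel itself (e.g. via aiortc in Python, or the browser's RTCDataChannel) and the signaling step are omitted here, and all names are hypothetical, not taken from the actual project:

```python
import json

def encode_prompt(request_id, prompt):
    """Frame a prompt as a data-channel message (hypothetical protocol)."""
    return json.dumps({"id": request_id, "type": "prompt", "text": prompt})

def handle_message(raw, run_local_model):
    """Remote side: decode a message, run the local model, frame the reply.

    run_local_model stands in for whatever calls your local inference
    server (llama.cpp, MLX, ...). Because replies travel back over the
    same peer connection, no cloud provider or port forwarding is needed.
    """
    msg = json.loads(raw)
    reply = run_local_model(msg["text"])
    return json.dumps({"id": msg["id"], "type": "reply", "text": reply})

# Loopback demo with a stub model standing in for real inference:
out = handle_message(encode_prompt(1, "hi"), lambda p: p.upper())
print(json.loads(out)["text"])  # HI
```

Any framing scheme works as long as both peers agree on it; JSON keeps it debuggable.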
Naved Merchant @navedmer
Even with Opus 4.6 and all its intelligence, I can still confidently say it's not replacing engineers yet. It still needs the right context to be effective. It's a force multiplier for sure, and I think my productivity has doubled or tripled. But it's still a tool, like IDEs were.
Naved Merchant @navedmer
Google will miss a huge opportunity if they don't include this image somewhere in Android 17.
[image attachment]
Naved Merchant @navedmer
@kylekemper Modern front-load washers are great; GE's have an in-built venting system that prevents mold. Never had an issue in 4 years.
Kyle Kemper 💫 @kylekemper
Pro tip: never buy a front loading washing machine
Naved Merchant @navedmer
@0xSero Speeds look great! What hardware are you running it on?
0xSero @0xSero
Qwen3-235B is the most intelligent and ergonomic model that can run in 192GB VRAM with full context. This is incredibly strange to me, in that I typically dislike Qwen models due to their extensive refusals. It says something about having more ACTIVE parameters. • Score: 32/35 (91.4%) · ~60 TPS generation, 1.5k TPS prefill.
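Whether 235B parameters fit in 192GB comes down to quantization; rough back-of-the-envelope arithmetic (weights only, ignoring KV cache and runtime overhead) shows why something around 4-bit is required:

```python
def model_size_gb(params_b, bits_per_weight):
    """Rough weight-memory estimate: parameters (in billions) times
    bits per weight, converted to gigabytes. Ignores KV cache,
    activations, and framework overhead."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits}-bit: {model_size_gb(235, bits):.0f} GB")
# 16-bit: 470 GB
# 8-bit: 235 GB
# 4-bit: 118 GB
```

At ~4-bit the weights leave roughly 70GB of the 192GB headroom, which is what makes "full context" plausible in the quoted claim.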
Naved Merchant @navedmer
@loganthorneloe I am a huge proponent of local language models, but I disagree that they can be used for coding. They're not there yet. It's worth using the best model out there for coding (Claude Opus 4.5), but you can use local models for things like search and other general-purpose tasks.
Logan Thorneloe @loganthorneloe
My hypothesis was right. Two weeks ago I dropped $4000 on a maxed-out MacBook to test if local coding models could replace $100+/mo cloud subscriptions. After weeks of real development work, here's what you need to know:

- Small models are shockingly capable. I'm talking 90%+ of development work can be handled by local models. Even 7B parameter models punch way above their weight. You don't need to spend $4000 on a 128 GB MacBook Pro like I did; even 32 and 64 GB can run great models.
- The real constraint is tooling. While tooling makes it easy to serve local models, connecting those models to coding tools reliably was difficult. I spent a lot of time tinkering to get them to work.
- Local models provide benefits other than just cost. They apply to many more applications (think security- and privacy-focused applications), provide greater flexibility, and are more reliable. There's no downtime for local models, and their performance will never randomly degrade.

So is better hardware worth it over a subscription? Yes, but here's the catch: if you're spending $100+/mo on Cursor or Claude subscriptions, the investment is worth it. Local models will only get better and smaller from here on out. However, Google offers a lot of free quota across its AI coding products. The hardware purchase becomes much more difficult to justify if the alternative is free coding tools instead of pricey subscriptions.

My approach going forward will be this: use local models as my workhorse, and use the free cloud offerings for the 10% of cases where you need better performance.

I documented my entire local AI coding setup. I decided to use the Qwen3 models, serve them with MLX, and use Qwen Code CLI as my coding tool. Link in bio for the complete guide.
Logan Thorneloe @loganthorneloe
I've got a MacBook w/128 GB of RAM coming today. My hypothesis: my money is better spent paying for greater hardware and running local coding models than paying a $100+/mo subscription. Follow for details of my setup and to see the results!
Naved Merchant @navedmer
@Teknium I don't know what it is, but Claude is just perfectly in tune with me for coding. It always does exactly what's expected, nothing more, nothing less.
Naved Merchant @navedmer
Everything is Free and Open Source!
Naved Merchant @navedmer
Future Work:
1. MyDeviceAI-Web - browser client
2. Image/PDF support for multimodal models
3. llama.cpp slots for better responses + concurrent inference
4. Seamless desktop updates
5. Custom OpenAI-compatible endpoints
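The OpenAI-compatible endpoints mentioned in item 5 all accept the same chat-completions request shape, which is why one client can target llama.cpp, MLX servers, or cloud providers interchangeably. A minimal stdlib-only sketch; the host, port, and model name are placeholders, not values from the project:

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build an OpenAI-compatible /v1/chat/completions request.

    Any server exposing the OpenAI chat-completions shape
    (llama.cpp's server, mlx_lm.server, etc.) accepts this body.
    """
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical local server and model name:
req = build_chat_request("http://localhost:8080", "qwen3-4b", "Hello!")
print(req.full_url)  # http://localhost:8080/v1/chat/completions
```

Sending it is one more line (`urllib.request.urlopen(req)`) once a compatible server is actually listening on that port.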