
Anders Brownworth (@anders94):
Just learned that in Xcode’s Apple Intelligence you can add a local LLM using ollama and have private AI coding assistance without an internet connection.
[attached image]
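A minimal sketch of the local setup described above, assuming Ollama is already installed. The model name `qwen3-coder` is taken from later in this thread; swap in whatever model fits your machine. The exact Xcode menu naming may differ by version, but the idea is to register the local Ollama endpoint as a model provider under Xcode's Intelligence settings.

```shell
# Pull a local coding model (qwen3-coder is one of the models mentioned in this thread)
ollama pull qwen3-coder

# Start the Ollama server in the background; it listens on localhost:11434 by default
ollama serve &

# Verify the local endpoint is reachable and lists the pulled model
curl http://localhost:11434/api/tags
```

Once the server is up, Xcode can point at `localhost:11434` as a locally hosted model provider, so completions never leave the machine.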
FalconOrtiz (@Falconortizx):
Apple Intelligence + Ollama for local AI coding in Xcode is the combo most devs are missing 🍎 I've been running local models on my Mac for months. The privacy advantage alone is worth it — no code leaving your machine, no API costs, no rate limits. But real question: has anyone actually benchmarked this vs sending to Claude/Copilot for real-world coding tasks? I'm curious if the quality gap is still noticeable or if local models have caught up enough for daily work.
Anders Brownworth (@anders94):
@Falconortizx For me the quality lags commercial models only by a matter of months. At this point I use local models (gemma4 / qwen3-coder) to write tests and non-essential UI / backends. I’ll use a commercial model for architectural planning but implement things myself using local models.
FalconOrtiz (@Falconortizx):
@anders94 I think that’s one of the best approaches we can take right now, bearing in mind there’s so much competition among commercial models every day. Qwen has become one of the best options right now.
Anders Brownworth (@anders94):
@Falconortizx IMHO this is the major unsung win Apple has - unified memory and token generation performance per watt. Nobody seems to be talking about this but it’s a huge advantage.