Apple Intelligence + Ollama for local AI coding in Xcode is the combo most devs are missing 🍎
I've been running local models on my Mac for months. The privacy advantage alone is worth it — no code leaving your machine, no API costs, no rate limits.
But real question: has anyone actually benchmarked this vs sending to Claude/Copilot for real-world coding tasks? I'm curious if the quality gap is still noticeable or if local models have caught up enough for daily work.

@Falconortizx For me the quality lags commercial models only by a matter of months. At this point I use local models (gemma4 / qwen3-coder) to write tests and non-essential UI / backends. I’ll use a commercial model for architectural planning but implement things myself using local models.
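Not part of the original replies, but for anyone wondering what this workflow looks like mechanically: a minimal Python sketch of querying a local Ollama server for a test-writing task. It assumes `ollama serve` is running on the default port 11434 and that a model such as `qwen3-coder` has already been pulled; the model name and prompt are placeholders, not anyone's actual setup.

```python
import json
import urllib.request

# Ollama's default local generate endpoint (no API key, nothing leaves the machine)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming /api/generate request for a local Ollama server."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON object instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Example: ask the local model to draft a unit test (placeholder prompt)
    req = build_request("qwen3-coder", "Write a Swift unit test for a stack's push/pop.")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])
```

Swap in whatever model you've pulled locally; the request shape is the same either way.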

@anders94 I think that’s one of the best approaches we can take right now, especially given how much competition there is among commercial models these days. Qwen has become one of the best options available.

@Falconortizx IMHO this is Apple's major unsung win - unified memory and token-generation performance per watt. Nobody seems to be talking about it, but it’s a huge advantage.
