
@gajesh tried this locally (desktop + laptop over gigabit, splitting an LLM between them) — too slow.
latency kills it, and most models worth running don’t shard well across consumer hardware anyway.
also not convinced “idle compute = cheap compute” holds up in practice once you account for what centralized infra actually costs per token.
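rough back-of-envelope on why latency dominates: if the split is tensor-parallel, every layer needs a couple of network syncs per generated token, and LAN round-trip time swamps the actual transfer. all the numbers here are illustrative assumptions (32 layers, 4096 hidden dim, ~0.3 ms RTT), not measurements:

```python
# Illustrative estimate of network overhead for tensor-parallel decode
# across two machines on gigabit ethernet. All figures are assumptions.

layers = 32                  # assumed 7B-class model
syncs_per_layer = 2          # typical all-reduce count per transformer layer
rtt_s = 0.0003               # assumed ~0.3 ms LAN round trip
hidden = 4096
act_bytes = 2 * hidden       # fp16 activations for one token
bandwidth_bps = 125e6        # gigabit ethernet ≈ 125 MB/s

transfer_s = act_bytes / bandwidth_bps
per_token_s = layers * syncs_per_layer * (rtt_s + transfer_s)

print(f"network overhead per token: {per_token_s * 1e3:.1f} ms")
print(f"latency-imposed ceiling: {1 / per_token_s:.0f} tokens/s")
```

under those assumptions you're capped around a few dozen tokens/s from the network alone, before any compute happens — and real-world jitter makes it worse. a pipeline split syncs less often but then one machine idles while the other works.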