

Shreyansh Lodha
@staticvar_dev
29 | He/Him | Staff Engineer (Mobile) - @MutualMobile | Thoughts are my own 💜💛

Putting out a wish to the universe: I need more compute. If I can get more, I will make sure every machine, from a small phone to a bootstrapped RTX 3090 node, can run frontier intelligence fast with minimal intelligence loss. I have hit page 2 of huggingface, released 3 model family compressions, and got GLM-4.7 running on a MacBook: huggingface.co/0xsero

My beast just isn't enough, and I have already spent 2k USD renting GPUs on top of the credits provided by Prime Intellect and Hotaisle.

———

If you believe in what I do, help me get this to Nvidia. Maybe they will bless me with the power to keep making local AI more accessible 🙏

There is a CLI proxy that reduces LLM token consumption by 60-90% on common dev commands. It filters and compresses command outputs before they reach your LLM context 🧞‍♂️ Link in comments.
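The core idea behind such a proxy can be sketched in a few lines. This is a hypothetical illustration, not the actual tool's implementation: the function names and head/tail truncation strategy are assumptions. It runs a dev command, strips ANSI color codes and blank lines, and keeps only the first and last lines of long output so far fewer tokens reach the model's context.

```python
import re
import subprocess

# Matches ANSI escape sequences (color codes etc.) so they can be stripped.
ANSI = re.compile(r"\x1b\[[0-9;]*m")

def compress_output(text: str, head: int = 20, tail: int = 20) -> str:
    """Compress long command output by keeping only the first and last lines.

    This mirrors the general technique the post describes; the real proxy
    may use smarter, command-specific filters.
    """
    lines = [ANSI.sub("", ln).rstrip() for ln in text.splitlines()]
    lines = [ln for ln in lines if ln]  # drop blank lines
    if len(lines) <= head + tail:
        return "\n".join(lines)
    omitted = len(lines) - head - tail
    return "\n".join(
        lines[:head] + [f"... [{omitted} lines omitted] ..."] + lines[-tail:]
    )

def run_and_compress(cmd: list[str]) -> str:
    """Run a command and return a compressed view of its combined output."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return compress_output(result.stdout + result.stderr)
```

For a verbose command like a full test run or a dependency install, keeping 20 head and 20 tail lines of a multi-thousand-line log easily lands in the 60-90% reduction range the post claims.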

OpenCode offers free models out of the box: no ads, no credit cards, no accounts. OpenCode Go is a low-cost $10/month subscription that brings powerful open-source models to developers everywhere. Open source wins.
