Got my DGX set up with some Qwen models, now hooked up to openclaw and hermes agents.
Let's see how much quality drops vs the OpenAI models I've used before (now I'm running Qwen3.5 35B).
Running a model locally is an amazing feeling!
Got a new toy - an NVIDIA DGX Spark - will try to set up some local models for inference.
In the past I used my 3090 (24 GB VRAM), but this should allow much larger models (128 GB).
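For rough sizing, weight memory is about parameter count times bytes per parameter. A minimal sketch of that back-of-the-envelope math (the 1.2 overhead factor for KV cache and runtime buffers is my own guess, not a measured number):

```python
def model_mem_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Rough memory estimate in GB for loading model weights.

    params_b  -- parameter count in billions
    bits      -- bits per parameter (16 for FP16, 4 for 4-bit quant)
    overhead  -- fudge factor for KV cache / runtime buffers (assumed, not exact)
    """
    bytes_per_param = bits / 8
    return params_b * bytes_per_param * overhead

# A 35B model at FP16 needs roughly 84 GB: too big for a 24 GB 3090,
# comfortable in 128 GB. At 4-bit it shrinks to roughly 21 GB.
fp16_gb = model_mem_gb(35, 16)
q4_gb = model_mem_gb(35, 4)
```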
ZKsync prover optimizations just hit mainnet. Big gains in speed and cost. Feel free to read more (technical) details here: amstelden.com/zksync-prover-…
DMs open for follow-up questions.
Happy to see many of you interacting with ZKsync's Prover API. I've seen a healthy appetite in this direction, but also a lot of struggle with getting the proofs right (plenty of 400s on proof validation). If you're stuck or need help troubleshooting your setup, DM me.
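If you're hitting those 400s, a local pre-flight check on your request payload often catches the obvious problems before they reach the API. A minimal sketch of that idea below; the field names (`proof`, `public_inputs`, `circuit_version`) are hypothetical placeholders, not the Prover API's real schema, so substitute the fields from the actual docs:

```python
def preflight_check(payload: dict) -> list[str]:
    """Return a list of problems found in a proof-submission payload.

    Field names here are HYPOTHETICAL examples -- consult the Prover API
    docs for the real schema. The checks illustrate common causes of
    400 responses: missing fields and malformed hex encoding.
    """
    errors = []
    for field in ("proof", "public_inputs", "circuit_version"):
        if field not in payload:
            errors.append(f"missing field: {field}")

    proof = payload.get("proof")
    if isinstance(proof, str):
        # Proof blobs are typically hex-encoded; odd length or stray
        # characters are easy mistakes that servers reject with a 400.
        hex_body = proof[2:] if proof.startswith("0x") else proof
        if len(hex_body) % 2 != 0:
            errors.append("proof hex has odd length")
        else:
            try:
                bytes.fromhex(hex_body)
            except ValueError:
                errors.append("proof is not valid hex")
    elif proof is not None:
        errors.append("proof must be a hex string")

    return errors
```

Running this before every submission turns an opaque server-side 400 into a named local error, which makes the troubleshooting conversation much shorter.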