
Stochy
3.1K posts

Stochy
@StochasticGhost
Optimist. I write lots of code. Founder of five companies across education, games, VR, finance, and AI companionship. I love my wife and my dog. Privacy is cool.


If you can reply to this post, you’re in North America

New in Claude Code: auto mode. Instead of approving every file write and bash command, or skipping permissions entirely, auto mode lets Claude make permission decisions on your behalf. Safeguards check each action before it runs.

glm-5-turbo is my new favourite model of the season. I’ve been using glm-4.7 primarily for customer-facing conversation. glm-5 was completely non-conformant for my use case: a massive hallucination rate and extremely poor tool usage. glm-5-turbo is far more stable for me. It’s a little more expensive, but where I use it the cost amounts to a rounding error; it’s not a major cost driver.

I’m now experimenting with it as the main personality and orchestrator model in my neuron project (in place of opus and gpt-5.4), where it coordinates other models to get work done. So far it’s been excellent for research and excellent for first-pass spec generation (though I always pass those specs to opus-4.6, Gemini-3.1 and gpt-5.4 for editing), and I haven’t tried it for implementation because codex already works well when you give it a detailed spec. This has reduced the cost of running neuron by about 80%, as I was using opus via the api in a lot of these places.

It is much more personable than most models. Almost as personable as opus. It’s a shame they aren’t releasing the weights.
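The spec pipeline described above can be sketched roughly like this: a cheap orchestrator model writes the first-pass spec, a panel of stronger models edits it, and the result goes to an implementation agent. This is only a minimal sketch of the shape of that flow; `call_model` and `draft_and_review_spec` are hypothetical names, and a real version would hit each provider's actual API instead of the stub.

```python
def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real completion API call.
    # A real implementation would dispatch to the right provider per model.
    return f"[{model}] response to: {prompt[:40]}"

def draft_and_review_spec(task: str) -> dict:
    # First pass: the cheap orchestrator model generates the spec.
    orchestrator = "glm-5-turbo"
    draft = call_model(orchestrator, f"Write a detailed spec for: {task}")

    # Review panel: each stronger model proposes edits to the draft.
    reviewers = ["opus-4.6", "gemini-3.1", "gpt-5.4"]
    edits = {m: call_model(m, f"Edit this spec:\n{draft}") for m in reviewers}

    # The edited spec is then handed off to an implementation agent (codex).
    return {"draft": draft, "edits": edits, "implementer": "codex"}
```

The cost win comes from the orchestrator handling the high-volume drafting while the expensive models only see the much smaller review step.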


oh fuck i think i made something cool