
Giulio Ardoino
@giuliastro
Innovation Manager, crowdsourcer, tech startup advisor, accessibility and blockchain enthusiast, founder of ventures such as AccessiBit and BestCreativity






We’re updating our ChatGPT Pro and Plus subscriptions to better support the growing use of Codex. We’re introducing a new $100/month Pro tier, which offers 5x more Codex usage than Plus and is best for longer, high-effort Codex sessions. In ChatGPT, this new tier still includes all Pro features, including the exclusive Pro model and unlimited access to the Instant and Thinking models. To celebrate the launch, we’re increasing Codex usage for a limited time: through May 31st, $100 Pro subscribers get up to 10x the Codex usage of Plus to build your most ambitious ideas.









Introducing Rork 1.5
• Easy app monetization with RevenueCat
• Built-in analytics (no Firebase needed)
• The smartest agent, based on Opus 4.5 & Claude Code
• Rork Stars community, where successful mobile app founders like @zach_yadegari (Cal AI), @alexsllater (QUITTR), and @georgeLampro20 will help you grow your app
• Over 100 small improvements 👇




We discovered GLM-4.6 was failing in Cline not because the model was flawed, but because inference providers were silently corrupting it. The same weights served through different endpoints produced completely different behaviors. The variance wasn't minor; it determined whether the model could function at all. Some providers emitted tool calls inside reasoning traces. Others hallucinated parameters. We were debugging the wrong layer of the stack entirely.

The fix required three interventions:
1. Prompt reduction from 56,499 to 24,111 characters
2. Provider filtering for high-fidelity endpoints only
3. Workflow enforcement with strict sequencing to prevent premature edits

OpenRouter's :exacto endpoint transformed GLM-4.6 from intermittently broken to production-stable overnight.

This poses a material risk to open-source AI: when users encounter provider-induced failures, they blame the model, not the infrastructure. Trust erodes. Open source suffers. Reliability must be a shared responsibility, and transparent reporting of quantization settings and behavioral differences should become standard practice.

Full technical analysis in the blog below, including our complete methodology and prompt optimizations. @cline cline.bot/blog/cline-our…
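Provider filtering of this kind can be expressed directly in an OpenRouter request. The sketch below builds a chat-completions payload that pins the request to the `:exacto` model variant and disables fallback routing, so a lower-fidelity endpoint is never silently substituted. The model slug and the use of OpenRouter's `provider` routing preferences here are assumptions based on OpenRouter's documented request format, not the exact configuration Cline ships.

```python
import json

# Assumed model slug; the ":exacto" suffix selects the high-fidelity
# endpoint variant described in the post above.
MODEL = "z-ai/glm-4.6:exacto"


def build_request(messages: list[dict]) -> dict:
    """Build an OpenRouter chat-completions payload that refuses
    silent fallback to unvetted providers."""
    return {
        "model": MODEL,
        "messages": messages,
        # OpenRouter provider routing preferences: with fallbacks
        # disabled, the request fails loudly instead of being rerouted
        # to a provider that may quantize or mangle tool calls.
        "provider": {"allow_fallbacks": False},
    }


payload = build_request([{"role": "user", "content": "List files in src/"}])
print(json.dumps(payload, indent=2))
# To send for real (API key required):
# requests.post("https://openrouter.ai/api/v1/chat/completions",
#               headers={"Authorization": f"Bearer {api_key}"}, json=payload)
```

Failing loudly is the point: a hard error surfaces the infrastructure problem to the user, whereas a silent reroute would be blamed on the model.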











