opencode 1.3.0 will no longer autoload the claude max plugin
we did our best to convince anthropic to support developer choice but they sent lawyers
it's your right to access services however you wish but it is also their right to block whoever they want
we can't maintain an official plugin so it's been removed from github and marked deprecated on npm
appreciate our partners at openai, github and gitlab who are going the other direction and supporting developer freedom
@thdxr @_bgiori Could someone use the approach that Vibe Companion took (github.com/The-Vibe-Compa…), where they use the Agent SDK and opencode works with it bidirectionally?
@hugorcd @vite_js @voidzerodev I have a Vite project using Hono, deployed on CF Workers. I have evlog set up, but I don't find the logs very easy to read on their dashboard. My agents also seem to struggle to pull historical records. What's the best practice there?
evlog now has a @vite_js plugin
Add it to your vite.config and get:
• Auto-initialization (zero setup code)
• Debug stripping at build time
• Source location injection
• Auto-imports for evlog symbols
• Client logger support
Works with any Vite-based framework. SvelteKit, Hono, Express, you name it.
👉 dub.sh/better-log
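For a feel of what "add it to your vite.config" could mean in practice, here is a hypothetical sketch; the `evlog/vite` import path and the option names are assumptions based on the announced feature list, not confirmed API:

```typescript
// vite.config.ts -- illustrative only; import path and options are assumed
import { defineConfig } from "vite";
import evlog from "evlog/vite";

export default defineConfig({
  plugins: [
    evlog({
      stripDebug: true,  // assumed option: strip debug-level logs at build time
      autoImports: true, // assumed option: auto-import evlog symbols
    }),
  ],
});
```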
@_ashleypeacock Oh I see, it's just a config-driven way to set up a tunnel. I was thinking it was setting up a tunnel for a CF Worker to connect to something local.
@tharshan_09 It's for when you want to expose something on your local machine securely, e.g. to share a link to a running app with a colleague/friend or to test a webhook integration.
You can now manage your Cloudflare Tunnels via Wrangler, rather than having to do it via the dashboard! It ships with the following capabilities:
- Create a new tunnel
- List all tunnels
- Get info about a specific tunnel
- Delete a tunnel
- Run a tunnel
- Start a free, temporary tunnel
Cloudflare Tunnel provides you with a secure way to connect your resources to Cloudflare without a publicly routable IP address by creating outbound-only connections to Cloudflare's global network.
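A sketch of what that workflow could look like from the terminal. The exact subcommand names and flags below are assumptions based on the capability list above, so check `wrangler --help` for the real syntax:

```shell
# Assumed subcommand names -- consult Wrangler's own help for the real syntax.
wrangler tunnel create my-tunnel   # create a new tunnel
wrangler tunnel list               # list all tunnels
wrangler tunnel info my-tunnel     # get info about a specific tunnel
wrangler tunnel run my-tunnel      # run a tunnel (outbound-only connection)
wrangler tunnel delete my-tunnel   # delete a tunnel
```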
Just shipped something I'm really excited about in @nuxtjs/mcp-toolkit: Code Mode.
The idea: instead of the LLM calling your MCP tools one at a time (and resending ALL tool descriptions every single round-trip), it writes JavaScript that orchestrates everything in one go.
Loops, conditionals, Promise.all, real control flow, not 8 separate LLM turns.
With 50 tools the token savings are insane: -81% on tool description overhead alone.
And the best part? Your existing tools don't change at all. One line:
`experimental_codeMode: true`
Runs in a secure V8 sandbox thanks to @rivet_dev's secure-exec, perfect timing with their launch yesterday.
mcp-toolkit.nuxt.dev/advanced/code-…
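To make the pattern concrete, here is a self-contained sketch of the idea: instead of one LLM round-trip per tool call, the model emits a single script that orchestrates several tools with real control flow. The tool names and data shapes here are invented for illustration, not part of the mcp-toolkit API:

```typescript
// Mock tools standing in for MCP tools exposed to the sandbox.
const tools = {
  async listUsers(): Promise<string[]> {
    return ["ada", "grace", "linus"];
  },
  async fetchProfile(user: string): Promise<{ user: string; repos: number }> {
    return { user, repos: user.length }; // fake data for the sketch
  },
};

// The kind of script the LLM would write in Code Mode: loops, filters and
// Promise.all run in one execution instead of N separate LLM turns.
async function orchestrate(): Promise<number> {
  const users = await tools.listUsers();
  const profiles = await Promise.all(users.map((u) => tools.fetchProfile(u)));
  return profiles
    .filter((p) => p.repos > 3)
    .reduce((sum, p) => sum + p.repos, 0);
}

orchestrate().then((total) => console.log(total)); // prints 10
```

The win is that the fan-out over `users` and the filter/aggregate step cost zero extra model turns and zero re-sent tool descriptions.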
Gemini is going to start cracking down on using your OAuth token outside of the CLI.
It's just not worth the risk to try to reuse your token in some weird way!
@badlogicgames @ClementDelangue @github just ban this user. I've heard of regular users having their accounts banned recently, but they can't ban the actual abusers.
Our biggest open-source repos are getting overwhelmed by AI slop, which literally makes GitHub unusable (~a new pull request every 3 minutes).
Fun new challenges in an agentic world!
@ctatedev Oooh, these seem really exciting! One thing it struggled with so far was using the DevTools network panel info: it kept trying to write observers in JS and eval them. Should it now use the native tools for network monitoring to debug issues?
agent-browser v0.21
It just keeps getting better (bc of you)
😮 `batch` command
😮 `network har` commands
😮 `upgrade` command
😮 iframe support
😮 --user-data-dir support
After this, you can just upgrade it:
npm i -g agent-browser
brew install agent-browser
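For context, a hypothetical session using the new commands; the argument shapes are illustrative guesses, not taken from the changelog, so see `agent-browser --help` for the real interface:

```shell
# Illustrative only -- argument shapes are assumed.
agent-browser upgrade                    # self-update to the newest release
agent-browser network har traffic.har    # export captured traffic as a HAR file
agent-browser batch commands.txt         # run a file of commands in one session
agent-browser --user-data-dir ~/profile open https://example.com
```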
@zeeg lol, I was worried it was another post about someone getting massively billed. I was just setting up a greenfield project and have been using Workers, Queues, and Workflows, and I'm shocked by how much you get for $5.
@brynary Would love to try fabro serve early if you want more feedback! I have a project I've been doing exclusively with the Claude Code CLI that I can move over to this pretty easily to try the workflow.
Thank you for all the interest in Fabro during yesterday's launch. Today's Fabro development priority is cleaning up Standalone mode in preparation for finishing Server mode, which is the most requested feature.
@brynary Also saw this during fabro install: "openai connectivity: Invalid request to openai: The 'gpt-5-mini' model is not supported when using Codex with a ChatGPT account."
@brynary Thank you! Looks like with the CLI backend you lose a few capabilities of Fabro. How do those features impact the quality of the output? Or is it more a loss of performance/speed?
@tharshan_09 It can do either. Each agent node in the graph either talks to a model via the API (default) or alternatively can be run as a sub-process invocation to claude/codex/gemini CLI
docs.fabro.sh/core-concepts/…#cli-backend
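To make the two backends concrete, here is a purely illustrative graph definition; Fabro's actual config format and key names aren't shown in this thread, so every identifier below is an assumption:

```yaml
# Purely illustrative -- not Fabro's real config schema.
nodes:
  - id: implement
    backend: api        # default: node talks to a model via the provider API
    model: gpt-5
  - id: review
    backend: cli        # alternative: node runs a sub-process invocation
    command: claude     # e.g. claude / codex / gemini CLI
```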
Today I'm thrilled to open source what I've been working on...
Meet Fabro (github.com/fabro-sh/fabro), the dark software factory for small teams of expert engineers.
Fabro gets you out of the REPL (read-eval-prompt loop) by layering deterministic workflow graphs over agent sessions. It's batteries-included, with cloud sandboxes, quality sign-offs, multi-model ensembles, and Git checkpoints.
It's MIT licensed so you can fork and own your AI coding toolchain. Single Rust binary with zero deps.
I'd love to hear what you think!