
zeb
2.4K posts

Pinned Tweet


@alexgenovese Not gonna lie, I haven't used sonnet in months. I use Kimi (or GLM for personal stuff) paired with gpt 5.4/opus 4.6

We've been using Kimi internally for opencode and also for internal agents for a while and it's been pretty amazing. If you're looking to build something that doesn't require the bleeding edge of inference but does require some intelligence, then this is it.
michelle@_mchenco
kimi is here. workers ai is officially in the big model serving game blog.cloudflare.com/workers-ai-lar…

@cocogoatmain @providerproto @thdxr After looking closer it's both: there were still references to the Anthropic sub plugin (which apparently has been working on/off), as well as methods for using Anthropic's API without sub usage.
Both using the sub and using the Anthropic API were removed

@zebassembly @providerproto @thdxr Isn’t “Claude max plugin” referring to subscription (am I missing something?)

opencode 1.3.0 will no longer autoload the claude max plugin
we did our best to convince anthropic to support developer choice but they sent lawyers
it's your right to access services however you wish but it is also their right to block whoever they want
we can't maintain an official plugin so it's been removed from github and marked deprecated on npm
appreciate our partners at openai, github and gitlab who are going the other direction and supporting developer freedom

@providerproto @thdxr How is paying for using Anthropic's API (not even using a Claude max subscription) akin to cheating?

@thdxr If I make cheats for video games they send lawyers too. Same thing. Don’t cheat and you won’t have a problem.

Mostly things super specific to the distributed system. I currently maintain a product for logs in a serverless platform, exporting OTLP to third-party o11y providers, so we have views for high-level stats about a customer's logs, which features a customer has enabled, what configuration the customer is using, where the customer is trying to export data and when the last successful/failed export was, and some other stuff.
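A hypothetical sketch of the per-customer view such a panel might render (field names and the `exportHealth` helper are illustrative, not the actual product schema):

```typescript
// Hypothetical shape of the per-customer oncall view described above.
// Field names are illustrative assumptions, not the real product schema.
interface CustomerLogsStatus {
  customerId: string;
  logVolumeLast24h: number;        // high-level stats about the customer's logs
  enabledFeatures: string[];       // which features the customer has enabled
  config: Record<string, unknown>; // active customer configuration
  exportDestination: string;       // where the customer is exporting OTLP data
  lastSuccessfulExport?: Date;
  lastFailedExport?: Date;
}

// Derive a coarse export-health signal from the last success/failure times.
function exportHealth(s: CustomerLogsStatus): "ok" | "failing" | "never-exported" {
  if (!s.lastSuccessfulExport && !s.lastFailedExport) return "never-exported";
  if (!s.lastSuccessfulExport) return "failing";
  if (s.lastFailedExport && s.lastFailedExport > s.lastSuccessfulExport) return "failing";
  return "ok";
}
```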

@zebassembly And what kind of actions/data views do these oncall panels have?

@drewhamlett I did that in the past, didn't play nicely with everything

Nice for Workers to get a shoutout here, but props should really go to these two npm libs providing all the support here!
npmjs.com/package/isolat…
npmjs.com/package/node-s…
Rivet@rivet_dev
Introducing the Secure Exec SDK
Secure Node.js execution without a sandbox
⚡ 17.9 ms coldstart, 3.4 MB mem, 56x cheaper
📦 Just a library – supports Node.js, Bun, & browsers
🔐 Powered by the same tech as Cloudflare Workers
$ npm install secure-exec
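To illustrate the general idea of in-process isolated evaluation, here is a generic sketch using Node's built-in `vm` module. This is NOT the secure-exec API (its surface isn't shown here), and `vm` by itself is not a security boundary; the libraries credited above provide the actual isolation.

```typescript
// Generic illustration only: evaluate code against a restricted set of
// globals using Node's built-in vm module. Not the secure-exec API, and
// vm alone is not a real security boundary.
import * as vm from "node:vm";

const context = vm.createContext({ input: 21 }); // only these globals are visible
const result = vm.runInNewContext("input * 2", context, { timeout: 50 });
console.log(result); // 42
```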

zeb retweeted

Today we have @southpolesteve giving us the scoop on how he made Vinext.
His opencode setup and planning approach were super interesting
Sahil@sheerluck_io
Really loved this video from @syntaxfm I guess syntax is becoming my favourite channel right now

Linux does technically run now, if you either jump through a bunch of firmware hoops or a manufacturer upstreams firmware. Afaik like 2 manufacturers do this and it's still kinda rough.
As for WSL, yeah that's better than bare Windows but still pretty meh since there's a lot of stuff that doesn't work in WSL

@zebassembly Qualcomm promised it, so Linux should run stably someday (it already kinda does now).
how about WSL?
zeb retweeted

I fixed this for Next.js/any-other-RSC-framework btw.
At the limit I slice the array and defer it on the next row. Much better row packing. Far fewer rows. Much less work done. PR here: github.com/facebook/react…
Michael Hart@hichaelmart
However, looking into it further, it's not just the batch size – it's because the current logic doesn't actually batch evenly. It batches an element and its children until it reaches the limit, but then every child after that becomes its own chunk. For example:
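The slicing idea described in the fix can be sketched like this (names are hypothetical, not the actual React internals): instead of letting every element past the limit become its own chunk, slice the array at the limit and defer the remainder to the next row, so rows pack evenly.

```typescript
// Illustrative sketch of even batching -- hypothetical names, not the
// real React code. Each row takes up to `limit` items; the rest of the
// array is deferred to the next row instead of becoming 1-item chunks.
function packRows<T>(items: T[], limit: number): T[][] {
  const rows: T[][] = [];
  for (let i = 0; i < items.length; i += limit) {
    // Slice exactly one row's worth of items; defer the remainder.
    rows.push(items.slice(i, i + limit));
  }
  return rows;
}
```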










