Zach Silveira @zachcodes
16.6K posts
Dev @OddsJam Newsletter https://t.co/iL0ORyXIR2.
Orlando, FL · Joined April 2015
241 Following · 2.7K Followers

Pinned Tweet
Zach Silveira @zachcodes ·
Here's a quick look at the complete MCP server I set up, using the new Auth spec from May. Added it to claude.ai, and it fully works with the new streamable HTTP spec too! #ai #mcp #LLMs #claudecode
2 · 1 · 4 · 2.5K
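For context on what that server speaks: the streamable HTTP transport carries MCP's JSON-RPC messages over a plain HTTP endpoint. A minimal TypeScript sketch of the message shapes is below; the handler name, version string, and capability object are illustrative stand-ins, not the official MCP SDK.

```typescript
// Hypothetical sketch of the JSON-RPC exchange an MCP server handles.
// handleMessage, the protocolVersion string, and serverInfo are invented
// for illustration; a real server would use the MCP SDK's transport.

type JsonRpcRequest = { jsonrpc: "2.0"; id: number; method: string; params?: any };
type JsonRpcResponse = {
  jsonrpc: "2.0";
  id: number;
  result?: any;
  error?: { code: number; message: string };
};

function handleMessage(req: JsonRpcRequest): JsonRpcResponse {
  switch (req.method) {
    case "initialize":
      // The server advertises its protocol version and capabilities.
      return {
        jsonrpc: "2.0",
        id: req.id,
        result: {
          protocolVersion: "2025-03-26",
          capabilities: { tools: {} },
          serverInfo: { name: "demo-mcp", version: "0.1.0" },
        },
      };
    case "tools/list":
      // An empty tool list keeps the sketch minimal.
      return { jsonrpc: "2.0", id: req.id, result: { tools: [] } };
    default:
      // Standard JSON-RPC "method not found" error.
      return { jsonrpc: "2.0", id: req.id, error: { code: -32601, message: "Method not found" } };
  }
}
```

In the streamable HTTP spec, the client POSTs each such message to one endpoint and the server replies with JSON or an SSE stream.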
Zach Silveira @zachcodes ·
@luisrudge also, are you using a Mac trackpad there? I need to have AI debug this for sure if you are lol
1 · 0 · 0 · 13
Zach Silveira @zachcodes ·
Anyone know if copy/paste in opencode will ever get on par with Claude Code?
1 · 0 · 3 · 272
Zach Silveira @zachcodes ·
Am I the only one who, 99% of the time, runs Claude or opencode or Codex with manually accepted edits? I get the review in as I go and quickly use voice-to-text to tweak. Running to the end and then finding problems is more annoying.
1 · 0 · 1 · 278
Zach Silveira @zachcodes ·
You've never needed Next.js or a frontend framework. Use script type="module" and bun build, exported to a storage bucket. Oh, you need fancy "static pregeneration"? OK, have Claude spend 35s to make a script that loops your pages and renders them to HTML. Plain React can go far.
1 · 0 · 2 · 171
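The "loop your pages and render them to HTML" script can be sketched like this. Everything here is illustrative: `renderPage` is a trivial stand-in for calling ReactDOMServer.renderToString on real components, and the /app.js bundle path (the output of bun build) is made up.

```typescript
// Sketch of a static pregeneration loop, under the assumptions above.
// Each page maps a filename to a render function; a real script would
// write the results to disk or a storage bucket.

const pages: Record<string, () => string> = {
  // Stand-ins for ReactDOMServer.renderToString(<Home />), etc.
  "index.html": () => "<h1>Home</h1>",
  "about.html": () => "<h1>About</h1>",
};

function pregenerate(): Map<string, string> {
  const out = new Map<string, string>();
  for (const [file, renderPage] of Object.entries(pages)) {
    // Wrap each rendered body in a shell that loads the client bundle
    // (hypothetically the bun build output) via <script type="module">.
    out.set(
      file,
      `<!doctype html><html><body>${renderPage()}` +
        `<script type="module" src="/app.js"></script></body></html>`,
    );
  }
  return out;
}
```

Swapping the stub renderers for real components hydrated by the module script is the whole "framework": the loop replaces build-time static generation.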
Zach Silveira @zachcodes ·
@luisrudge When I first got it with iTerm, I didn't like the default speed of 3, so I set it to 1. And I have super high trackpad sensitivity, so I wonder if their app doesn't handle that well. No other TUI has this issue; it's weird.
1 · 0 · 0 · 17
Zach Silveira @zachcodes ·
@luisrudge I wouldn't mind it if the scroll worked correctly. Moving my mouse slightly makes it go all crazy.
1 · 0 · 0 · 13
Luis Rudge @luisrudge ·
@zachcodes Never saw it glitching, but yeah, the auto-copy thing felt weird on the first day. My reality is that every time I selected something, I wanted to copy it, and now I miss this in other TUIs haha
1 · 0 · 0 · 9
Zach Silveira @zachcodes ·
@luisrudge I already used it in native Terminal, Ghostty, and iTerm2, so it's not my terminal.
0 · 0 · 0 · 13
Luis Rudge @luisrudge ·
@zachcodes wdym? What's the difference? You can copy text and images, what else do you want 😅
2 · 0 · 0 · 12
Zach Silveira @zachcodes ·
As soon as I switched back to Ollama Cloud as the provider, the exact same task one-shotted with no issue. The opencode team should be transparent about where the models are coming from and why they are performing badly or are heavily quantized.
0 · 0 · 0 · 68
Zach Silveira @zachcodes ·
I use Claude and opencode and openclaw a lot now. Sounds crazy, but I enjoy testing out all the models. It's sad that opencode go has a messed-up glm5 provider. Tried it out yesterday, and it was atrocious.
1 · 0 · 0 · 253
Zach Silveira @zachcodes ·
OpenAI and Anthropic will go to 0 in a few years, when M5/6/7 Ultras are able to do prompt processing incredibly fast. Qwen 3.5 122B-A10B and qwen3-coder-next are already usable on-device with a lot of RAM.
0 · 0 · 1 · 188
Zach Silveira retweeted
Qwen @Alibaba_Qwen ·
🚀 Introducing the Qwen 3.5 Medium Model Series
Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B
✨ More intelligence, less compute.
• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B — a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models — especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
– 1M context length by default
– Official built-in tools
🔗 Hugging Face: huggingface.co/collections/Qw…
🔗 ModelScope: modelscope.cn/collections/Qw…
🔗 Qwen3.5-Flash API: modelstudio.console.alibabacloud.com/ap-southeast-1…
Try in Qwen Chat 👇
Flash: chat.qwen.ai/?models=qwen3.…
27B: chat.qwen.ai/?models=qwen3.…
35B-A3B: chat.qwen.ai/?models=qwen3.…
122B-A10B: chat.qwen.ai/?models=qwen3.…
Would love to hear what you build with it.
[media]
436 · 1.1K · 8.1K · 4M
LM Studio @lmstudio ·
Qwen3.5-35B-A3B is now available in LM Studio! This model outperforms previous Qwen models that are more than 6x its size 🤯🚀 Requires ~21GB to run locally. lmstudio.ai/models/qwen/qw…
75 · 179 · 2.3K · 329.1K
Zach Silveira @zachcodes ·
Why do MCP servers act like authentication was just built this year on the web? I shouldn't need to re-auth an MCP server every single day... set a longer session lifetime, geez.
0 · 0 · 2 · 73
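The session-lifetime gripe comes down to OAuth token expiry: if a server issues short-lived access tokens and no refresh token, the client must re-run the whole auth flow every time one lapses. A hedged TypeScript sketch of that check — the field names follow the standard OAuth token response, but the helper functions and threshold logic are invented for illustration:

```typescript
// Sketch of OAuth access-token lifetime checking, under the assumptions
// above. TokenResponse mirrors the standard OAuth 2.0 token response
// fields (access_token, expires_in, optional refresh_token).

type TokenResponse = {
  access_token: string;
  expires_in: number; // lifetime in seconds from issuance
  refresh_token?: string;
};

// Seconds of validity left, given when the token was issued.
function secondsUntilExpiry(issuedAtMs: number, tok: TokenResponse, nowMs: number): number {
  return tok.expires_in - (nowMs - issuedAtMs) / 1000;
}

// With no refresh_token, an expired access token forces the full
// interactive flow again — expires_in of one day means daily re-auth.
function needsFullReauth(issuedAtMs: number, tok: TokenResponse, nowMs: number): boolean {
  return secondsUntilExpiry(issuedAtMs, tok, nowMs) <= 0 && !tok.refresh_token;
}
```

Raising `expires_in` or issuing a refresh token is the "longer session lifetime" knob the tweet is asking servers to turn.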
Zach Silveira @zachcodes ·
@jlongster Split that into 10-15 posts if you want engagement and visibility for each key part.
0 · 0 · 0 · 32
James Long @jlongster ·
I'm brewing a really, really big post on frontend stuff, like 10,000 words big with deep tech demos. Should I break it up and release it in parts, or just one big thing?
19 · 1 · 115 · 7.6K