

the Codex app turns 3 (months old) today. they grow up so fast





Wanted to provide more clarity about this. Yesterday, we had a regression in merge queue behavior where, in some cases, squash or rebase commits were generated from the wrong base state, making earlier changes appear reverted in branch history. 2,804 pull requests out of over 4M merged on April 23 (roughly 0.07%) were affected. We fixed the issue, we've contacted every impacted customer, and we're expanding our automated test coverage for merge queue operations. The team will be updating the status page with RCA details as well.
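For the curious, here is a minimal sketch of the failure mode described above (plain git driven from Python, not GitHub's actual merge queue code): if a squash commit is built from the PR's snapshot on a stale base instead of replaying the PR onto the queue's current head, an intervening commit's changes vanish from the result and look reverted.

```python
# Minimal sketch, assuming git >= 2.28 (for `init -b`). Demonstrates how
# squashing a PR against the wrong base makes an earlier commit look reverted.
import pathlib, subprocess, tempfile

def git(*args, cwd, capture=False):
    """Run git with a throwaway identity; optionally return stdout."""
    r = subprocess.run(
        ["git", "-c", "user.name=demo", "-c", "user.email=demo@example.com", *args],
        cwd=cwd, check=True, capture_output=True, text=True)
    return r.stdout.strip() if capture else None

repo = pathlib.Path(tempfile.mkdtemp())
git("init", "-b", "main", cwd=repo)

# Commit A: app.txt starts at v1.
(repo / "app.txt").write_text("v1\n")
git("add", ".", cwd=repo)
git("commit", "-m", "A: v1", cwd=repo)

# The PR branches off A and only adds feature.txt.
git("checkout", "-b", "pr", cwd=repo)
(repo / "feature.txt").write_text("feature\n")
git("add", ".", cwd=repo)
git("commit", "-m", "PR: add feature", cwd=repo)

# Commit B lands on main while the PR waits in the queue: app.txt -> v2.
git("checkout", "main", cwd=repo)
(repo / "app.txt").write_text("v2\n")
git("commit", "-am", "B: v2", cwd=repo)

# Buggy squash: take the PR's snapshot (still based on A, so app.txt == v1)
# and commit it directly on top of B, instead of replaying the PR onto B.
stale_tree = git("rev-parse", "pr^{tree}", cwd=repo, capture=True)
bad_squash = git("commit-tree", stale_tree, "-p", "main",
                 "-m", "squash PR (wrong base)", cwd=repo, capture=True)
git("reset", "--hard", bad_squash, cwd=repo)

# app.txt is back to v1: commit B now appears reverted in main's history.
print((repo / "app.txt").read_text())  # -> v1
```

The correct behavior is to replay the PR onto the queue's current head (rebase or merge against B) before building the squash tree, which is presumably what the fix restores.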



we've moved opencode desktop to electron. it's faster, more reliable, and will replace our tauri build soon. try it out in beta via the link below.

🇯🇵 I'm really glad I live in a high trust society like Japan.


Things I've learned since the MacBook Neo was announced:
- Turns out even a $499 MacBook can be controversial.
- According to tech twitter, 8GB of RAM isn't enough for note-taking, opening a website, and running the ChatGPT app at the same time.
- Everyone really did want colorful MacBooks.
- Android users are defending Windows.
- Steve Jobs would have approved this.
- The iPad Air is cooked.
- Rest in peace, Chromebooks.
- People think an iPhone chip is bad.
- If you edit 8K video while rendering a 3D animation, exporting a podcast, and running 100 Chrome tabs, Final Cut, and Blender, all while checking email like nothing's happening, simulating an entire city in a game engine just to test shadows, and driving four external displays while the fans barely wake up… the Neo is not for you.



"8gb on macOS is the same as 16gb on Windows" and 100 other jokes to tell your friends

🚀 Introducing the Qwen 3.5 Medium Model Series
Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B

✨ More intelligence, less compute.
• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B, a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models, especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
– 1M context length by default
– Official built-in tools

🔗 Hugging Face: huggingface.co/collections/Qw…
🔗 ModelScope: modelscope.cn/collections/Qw…
🔗 Qwen3.5-Flash API: modelstudio.console.alibabacloud.com/ap-southeast-1…

Try in Qwen Chat 👇
Flash: chat.qwen.ai/?models=qwen3.…
27B: chat.qwen.ai/?models=qwen3.…
35B-A3B: chat.qwen.ai/?models=qwen3.…
122B-A10B: chat.qwen.ai/?models=qwen3.…

Would love to hear what you build with it.
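If you want to poke at the hosted Flash model, here is a minimal sketch using the OpenAI-compatible Python client; the base_url and model id below are my assumptions, so check the Qwen3.5-Flash API link above for the real values.

```python
# Hypothetical sketch: calling Qwen3.5-Flash through Alibaba Cloud Model
# Studio's OpenAI-compatible endpoint. The base_url and model id are
# assumptions; confirm both against the Qwen3.5-Flash API link above.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # your Model Studio API key
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed
)

resp = client.chat.completions.create(
    model="qwen3.5-flash",  # assumed model id for the hosted Flash variant
    messages=[{"role": "user", "content": "Summarize the Qwen3.5 lineup."}],
)
print(resp.choices[0].message.content)
```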



Tiny (4GB) open-source LLMs already match GPT-4. You can download one and run it for free on an entry-level GPU (it runs on an iGPU too). If these small open-source models are good enough for most consumers, those consumers will never become paying customers. That's a big risk.
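As a concrete example of how low the barrier is, here is a minimal sketch using llama-cpp-python; the GGUF file name is a placeholder for whatever ~4GB quantized chat model you download from Hugging Face.

```python
# Minimal sketch of running a small quantized model locally with
# llama-cpp-python (pip install llama-cpp-python). The GGUF file name is a
# placeholder; any ~4GB quantized chat model in GGUF format works.
from llama_cpp import Llama

llm = Llama(
    model_path="model-q4_k_m.gguf",  # placeholder: your downloaded model file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU/iGPU when supported
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain merge queues in one paragraph."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```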

