Alberto Nunez

2.5K posts

@Alberto8793

Building BurnBar // HormigaDormida // https://t.co/UKT2NBmsEW @MurMurXAi // https://t.co/ZDJUKwS10L & @ThatImagin4361 // https://t.co/wdBFADSB8F Ready. Set. Go Slow.

ÜT: 34.751839,-92.297008 · Joined April 2009
477 Following · 133 Followers
Z.ai (@Zai_org)
Introducing GLM-5V-Turbo: Vision Coding Model
- Native Multimodal Coding: natively understands multimodal inputs including images, videos, design drafts, and document layouts.
- Balanced Visual and Programming Capabilities: achieves leading performance across core benchmarks for multimodal coding, tool use, and GUI agents.
- Deep Adaptation for Claude Code and Claw Scenarios: works in deep synergy with agents like Claude Code and OpenClaw.
Try it now: chat.z.ai
API: docs.z.ai/guides/vlm/glm…
Coding Plan trial applications: docs.google.com/forms/d/e/1FAI…
233 replies · 656 reposts · 5.8K likes · 1.9M views
Alberto Nunez (@Alberto8793)
@bridgemindai The work is already hard to take seriously because the experiments are so small. The angry, oddly opinionated tone makes it worse and further undercuts your credibility. You should probably take a step back and recalibrate.
1 reply · 0 reposts · 1 like · 133 views
BridgeMind (@bridgemindai)
GLM 5.1 just took 30 seconds to read 5 lines of a file. Five lines. Thirty seconds. On the GLM Max Coding Plan. I paid $80/month to watch a model think about reading a markdown file longer than it takes me to read it myself. Slowest model on BridgeBench SpeedBench. Now confirmed slowest at the most basic tasks imaginable. $80/month wasted. Full review video coming soon.
[image attached]
41 replies · 6 reposts · 189 likes · 15.2K views
Alberto Nunez (@Alberto8793)
@Zai_org But I can't use my coding plan subscription's tokens?
0 replies · 0 reposts · 0 likes · 350 views
Z.ai (@Zai_org)
Here comes AutoClaw. We offer a new solution to run OpenClaw locally on your own machine.
- Download and start immediately. No API key required.
- Bring any model you like, or use GLM-5-Turbo, optimized for tool calling and multi-step tasks.
- Fully local. Your data never leaves your machine.
We're giving data control back to Claw users.
Meet AutoClaw → autoglm.z.ai/autoclaw/
Join the conversation → discord.gg/jvrbCRSF3x
172 replies · 339 reposts · 3.4K likes · 536.8K views
Bindu Reddy (@bindureddy)
The riskiest thing you can do is use AI to write all your code:
- anti-patterns multiply
- engineers have no idea how to debug issues
- tech debt can mushroom
- you create a giant pile of AI slop
I am hearing a lot of IRL horror stories.
137 replies · 41 reposts · 376 likes · 24.5K views
Alberto Nunez (@Alberto8793)
The amount of damage I was allowed to do on one thread @Zai_org <3
[image attached]
0 replies · 0 reposts · 0 likes · 58 views
Alberto Nunez reposted
Lou (@louszbd)
from 5 am to 11 am PT, you can switch over to the GLM Coding Plan. will take some of the load off Claude. also a good window to run your token-heavy background tasks.
Thariq (@trq212)

To manage growing demand for Claude we're adjusting our 5 hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged. During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll move through your 5-hour session limits faster than before.

57 replies · 24 reposts · 643 likes · 58.4K views
Alberto Nunez (@Alberto8793)
@wolfaidev @Zai_org @FactoryAI So far so good! @Zai_org A little slow, but I imagine the demand on it is pretty insane at this point with Claude's cuts and it being a brand new model... @FactoryAI I have not observed any issues with tool usage or thought slippage.
1 reply · 0 reposts · 1 like · 154 views
Alberto Nunez reposted
dax (@thdxr)
one place where i always need the smartest possible model is to resolve merge conflicts

smartest model for the dumbest work
56 replies · 6 reposts · 554 likes · 30.7K views
0xSero (@0xSero)
Here's why I shill Droid 24/7.

Today Droid single-handedly:
1. Published a REAP of GLM-5 in FP8; there's a reason no one else has done it, DSA is still very new: huggingface.co/0xSero/GLM-5-R…
2. Found and fixed an upstream issue with vLLM + DSA + Hopper where GLM-5's kv-cache would need to recompute and spend 20x the time needed. Fixed.
3. Created multiple working quantisations on its own; it tried exl3 and autoround but both failed, so it resorted to GGUF (autoround 3-bit doesn't work on Ampere): huggingface.co/0xSero/GLM-5-R…
4. Implemented github.com/0xSero/turboqu… within 24 hours of the research paper coming out, and tested it across 5090s, 3090s, H100s, and B200s.
5. Has been distilling larger models into LoRA to help me test arxiv.org/abs/2505.21835, and it got an 80% prune to be semi-coherent again.
6. Helped me find research papers and clean up slop with the human-writing skill.
7. Got BYOK working with Anthropic, ZAI, Kimi, MiniMax, and OpenAI in Cursor: github.com/0xSero/factory…
8. Helped me implement the dynamic loading from blog.comfy.org/p/dynamic-vram…; it only works on a tiny model, but still.

I only have to check in on it every 30-45 minutes (I am talking all 8 of my sessions); the thing will run for 16 hours with like 0 prep. All this while I am mostly focused on my actual job and tweeting 24/7. Keep in mind each one of these experiments is running on a different server, with different constraints. I don't understand how I can get such good results here.

I love novelty, which is why I jump around talking about all these different tools. I have used all of these harnesses and messed around with every feature. I keep coming back to this, and I keep shilling it because I sincerely wish others get to experience this.
[images attached]
30 replies · 15 reposts · 394 likes · 28.7K views
Alberto Nunez (@Alberto8793)
@garrytan — really appreciated you liking my post today. I’m building OpenBurnBar, and I’m taking the gstack side seriously as a real first-party feature — tighter security boundary, more turnkey setup, and aiming for something that feels like a legitimate open-source release. Also just wanted to say thank you for gstack — it’s genuinely inspiring software. Are you comfortable with me building on top of it for this project if I’m careful about attribution, licensing, and boundaries?
[images attached]
2 replies · 0 reposts · 1 like · 12 views