Nour Eddine Hamaidi

2.1K posts


@NOOROU

life goes on.

EARTH · Joined September 2010
121 Following · 154 Followers
Nour Eddine Hamaidi
@Teknium Hermes is catching up at an unimaginable speed. We really need extra hands for PR reviews.
0 replies · 0 reposts · 1 like · 115 views
coffee@coffeedev·
Does @NousResearch welcome and actually look at external contributions to Hermes? My biggest gripe right now with OpenClaw is the inability to get any bug fixes looked at/reviewed/merged. I'm maintaining a local fork at this point because I gave up on getting OTEL fixed in main
5 replies · 0 reposts · 34 likes · 2.6K views
Nour Eddine Hamaidi
@bridgemindai Does the distance from their servers have anything to do with this? I mean, in my case it isn't that bad.
0 replies · 0 reposts · 0 likes · 369 views
BridgeMind@bridgemindai·
GLM 5.1 has one of the best UI skills of any coding model I've tested. It still isn't ready. Dropped $80 on the GLM Max Coding Plan. Put it through real production workflows in BridgeSpace. The UI output is better than GPT 5.4. Better than Claude Opus 4.6 on design tasks. That part shocked me. But 30 seconds to read 5 lines of a file. Constant 529 errors. Tiny context window. Slowest model on BridgeBench SpeedBench. Beautiful code you can't ship fast enough is still code you can't ship. Full review and exclusive BridgeBench results below.
19 replies · 8 reposts · 146 likes · 12.8K views
Nour Eddine Hamaidi
@test_tm7873 We're getting mixed signals about 5.1 and no idea which are reliable; the best way to find out is to try it on your own use cases.
0 replies · 0 reposts · 0 likes · 157 views
Nour Eddine Hamaidi
@Teknium Yes, Hermes is smart; the comment is about the person who asks it to PR a feature no one wants or needs.
0 replies · 0 reposts · 0 likes · 11 views
Teknium (e/λ)@Teknium·
Love the community working with us on building Hermes Agent. If you want to join in too, just ask hermes-agent to make the changes you want to see in Hermes Agent's codebase and ask it to submit a PR!
Denis Skripnik (blind) (✱,✱)@Denis_skripnik

Just opened PR #3984 to @NousResearch Hermes agent 🔥 3 UX improvements for Telegram DM topics:
— /model now works per-thread. Change model in one topic — others stay untouched
— Fallback notification: when primary model fails, you see it instantly in chat instead of wondering why the bot switched
— Subagent progress now shows the model: 🔀 delegate_task: "..." (model: claude-opus-4-6)
All tested live — the agent was literally editing its own code 😄
Also have an older open PR #3165 (voice messages as captions) — still waiting for review @Teknium 👀
P.S. Do you use Telegram DM topics with Hermes? Or just chat without threads?

8 replies · 4 reposts · 137 likes · 6.5K views
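The per-thread /model behavior described in the PR can be sketched as a small registry keyed by (chat, thread), so changing the model in one topic leaves the others on the default. This is a hypothetical illustration of the idea, not the actual code from PR #3984; class and method names are invented.

```python
class ThreadModelRegistry:
    """Per-thread model overrides with a chat-wide default (illustrative sketch)."""

    def __init__(self, default_model: str):
        self.default_model = default_model
        # (chat_id, thread_id) -> model name
        self._overrides: dict = {}

    def set_model(self, chat_id: int, thread_id: int, model: str) -> None:
        """Handle a `/model` command inside one topic: only that thread changes."""
        self._overrides[(chat_id, thread_id)] = model

    def model_for(self, chat_id: int, thread_id: int) -> str:
        """Resolve the model for a thread, falling back to the chat default."""
        return self._overrides.get((chat_id, thread_id), self.default_model)
```

For example, setting thread 1 to `claude-opus-4-6` leaves thread 2 resolving to the default, which is the "others stay untouched" property the PR describes.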
Nour Eddine Hamaidi
@mark_fietje Just had my Hermes Agent reverse engineer it again; it uses OpenClaw under the hood, and I prefer Hermes. Most of the cool features would still need a Chinese phone number though.
Nour Eddine Hamaidi tweet media
1 reply · 0 reposts · 1 like · 29 views
Mark Fietje@mark_fietje·
@NOOROU cool, good to know! How is AutoClaw? Unfortunately, I noticed the Chinese phone number requirement as well
1 reply · 0 reposts · 0 likes · 25 views
Nour Eddine Hamaidi@NOOROU·
The Hermes Agent x GLM-5-Turbo challenge: Hermes Agent just reverse engineered AutoClaw. It's a clean Electron app with a local gateway running on port 18789 that proxies calls through their API. The only catch for non-Chinese users is the login screen asking for a Chinese phone number. There's no API key option, no alternative auth, and no way to skip it. Hermes Agent handled this in about 5 minutes of tinkering.
Nour Eddine Hamaidi tweet media
2 replies · 0 reposts · 0 likes · 99 views
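Verifying the "local gateway on port 18789" claim above comes down to checking for a listener on that port. A minimal probe sketch follows; the port number is taken from the tweet, and everything else (the helper name, the host) is an assumption for illustration.

```python
import socket

AUTOCLAW_GATEWAY_PORT = 18789  # local proxy port reported in the thread

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP listener accepts connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising on refusal
        return sock.connect_ex((host, port)) == 0

# With AutoClaw running, its gateway should answer locally:
# port_open("127.0.0.1", AUTOCLAW_GATEWAY_PORT)
```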
BridgeMind@bridgemindai·
GLM 5.1 just spent 10 minutes and 57 seconds on a simple bug fix. And didn't even fix the bug. Slowest model on BridgeBench SpeedBench. Now confirmed slowest in real workflows too. I can give this same task to Codex with GPT 5.4 and have it done in under a minute. Claude Opus 4.6 would have one-shotted it. GLM 5.1 is 2.6 points behind Opus on coding benchmarks. In practice it feels like 20. Slow and wrong is the worst combination in vibe coding.
BridgeMind tweet media
65 replies · 8 reposts · 231 likes · 17.4K views
zR@zRdianjiao·
🛡️ Local. Private. Yours. AutoClaw runs OpenClaw on your machine — bring any model, no API key. Deploy OSS models with vLLM or SGLang and you can even go fully offline. Not a single byte leaves. An open internet and absolute privacy — you should be able to choose both. 💪
Z.ai@Zai_org

Here comes AutoClaw. We offer a new solution to run OpenClaw locally on your own machine.
- Download and start immediately. No API key required.
- Bring any model you like, or use GLM-5-Turbo, optimized for tool calling and multi-step tasks.
- Fully local. Your data never leaves your machine.
We're giving data control back to Claw users. Meet AutoClaw → autoglm.z.ai/autoclaw/ Join the conversation → discord.gg/jvrbCRSF3x

13 replies · 10 reposts · 71 likes · 8.2K views
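The fully-offline setup zR describes (serve an OSS model with vLLM, point any OpenAI-style client at localhost) might look like the sketch below. The model name and port are illustrative assumptions; vLLM's OpenAI-compatible server is real, but nothing here is AutoClaw's actual configuration.

```python
import json

# In a shell, launch vLLM's OpenAI-compatible server for a local model, e.g.:
#   vllm serve Qwen/Qwen2.5-7B-Instruct --port 8000
# Any OpenAI-style client can then target http://localhost:8000/v1 with a
# dummy API key -- no traffic leaves the machine.

def chat_request(model: str, prompt: str) -> str:
    """Build the JSON body for a local /v1/chat/completions call."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    })
```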
Claude@claudeai·
Computer use is now in Claude Code. Claude can open your apps, click through your UI, and test what it built, right from the CLI. Now in research preview on Pro and Max plans.
2.5K replies · 4.7K reposts · 57K likes · 14.6M views
Luke Parker@LukeParkerDev·
bro please
Luke Parker tweet media
38 replies · 14 reposts · 968 likes · 63.2K views
Nour Eddine Hamaidi
@Teknium Are you the only maintainer of Hermes? I see 500+ open PRs, and the number is only growing. You could use a hand.
1 reply · 0 reposts · 1 like · 43 views
Teknium (e/λ)@Teknium·
I really love shipping and am kind of addicted to it. Having a huge community of open-source devs working on a project with me is super exhilarating! Thanks everyone!
36 replies · 11 reposts · 501 likes · 15.2K views
Nour Eddine Hamaidi
@icanvardar The LLM trying to figure out how to use the API is even more of a waste, but hey, this post is very clickbaitable.
0 replies · 0 reposts · 2 likes · 123 views
Can Vardar@icanvardar·
skills are a waste of tokens
132 replies · 8 reposts · 259 likes · 33.5K views
Nour Eddine Hamaidi reposted
عزيز@nmr2776·
The father of a young man who fell into an alcohol-induced coma staged a fake funeral in a church, where his son woke up inside a coffin, as a way of teaching him a lesson.
59 replies · 146 reposts · 4K likes · 2M views
Tyler@rezoundous·
Are you team Claude or Codex?
Tyler tweet media
Tyler tweet media
638 replies · 33 reposts · 724 likes · 308.4K views
Nour Eddine Hamaidi
@0xSero Literally the post above this: x.com/sudoingX/statu…
Sudo su@sudoingX

hey if you're considering nvidia's nemotron cascade 2 for agent coding on your 3090 this might save you time. here's what a few days of testing taught me.

speed settled. 187 tok/s flat from 4K to 625K context. 67% faster than qwen 3.5 35B-A3B on the same card. mamba2 is context independent and needs zero flags to get there. for chat, bash scripting, API calls, simple tool use, this model at this speed is unmatched in the 3B active class.

but i pushed it harder. gave it the same autonomous coding test i give every model. octopus invaders, a full space shooter game, pixel art enemies, particle systems, audio, HUD, game states. the kind of build that tests whether a model can hold architectural coherence across thousands of lines. i ran it five times. multi file, single file, thinking mode on. broken imports, blank screens, skeleton code that never rendered a single frame. on the same 3090 qwen's 9B dense built 2,699 lines and was playable on its first iteration. cascade 2 at 3B active never got there.

3 billion active parameters winning gold at the international math olympiad is real. but math competitions and autonomous coding are different problems. the speed is there. the reasoning is there for structured tasks. but holding coherence across thousands of lines of game logic, particle systems, audio, and collision detection? 3B active MoE hits a ceiling.

cascade 2 is the fastest local model i've tested in its class. for complex agentic coding it's not ready at this size. test before you commit.

0 replies · 0 reposts · 5 likes · 1.1K views
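The tok/s figures quoted in this thread (187 tok/s flat across context lengths, "slowest model on SpeedBench") reduce to timing a token stream. A hedged helper sketch, assuming a stream is any iterable of tokens; the function name is invented and this is not how either benchmark actually measures:

```python
import time
from typing import Iterable, Tuple

def measure_throughput(token_stream: Iterable[str]) -> Tuple[int, float]:
    """Consume a token stream and return (token_count, tokens_per_second)."""
    start = time.perf_counter()
    count = sum(1 for _ in token_stream)  # drain the stream, counting tokens
    elapsed = time.perf_counter() - start
    return count, (count / elapsed) if elapsed > 0 else float("inf")
```

In practice you would feed this the streaming output of a local server; comparing the resulting tok/s at 4K vs. 625K context is what shows whether speed is context independent.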
0xSero@0xSero·
This is god tier and fits on 24GB. It crushes everything up to 6x its size and ties with Gemini Deep Think and DeepSeek on math huggingface.co/nvidia/Nemotro…
86 replies · 99 reposts · 1.9K likes · 102.8K views
Nour Eddine Hamaidi
@witcheer Spoiler: it is only good at coding and can't perform well otherwise. GLM-5 is generally good at both.
0 replies · 0 reposts · 1 like · 82 views
Andrej Karpathy@karpathy·
- Drafted a blog post
- Used an LLM to meticulously improve the argument over 4 hours.
- Wow, feeling great, it’s so convincing!
- Fun idea: let’s ask it to argue the opposite.
- LLM demolishes the entire argument and convinces me that the opposite is in fact true.
- lol

The LLMs may elicit an opinion when asked but are extremely competent in arguing almost any direction. This is actually super useful as a tool for forming your own opinions, just make sure to ask different directions and be careful with the sycophancy.
1.7K replies · 2.4K reposts · 30.8K likes · 3.2M views
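The workflow Karpathy describes (ask the same model to argue both directions of a claim) can be wrapped in a tiny helper. A sketch under the assumption that `complete` is any prompt-to-text callable (an LLM client in practice); the function and prompt wording are invented for illustration:

```python
from typing import Callable, Dict

def argue_both_sides(claim: str, complete: Callable[[str], str]) -> Dict[str, str]:
    """Ask one model to argue each direction of a claim, countering sycophancy."""
    return {
        "for": complete(f"Argue as convincingly as you can FOR this claim: {claim}"),
        "against": complete(f"Argue as convincingly as you can AGAINST this claim: {claim}"),
    }
```

Reading both outputs side by side, rather than only the direction you drafted, is the point of the exercise.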