Konstantin Anagnostou

2.2K posts

@anotherdaynow

Greece · Joined November 2011
249 Following · 91 Followers
Konstantin Anagnostou@anotherdaynow·
@browser_use @gregpr07 Installed it. It's like magic. Migrating my browser-use skills now. I use it with deepseek v4 pro for a first pass on unpaved domains, then switch to deepseek v4 flash, with success! Integrated the 1Password CLI. Only thing: you need a tab session manager, because tabs keep accumulating.
David Garrido@PhotoGarrido·
Well, congratulations to @OpenAI, because they've managed to get me paying again with the Pro subscription. GPT 5.5 xhigh runs like a rocket, and with the Plus account I was frequently hitting the limits. Being able to use it in @openclaw at no extra cost was the deciding factor in my case.
Konstantin Anagnostou@anotherdaynow·
@PhotoGarrido @OpenAI @openclaw I was one click away from a €100 AI plan. Then DeepSeek V4 dropped. Tried Flash once and never went back. Now I keep a €20 sub only for super specific architecture research… basically never. Flash is that good. I use it with zero guarantees.
Konstantin Anagnostou@anotherdaynow·
Hot take from my OpenClaw tests: DeepSeek Pro beat GPT-5.5, Kimi, and GLM. Not just coding: architecture, composition, and logic. GPT was close, but it writes in a scrambled way I can't follow, and I keep having to ask for clarification. DeepSeek has the best writing of the bunch.
Konstantin Anagnostou@anotherdaynow·
@EdgeDimi @openclaw New features don’t matter if updates break basics. I love Openclaw, but I’m constantly on the verge of switching to Hermes. The boring stuff—backend + reliability—has to come first. It’s unacceptable to be afraid to update your software.
EdgeDimi@EdgeDimi·
I don’t know what’s going on with the last week of @openclaw releases. The feature direction is genuinely exciting, but the base keeps shifting so fast that every release I tested hit a different foundational break.

- v2026.4.27: local memory/search broke because the managed runtime deps did not retain/resolve node-llama-cpp.
- v2026.4.29: that class looked fixed, but Discord/Telegram failed on startup because packaged plugin runtime deps missed json5. Repair attempts were not durable.
- v2026.5.2: channels could be made healthy only after local patching, but native Codex/tool adoption hit a worse blocker: tools visible in catalog/runtime inspect were missing from tools.effective, so agents could not reliably use memory/wiki tools.

These are not edge-case polish issues. They are “can the agent actually operate normally after upgrade?” issues. Right now testing the new features feels impossible. Stability first, then features. Back to v2026.4.26
Peter Steinberger 🦞@steipete

This one fixes the dependency issues/slowness some had when installed via npm. Plugins are hard, worth it tho! Package is way leaner now, we moved [almost] everything into extensions! docs.openclaw.ai/plugins/manage…

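The tools.effective regression described above ("visible in the catalog, missing from the effective set") is easy to detect mechanically once you can dump both tool lists. A minimal sketch, assuming only that the catalog and effective sets can be exported as lists of tool names; the function name and field shapes are mine, not OpenClaw's inspect API:

```python
def missing_effective_tools(catalog: list[str], effective: list[str]) -> list[str]:
    """Return tools advertised in the catalog that never made it into
    the effective tool set -- the regression class described above.

    Sorted output makes the report stable across runs, so it can be
    diffed between versions in an upgrade smoke test.
    """
    effective_set = set(effective)
    return sorted(t for t in catalog if t not in effective_set)
```

Run as a post-upgrade check, a non-empty result would flag the "agent can't actually use memory/wiki tools" state before any real session hits it.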
Shruti@heyshrutimishra·
You're using OpenClaw wrong if it's still one chat window. Anyone running everything in one chat knows the feeling. Nothing runs in parallel. Code waits on research, research waits on ops, ops waits on whatever you started yesterday. And every topic switch contaminates the next.

Telegram supergroup topics fix this. Each topic is a separate conversation, and the agent treats each one as its own context. Give each topic a job (Code, Research, Ops, Content), point OpenClaw at the group, and you've got what's basically four agents running in parallel that never talk to each other. Setup takes an afternoon. Here's how:

Step 1. Install clawddocs first: openclaw skills install clawddocs. This pulls 200+ pages of OpenClaw docs into the agent's context. Without it, every config question becomes a guess.

Step 2. Create a Telegram supergroup. In Group Settings, turn on Topics. Each topic is a fully separate conversation. The agent doesn't carry context between them, which is the whole reason this works. It's not multitasking, it's hard isolation.

Step 3. Name the topics after jobs, not the agent: Code, Research, Ops, Content. Whatever your actual workflow is.

Step 4. Add the bot to the group. Make it admin and give it the Manage Topics permission. Then open @BotFather, run /setprivacy, pick your bot, and choose Disable. Without that, the bot only reads messages that start with /, which means it ignores almost everything you type in the topics.

Step 5. Open a chat with the bot and tell it to find every group it's been added to, then update openclaw.json with that list. The bot pulls the list from Telegram itself. From then on, OpenClaw sees every group and every topic inside.

Step 6. That's it. Open whichever topic you need, ask the question, get an answer that isn't contaminated by the other three lanes.
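The hard isolation in the thread above boils down to one design choice: conversation state is keyed by (chat id, topic id), so two topics in the same group can never see each other's history. A toy sketch of that routing, under my own assumptions; this is not OpenClaw's actual internals, just the isolation idea:

```python
from collections import defaultdict


class TopicRouter:
    """Toy model of per-topic isolation: each (chat_id, topic_id) pair
    gets its own message history, so lanes never share context."""

    def __init__(self) -> None:
        self._contexts: dict[tuple[int, int], list[str]] = defaultdict(list)

    def handle(self, chat_id: int, topic_id: int, message: str) -> list[str]:
        """Append the message to its lane and return that lane's history,
        which is the only context this lane's agent would ever see."""
        ctx = self._contexts[(chat_id, topic_id)]
        ctx.append(message)
        return ctx
```

Because the key includes the topic id, a follow-up in the Code topic is answered from Code history alone; the Research lane's messages simply never enter the prompt.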
Konstantin Anagnostou@anotherdaynow·
@nahcrof At least it responds, but it's unstable and slow; not worth moving off the DeepSeek provider. Hope you make it reliable and faster.
nahcrof@nahcrof·
@anotherdaynow Flash is down, please try another model, I can give you $2 more credit
nahcrof@nahcrof·
Alright, CrofAI is officially the cheapest option for deepseek-v4-pro. Have fun saving tons :)
Konstantin Anagnostou@anotherdaynow·
@nahcrof Tried Pro. It took 10x as long as the DeepSeek provider to complete a simple reply. I also see that thinking levels (low/medium/high) aren't supported? I came for Flash's 700 t/s (I need a near-live agent), but it didn't work.
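Throughput claims like the 700 t/s and "10x slower" above are easy to check from a streamed response: count tokens, time the stream. A minimal helper, with the timestamps supplied by the caller so it works against any provider SDK (the function name is mine, an illustrative assumption):

```python
def tokens_per_second(token_count: int, start_s: float, end_s: float) -> float:
    """Throughput of a streamed completion.

    Pass wall-clock timestamps taken just before the first token
    and just after the last one (e.g. from time.monotonic()).
    """
    elapsed = end_s - start_s
    if elapsed <= 0:
        raise ValueError("end_s must be after start_s")
    return token_count / elapsed
```

Measuring the same prompt against two providers with this gives an apples-to-apples number instead of a felt "x10" impression.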
Irrational Shuma@IrrationalShuma·
@nahcrof Has anyone used deepseek-v4-flash? 900 tps is insane. Is it decent at anything?
0xSero@0xSero·
I LOVE Deepseek-v4-flash: incredibly reliable, capable, and logical. It's lacking in frontend, but I have MiMo for that. I would recommend any company spending 100k+ a year on AI buy 8-10 6000s and have a few of their workers blind-test these models on real work.
Konstantin Anagnostou@anotherdaynow·
@zUnEm01 I agree that GLM is good, but it's far too slow, painfully so. I consider Kimi good as well, but it's also far too slow. For me, DeepSeek is the best, and the biggest surprise is Flash. DeepSeek is also the ONLY one among GPT and Gemini that gets math calculations right.
zUn@zUnEm01·
GLM 5 could be better, but it's very unreliable! Kimi k2.6 could be better, but it has issues understanding and following instructions; it overdoes things and destroys my repo. Deepseek is the winner here because it understands and follows instructions, and the 1M context is a plus for me. The only problem with Deepseek: it doesn't have vision.
Kasif@md_kasif_uddin

Be honest, which is the best open source AI Model?

Konstantin Anagnostou@anotherdaynow·
@openclaw After 3 hours of debugging, with my nerves shot, I fell back to the .27 version... This is not good. We can't update-and-pray every time.
Konstantin Anagnostou@anotherdaynow·
@openclaw Updated. It crashed my WHOLE system. Sorry folks, OpenClaw is FAR from being a pro system. Right now it's a hobby that keeps us up until 3am trying to get the shit out of the system.
OpenClaw🦞@openclaw·
OpenClaw 2026.4.29 🦞
💬 Group chats feel much better now
📌 Follow-up commitments from context
🔐 Safer exec, pairing, and owner controls
🟩 NVIDIA provider + model catalogs
⚡ Faster startup + plugin/channel fixes
Group chat finally feels agent-native. github.com/openclaw/openc…
Alexander Yue@Alezander907·
Been cooking up something amazing lately. New highest-scoring browser agent of all time.