Ralf Roeber

1K posts

Ralf Roeber

@CognosR

https://t.co/csWmz4tUgG 🎸 + https://t.co/0m7yKT5sam Founder | IBM-Champion | Dad, Mentor, Speaker, FOSS 💗

Cyberspace & Barcelona, Spain · Joined February 2021
365 Following · 209 Followers
Ralf Roeber@CognosR·
Before buying at Mediamarkt, read this review: ocu.org/reclamar/mis-r… There are tons of similar cases.
Ralf Roeber@CognosR

@MediaMarkt_es Watch out if you are a B2B customer buying from Mediamarkt. Their warranty terms say: "In the case of hidden damage, this circumstance must be reported to Media Markt within a maximum of twenty-four (24) hours of delivery." The law says: 30 days.

English
0
0
0
47
Ralf Roeber@CognosR·
@MediaMarkt_es Watch out if you are a B2B customer buying from Mediamarkt. Their warranty terms say: "In the case of hidden damage, this circumstance must be reported to Media Markt within a maximum of twenty-four (24) hours of delivery." The law says: 30 days.
Spanish
0
1
0
62
Ralf Roeber@CognosR·
@MediaMarkt_es I have a problem with you. I bought a TV online. The packaging is in perfect condition, but the TV has taken a hard knock and is broken. Incident: 251230-075757 ... I would like to resolve this in a good way. I await your reply.
Spanish
2
0
0
56
Ralf Roeber@CognosR·
You have a restaurant and want to change opening hours? github.com/satisfecho/pos ... I just implemented scheduled future opening-hours changes. Try it out.
English
0
0
0
7
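The scheduled-change idea in the post above can be sketched as an effective-dated list of opening hours. All names here (`ScheduledHours`, `hoursOn`, the sample dates) are illustrative assumptions, not taken from the satisfecho/pos repo:

```typescript
// Hypothetical sketch: a future opening-hours change is just another entry
// with a later effective date; the resolver picks whichever entry is active.

type OpeningHours = { open: string; close: string }; // "HH:MM", 24h clock

// Each entry takes effect on its date and stays active until a later entry.
type ScheduledHours = { effectiveFrom: string; hours: OpeningHours }; // "YYYY-MM-DD"

function hoursOn(schedule: ScheduledHours[], date: string): OpeningHours | null {
  // Latest entry whose effectiveFrom is on or before the given date.
  // ISO date strings compare correctly as plain strings.
  const active = schedule
    .filter((s) => s.effectiveFrom <= date)
    .sort((a, b) => a.effectiveFrom.localeCompare(b.effectiveFrom))
    .pop();
  return active ? active.hours : null;
}

const schedule: ScheduledHours[] = [
  { effectiveFrom: "2024-01-01", hours: { open: "12:00", close: "22:00" } },
  // Future change entered today, applied automatically from May 1:
  { effectiveFrom: "2024-05-01", hours: { open: "18:00", close: "23:30" } },
];

console.log(hoursOn(schedule, "2024-04-24")); // pre-change hours
console.log(hoursOn(schedule, "2024-05-02")); // post-change hours
```

The staff never has to flip the hours manually on the changeover day; the new entry simply wins once its date arrives.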
Ralf Roeber@CognosR·
@ivanfioravanti Have a look at the uncensored model: this worked really well for me
Eric ⚡️ Building...@outsource_

🚨 SUPER GEMMA 4 26B UNCENSORED IS INSANE LLM WIZARD COOKING AGAIN @songjunkr Dropped SuperGemma4-26B-Uncensored GGUF v2 and it’s trending on @huggingface🤗 This thing SMOKES the regular Gemma-4 26B: 🤯0/100 refusals (actually uncensored) 🚀Fixed all the tool-call + tokenizer jank ⚡️90% faster prompt processing 🏆Sharper, smarter, way more capable responses - Perfect local beast for llama.cpp ✅ Runs ~18-22 GB VRAM (16.8 GB Q4_K_M file) - Run on 16 GB GPUs! The 31B version is in the works, should be out SOON 🤯 Pull this version on Hugging Face below 👇🏻

English
1
0
2
111
Ivan Fioravanti ᯅ@ivanfioravanti·
@CognosR Thanks for sharing this, I'll try it next week in my second round. 🚀
English
2
0
4
263
Sudo su@sudoingX·
mac users qwen 3.6-35B-A3B hitting 91 tok/s on M4 Max 128gb via mlx in lmstudio, that's solid first numbers from @dreamworks2050. i don't have apple hardware to benchmark myself so i'm counting on the community for mac data. if you're running qwen 3.6 on any M4, M3, or M2 chip drop your tok/s, quant, and app below. especially interested in how mlx compares to llama.cpp on the same chip, same model. the more configs we collect the faster everyone finds their optimal setup
M4rc0z@dreamworks2050

@Alibaba_Qwen QWEN 3.6-35B-A3B MLX FIRST LOOK 👀 91tps @ M4 MAX 128

English
47
13
277
48.1K
Ralf Roeber@CognosR·
April 24, 19:00, La Calandria Co-Working, El Masnou
English
0
0
1
28
Ralf Roeber@CognosR·
Next Friday, April 24, we are meeting for the third time to talk about local and open-source artificial intelligence. The first time there were 5 of us, the second 12. A good mix of professionals, musicians, artists, and techies. Bring your friends.
Spanish
1
0
1
34
Linus ✦ Ekenstam@LinusEkenstam·
Turns out spending 3 hours to send 30 LinkedIn messages is not a scalable go-to-market strategy. One replied 😭 It wasn't even a good reply. The math just didn't work. And I wasn't doing it wrong. I was doing exactly what every playbook said to do.

The real issue was timing. These tools hand you a list and leave you to figure out when to reach out. If time zones were the only variable... But the window where someone is actually open to a conversation is tiny. They just got funding. A VP of Sales just started a new role. Someone is suddenly engaging with competitor content. Those signals are all there. Most tools just don't see them.

That's what we've been trying to fix with Flocurve. It watches those signals, finds the people who are ready to hear from you right now, and reaches out in a voice that sounds like yours. This is not a replacement; this is another tool in your sales arsenal. This is end to end: you define your ICP, signals, keywords or events. Well-defined specs and you're golden. (No worries, the agent will help here too.)

With your leads coming through, you can set up campaigns. These are sophisticated multi-step conditional campaigns, allowing for sending text, videos or voice notes, and assigning waits, conditionals and enrichments. There is no need to connect multiple tools or string together sheets. You or your entire team can work together in a workspace.

We've been building this for a year and we're opening it up to a small group first. Specifically people who are deep in AI agent workflows and want to see what agentic sales actually looks like in practice. If that's you, let me know below, and I'll invite you.
Linus ✦ Ekenstam tweet media
English
10
2
10
3.6K
Daniel Imfeld@dimfeld·
So many SaaS products now selling cost tracking for your agents. I should really write a blog post about how we did automatic LLM call cost tracking with an AI SDK middleware, which is linked to each agent step. It's not that hard to do and you retain full control of your data.
English
1
0
0
69
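The middleware approach Daniel describes can be sketched generically: wrap every model call, read the token usage off the result, price it, and tag it with the agent step that made the call. This is a hypothetical self-contained sketch of the idea, not the AI SDK's actual middleware API; all types and prices are made up for illustration:

```typescript
// Hypothetical per-step LLM cost tracking via a wrapper around the call site.

type Usage = { promptTokens: number; completionTokens: number };
type GenerateFn = (prompt: string) => { text: string; usage: Usage };

type CostRecord = { step: string; usd: number; usage: Usage };

function withCostTracking(
  generate: GenerateFn,
  prices: { promptPerM: number; completionPerM: number }, // USD per 1M tokens
  log: CostRecord[],
): (step: string, prompt: string) => { text: string; usage: Usage } {
  return (step, prompt) => {
    const result = generate(prompt);
    // Price the call from its reported usage and attribute it to the step.
    const usd =
      (result.usage.promptTokens / 1e6) * prices.promptPerM +
      (result.usage.completionTokens / 1e6) * prices.completionPerM;
    log.push({ step, usd, usage: result.usage });
    return result;
  };
}

// Fake model for demonstration; a real setup would call the provider here.
const fakeModel: GenerateFn = (prompt) => ({
  text: "ok",
  usage: { promptTokens: prompt.length, completionTokens: 50 },
});

const log: CostRecord[] = [];
const tracked = withCostTracking(fakeModel, { promptPerM: 3, completionPerM: 15 }, log);
tracked("plan", "outline the refactor");
tracked("execute", "apply the refactor");
const total = log.reduce((sum, r) => sum + r.usd, 0);
```

Because the wrapper sits on your side of the call, the cost log stays in your own storage, which is the "full control of your data" point.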
Ralf Roeber@CognosR·
@TeeDevh 1× Gemma4:26b running Claude Code locally + $20 Claude Code + $20 Cursor
English
0
0
0
225
Vu.@TeeDevh·
Honestly stuck between 4 options:
3 × $20 Codex
3 × $20 ClaudeCode
1 × $100 Codex
1 × $100 ClaudeCode
What would you pick? 🤔
English
214
5
380
93.4K
Ralf Roeber@CognosR·
Why #LISP changed the world for the better, and why moving away from LISP has led us to data and security problems. See this eye-opening 50-minute presentation from FOSDEM 2018 - youtube.com/watch?v=jlERSV… LISP actually made AI possible @fosdem @IBMData
English
0
0
0
26
Ralf Roeber@CognosR·
@VictorTaelin Especially: "Tools: prefer shell commands, do not use Execute_typescript" <- this made a big difference ... from "almost nothing works" to things actually starting to work
English
0
0
0
35
Taelin@VictorTaelin·
$1300 on a single day using Opus Fast Mode *sighs* I accomplished a lot tho, but that's more than I anticipated.

On a brighter note, I have a side plan that will be very fortunate if it works. I just picked an 8x B200 on a cloud market and I'm trying to max GLM 5.1 throughput now. The best provider does ~120 tokens/s, but that's for serving many customers. How about serving just one in a dedicated DGX? Opus is probably fooling me, but it promises I could achieve > 500 tokens/s on GLM 5.1 with batch=1.

If that is true, that'd change everything. I'd probably fine-tune it on my stuff, say thank you and goodbye to OpenAI / Anthropic, and just host my own stuff. At current prices, I could leave 8x B200 up 24/7 for $20k/month, which is definitely affordable. And even though GLM is way dumber than Opus, at 500 tokens/s, plus tuning, perhaps it would actually be better for my personal usage? I'm optimistic. Don't tell me if that won't work, I want to believe (jk please tell me, I don't want to waste my time)
Taelin tweet media
English
88
6
504
68.4K
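The $20k/month figure above checks out as rough arithmetic, assuming a hypothetical ~$3.50 per GPU-hour spot rate (an illustrative number, not a quote from any actual provider):

```typescript
// Sanity check of "8x B200 up 24/7 for $20k/month" under an assumed rate.

const gpus = 8;
const usdPerGpuHour = 3.5; // assumed spot-market rate, for illustration only
const hoursPerMonth = 24 * 30;

const monthlyCost = gpus * usdPerGpuHour * hoursPerMonth;
console.log(monthlyCost); // 20160 -> roughly the $20k/month in the post
```

At ~$4.35/GPU-hour the same setup would already cost ~$25k/month, so the claim is sensitive to the rate actually obtained on the cloud market.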
Mitchell Hashimoto@mitchellh·
I just want to point out that you’re wrong particularly about LLMs and Zig. I see this perpetuated by many and don’t understand it. I use plenty of agents with Zig and it does fantastic. Ghostty is heavily AI written, isn’t that a strong counter example? Almost the entire Zig 0.14 to 0.15 initial PR was via AI, into a version that there is zero training data whatsoever. The key is to use an agent, give it tools. For Zig, that means I point it to the Zig source and release notes and the corpus of existing Ghostty code. It does fantastic. Collapse this into an agent skill and it’s all just automatic. If you go to a chat interface and expect it to poop out good safe Zig though it’s going to do terrible. AI needs tools. The rest I’ll ignore. I have no problem if you dislike Zig. I dislike Rust. It’s fine.
English
20
14
624
93.5K
Max Kupriianov@xlab_os·
Zig is not a memory safe language and does nothing to protect the runtime from mistakes with memory. I don't get the trend to make another "sexy" language out of it just to rewrite all existing software. Yes it is fast, but it also crashes fast. Bun crashes, Ghostty crashes, I don't want to know the name of the next big thing that is written in Zig and also crashes. In 2026 many segfaults or memory corruption errors can be weaponized to exploit systems. If you take slow but safe software and rewrite it in Zig you are actually working backwards. LLMs are also terrible at Zig because there isn't much code base and documented mistakes to learn from. One of my few exploration attempts was about making a CLI app with a buffer and mouse handling. It was one-shotted okay, but upon launch it just started to print raw bytes from my RAM onto the buffer, basically exposing adjacent apps' state. Zig might sound cool for people coming from the web world (where everything runs in the most safe sandbox imaginable) or even non-CS-backgrounded people who just pick a language that is popular on X, but this is a really bad option for anything with >0 users. Before LLMs - maybe, but not in today's world. If you want raw power - write in C, ASM, use ASan. Also, Rust is not perfect but at least it offers something.
Max Kupriianov tweet media
English
49
8
220
66.3K
Ralf Roeber@CognosR·
Taking a break 🫣
Ralf Roeber tweet media
English
0
0
0
22