Txema ⚡

16.9K posts

@durbon

Director of Mobile @jobandtalentEng. Formerly @idealista @genbetadev. I also write about technology: https://t.co/4N8UPJ6eEp Rayo fan ⚡ Runner

Madrid · Joined April 2007
1.8K Following · 3.2K Followers
Txema ⚡@durbon·
ollama launch claude --model gemma4
Txema ⚡@durbon·
With more pressure on token budgeting, models like Gemma 4 make the local or hybrid approach more interesting: not just for privacy, but for cost control too
Txema ⚡@durbon·
Anthropic's block. The end of convenience. Time to set up OpenClaw properly: clean up crons, route models by task, combine providers (OpenAI, Gemini, and local Ollama), and do deliberate API budgeting. Lesson: optimizing systems is where the real learning happens.
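The per-task routing plus budgeting idea above could be sketched roughly like this. Every provider name, model name, and price below is a made-up placeholder for illustration, not a real quote or a real API:

```python
# Hypothetical sketch of per-task model routing with a simple monthly
# budget cap. Providers, models, and prices are illustrative assumptions.

ROUTES = {
    "code": ("openai", "gpt-large"),     # assumed model names
    "summarize": ("ollama", "gemma"),    # local: no per-token cost
    "research": ("gemini", "gemini-pro"),
}

PRICE_PER_1K_TOKENS = {"openai": 0.01, "gemini": 0.005, "ollama": 0.0}

class Budget:
    def __init__(self, monthly_usd):
        self.monthly_usd = monthly_usd
        self.spent = 0.0

    def charge(self, provider, tokens):
        # Record the cost of a call, refusing it if the cap is hit.
        cost = PRICE_PER_1K_TOKENS[provider] * tokens / 1000
        if self.spent + cost > self.monthly_usd:
            raise RuntimeError("budget exceeded")
        self.spent += cost
        return cost

def route(task_kind, budget, tokens):
    # Pick the configured provider; fall back to the free local model
    # when the paid budget is exhausted.
    provider, model = ROUTES[task_kind]
    try:
        budget.charge(provider, tokens)
    except RuntimeError:
        provider, model = "ollama", "gemma"
    return provider, model
```

The point of the fallback branch is the "control de costes" angle: once the paid budget runs out, work degrades gracefully to a local model instead of failing.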
Txema ⚡ retweeted
GREG ISENBERG@gregisenberg·
POV: April 2026
Txema ⚡ retweeted
Alex Finn@AlexFinn·
If you used a Claude subscription with OpenClaw, read this:

Unfortunately, all other AI models out there absolutely suck with OpenClaw compared to Opus. It's just a fact, and anyone denying this is delusional.

So here is my new recommended OpenClaw setup: pay for the Opus API and use it as your orchestrator. Then use other models as the execution layer. If you do this correctly, yes, your costs will go up, but not by as much as you think.

I use my ChatGPT subscription as the coding execution. GPT 5.4 is excellent at coding. When the Opus orchestrator gives a coding task to the ChatGPT subagent, it always performs really well.

If you are on the Pro plan, you should have enough usage to have ChatGPT be the execution layer for every task. But if you're on the $20 a month plan, you're going to need other subscriptions to handle other tasks. GLM 5.1 and Qwen are excellent. I'd get a cheap sub through them and have them handle all other tasks given to them from the orchestrator.

The best setup, though, if you have the hardware, is the Opus API for the orchestrator, ChatGPT for coding, then local Gemma 4 and local Qwen handling everything else. Right now I have Gemma running on my DGX Spark and Qwen 3.5 on my Mac Studio. They handle all other execution from my Opus API orchestrator.

Unfortunately, all the options above will cost more than the $200 a month subscription. It just is what it is. But if you optimize correctly it won't cost much more, and you'll still get frontier performance.

OpenClaw is the most powerful piece of software ever released. $200 a month ($2,400 a year) was a steal for a digital employee. Honestly, anything under $50,000 a year is a no-brainer if you run a serious business.

The situation isn't great, but you also need to face reality: Claude Opus 4.6 is the best model for OpenClaw. If you use any other model, your productivity will suffer.

Business is a battlefield and I refuse to fall behind, so despite me not being happy with the Anthropic decision, the setup above is what I'm going with. Virtue signaling might get me brownie points on the internet, but it won't increase my productivity.
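The orchestrator-plus-execution-layer split described in that thread can be sketched in a few lines. This is a toy under stated assumptions: `call_model`, the provider names, and the model names are all placeholders, not any real client library:

```python
# Rough sketch of an orchestrator model delegating to cheaper/local
# execution models. `call_model` stands in for a real provider client.

def call_model(provider, model, prompt):
    # Placeholder: a real version would call the provider's API here.
    return f"[{provider}:{model}] response to: {prompt}"

# Execution models mapped by task kind; the orchestrator stays on one
# strong model and only plans and reviews.
EXECUTION_LAYER = {
    "coding": ("chatgpt", "gpt-coding"),  # assumed names
    "default": ("local", "gemma"),
}

ORCHESTRATOR = ("anthropic-api", "opus")

def run_task(kind, prompt):
    # 1. Orchestrator decides where the work goes (here, a table lookup).
    provider, model = EXECUTION_LAYER.get(kind, EXECUTION_LAYER["default"])
    # 2. Execution model produces a draft; the orchestrator reviews it.
    draft = call_model(provider, model, prompt)
    return call_model(*ORCHESTRATOR, f"review: {draft}")
```

The design choice being argued for: spend expensive orchestrator tokens only on planning and review, and let cheaper subscriptions or local models burn the bulk of the execution tokens.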
Txema ⚡ retweeted
Peter Steinberger 🦞
Anthropic now blocks first-party harness use too 👀 claude -p --append-system-prompt 'A personal assistant running inside OpenClaw.' 'is clawd here?' → 400 Third-party apps now draw from your extra usage, not your plan limits. So yeah: bring your own coin 🪙🦞
Txema ⚡@durbon·
Anthropic just sent an email notice that, as of 21:00 today, Claude subscriptions no longer cover usage in third-party tools like OpenClaw. If your team depends on a single AI provider, today is a good day to rethink that.
Txema ⚡@durbon·
If you use OpenClaw, your options:
- Anthropic API key (pay-per-token, no subscription limits)
- OpenAI/Codex — does allow its subscriptions in OpenClaw
- Local models via Ollama (Qwen, Gemma)
- EdgeClaw — open-source fork, model-agnostic, no cloud dependencies
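The Ollama route works because Ollama exposes an OpenAI-compatible HTTP API on port 11434, so OpenAI-style tools can often be pointed at it by swapping the base URL. A minimal sketch of the request shape (the model name is just an example and must already be pulled locally):

```python
# Sketch: the request an OpenAI-compatible client would send to a local
# Ollama server. Only the payload is built here; nothing is sent.
import json

OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(model, user_msg):
    # Same JSON shape as the OpenAI chat completions API, which is why
    # swapping the base URL is usually all a compatible tool needs.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
    })

payload = build_request("qwen2.5", "hello")
```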
Txema ⚡@durbon·
Tokenmaxxing isn't an AI adoption strategy. It's a cargo cult dressed up as data-driven management. Measure impact. Measure outcomes. Or don't, and in two years we can all write the retrospective about how obvious it was. durbon.dev/tokenmaxxing-m…
Txema ⚡@durbon·
What you should actually measure: - Time to resolution (are problems closing faster?) - PR quality (fewer review rounds?) - Bug reduction (is AI-assisted code more reliable?) - Team velocity, not individual (is the boat lifting or is one person just looking busy?)
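Those metrics are straightforward to compute once the data is pulled from a tracker. An illustrative sketch over hypothetical issue and PR records (no real tracker API is assumed):

```python
# Toy computation of two of the team-level metrics above, from made-up
# records rather than any real issue tracker's API.
from statistics import median

issues = [  # (opened_day, closed_day)
    (0, 3), (1, 2), (5, 9),
]
prs = [1, 2, 1, 3]  # review rounds before each PR merged

def time_to_resolution(issues):
    # Median days from open to close; median resists outlier tickets.
    return median(closed - opened for opened, closed in issues)

def avg_review_rounds(prs):
    # Fewer rounds per PR is one proxy for PR quality.
    return sum(prs) / len(prs)
```

Tracking these per team, before and after AI adoption, is the outcome-oriented alternative to counting tokens.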
Txema ⚡@durbon·
Companies are building internal leaderboards to track who burns the most AI tokens. They're calling it "tokenmaxxing." Management is gamifying LLM usage like it's a step-count challenge on a company Fitbit. I've seen this movie before. We learned nothing
Txema ⚡@durbon·
Claude goes down. An unexpected test of whether your team uses AI as a daily tool or just as a "nice to have". Some didn't even notice.
Txema ⚡ retweeted
Gergely Orosz@GergelyOrosz·
No project has gotten more traction in such a short time than @openclaw by @steipete But how is he building it? Watch or listen: • YouTube: youtube.com/watch?v=8lF7Hm… • Spotify: open.spotify.com/episode/5Ie6Qt… • Apple: podcasts.apple.com/us/podcast/the… Brought to you by: • @statsig  — ⁠The unified platform for flags, analytics, experiments, and more. Join us at The Pragmatic Summit I’m hosting with Statsig, on 11 February: pragmaticsummit.com • @SonarSource  – The makers of SonarQube, the industry standard for automated code review. Join me online at the Sonar Summit, on 3rd March: sonarsource.com/pragmatic/sona… • @WorkOS  – Everything you need to make your app enterprise ready. If you're in SF on 9 February, stop by at the WorkOS AI Night with The Pragmatic Engineer (free to register): luma.com/workos-pragmat…