Marius
@marius_vibes
1.4K posts

The truth is only for the brave.

Joined June 2014
1.2K Following · 159 Followers
Marius
Marius@marius_vibes·
@astnkennedy your attention span is fragmented and your neurotransmitters are getting handicapped. you just need to take a break once in a while
0
0
0
17
Austin Kennedy
Austin Kennedy@astnkennedy·
I'm 22 years old and Claude Code is deteriorating my brain. Every single day for the last 6 months I've had 6 to 8 Claude Code terminals open, waiting for a response just so I can hit 'enter' 75% of the time. And it's doing something to me. In convos with a couple of friends, it's been a point that's been brought up pretty frequently. None of us feel as sharp as we used to. I don't know if it's just us, or if others in their 20s are feeling the same thing, but it's something I've been thinking about a lot. P.S. I know this is a problem with my reliance on/usage of it, not Claude Code itself, but the effects are real nonetheless
1.3K
372
9.2K
2M
Mistral AI
Mistral AI@MistralAI·
🆕 Today, we're releasing the public preview of Workflows, the orchestration layer for enterprise AI. 🌎 Enterprise teams have capable models. What they don't have is a way to run them reliably in production. That's the gap Workflows fills. It takes AI-powered business processes from prototype to production, with the durability, observability, and fault tolerance that production actually requires. Leading organisations like ASML, ABANCA, CMA-CGM, France Travail, La Banque Postale, Moeve, and many others are already using Workflows to automate critical processes.
78
247
2K
266.3K
Marius
Marius@marius_vibes·
@trq212 @RhysSullivan bro I've had claude write a file for 13 minutes or even more, and compactions take way longer than that
0
0
1
305
Thariq
Thariq@trq212·
@RhysSullivan hmm usually long compactions happen when coming back to a long conversation with a cold prompt cache, but 5 minutes is crazy. did it eventually succeed? or did you cancel it?
34
0
255
46.4K
Marius
Marius@marius_vibes·
@parcadei shit is too hard even for technical people. they shouldn't have released this dumb ass model, opus 4.7 is barely usable
0
0
0
89
dei
dei@parcadei·
here are a few tips on how to not get all your tokens burgled by the new tokeniser:

A tokeniser converts text into numbers, and those numbers are what the model actually reasons over, so it never sees your words, only token IDs.

The difference: 4.6 seems to have had a larger vocabulary, and most English words were 1 token, like fibonacci or algorithm. 4.7 has a smaller vocabulary, which means words get split into subword pieces, so fibonacci is now 5 tokens and algorithm is now 4. And the below is true: spaces are now separate tokens, so every word boundary costs an extra token.

here's what to do about it:
1) rewrite your CLAUDE.md, skills, rules, etc., because English prose pays a higher % tax, which means each turn you're paying more, and that's why you're burning through usage
2) when prompting the model, instead of throwing large prompts at it, just say less

A way to think through it:
1. What does this literally say? (Keep it short.)
2. What does the structure of the prompt hint at? (Use formatting like bullets, tables, and code blocks, because the model reads structure.)
3. What will the model derive? (Keyword-dense text lets it derive more from less.)
4. What's the deeper intent? (The model gets it from context; you don't need to spell it out.)

And here's the counterintuitive part: you can say less with the same token budget and the model works better.

tldr: replace prose instructions with structured, keyword-dense ones
downside: you're going to have to think about what you're saying if you want to save tokens
dei tweet media
wh@nrehiew_

The 4.7 tokenizer treats whitespace as separate tokens? A string consisting of 50 one-token words separated by whitespace tokenizes to ~50 more tokens than with the 4.6 tokenizer. If so, the 1.35x more-tokens estimate seems way too low.
10
2
64
11.4K
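The whitespace claim is easy to sanity-check with arithmetic. A toy sketch, assuming the scheme described in the thread (the real 4.6/4.7 tokenizers are not public, so `pieces_per_word` and the one-token-per-space model are illustrative assumptions):

```python
# Hypothetical token-count estimates; the numbers model the thread's
# claims, they are not measured values from any real tokenizer.

def old_style_tokens(num_words: int, pieces_per_word: float = 1.0) -> int:
    """Old-style ("4.6") scheme: spaces fold into the word tokens."""
    return round(num_words * pieces_per_word)

def new_style_tokens(num_words: int, pieces_per_word: float = 1.0) -> int:
    """New-style ("4.7") scheme: every word boundary costs one extra token."""
    word_tokens = round(num_words * pieces_per_word)
    space_tokens = max(num_words - 1, 0)  # one separator token per boundary
    return word_tokens + space_tokens

# The quoted example: 50 one-token words separated by spaces.
old = old_style_tokens(50)   # 50 tokens
new = new_style_tokens(50)   # 50 + 49 = 99 tokens
print(old, new, new / old)   # ratio ~1.98x on this worst case
```

On this toy model the worst case is nearly 2x, which is @nrehiew_'s point: a 1.35x average only works if many tokens were already subword pieces rather than whole space-separated words.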
Marius
Marius@marius_vibes·
@0xSero it's just bad at instruction following. also takes fucking forever to answer, bro it be like 2 mins / request, ain't nobody got time for that
0
0
0
111
0xSero
0xSero@0xSero·
Opus-4.7 is unusable. Multiple times I have given it specific links to use, specifically those. Instead it goes and finds unrelated links, starts expensive processes, and heads down a completely wrong path for hours. No ability to infer intent. Wasted $200 worth of HF credits. lol
107
43
1.3K
125K
Ashar
Ashar@ashar_builds·
Old models on @GitHubCopilot are not just slow, they are incorrect. Earlier I tried GPT 4.1 (0x tokens) to find code via subagent and it simply said it doesn't exist. Switched to Haiku (0.33x tokens) and found multiple files with the search term. 🤦‍♂️ We need better free models, or retire the older ones @pierceboggan
1
1
25
3.5K
Marius
Marius@marius_vibes·
@GHCopilotCLILog wen hook compatibility between different harnesses? 😭
0
0
3
199
GitHub Copilot CLI Changelog
GitHub Copilot CLI Changelog@GHCopilotCLILog·
🚀✨ Copilot CLI v1.0.26 released! 18 features & enhancements in this release

Top features:
• Escape key reliably dismisses ask_user and elicitation prompts without getting stuck 🔑
• Remote tab shows Copilot coding agent tasks and supports steering without requiring a pull request
• Rename "steering" to "remote control" in --remote flag and /remote command help text
• Anthropic BYOM correctly includes image data when viewing image files
• Plugin hooks receive PLUGIN_ROOT, COPILOT_PLUGIN_ROOT, and CLAUDE_PLUGIN_ROOT env vars with plugin installation directory

Enhancements:
• Ctrl+O now expands all timeline entries, same as Ctrl+E
• Session scope selector in sync prompt is more prominent and keyboard-navigable with left/right arrow keys
• Instruction files with specific applyTo patterns consolidated into a table, reducing context window usage
• Relative paths in file edit operations resolve against session working directory
• ACP server binds to localhost only, preventing unintended network exposure

Bug fixes:
• Spurious directory access prompts no longer appear for arguments inside find -exec blocks
• Agent sessions no longer fail with unrecoverable errors when context compaction splits a tool call across a checkpoint boundary
• Single-segment slash-prefixed tokens (e.g. /help, /start) no longer treated as file paths in bash commands
• Permission prompt notification hook only fires when a prompt is actually shown to the user
• Avoid sending duplicate custom instruction files to reduce wasted tokens per turn
• Installing a plugin named 'git' from marketplace no longer fails due to incorrect URL parsing
• Enterprise login accepts hostnames without a URL scheme
• LSP language servers correctly initialize on Windows using proper file URI paths

github.com/github/copilot… #GitHubCopilotCLI
6
14
100
9K
Marius
Marius@marius_vibes·
@michael_chomsky joke's on him, I've set up a hook where he can't edit quality files (git hooks, config files); the only way he can edit them is if I set an env variable to true from the cmd
0
0
1
801
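A guard like the one Marius describes can be sketched as a pre-edit hook script. This is a hypothetical sketch, not his actual setup: the protected patterns, the `ALLOW_PROTECTED_EDITS` variable name, and the payload shape are assumptions; the exit-code convention (non-zero blocks the tool call) follows common hook-style harnesses.

```python
import fnmatch

# Illustrative protected patterns (git hooks, config files); Marius's
# real list and the ALLOW_PROTECTED_EDITS name are assumptions.
PROTECTED = [".git/hooks/*", "*.config.js", "*.toml", ".env*"]

def is_blocked(file_path: str, env: dict) -> bool:
    """True if an edit to file_path should be refused."""
    if env.get("ALLOW_PROTECTED_EDITS") == "true":
        return False  # explicit override set from the command line
    return any(fnmatch.fnmatch(file_path, pat) for pat in PROTECTED)

def handle_event(event: dict, env: dict) -> int:
    """Map a pre-edit hook payload to an exit code: 2 blocks the edit,
    0 allows it (the non-zero-blocks convention of hook-style harnesses)."""
    path = event.get("tool_input", {}).get("file_path", "")
    return 2 if is_blocked(path, env) else 0

# Illustrative payload shape:
print(handle_event({"tool_input": {"file_path": "pyproject.toml"}}, {}))  # 2
print(handle_event({"tool_input": {"file_path": "pyproject.toml"}},
                   {"ALLOW_PROTECTED_EDITS": "true"}))                    # 0
```

In a real harness, the script would read the JSON payload from stdin and pass `os.environ` as `env`; keeping the decision in a pure function makes the policy trivial to test.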
Michael
Michael@michael_chomsky·
Claude code has decided the burden of lint rules is too high and has figured out a genius solution
Michael tweet media
79
49
2K
112.2K
Marius
Marius@marius_vibes·
Hey Pierce, sure. The main pain point is being mid-session after deep research (exploring a repo, gathering findings from multiple sources) or mid-implementation with just a few tasks left, and compaction fires automatically before you can dump that accumulated context into a plan file. Once it compacts, that session knowledge degrades and there's no way to recover it. For context, no other harness does this. Claude Code, Open Code, Cline, Roo Code and similar tools let you comfortably work up to 90-95% context usage and choose when to compact. Copilot's 80% threshold feels way too aggressive. Ideally we'd have the option to disable auto compaction entirely. But even a configurable threshold and a simple 'compact now?' confirmation before it fires would make a huge difference. Just enough to let us save our work first.
0
0
0
12
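The configurable-threshold request boils down to a small predicate. A hypothetical sketch of what such a setting could look like (the field names and the 0.80 default are assumptions based on the behavior described in the thread, not Copilot's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class CompactionPolicy:
    """Hypothetical auto-compaction policy with a configurable threshold."""
    threshold: float = 0.80   # Copilot-like default, per the thread
    enabled: bool = True      # the "disable entirely" option
    confirm: bool = False     # ask "compact now?" instead of firing silently

    def should_fire(self, used_tokens: int, context_limit: int) -> bool:
        """True when auto-compaction would trigger on this turn."""
        if not self.enabled:
            return False
        return used_tokens / context_limit >= self.threshold

aggressive = CompactionPolicy()             # fires at 80% usage
relaxed = CompactionPolicy(threshold=0.95)  # Claude Code-style 90-95% headroom
print(aggressive.should_fire(85_000, 100_000))  # True
print(relaxed.should_fire(85_000, 100_000))     # False
```

The `confirm` flag captures the thread's middle-ground ask: even without disabling compaction, a yes/no prompt before it fires leaves room to dump session context into a plan file first.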
Pierce Boggan
Pierce Boggan@pierceboggan·
@marius_vibes @GHCopilotCLILog Disabling auto compaction would have a lot of other side effects. Can you say more about what exactly is breaking/what context is being lost?
1
0
1
18
GitHub Copilot CLI Changelog
GitHub Copilot CLI Changelog@GHCopilotCLILog·
🚀✨ Copilot CLI v1.0.25 released! 18 features & enhancements in this release

Top features:
• Install MCP servers from the registry with guided configuration directly in the CLI 🚀
• ACP clients can now provide MCP servers (stdio, HTTP, SSE) when starting or loading sessions
• You can now remote control your CLI sessions using --remote or /remote
• MCP remote server connections automatically retry on transient network failures 🔄

Enhancements:
• Persist resolved model in session history and defer model changes during active turns
• The --config-dir flag is now respected when selecting the active model
• /env command added to show loaded environment details including MCP servers and plugins
• /share appends correct file extension (.md or .html) when custom output path lacks one
• /add-dir accepts relative paths and resolves them to absolute
• Custom instruction files preserve special characters like &
• Skill picker list scrolls correctly when exceeding terminal height
• MCP client reports correct CLI version during server handshake
• Alt+D now deletes the word in front of the cursor in text input ✂️
• /share html shows a file:// URL and supports Ctrl+X O to open the file directly
• Skill instructions persist correctly across conversation turns
• /logout shows a warning when signed in via gh CLI, PAT, API key, or environment variable, clarifying OAuth session management

Bug fixes:
• Esc key works correctly after a failed /resume session lookup ⏹️
• Share Research TOC sidebar anchor links navigate correctly within the page
• Fixed session history persistence issues related to model changes

Misc:
• Updated environment details display with instructions, MCP servers, skills, agents, and plugins

github.com/github/copilot… #GitHubCopilotCLI
4
14
106
8.2K
Marius
Marius@marius_vibes·
@GHCopilotCLILog Hi, any chance we can get an option to disable auto compact? Or at least let us configure the threshold. It breaks all of the current work mid-turn, losing context in the process. The problem is that you can't even go back and restore a previous point; you just lose your conversation & have to re-explain
1
0
0
135
Erick
Erick@ErickSky·
In the Telegram group there are true geniuses who are so kind and such good people 🥰 Usually, people who are really intelligent are complete jerks, hence my pleasant surprise at reading people who share their intelligence humbly. Thanks!
9
3
50
3.2K
Marius
Marius@marius_vibes·
@DotCSV I've been trying it for a week and I have mixed feelings. it behaves somewhat differently depending on the harness you use (claude code, copilot, etc). there's another one that is in theory more "simple", but looking at the repo it's almost as complex as superpowers x.com/i/status/20432…
Charly Wargnier@DataChaz

🚨 ICYMI @addyosmani from Google just dropped his new Agent Skills and it's incredible. It brings 19 engineering skills + 7 commands to AI coding agents, all inspired by Google best practices 🤯

AI coding agents are powerful, but left alone, they take shortcuts. They skip specs, tests, and security reviews, optimizing for "done" over "correct." Addy built this to fix that. Each skill encodes the workflows and quality gates that senior engineers actually use: spec before code, test before merge, measure before optimize.

The full lifecycle is covered:
→ Define - refine ideas, write specs before a single line of code
→ Plan - decompose into small, verifiable tasks
→ Build - incremental implementation, context engineering, clean API design
→ Verify - TDD, browser testing with DevTools, systematic debugging
→ Review - code quality, security hardening, performance optimization
→ Ship - git workflow, CI/CD, ADRs, pre-launch checklists

Features 7 slash commands (/spec, /plan, /build, /test, /review, /code-simplify, /ship) that map to this lifecycle.

It works with:
✦ Claude Code
✦ Cursor
✦ Antigravity
✦ ... and any agent accepting Markdown.

Baking Google-tier engineering culture (Shift Left, Chesterton's Fence, Hyrum's Law) directly into your agent's step-by-step workflow!

`npx skills add addyosmani/agent-skills`

Free and open-source. Repo link in 🧵↓
1
2
3
986
Carlos Santana
Carlos Santana@DotCSV·
Having tried the superpowers skill for programming, it doesn't quite win me over. Too many intermediate questions, even for the smallest changes, which break my workflow. Am I alone in this? Is it a (literal) skill issue on my part?
40
2
267
34.6K
Marius
Marius@marius_vibes·
@ctatedev nice now my secrets can go away in real time!
0
0
0
19
Chris Tate
Chris Tate@ctatedev·
New command: agent-browser skills Cached skills go stale when the CLI updates. Now the CLI serves skill content at runtime. One thin skill installed via: npx skills add vercel-labs/agent-browser Evals show agents pick agent-browser more often and use it correctly.
Chris Tate tweet media
24
23
417
26.4K
IndiJo
IndiJo@odd_joel·
you asked for Android Moshi. well...it's rough. but under the hood: a fully rewritten GPU renderer on OpenGL ES 3.1 that I'm genuinely proud of. everything else? work in progress. follow along — there's more coming.
7
1
16
2.1K
Marius
Marius@marius_vibes·
@mattpocockuk this is literally the best skill ever made. since I discovered it, it's been so stupid simple it works flawlessly. yes, it takes time to answer the questions, but those are the questions that, if left unanswered, translate into technical debt or time lost in the long run. grats!
0
0
1
49
Marius
Marius@marius_vibes·
@theo tailscale + termius/termux + ssh (+ tmux or cmux) that's about it
0
0
0
79
Theo - t3.gg
Theo - t3.gg@theo·
I want to control my agents from my phone. I'm sure you do too. What workflows do you want? Do you have one that works now? Tell me all about it.
532
19
1.7K
244.4K
Marius
Marius@marius_vibes·
@PPosition @javilop what did I just read. you're pretty dim if you think anyone with cancer or a rare disease cares in the slightest where their data goes; the only thing they want is a cure. your true colors are showing. doctors' days are numbered with this AI thing, and I'm quite glad
1
0
0
42
Anonymous
Anonymous@PPosition·
@javilop You think it's fine for us doctors to hand over absolutely confidential and personal information, which could be used for the worst aberrations imaginable beyond human comprehension, to AI, without the person's knowledge and consent? Danger
13
0
19
7.9K
Javi López ⛩️
Javi López ⛩️@javilop·
🔴 I NEED YOUR ATTENTION I've spent a week helping Miriam with her metastatic cancer case and I want to share the methodology I've been using because it's absolutely replicable. I think that, with luck, it can be USEFUL TO OTHER PEOPLE with cancer (or any other illness). The results we've achieved are no miracle, but we think they're genuinely useful and can make a crucial difference in a life-or-death medical case. Here's the method, step by step:

1/ Use the most advanced models of the moment (unfortunately paid, and not cheap; in my opinion the Public Health System should invest in this): - ChatGPT Pro + Extended (~40 min of thinking per call) - Claude Opus 4.6 MAX. Still to test thoroughly: - Perplexity Sonar Pro - Notebook LM

2/ Spoon-feed the AI the ENTIRE medical history. This sounds trivial but it's very important. - The first thing I ask, with Claude Cowork, which has access to the hard drive, is to go into the folder containing THE ENTIRE HISTORY (it can be more than 100 PDFs) and unify it all into: - A single PDF (it can be over 1000 pages or whatever it takes) - A single readable txt, which it must produce correctly using an OCR script and then check carefully that it's done right. I insist: don't move to the next step before the previous one is done very well, especially the txt.

3/ Once we have the above, use this prompt together with the txt and the PDF as input files and run it on BOTH models (and more if possible) at once. 👉 Here it is; this prompt is incredibly complex/advanced: dropbox.com/scl/fi/f5luli8… It's designed for Miriam's specific case, but with the models from step 1/ you could easily adapt it to your own case.

4/ The ARROWHEAD, pitting one model against the other: I haven't heard this methodology from anyone, but it works incredibly well. It feels like sharpening a stake until it has a gleaming point.

It works like this: with patience, over successive iterations (I recommend at least 5, and bear in mind that if ChatGPT takes 40 min per call this is going to take a while), confront the output (the PDF) of one model with the other, with a simple prompt along the lines of: "Another committee of experts thinks this. What do you think? If you agree, or if not, tell me why, and generate a new PDF if you think it's warranted." You then cross the result over to the opposite model. That way, over successive iterations, internet searches, papers, etc., they'll keep finding and sharpening more things. When do you stop? When BOTH models say it's perfect and can't improve the other's work any further.

This is so absurdly disruptive that I think the results of ALL current models would improve if they followed this methodology (leaning on an "adversarial model"-style spiral). I don't understand why nobody has noticed this, or, if someone has, why it isn't getting more attention. It works impressively well in any domain, including programming and mathematics. What's more, my theory is that this could be done even better with not just two models but a larger combination, perhaps adding Perplexity Sonar Pro, etc.

RESULTS: Incredible. Obviously I can't know whether they're better than the best scientific-medical committees in the world, but they're giving Miriam a new dimension on the case: additional tests to run, possible examinations, etc. Obviously AI doesn't work miracles, but I think it can already, today, help many patients. And the Public Health System should invest a lot, really a lot, in this.

I'm going to ask Miriam whether I can post the most advanced full results PDF we got, so you can get an idea of its quality. She's already more or less given me permission, but I want to be 100% sure.
Javi López ⛩️ tweet mediaJavi López ⛩️ tweet media
211
1.6K
5.1K
2.2M
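Stripped of the medical specifics, step 4/ is a convergence loop between two models. A minimal sketch, assuming a hypothetical `Model` callable and an "APPROVED" reply convention for the stopping condition (both are placeholders, not part of the thread):

```python
from typing import Callable

Model = Callable[[str], str]  # hypothetical: prompt in, report out

# The thread's "simple prompt", paraphrased as a template.
CRITIQUE = ("Another committee of experts thinks this:\n{report}\n"
            "If you agree, reply APPROVED; otherwise say why and "
            "produce a revised report.")

def adversarial_refine(a: Model, b: Model, report: str,
                       min_rounds: int = 5, max_rounds: int = 20) -> str:
    """Cross one model's report to the other until BOTH approve in a row."""
    models = [a, b]
    consecutive_approvals = 0
    for round_no in range(max_rounds):
        critic = models[round_no % 2]  # alternate who plays the committee
        reply = critic(CRITIQUE.format(report=report))
        if reply.strip() == "APPROVED":
            consecutive_approvals += 1
            if consecutive_approvals >= 2 and round_no + 1 >= min_rounds:
                return report  # both models signed off: converged
        else:
            consecutive_approvals = 0
            report = reply     # adopt the revision and keep sharpening
    return report              # round budget exhausted; best report so far
```

The `min_rounds=5` floor mirrors the thread's "at least 5 iterations" advice; `max_rounds` is a safety cap so two stubborn models can't loop forever.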
Marius
Marius@marius_vibes·
@gabriberton better yet, hooks that run after every edit and run all your quality checks and contracts between layers, and you can forget about bad code forever
0
0
3
2.4K
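The post-edit-hook idea reduces to running a fixed battery of checks against each touched file and surfacing the failures back to the agent. A hypothetical sketch (the check list is a placeholder, not any specific harness's hook API):

```python
import subprocess
import sys
from typing import Sequence

# Placeholder check battery: a syntax check stands in for your real
# linters, type checkers, and cross-layer contract tests.
CHECKS: list[list[str]] = [
    [sys.executable, "-m", "py_compile"],
]

def run_checks(file_path: str,
               checks: Sequence[Sequence[str]] = CHECKS) -> list[str]:
    """Run every check against the edited file; return failure messages."""
    failures = []
    for cmd in checks:
        result = subprocess.run([*cmd, file_path],
                                capture_output=True, text=True)
        if result.returncode != 0:
            failures.append(f"{' '.join(cmd)}: {result.stderr.strip()}")
    return failures
```

A post-edit hook would call `run_checks(edited_path)` and exit non-zero when the list is non-empty, so the agent sees the failures immediately instead of accumulating bad code.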
Gabriele Berton
Gabriele Berton@gabriberton·
Vibe coding creates lots of dead code. Run this often. You're welcome --- Delete all dead code. Use ruff and vulture ---
160
264
6.4K
692.6K
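ruff and vulture are real CLIs; the prompt just tells the agent to drive them. A hedged sketch of the underlying invocations (selecting F401/F841 for unused imports and variables is my assumption about what "dead code" covers here):

```python
import subprocess

def dead_code_commands(path: str = ".") -> list[list[str]]:
    """The commands a 'delete all dead code' pass would drive."""
    return [
        # ruff: unused imports (F401) and unused variables (F841), auto-fixed
        ["ruff", "check", "--select", "F401,F841", "--fix", path],
        # vulture: reports unused functions, classes, and attributes
        ["vulture", path],
    ]

def run_dead_code_pass(path: str = ".") -> None:
    """Run both tools; report findings without hard-failing the build."""
    for cmd in dead_code_commands(path):
        subprocess.run(cmd, check=False)
```

Note the asymmetry: ruff can safely auto-remove the mechanical cases, while vulture only reports candidates, since "unused" code may still be reached dynamically and deserves a human (or agent) decision.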