Marco Marti
@marcodelic256
AI, building, Markets & Sound Founder | @FeelClearme

I'm 22 years old and Claude Code is deteriorating my brain. Every single day for the last 6 months I've had 6 to 8 Claude Code terminals open, waiting for a response just so I can hit 'enter' 75% of the time. And it's doing something to me.

In convos with a couple of friends, it's been a point that's been brought up pretty frequently. None of us feel as sharp as we used to. I don't know if it's just us, or if others in their 20s are feeling the same thing, but it's something I've been thinking about a lot.

P.S. I know this is a problem with my reliance on it and how I use it, not Claude Code itself, but the effects are real nonetheless.

n8n's official Claude Code connector can now create and edit workflows! This goes way beyond plugging an API into MCP. It's purpose-built for LLMs. Includes a new workflow TypeScript SDK so workflows are written as code instead of JSON, with more reliable validation. Works anywhere MCPs are supported (n8n 2.18.5+). 🔗 Full video: bit.ly/42Gi0VO


In the last four Claude Code CLI releases, we’ve shipped 50+ stability and performance fixes. Faster resume, stable auth, lower memory, fewer hangs: 🧵


GPT-5.5 by @OpenAI is now live in the Arena, landing across multiple leaderboards. Here's how it ranks by modality:
- Code Arena (agentic web dev): #9, a strong +50pt jump over GPT-5.4
- Document Arena (analysis & long-content reasoning): #6, on par with Sonnet 4.6
- Text Arena: #7 (Math: #3, Instruction Following: #8)
- Expert Arena: #5
- Search Arena: #2
- Vision Arena: #5
Strong, well-rounded performance, especially in Code (+50 pts vs GPT-5.4). Congrats to @OpenAI on the release. Full category breakdowns by modality in the thread.

🚀 DeepSeek-V4 Preview is officially live & open-sourced! Welcome to the era of cost-effective 1M context length.
🔹 DeepSeek-V4-Pro: 1.6T total / 49B active params. Performance rivaling the world's top closed-source models.
🔹 DeepSeek-V4-Flash: 284B total / 13B active params. Your fast, efficient, and economical choice.
Try it now at chat.deepseek.com via Expert Mode / Instant Mode. API is updated & available today!
📄 Tech Report: huggingface.co/deepseek-ai/De…
🤗 Open Weights: huggingface.co/collections/de…
1/n

With GPT-5.5, Codex now gets more of the job done across the browser, files, docs, and your computer. We've expanded browser use so Codex can interact with web apps, test flows, click through pages, capture screenshots, and iterate on what it sees until it completes the task.




When you're brainstorming on something that is really important and complex, with different files, reports, data and so on, a really good prompt that I found for Opus or GPT 5.4 XHigh is to say: now spawn a sub-agent, brief it on what you did, and ask it to: review your work; propose a different angle; challenge you; propose fixes if needed.
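As an illustration, the instruction above could be phrased something like this. The exact wording is mine, not the original poster's; it's just one way to write out the same four asks:

```
Now spawn a sub-agent. Brief it on everything you just did: the files,
reports, and data involved, and the conclusions you reached. Then ask it to:
1. Review your work for mistakes or gaps.
2. Propose a different angle you haven't considered.
3. Challenge your assumptions directly.
4. Propose concrete fixes if anything needs them.
Report back with its feedback and how you'd revise your answer.
```

The point of the pattern is forcing a fresh context to critique the work, rather than asking the same context to grade itself.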




👋 Is there a specific issue you're hitting? If so, would you mind running /feedback and sharing the ID here? That would be most helpful for debugging. There were a number of harness changes that may have caused this, all of which are fixed in the latest release (the last known issue was fixed in 2.1.116 today). We will be sharing more in a bit, and have also shared a few updates on X/Threads as we've been investigating. General tips:
1. Use Opus 4.7 + xhigh/max effort
2. Make sure you're using the latest version of Claude Code (currently 2.1.116)




