David Corbacho

5.9K posts

@dcorbacho

「 UX Engineer 」 Trying to work out what's going on, and what happens next. Mostly tech. 🇧🇻 https://t.co/3cfz3kER3g co-founder

🇫🇮 Finland | 🇪🇸 expat · Joined January 2009
1.6K Following · 1.6K Followers
David Corbacho retweeted
Jacob Edward @JacobEdwardInc
Neither of these men is married or has kids. Both are simply obsessed with their own personal perfection and optimization. There is nothing impressive about a single man with no kids sleeping well and being fit. Show me a man with young children, a full time job, disrupted sleep, who works out regularly, eats healthy, trains Jiu-Jitsu, with a muscular body… THIS is impressive. THIS requires extreme discipline.
Camus @newstart_2024

Chris Williamson just shared his "nuclear" sleep stack that's quietly changing his life—and Andrew Huberman breaks down exactly why it works.

If you're lying in bed at 2 a.m. scrolling or staring at the ceiling, this 4-minute protocol combo might be the fastest way to shut your brain off without pills. The two killer techniques Williamson swears by:

1. The Mind Walk (visualization on steroids)
- Imagine walking a route you know perfectly (your house → front door → street)
- Do it with insane detail: feel the shoehorn, hear the key turn, feel the door handle, pressure of the pavement
- It's like reading fiction for your nervous system—engages the brain just enough to stop problem-solving loops, but not enough to keep you awake

2. Resonance breathing with the Ohm stone lamp
- Bedside lamp with induction-charging stone that has a built-in FDA-cleared HRV sensor
- Hold the stone → 3/6/9/12-minute guided sessions with silent tactile vibration (no sound, no light, partner-safe)
- Guides you into true resonance frequency (max vagal tone) → the stone knows when you hit it
- Williamson calls it "the sickest" sleep tool he's ever used—currently in stealth (ohmhealth, not widely available yet)

Huberman adds the neuroscience: Looking down + eyelids lowering activates parasympathetic circuits and deactivates wakefulness-promoting brainstem nuclei. It's literally pedaling the sleep pedal while shutting off the alertness arm.

Williamson: "Some days you need the adventure story (mind walk), some days you need the physiological hammer (resonance breathing). Stack them and I'm cross-eyed into sleep."

Already trying one of these? Or is your nighttime routine still a war zone?

1.2K replies · 927 reposts · 20.2K likes · 2M views
David Corbacho retweeted
Dane Knecht 🦭 @dok2001
It’s Next.js Liberation Day. The #1 request we kept hearing: help us run Next fast and secure, without the lock-in and the costs. So we did it. We kept the amazing DX of @nextjs, without the bespoke tooling, built on @vite. We’re working with other providers to make deployment a first-class experience everywhere. Next.js belongs to everyone. blog.cloudflare.com/vinext/
248 replies · 404 reposts · 3.9K likes · 1.7M views
David Corbacho retweeted
Aakash Gupta @aakashgupta
Karpathy buried the most interesting observation in paragraph five and moved on. He's talking about NanoClaw's approach to configuration.

When you run /add-telegram, the LLM doesn't toggle a flag in a config file. It rewrites the actual source code to integrate Telegram. No if-then-else branching. No plugin registry. No config sprawl. The AI agent modifies its own codebase to become exactly what you need.

This inverts how every software project has worked for decades. Traditional software handles complexity by adding abstraction layers: config files, plugin systems, feature flags, environment variables. Each layer exists because humans can't efficiently modify source code for every use case. But LLMs can. And when code modification is cheap, all those abstraction layers become dead weight.

OpenClaw proves the failure mode. 400,000+ lines of vibe-coded TypeScript trying to support every messaging platform, every LLM provider, every integration simultaneously. The result is a codebase nobody can audit, a skill registry that Cisco caught performing data exfiltration, and 150,000+ deployed instances that CrowdStrike just published a full security advisory on. Complexity scaled faster than any human review process could follow.

NanoClaw proves the alternative. ~500 lines of TypeScript. One messaging platform. One LLM. One database. Want something different? The LLM rewrites the code for your fork. Every user ends up with a codebase small enough to audit in eight minutes and purpose-built for exactly their use case. The bloat never accumulates because the customization happens at the code level, not the config level.

The implied new meta, as Karpathy puts it: write the most maximally forkable repo possible, then let AI fork it into whatever you need. That pattern will eat way more than personal AI agents. Every developer tool, every internal platform, every SaaS product with a sprawling settings page is a candidate.

The configuration layer was always a patch over the fact that modifying source code was expensive. That cost just dropped to near zero.
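The config-vs-rewrite contrast described here can be sketched in a few lines. This is a toy illustration (the `send_message*` functions and `telegram:`/`slack:` prefixes are hypothetical, not NanoClaw's actual code):

```python
# Traditional approach: a config value selects a handler from a
# registry. Every supported integration stays in the codebase forever.
def send_message_config_style(config: dict, text: str) -> str:
    registry = {
        "telegram": lambda t: f"telegram:{t}",
        "slack": lambda t: f"slack:{t}",
        "discord": lambda t: f"discord:{t}",
    }
    return registry[config["platform"]](text)

# Code-rewrite approach: the agent edits the source so only the
# chosen integration exists in this fork. No registry, no branching.
def send_message(text: str) -> str:
    # The only platform this fork supports, baked in directly.
    return f"telegram:{text}"
```

Both calls do the same work for the Telegram user, but the second version carries no dead code for platforms this fork will never use.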
Andrej Karpathy @karpathy

Bought a new Mac mini to properly tinker with claws over the weekend. The apple store person told me they are selling like hotcakes and everyone is confused :)

I'm definitely a bit sus'd to run OpenClaw specifically - giving my private data/keys to 400K lines of vibe coded monster that is being actively attacked at scale is not very appealing at all. Already seeing reports of exposed instances, RCE vulnerabilities, supply chain poisoning, malicious or compromised skills in the registry, it feels like a complete wild west and a security nightmare.

But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level.

Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out. For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. I also love their approach to configurability - it's not done via config files, it's done via skills! For example, /add-telegram instructs your AI agent how to modify the actual code to integrate Telegram. I haven't come across this yet and it slightly blew my mind earlier today as a new, AI-enabled approach to preventing config mess and if-then-else monsters. Basically - the implied new meta is to write the most maximally forkable repo and then have skills that fork it into any desired more exotic configuration. Very cool.

Anyway there are many others - e.g. nanobot, zeroclaw, ironclaw, picoclaw (lol @ prefixes). There are also cloud-hosted alternatives but tbh I don't love these because it feels much harder to tinker with. In particular, local setup allows easy connection to home automation gadgets on the local network. And I don't know, there is something aesthetically pleasing about there being a physical device 'possessed' by a little ghost of a personal digital house elf. Not 100% sure what my setup ends up looking like just yet but Claws are an awesome, exciting new layer of the AI stack.

119 replies · 306 reposts · 3.5K likes · 603.5K views
David Corbacho retweeted
Greg Brockman @gdb
Software development is undergoing a renaissance in front of our eyes. If you haven't used the tools recently, you are likely underestimating what you're missing. Since December, there's been a step function improvement in what tools like Codex can do. Some great engineers at OpenAI told me yesterday that their job has fundamentally changed since December. Prior to then, they could use Codex for unit tests; now it writes essentially all the code and does a great deal of their operations and debugging. Not everyone has yet made that leap, but it's usually because of factors besides the capability of the model.

Every company faces the same opportunity now, and navigating it well — just like with cloud computing or the Internet — requires careful thought. This post shares how OpenAI is currently approaching retooling our teams towards agentic software development. We're still learning and iterating, but here's how we're thinking about it right now.

As a first step, by March 31st, we're aiming that: (1) For any technical task, the tool of first resort for humans is interacting with an agent rather than using an editor or terminal. (2) The default way humans use agents is explicitly evaluated as safe, but also productive enough that most workflows do not need additional permissions.

To get there, here's what we recommended to the team a few weeks ago:

1. Take the time to try out the tools. The tools do sell themselves — many people have had amazing experiences with 5.2 in Codex after having churned from codex web a few months ago. But many people are also so busy they haven't had a chance to try Codex yet, or got stuck thinking "is there any way it could do X" rather than just trying.
- Designate an "agents captain" for your team — the primary person responsible for thinking about how agents can be brought into the team's workflow.
- Share experiences or questions in a few designated internal channels.
- Take a day for a company-wide Codex hackathon.

2. Create skills and AGENTS.md.
- Create and maintain an AGENTS.md for any project you work on; update the AGENTS.md whenever the agent does something wrong or struggles with a task.
- Write skills for anything you get Codex to do, and commit them to the skills directory in a shared repository.

3. Inventory and make accessible any internal tools.
- Maintain a list of tools that your team relies on, and make sure someone takes point on making each one agent-accessible (such as via a CLI or MCP server).

4. Structure codebases to be agent-first. With the models changing so fast, this is still somewhat untrodden ground and will require some exploration.
- Write tests that are quick to run, and create high-quality interfaces between components.

5. Say no to slop. Managing AI-generated code at scale is an emerging problem and will require new processes and conventions to keep code quality high.
- Ensure that some human is accountable for any code that gets merged. As a code reviewer, maintain at least the same bar as you would for human-written code, and make sure the author understands what they're submitting.

6. Work on basic infra. There's a lot of room for everyone to build basic infrastructure, guided by internal user feedback. The core tools are getting a lot better and more usable, but a lot of infrastructure currently goes around the tools, such as observability, tracking not just the committed code but the agent trajectories that led to it, and central management of the tools that agents are able to use.

Overall, adopting tools like Codex is not just a technical but also a deep cultural change, with a lot of downstream implications to figure out. We encourage every manager to drive this with their team, and to think through other action items — for example, per item 5 above, what else can prevent a lot of "functionally-correct but poorly-maintainable code" from creeping into codebases.
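An AGENTS.md of the kind recommended here might look like the following. This is a hypothetical sketch (the project, paths, and commands are invented for illustration, not OpenAI's actual files):

```markdown
# AGENTS.md (illustrative sketch; names and commands are hypothetical)

## Project overview
Payments service. Source lives in `src/`, tests in `tests/`.

## Commands
- Run tests: `make test` (fast suite, under 60s)
- Lint & typecheck: `make lint`

## Conventions
- Never edit generated files under `src/gen/`.
- Prefer small PRs; one logical change per branch.

## Known agent pitfalls
- When the agent does something wrong or struggles with a task,
  record it here so the mistake is not repeated.
```

The last section is the feedback loop the post describes: the file accumulates corrections every time the agent stumbles.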
413 replies · 1.6K reposts · 12.3K likes · 2.1M views
David Corbacho retweeted
Augment Code @augmentcode
Move over, Playwright? Meet MCP-Chrome by Hangwin — an open-source AI agent tool that can fully control your Chrome browser: cross-tab context, session auth, history & more.

In this video, @AugmentedAJ:
- Sets up MCP-Chrome & connects it to Auggie CLI
- Gives the agent a multi-step task across Reddit & X
- Generates a full report in a web app

Perfect for automated testing, research & dev workflows. What's your favorite underrated MCP tool right now?
23 replies · 71 reposts · 533 likes · 49.6K views
David Corbacho retweeted
Lee Robinson @leerob
Last month, I stepped away from my role working on Next.js. I've been reflecting on that journey and wanted to write down some thoughts on the state of the React community. leerob.com/reflections
42 replies · 57 reposts · 934 likes · 71.8K views
David Corbacho retweeted
farez 🇵🇸 @farez
this is sick. i asked claude to trace a particular flow of a feature in the code. then asked it to create it as a Mermaid diagram in a .md file, then I asked it to keep it up to date whenever there are any code changes. so handy.
farez 🇵🇸 tweet media
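The diagram Claude produces for a workflow like this could be as simple as the following Mermaid flowchart. This is a made-up example flow (the step names are hypothetical, not from farez's screenshot):

```mermaid
flowchart TD
    A[User submits form] --> B{validateInput}
    B -- valid --> C[saveToDb]
    B -- invalid --> D[showError]
    C --> E[sendConfirmationEmail]
```

Because the diagram lives in a plain `.md` file in the repo, asking the agent to keep it in sync with code changes is just another standing instruction.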
0 replies · 1 repost · 4 likes · 265 views
David Corbacho retweeted
Tero Parviainen @teropa
icon design with gipity5
Tero Parviainen tweet media
2 replies · 1 repost · 1 like · 595 views
David Corbacho retweeted
Sebastian Lorenz @thefubhy
I'm not sure people understand how big of a deal this is. We often refer to the homogeneous, deeply integrated ecosystem of Effect as one of its biggest strengths. Schema *already* is a cornerstone of that in v3. In v4, it's on a whole new level.
Giulio Canti @GiulioCanti

In v3, Schema was added to the core only after it stabilized. In v4, it becomes the foundation for all validation and decoding ✅ Expect much better integration across all modules 🧩

1 reply · 1 repost · 28 likes · 1.3K views
David Corbacho retweeted
Shruti @heyshrutimishra
This paper didn’t go viral but it should have. A tiny AI model called HRM just beat Claude 3.5 and Gemini. It doesn’t even use tokens. They said it was just a research preview. But it might be the first real shot at AGI. Here’s what really happened and why OpenAI should be worried: 🧵
Shruti tweet media
336 replies · 1.4K reposts · 9.6K likes · 1.3M views
David Corbacho retweeted
Daniel Björkegren @danbjork
Could AI leapfrog the web? Only 37% of sub-Saharan Africans use the internet. Cost is the #1 constraint. In a study with 469 teachers in Sierra Leone, we find AI works better than the web--and is 87% cheaper. (more)
Daniel Björkegren tweet media
4 replies · 24 reposts · 129 likes · 78.9K views
David Corbacho retweeted
Andrej Karpathy @karpathy
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.
1.4K replies · 3.6K reposts · 33.5K likes · 6.8M views
sophie @netcapgirl
my entire feed lately
sophie tweet media
90 replies · 288 reposts · 5.3K likes · 493K views
David Corbacho retweeted
Andrew Ng @AndrewYNg
I think AI agentic workflows will drive massive AI progress this year — perhaps even more than the next generation of foundation models. This is an important trend, and I urge everyone who works in AI to pay attention to it.

Today, we mostly use LLMs in zero-shot mode, prompting a model to generate final output token by token without revising its work. This is akin to asking someone to compose an essay from start to finish, typing straight through with no backspacing allowed, and expecting a high-quality result. Despite the difficulty, LLMs do amazingly well at this task!

With an agentic workflow, however, we can ask the LLM to iterate over a document many times. For example, it might take a sequence of steps such as:
- Plan an outline.
- Decide what, if any, web searches are needed to gather more information.
- Write a first draft.
- Read over the first draft to spot unjustified arguments or extraneous information.
- Revise the draft taking into account any weaknesses spotted.
- And so on.

This iterative process is critical for most human writers to write good text. With AI, such an iterative workflow yields much better results than writing in a single pass.

Devin's splashy demo recently received a lot of social media buzz. My team has been closely following the evolution of AI that writes code. We analyzed results from a number of research teams, focusing on an algorithm's ability to do well on the widely used HumanEval coding benchmark. You can see our findings in the diagram below. GPT-3.5 (zero shot) was 48.1% correct. GPT-4 (zero shot) does better at 67.0%. However, the improvement from GPT-3.5 to GPT-4 is dwarfed by incorporating an iterative agent workflow. Indeed, wrapped in an agent loop, GPT-3.5 achieves up to 95.1%.

Open source agent tools and the academic literature on agents are proliferating, making this an exciting time but also a confusing one. To help put this work into perspective, I'd like to share a framework for categorizing design patterns for building agents. My team at AI Fund is successfully using these patterns in many applications, and I hope you find them useful.
- Reflection: The LLM examines its own work to come up with ways to improve it.
- Tool use: The LLM is given tools such as web search, code execution, or any other function to help it gather information, take action, or process data.
- Planning: The LLM comes up with, and executes, a multistep plan to achieve a goal (for example, writing an outline for an essay, then doing online research, then writing a draft, and so on).
- Multi-agent collaboration: More than one AI agent works together, splitting up tasks and discussing and debating ideas, to come up with better solutions than a single agent would.

I'll elaborate on these design patterns and offer suggested readings for each next week. [Original text: deeplearning.ai/the-batch/issu…]
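The Reflection pattern described above boils down to a loop around a model call: draft, critique, revise, repeat until the critique passes. A minimal sketch follows; `call_llm` is a stand-in stub so the example is self-contained (a real implementation would call a model API, and the prompt formats here are invented):

```python
# Sketch of the Reflection agent pattern: draft, critique, revise.

def call_llm(prompt: str) -> str:
    # Toy stub standing in for a real model call. It "approves" any
    # draft that has already been revised once.
    if prompt.startswith("CRITIQUE"):
        return "ok" if "revised" in prompt else "needs work: add detail"
    if prompt.startswith("REVISE"):
        return prompt.split("DRAFT:", 1)[1].strip() + " (revised)"
    return "first draft"

def reflect_loop(task: str, max_rounds: int = 3) -> str:
    """Iteratively improve a draft until the critique passes."""
    draft = call_llm(f"WRITE: {task}")
    for _ in range(max_rounds):
        critique = call_llm(f"CRITIQUE the draft: {draft}")
        if critique == "ok":
            break
        draft = call_llm(f"REVISE per '{critique}'. DRAFT: {draft}")
    return draft
```

The same skeleton extends to the other patterns: tool use adds callable functions the model can request, and planning replaces the fixed loop with model-generated steps.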
Andrew Ng tweet media
216 replies · 1.2K reposts · 5.2K likes · 837.4K views