Mike Dougherty
@doughertym
"The world is full of magic things, patiently waiting for our senses to grow sharper." ― W.B. Yeats


Boil the Oceans

You know the phrase: “don’t boil the ocean.” Everyone’s said it in some overly ambitious meeting. It’s good advice in normal times. It keeps teams focused. It prevents scope creep. But we are no longer in normal times, and I think it’s time to retire it. Artificial Superintelligence means it’s time to boil the ocean. We’ll start with a few lakes first.

I was recently with a university endowment’s head of private investing who told me their engineers were terrified for their jobs after seeing what Claude Code could do. And I get it — that’s the natural first reaction. But it’s the wrong one. It’s a zero-sum reaction to a positive-sum moment. Instead of worrying about doing the same thing we’ve been doing for cheaper, why not focus on doing the thing we never even dreamed of doing? Why can’t that endowment achieve 50% net IRR instead of 10%? Why can’t a startup deliver a service that is 100x better than the incumbent? Why can’t we have fusion energy? Why can’t we talk to every single user and have a perfect understanding of every bug in our product? These aren’t rhetorical questions anymore. They’re engineering problems with paths to solutions.

Here is what I think is actually going on with the fear: our fear of the future is directly proportional to how small our ambitions are. If your plan is to keep doing exactly what you’re doing, then yes, a machine that can do it faster and cheaper is terrifying. But if your plan is to do something dramatically bigger, then the machine is the best news you’ve ever gotten.

If you’re a worker — someone who trades labor for a living — this is the moment to become a builder. Start a business. And if you’re already management or capital, it’s time to go 10x more hardcore on what your aspirations could be. Not eking out 5% efficiency gains. Not increasing profit margins 2% by lowering costs and firing people. Those are the old games.

The new question is: what would it look like to build a product or service so good that people would happily pay 10x what they pay now? The net result of this is more jobs, not fewer. As Ryan Petersen likes to say, the human desire for more things is absolutely limitless. We can actually fulfill that desire now — if we have the agency to prompt it for ourselves.

Buckminster Fuller coined the term “ephemeralization” in 1938: doing more and more with less and less until eventually you can do everything with nothing. His entire vision of progress was about technology enabling a radical expansion of human capability through dematerialization. He traced this from stone bridges to iron trusses to steel cables — each iteration stronger, longer, lighter, cheaper. He wasn’t describing job destruction. He was describing civilization getting better at being civilization.

This is Jevons Paradox for everything. When you make a resource dramatically more efficient, you don’t use less of it — you use vastly more. Steam engines didn’t reduce coal consumption. They made coal so useful that demand exploded. The same thing is about to happen with intelligence, with labor, with every service and product we can imagine.

But Jevons Paradox doesn’t activate on its own. It requires capital and management to actually raise their ambitions: to boil lakes and oceans instead of drowning them in committee. That’s what startups have always been good at: moving fast in the face of radical uncertainty, building for the 10x future while everyone else is optimizing for the 1.05x present. Time to start.



The smartest RL paper I've read this year just dropped on arXiv: Replication Learning of Option Pricing (RLOP). Not because it claims to beat the market, but because it finally stops optimizing for the wrong thing.

Every previous RL trading system I've seen optimizes P&L. Makes sense, right? Except in live markets, you don't die from bad average returns. You die from one catastrophic drawdown when correlations break and your hedge fails. RLOP flips the objective: minimize shortfall probability and Expected Shortfall (tail risk), not expected profit.

The results are telling. On SPY and XOP options, RLOP doesn't just reduce hedging shortfall, it outperforms parametric models during stress events. When volatility spikes and everyone's deltas are wrong, risk-aware RL holds up. Profit-maximizing RL doesn't.

This matters because we're entering the agentic economy era. McKinsey says $3-5T in AI-to-AI commerce in five years. Gartner says AI "machine customers" will control $30T by 2030. CZ predicts agents will make "one million times more payments than humans." The infrastructure is already live: Circle Nanopayments, the x402/a402 protocol, agent wallets on Solana/Sui/Base.

But here's the problem nobody's talking about: if your trading agent optimizes purely for profit and ignores tail risk, the first Black Swan event wipes out months of gains. The agentic economy doesn't just need payment rails and wallets. It needs agents that understand risk the way a professional trader does.

At Beep, we're building exactly this on Sui: RL systems that trade with risk budgets, not just alpha targets. Zero-fee stablecoin rails so agents can rebalance without bleeding on gas.

The tech for machine-to-machine finance exists. The question is whether those machines understand what kills you in real markets. If you're building agent trading systems, read the RLOP paper. Then ask yourself: is your agent optimizing to win on average, or to survive the worst case?
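The objective swap is easy to make concrete. Here is a minimal sketch of Expected Shortfall (CVaR) as an evaluation metric (my illustration, not code from the RLOP paper): two toy strategies with nearly identical average P&L, where only the tail-aware number exposes the fragile one.

```python
import numpy as np

def expected_shortfall(pnl, alpha=0.95):
    """Mean loss across the worst (1 - alpha) fraction of outcomes (CVaR)."""
    losses = np.sort(-np.asarray(pnl))[::-1]             # biggest losses first
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))  # size of the tail
    return losses[:k].mean()

rng = np.random.default_rng(0)
# Two toy strategies with similar average P&L:
steady  = rng.normal(0.01, 0.02, 10_000)                     # small, consistent edge
fragile = np.where(rng.random(10_000) < 0.99, 0.015, -0.40)  # usually wins, rarely blows up

# Mean P&L looks comparable; the tail-aware metric does not:
print(expected_shortfall(steady) < expected_shortfall(fragile))  # True
```

An agent "optimizing to survive the worst case" would constrain or penalize this quantity rather than maximizing mean P&L alone.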

I wrote about the exponential improvement path of AI, the early signs of massive transformations in the nature of work (including software companies where nobody codes anymore), and how one week in February is an omen of our future as things get weirder. open.substack.com/pub/oneusefult…


AI agents will soon graduate to fully-fledged economic actors that buy services, compute, and even data in the course of accomplishing high-level goals. 1-2 years before we start seeing this at scale.

OpenClaw in 2026 is what ChatGPT was in 2022: a viral glimpse into the very near future.

I finally found a way to give my Openclaw the ability to pay for things, without me, and without giving it my credit cards.

Most people don't want to give their Openclaws a credit card (rightfully so), so the next best way to give them their own bank is with stablecoins. And if you have a product with a paywall, you need to give other Openclaws the ability to pay to use it, or you're leaving money on the table.

Beep on SUI is basically Stripe for Agents: with USDC, you can have Openclaw pay for things that it needs and never bug you again.

If you have an Agent-based product: add this line and make more money from other Agents. If you want your Openclaw to be more autonomous: give him some USDC and let him pay for things he needs.
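The pay-to-access loop this implies is simple in shape. Below is a hypothetical sketch of the "402 Payment Required" flow: every name here (`fetch`, `pay`, the invoice fields) is invented for illustration, and none of it is the actual Beep, x402, or Sui API.

```python
# Hypothetical agent-paywall flow: the server answers 402 with an invoice
# until it sees payment; the agent pays from its stablecoin balance and
# retries. All function and field names are made up for this sketch.

def fetch(resource, receipts):
    """Simulated paywalled endpoint: 402 plus an invoice until paid."""
    if resource in receipts:
        return 200, f"content of {resource}"
    return 402, {"amount_usdc": 0.05, "pay_to": "merchant-addr"}

def pay(wallet, invoice):
    """Debit the agent's stablecoin balance; return the payee as a receipt."""
    if wallet["usdc"] < invoice["amount_usdc"]:
        raise RuntimeError("insufficient funds")
    wallet["usdc"] -= invoice["amount_usdc"]
    return invoice["pay_to"]

def agent_get(resource, wallet, receipts=None):
    """Fetch a resource, paying automatically if the paywall asks."""
    receipts = set() if receipts is None else receipts
    status, body = fetch(resource, receipts)
    if status == 402:              # paywall hit: pay the invoice and retry
        pay(wallet, body)
        receipts.add(resource)
        status, body = fetch(resource, receipts)
    return status, body

wallet = {"usdc": 1.00}
print(agent_get("report.pdf", wallet))  # (200, 'content of report.pdf')
```

The point of the shape: the human funds the wallet once, and the 402-pay-retry loop runs without them from then on.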



Nano Banana 2 is out. I had early access for the past few days, and tested it across a ton of prompts. It's leveled up for a bunch of use cases - infographics, ads, action shots, even cartoons. And it's crazy fast! Some styles + prompts you should try 👇

It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually, in the "progress as usual" way, but specifically this last December. There are a number of asterisks, but imo coding agents basically didn't work before December and basically work since. The models have significantly higher quality, long-term coherence, and tenacity, and they can power through large, long tasks, well past the point where it becomes extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home, so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report. It was just done. I didn't touch anything. All of this could easily have been a weekend project just 3 months ago; today it's something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You're not typing computer code into an editor the way things have worked since computers were invented; that era is over. You're spinning up AI agents, giving them tasks *in English*, and managing and reviewing their work in parallel. The biggest prize is in figuring out how to keep ascending the layers of abstraction: setting up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top-tier "agentic engineering" feels very high right now.

It's not perfect. It needs high-level direction, judgement, taste, oversight, iteration, hints and ideas. It works a lot better in some scenarios than others (especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition for decomposing the task just right: hand off the parts that work, and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
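For context on what the agent actually wired up: vLLM serves an OpenAI-compatible HTTP API, so the dashboard's inference calls are plain JSON POSTs. A minimal sketch of building such a request (the host, port, and model tag are my assumptions, not details from the post):

```python
import json

def chat_request(model, prompt, base_url="http://spark.local:8000"):
    """Build the URL and JSON body for an OpenAI-compatible chat call."""
    url = f"{base_url}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

url, body = chat_request("Qwen/Qwen3-VL", "Summarize the last clip.")
print(url)  # http://spark.local:8000/v1/chat/completions
```

Sending it is one `POST` with a `Content-Type: application/json` header; the model's reply comes back under `choices[0].message.content`.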

New in Claude Code: Remote Control. Kick off a task in your terminal and pick it up from your phone while you take a walk or join a meeting. Claude keeps running on your machine, and you can control the session from the Claude app or claude.ai/code



A few years ago, @burkaygur and @gorkem coined the term “generative media platform.” Most people didn’t get it at first. Fast forward to today: AI inference for video, images and audio is one of the fastest growing markets in history. @fal has published the first State of Generative Media report that captures how the category they named has grown up fast. It looks incredible (of course) and there’s tons of interesting data to dig through on enterprise vs. personal adoption, ROI, vertical usage, etc.

So many of the findings they pulled together jumped out, but here are a few:

- Major AI video and image model releases arrived every 4-6 weeks in 2025. The timeline they put together is a good reminder of just how frenetic the pace felt (from the Studio Ghibli era of GPT Image 1 to Nano Banana’s mind-blowing debut).
- Industry adoption of AI-generated media is accelerating. Advertising leads the way at 56% adoption, followed by Entertainment at 43%, Creative tools at 31%, Education at 30%, E-Commerce at 19%, and Architecture & Real Estate at 8%.
- Enterprise deployments are running a median of 14 different models in production. Task-specific optimization models consistently outperform single “omni models,” and teams are really playing around.

The full report is super thorough and worth checking out if you’re deep in this space and want to nerd out. But it’s also a great resource if you haven’t been tracking closely or forget which models are best for which use cases. Tons of food for thought in here on how to push your creative workflows in 2026.



