Brad Noble






It’s very unclear to me what the upper bound on daily token use per person is going to be. Orders of magnitude beyond this, for sure.



The proliferation of AI demands a new "proof of work." Personalized, intelligent spam is still spam. Personalized, intelligent unwanted sales calls are still...unwanted. Humans "just checking in" can now be superpowered and never drop a ball again...which means all communication is going to be so crowded as to be unusable.

Bitcoin's origins date back to a system called Hashcash, proposed by Adam Back in 1997 ("Hashcash was originally proposed as a mechanism to throttle systematic abuse of un-metered internet resources such as email, and anonymous remailers in May 1997").

Postal mail requires a postage stamp, and that small cost prevented abuse. Want to send out a billion letters? That's going to cost you a few hundred million dollars. That explains why you don't get 1000 pieces of physical junk mail every day. But email? Virtually free. Hence subject to abuse.

Hashcash would force the *sender* to do a certain amount of [then!] CPU work, which the recipient could instantly verify. An intentional asymmetry: 20 seconds to send, 0.001 seconds to verify. It never took off because Bayesian and other spam filtering got better, laws like CAN-SPAM were passed, and so on. But I implemented Hashcash back in the day, and thought it was the right solution, since it used the laws of economics to control the problem. Increase cost, decrease supply. Ensure it's not worth the cost unless enough economic value is created.

Fast forward to 2026. AI-powered email, phone calls, text messages, and all other forms of communication are about to explode. And given AI's "computer use" wizardry, everyone can just have AI use existing systems to pump out more, more, more...and look indistinguishable from humans. The Turing test has been rendered essentially obsolete, so we don't need a better Captcha. We need an economic solution.

Bitcoin took proof of work and turned it into a currency / a store of value. One option is to simply "charge" per receipt/connection, to create an economic constraint. Another is to force/throttle based on proof of work in a way that is hopefully brute-force GPU resistant -- which is the exact same thing as "charging," but without a currency. Either way, we are quickly headed towards a communications catastrophe, and rather than forcing agents to get "smarter" and sneak past more filters (a never-ending virus vs. anti-virus battle), there's a real opportunity to create a proof-of-work standard and use an economic solution.
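The mechanic the post describes, costly to mint, instant to verify, is easy to sketch. Here is a minimal Python illustration in the spirit of Hashcash (not Back's actual X-Hashcash stamp format; the 16-bit difficulty and the `mint`/`verify` names are made up for this example):

```python
import hashlib
from itertools import count

def mint(resource: str, bits: int = 16) -> str:
    """Sender's cost: brute-force a nonce so SHA-256(stamp) starts with
    `bits` zero bits. Expected work grows as 2**bits hashes."""
    for nonce in count():
        stamp = f"{resource}:{nonce}"
        digest = hashlib.sha256(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return stamp

def verify(stamp: str, resource: str, bits: int = 16) -> bool:
    """Recipient's cost: one hash. This is the intentional asymmetry."""
    if not stamp.startswith(resource + ":"):
        return False
    digest = hashlib.sha256(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

# Minting takes ~65,000 hashes on average at 16 bits; verifying takes one.
stamp = mint("alice@example.com", bits=16)
assert verify(stamp, "alice@example.com", bits=16)
```

Raising `bits` by one doubles the sender's expected work while the verifier's cost stays one hash, which is exactly the "increase cost, decrease supply" lever.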



Alongside Custom Agents, we're also quietly releasing an "extreme pre-alpha" of something called Notion Workers. I joined Notion dreaming we would one day make it a developer platform, and this is just the start! github.com/makenotion/wor…




*gets up on soap box* With the announcement of this new "code mode" from Anthropic and Cloudflare, I've gotta rant about LLMs, MCP, and tool-calling for a second.

Let's all remember where this started. LLMs were bad at writing JSON, so OpenAI asked us to write good JSON schemas and OpenAPI specs. But LLMs sucked at tool calling, so it didn't matter. OpenAPI specs were too long, so everyone wrote custom subsets.

Then LLMs got good at tool calling (yay!), but everyone had to integrate differently with every LLM.

Then MCP comes along and promises a write-once-integrate-everywhere story. It's OpenAPI all over again: MCP is just OpenAPI with slightly different formatting, and no real justification for redoing the same work we did to make OpenAPI specs, just differently. MCP itself goes through a lot of iteration. Every company ships MCP servers. Hype is through the roof. Yet actual use of MCP is super niche.

But now we hear MCP has problems. It uses way too many tokens. It's not composable. So now Cloudflare and Anthropic tell us it's better to use "code mode," where we have the model write code directly.

Now this next part sounds like a joke, but it's not. They generate a TypeScript SDK based on the MCP server, and then ask the LLM to write code using that SDK. Are you kidding me? After all this, we want the LLM to use the SAME EXACT INTERFACE that human programmers use? I already had a good SDK at the beginning of all this, automatically generated from my OpenAPI spec (shout-out @StainlessAPI). Why did we do all this tool-calling nonsense? Can LLMs effectively write JSON and use SDKs now?

The central thesis of my rant is that OpenAI and Anthropic are platforms, and they run "app stores," but they don't take this responsibility and opportunity seriously. And it's been this way for years. The quality bar is so much lower than the rest of the stuff they ship. They need to invest like Apple does in Swift and Xcode. They think they're an API company like Stripe, but they're a platform company like an OS. I, as a developer, don't want to build a custom ChatGPT clone for my domain. I want to ship ChatGPT and Claude apps so folks can access my service from the AI they already use.

Thanks for coming to my TED talk.
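To make the "we ended up back where we started" point concrete, here is the same hypothetical capability expressed both ways: once as a tool-calling JSON schema handed to the model, and once as a generated SDK method the model writes ordinary code against. All names here (`search_docs`, `GeneratedClient`) are invented for illustration; no real vendor API is shown.

```python
# Style 1: tool-calling era. Describe the capability as a JSON schema,
# then parse the JSON the model emits back, e.g.
# {"name": "search_docs", "arguments": {"query": "hashcash"}}
tool_schema = {
    "name": "search_docs",
    "description": "Full-text search over documents",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

# Style 2: "code mode". Generate a client from the same schema and let
# the model write code against it -- the interface humans had all along.
class GeneratedClient:
    def search_docs(self, query: str) -> list[str]:
        # stub: a real generated client would call the underlying endpoint
        return [f"doc matching {query!r}"]

# The model now just writes:
results = GeneratedClient().search_docs("hashcash")
```

Both styles carry identical information; the difference is only whether the model speaks structured JSON or the same SDK surface a human developer would use, which is the rant's point.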








🚨OpenAI Realtime API released! Demo app is hosted on Val Town 😱 Remix it with one click, link below
