

0xMetaLabs

@0xMetaLabs
Building technologies that power the future of Web3, AI & Cloud from startups to enterprises, globally. Scaling beyond MVP? Let’s connect.







New for financial services: ready-to-run Claude agent templates for building pitches, conducting valuation reviews, closing the books at month-end, and more. Install them as plugins in Cowork and Claude Code, or use our cookbooks to run them in production as Managed Agents.


My guest today is Brian Chesky (@bchesky), founder and CEO of Airbnb and one of the great consumer founders of the last 20 years. Paul Graham coined "founder mode" based on Brian's experience running Airbnb. This conversation is about what comes after it, what he calls AI founder mode, and how it will force founders to focus even more on the details.

We talk about his eleven-star exercise for finding product-market fit, why your first hire should be a recruiter, and why Airbnb's $100B IPO became one of the saddest days of his life. Brian still comes across like the 17-year-old at the Rhode Island School of Design (RISD) who chose to study industrial design. His heroes are all artists: Da Vinci, Van Gogh, Walt Disney, and Steve Jobs, all of whom were working the week they died because they loved what they did. Rick Rubin taught him that an artist is only an artist when they make things for themselves. Now Brian believes AI is the opportunity for all of us to do the same. Enjoy!

Timestamps:
1:00 Studying Industrial Design
11:33 AI Founder Mode
17:02 Lack of Consumer AI Companies
22:10 Small Teams and Focused Problems
30:52 The Evolution from Founder to CEO
38:13 The 11-Star Experience
41:07 AI as a Canvas for Creativity
48:17 Detaching from Success
53:12 Founder-Led Moats
58:34 The Next Chapter of Airbnb
1:03:08 What Endures in the Age of AI
1:06:43 Lessons from Bodybuilding
1:10:20 The CEO's No. 1 Job
1:17:01 Activating Talent
1:20:39 The Kindest Thing











Jeff Bezos on taking big swings:


Peter Thiel, co-founder of Palantir and PayPal, is leading a $140mn investment in a US start-up that plans to use wave energy to fuel giant fleets of floating data centres. ft.trib.al/BxRK2rJ


Introducing SubQ, a major breakthrough in LLM intelligence. It is the first model built on a fully sub-quadratic sparse-attention architecture (SSA), and the first frontier model with a 12-million-token context window, which is:
- 52x faster than FlashAttention at 1MM tokens
- Less than 5% the cost of Opus

Transformer-based LLMs waste compute by processing every possible relationship between words (standard attention), yet only a small fraction of those relationships actually matter. @subquadratic finds and focuses only on the ones that do. That's nearly 1,000x less compute and a new way for LLMs to scale.
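The contrast the post draws (dense attention scores every query-key pair, sparse attention keeps only the few that matter) can be sketched in a few lines. This is a generic top-k sparse-attention illustration, not the actual SSA architecture, which is not publicly specified; all function names and the choice of top-k masking are my own for illustration.

```python
import numpy as np

def dense_attention(Q, K, V):
    """Standard attention: every query attends to every key, O(n^2) scores."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def topk_sparse_attention(Q, K, V, k=8):
    """Sparse variant: each query keeps only its k highest-scoring keys.

    Useful compute scales with n*k rather than n^2. (Illustrative only;
    a real kernel would avoid materializing the full score matrix.)
    """
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    # k-th largest score per query row; mask everything below it.
    kth = np.sort(scores, axis=-1)[:, -k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 64, 16
Q, K, V = rng.standard_normal((3, n, d))
out = topk_sparse_attention(Q, K, V, k=8)
print(out.shape)  # (64, 16)
```

With k fixed, each query touches 8 keys instead of 64 here; at a 12M-token context the same ratio is what turns a quadratic cost into a tractable one.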


BREAKING: Nvidia, $NVDA, and PulteGroup are partnering with Span to install in-home mini data centers. Each packs 16 Blackwell GPUs, 4 AMD EPYC CPUs, and 3TB RAM, powered by unused household electricity for AI inference.
