Sequoia Capital

8.5K posts

Sequoia Capital

@sequoia

We help the daring build legendary companies from idea to IPO and beyond.

Menlo Park, CA · Joined March 2009
1.6K Following · 747.4K Followers
Pinned Tweet
Sequoia Capital@sequoia·
In honor of 50 years of Apple, we're sharing - for the first time ever - Don Valentine's original 1977 memo for Sequoia's investment into Apple Computer. #Apple50
Sequoia Capital tweet media
174 replies · 826 reposts · 5.9K likes · 2.2M views
Sequoia Capital retweeted
Lauren Reeder@laurenmhreeder·
Our friend @bcherny created Claude Code and told me he hasn't written a line of code himself in 2026. His team is living in the future at @AnthropicAI. We talked about why coding is effectively solved, how loops are changing the way we work, and why the printing press is the right analogy for what's coming to software. Hint: it’s going to be a massive value creation opportunity.
00:00 Introduction
00:55 Claude Code Crowd Check
02:39 Origin Story of Claude Code
03:35 From Typeahead to Agents
05:07 Is Coding Solved
06:50 Boris Personal Workflow
08:51 Future Teams and Generalists
10:26 SaaS Apocalypse Predictions
12:57 Audience Q&A Deep Dive
23:35 Closing and What’s Next
2 replies · 5 reposts · 34 likes · 5.5K views
Sequoia Capital retweeted
Demis Hassabis@demishassabis·
Thanks @Konstantine and @sequoia for such a fun and wide-ranging chat! Loved the final question - von Neumann FTW 😀
Konstantine Buhler@Konstantine

Sir @demishassabis has a mind for synthesis. His favorite book is about a grand theory of everything. His preferred philosophers are seen by some as opposites. His life's work ranges from board games to Nobel-winning science. We're grateful to have hosted Demis and his @GoogleDeepMind team at @sequoia AI Ascent last week for a fireside chat. He kindly gave us permission to share this, and you can watch the full video here:
00:00 Intro
00:38 The Common Thread
01:29 Games as AI Training
02:59 Startup Advice 1.0
04:39 Founding DeepMind
07:25 DeepMind and AGI
08:52 AI for Science
10:37 Biology Breakthroughs and Isomorphic
12:42 New Sciences
20:29 Philosophy

57 replies · 77 reposts · 852 likes · 118.8K views
Sequoia Capital retweeted
Liam Corrigan@lmcorrigan1·
Dmitri is one of the most tenacious and admirable leaders in tech. His commitment to manifesting self-driving over 15+ years at Waymo is what is required to build the hardest, most ambitious technology. And it's a joy to experience as a user.
Konstantine Buhler@Konstantine

Waymo vehicles have a 13x lower rate of serious accidents than human drivers. Six generations of hardware. Thousands of innovations in AI and software. 170M+ fully autonomous miles. For 20 years @Dmitri_Dolgov has been focused on making driving safer. While dozens of companies have come and gone, Dmitri persisted. He knows that getting to "good" is easy, getting to "great" is hard and getting to "super human safety" is extreme. Dmitri chooses extreme. My favorite part about him? He’s extremely humble about all this and remains focused on what's next: scaling. We're grateful to have hosted Dmitri and his @Waymo team at @sequoia AI Ascent last week. Full video here:
00:00 Introduction
01:25 Origins
02:45 DARPA Challenge
04:18 Google Self Driving
05:44 Startup Grind
07:11 The AV Hype Cycle
09:47 Waymo World Model
12:41 End-to-End
15:28 Gen 6 Hardware and Scaling
19:53 Safety Stories and the Future

1 reply · 2 reposts · 18 likes · 8.9K views
Sequoia Capital retweeted
Konstantine Buhler@Konstantine·
(Tweet text quoted in full above.)
21 replies · 39 reposts · 327 likes · 57.8K views
Sequoia Capital retweeted
Abhishek Malani@abhishekm1636·
20 million fully autonomous rides. Half of them in the last two months. @dmitri_dolgov on building @Waymo through the AV winter and how the Waymo Driver got to 13x safer than a human.
Konstantine Buhler@Konstantine

(Tweet text quoted in full above.)

1 reply · 2 reposts · 21 likes · 6.6K views
Sequoia Capital retweeted
Alfred Lin@Alfred_Lin·
At @sequoia’s AI Ascent last week, @gdb told me something that stuck: in late 2024, AI wrote ~20% of @OpenAI's code. That number is now 80%. We also got into why human attention, not compute, is the real bottleneck in AI-augmented work, plus what it might mean to run an org of 100,000 agents. We’re grateful to Greg for joining us and many of the top founders/builders in AI. You can watch the full video here:
00:00 Intro
00:49 Compute Hunger Explained
02:13 Scaling Laws Mystery
03:31 New Architectures Ahead
04:42 How Close to AGI
06:46 Startup Playbook for AI
09:24 Inside OpenAI with Codex
11:11 Teams and Governance Shift
14:52 Security and Responsible Deployment
25:33 Science Frontiers and Wrap Up
4 replies · 10 reposts · 111 likes · 16.6K views
Sequoia Capital retweeted
Andrej Karpathy@karpathy·
Fireside chat at Sequoia Ascent 2026 from a ~week ago. Some highlights:

The first theme I tried to push on is that LLMs are about a lot more than just speeding up what existed before (e.g. coding). Three examples of new horizons:
1. menugen: an app that can be fully engulfed by LLMs, with no classical code needed: input an image, output an image, and an LLM can natively do the thing.
2. install .md skills instead of install .sh scripts. Why create a complex Software 1.0 bash script for e.g. installing a piece of software if you can write the installation out in words and say "just show this to your LLM"? The LLM is an advanced interpreter of English and can intelligently target installation to your setup, debug everything inline, etc.
3. LLM knowledge bases as an example of something that was *impossible* with classical code, because it's computation over unstructured data (knowledge) from arbitrary sources and in arbitrary formats, including simply text articles etc.
I pushed on these because in every new paradigm change, the obvious things are always in the realm of speeding up or somehow improving what existed, but here we have examples of functionality that either suddenly perhaps shouldn't even exist (1, 2), or was fundamentally not possible before (3).

The second (ongoing) theme is trying to explain the pattern of jaggedness in LLMs: how it can be true that a single artifact will simultaneously 1) coherently refactor a 100,000-line code base *and* 2) tell you to walk to the car wash to wash your car. I previously wrote about the source of this as having to do with the verifiability of a domain; here I expand on this as having to also do with economics, because revenue/TAM dictates what the frontier labs choose to package into training data distributions during RL. You're either in the data distribution (on the rails of the RL circuits) and flying, or you're off-roading in the jungle with a machete, in relative terms. Still not 100% satisfied with this, but it's an ongoing struggle to build an accurate model of LLM capabilities if you wish to practically take advantage of their power while avoiding their pitfalls, which brings me to...

Last theme is the agent-native economy: the decomposition of products and services into sensors, actuators and logic (split up across all of 1.0/2.0/3.0 computing paradigms), how we can make information maximally legible to LLMs, some words on the quickly emerging agentic engineering and its skill set, related hiring practices, etc., possibly even hints/dreams of fully neural computing handling the vast majority of computation with some help from (classical) CPU coprocessors.
Stephanie Zhan@stephzhan

@karpathy and I are back! At @sequoia AI Ascent 2026. And a lot has changed. Last year, he coined “vibe coding”. This year, he’s never felt more behind as a programmer. The big shift: vibe coding raised the floor. Agentic engineering raises the ceiling. We talk about what it means to build seriously in the agent era. Not just moving faster. Building new things, with new tools, while preserving the parts that still require human taste, judgment, and understanding.

247 replies · 714 reposts · 5.4K likes · 747.9K views
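Karpathy's second example above ("install .md skills instead of install .sh scripts") can be made concrete with a toy sketch. Everything here is hypothetical and not from any real tool: the "installer" is plain English prose, and the only classical code left is the glue that hands that prose to an agent.

```python
# Toy sketch of the ".md skill" idea: the installer is English,
# and an LLM-based agent acts as the interpreter. All names here
# (INSTALL_MD, build_agent_prompt, mytool) are hypothetical.
import platform

INSTALL_MD = """\
# Install mytool (hypothetical software)
1. Download the latest release for this operating system.
2. Unpack it into a directory on the PATH.
3. Run `mytool --version` and confirm it prints a version string.
"""

def build_agent_prompt(skill_md: str, os_name: str) -> str:
    """Wrap a markdown 'skill' in a prompt for a coding agent.

    The agent, not a bash script, adapts the steps to the machine:
    picking the right download, fixing PATH issues, debugging inline.
    """
    return (
        f"You are setting up software on {os_name}. "
        f"Follow these instructions, adapting them to this machine "
        f"and debugging any errors you hit:\n\n{skill_md}"
    )

prompt = build_agent_prompt(INSTALL_MD, platform.system())
```

The design point in the tweet is that the same .md works on any machine, because the platform-specific logic lives in the interpretation step rather than in branching script code.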
Sonya Huang 🐥@sonyatweetybird·
Every year for AI Ascent, @gradypb, @Konstantine and I get to share some perspectives on AI and where things are headed. This year's talk was about the arrival of agents and the race to deploy them across the application layer, and how founders can compete in this crazy intense market ("Get MAD!" Moats, Affordance, Diffusion).
00:00 Introduction
01:10 AI Wave Calibration
01:56 Three Differences of AI
04:28 Inflection Points to AGI
07:06 Building on Top Strategy
07:29 MAD Moats Framework
09:33 Affordance and Diffusion
12:07 Agents Are Here Now
13:46 Agent Stack and Trajectory
21:12 Future of Work and Meaning
9 replies · 5 reposts · 99 likes · 28.4K views
Stephanie Zhan@stephzhan·
(Tweet text quoted in full above.)
53 replies · 168 reposts · 1.4K likes · 727.3K views
Konstantine Buhler@Konstantine·
(Tweet text quoted in full above.)
40 replies · 199 reposts · 1.5K likes · 395.4K views
Sequoia Capital@sequoia·
AI Ascent videos out starting tomorrow 👀
Sequoia Capital tweet media
9 replies · 7 reposts · 135 likes · 11.1K views