Utsav Shah

400 posts

@utsav_sha

Engineer at Titan MSP. Previously a founder, staff engineer at Vanta, tech lead at Dropbox

Brooklyn, NYC · Joined June 2016
313 Following · 664 Followers
Utsav Shah retweeted
Molly O’Shea@MollySOShea·
BREAKING: Inside General Catalyst’s $1.5B AI Roll-Up Engine: The Creation Strategy

Marc Bhargava, Managing Director at General Catalyst, leads the Creation Strategies for incubations, transformations, & venture buyouts, as well as leading early-stage crypto & fintech investing. He breaks down one of the most significant shifts happening inside venture & private markets today: AI Roll-Ups.

As one of the 3 largest venture players with ~$40B AUM, General Catalyst has quietly built a $1.5B AI roll-up engine reshaping a $16 trillion market – historically defined by low margins & slow modernization.

To date, Marc has led GC's investments in Long Lake, Rox, Eudia, Titan MSP, Dwelly, Pallet, Kick, Serval, Vivere, Cartesia, Aaru, Civic Roundtable, Physical Intelligence, Agora, etc., as well as sourced Mercor, Windsurf, Together AI.

TIMESTAMPS
(00:00) Marc Bhargava, General Catalyst
(02:54) Inside the $1.5B Creation Strategy & why AI rollups emerged
(05:42) Early successes: Crescendo, Long Lake, Titan MSP, Eudia
(08:15) The 4 categories of task automation
(10:42) How GC narrowed 70 industries → 10
(13:18) The $16T services AI opportunity
(15:46) Competing with traditional PE: long-term compounding vs. short-term flips
(18:11) Funding mechanics: incubate → automation proof → acquisition → scale
(20:33) How GC builds hybrid teams: AI-native + PE + operators
(22:47) Why Fortune 100 AI transformation often fails
(24:55) Will AI shrink or expand the workforce?
(27:22) How GC decides when to build an AI rollup vs. invest in SaaS
(29:49) Global expansion & where talent lives today
(32:10) Why AI rollups are still massively underrated & what's coming next
(44:19) Global Expansion & Talent Distribution
(47:13) General Catalyst's Investment Philosophy
(48:56) Looking Ahead: The Future of AI Rollups
14 replies · 36 reposts · 329 likes · 313.1K views
Utsav Shah retweeted
Ryo Lu@ryolu_·
the 9-9-6 local maxima trap

you can optimize for looking busy, hitting metrics, being “productive” – but you might be climbing the wrong hill entirely.

real breakthroughs happen in the spaces between. when you’re walking and your mind wanders. when you sit with a problem long enough that the obvious solutions dissolve and something deeper emerges. when you have the luxury of thinking “what if we’re approaching this completely wrong?”

my process is simple: i’ll open a Notion doc on my phone and just walk. sometimes for hours. the walking rhythm unlocks something – maybe it’s the bilateral movement, maybe it’s getting away from screens, but ideas start connecting in ways they never do at a desk. i’ll quickly jot stuff down as interesting thoughts pass.

then i come back and just sit with the problem. draw some pictures, build it out a bit. no rushing to conclusions. no pressure to ship something by end of day. just... what is this, really? what can it connect to or evolve into? what would this look like if it were in its most beautiful configuration?

once i see it clearly, execution becomes effortless. the focused bursts where you’re completely in flow – that’s when the real work happens. but you can’t force your way there. you have to earn it with the slow, patient thinking first.

the irony is that this “inefficient” approach ships better stuff faster than grinding 12-hour days. but it requires believing that thinking time isn’t wasted time. that walking isn’t procrastination. that sometimes the most productive thing you can do is to not do.

many teams don’t get this. they need to see keyboards clicking and meetings happening. but the best work – the stuff that actually moves the needle – happens in the flow moments when no one’s watching.
martin_casado@martin_casado

Anecdotally, I’ve found the people most vocal and showy about grinding hard (9-9-6) tend to have less throughput than a garden-variety workaholic. I suspect this is because they’ve internalized endurance pace all the time. And lose the ability to sprint when needed.

131 replies · 271 reposts · 3.3K likes · 623.1K views
Utsav Shah@utsav_sha·
As a product builder, it's always most enjoyable to work at a company that creates or redefines a category. There's no one to copy from - so it's encouraged to have folks that deeply understand both customers and technology and can experiment rapidly
0 replies · 0 reposts · 1 like · 185 views
Utsav Shah@utsav_sha·
My favorite part of the internet is running into @EnvoyProxy error messages on popular websites
0 replies · 0 reposts · 2 likes · 226 views
Utsav Shah@utsav_sha·
It's important to understand technology deeply so you can understand its limitations. E.g.: a bigger model != better reasoning. Reasoning introduces latency. Therefore, low-latency use cases (like voice AI) are still capped on reasoning-heavy tasks.
0 replies · 0 reposts · 0 likes · 166 views
Utsav Shah@utsav_sha·
It's easier to build software now, and there are more opportunities to build useful software. It's still hard to build "tasteful" software, and it's still hard to be creative enough to know what software to build.
0 replies · 1 repost · 2 likes · 185 views
Utsav Shah@utsav_sha·
I vibe-coded a voice AI app that acts as a call center for someone running a services business. It's been running for a month, and it's better than their previous $30k/year call center labor spend. The infrastructure cost last month was $25. AI is definitely taking jobs.
1 reply · 0 reposts · 7 likes · 304 views
Utsav Shah@utsav_sha·
Startup ideas don't matter. But every billion-dollar startup had a billion-dollar idea that they focused on at some point. Does this idea enable you to learn and be creative enough to come to that billion-dollar idea? Is it a good idea space? That matters.
0 replies · 0 reposts · 2 likes · 217 views
Utsav Shah retweeted
Shensi Ding@shensi·
to the person who said that we would never close a specific, huge, company-defining deal. fuck you. we closed it this morning. startups are about hope and persistence. startups are about getting punched in the face, stabbed in the back, and still getting up. this is why you'll never be a successful founder. this is why you didn't cut it at an earlier stage startup. you're dead to me. and you were wrong.
126 replies · 23 reposts · 847 likes · 113.1K views
Utsav Shah retweeted
Sugu Sougoumarane@ssougou·
Joining Supabase

For some time, I've been considering a Vitess adaptation for Postgres, and this feeling had been gradually intensifying. The recent explosion in the popularity of Postgres has fueled this into a full-blown obsession.

As these databases grow, users are going to face a hard limit once they max out the biggest available machine. The project to address this problem must begin now, and I'm convinced that Vitess provides the most promising foundation.

After exploring various environments, I found the best fit with Supabase. I’m grateful for how they welcomed me. Furthermore, their open-source mindset, fully remote work culture, and, most importantly, the empathetic leadership of @kiwicopple resonated with me. Now, it’s time to make this happen.

Regarding PlanetScale

You might wonder why I didn’t consider building this at PlanetScale. After nearly three years away, I've come to recognize that it’s a different company now, with its own priorities and vision. I had to draw a line. It required some introspection, but I finally shifted my perspective from "What should the co-founder of PlanetScale do?" to "What should the co-creator of Vitess do?". Once I framed the question this way, the answer became clear.
34 replies · 53 reposts · 469 likes · 83.8K views
Utsav Shah@utsav_sha·
Why is Kumail Nanjiani the keynote speaker at Datadog's conference? @datadoghq
1 reply · 0 reposts · 0 likes · 268 views
Utsav Shah@utsav_sha·
Sonnet 4 is great at vibe-refactoring code. AI is coming for our jobs...
0 replies · 0 reposts · 1 like · 177 views
Utsav Shah retweeted
Gautam Kedia@thegautam·
TL;DR: We built a transformer-based payments foundation model. It works.

For years, Stripe has been using machine learning models trained on discrete features (BIN, zip, payment method, etc.) to improve our products for users. And these feature-by-feature efforts have worked well: +15% conversion, -30% fraud. But these models have limitations. We have to select (and therefore constrain) the features considered by the model. And each model requires task-specific training: for authorization, for fraud, for disputes, and so on.

Given the learning power of generalized transformer architectures, we wondered whether an LLM-style approach could work here. It wasn’t obvious that it would—payments is like language in some ways (structural patterns similar to syntax and semantics, temporally sequential) and extremely unlike language in others (fewer distinct ‘tokens’, contextual sparsity, fewer organizing principles akin to grammatical rules).

So we built a payments foundation model—a self-supervised network that learns dense, general-purpose vectors for every transaction, much like a language model embeds words. Trained on tens of billions of transactions, it distills each charge’s key signals into a single, versatile embedding. You can think of the result as a vast distribution of payments in a high-dimensional vector space. The location of each embedding captures rich data, including how different elements relate to each other. Payments that share similarities naturally cluster together: transactions from the same card issuer are positioned closer together, those from the same bank even closer, and those sharing the same email address are nearly identical.

These rich embeddings make it significantly easier to spot nuanced, adversarial patterns of transactions; and to build more accurate classifiers based on both the features of an individual payment and its relationship to other payments in the sequence.

Take card-testing. Over the past couple of years traditional ML approaches (engineering new features, labeling emerging attack patterns, rapidly retraining our models) have reduced card testing for users on Stripe by 80%. But the most sophisticated card testers hide novel attack patterns in the volumes of the largest companies, so they’re hard to spot with these methods.

We built a classifier that ingests sequences of embeddings from the foundation model, and predicts if the traffic slice is under an attack. It leverages transformer architecture to detect subtle patterns across transaction sequences. And it does this all in real time so we can block attacks before they hit businesses. This approach improved our detection rate for card-testing attacks on large users from 59% to 97% overnight.

This has an instant impact for our large users. But the real power of the foundation model is that these same embeddings can be applied across other tasks, like disputes or authorizations. Perhaps even more fundamentally, it suggests that payments have semantic meaning. Just like words in a sentence, transactions possess complex sequential dependencies and latent feature interactions that simply can’t be captured by manual feature engineering.

Turns out attention was all payments needed!
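The structure described in the thread (transactions that share signals embed close together, then a sequence-level score flags bursts of near-identical charges) can be sketched with a toy, hashing-based stand-in for the learned embedding. Everything below is a hypothetical illustration, not Stripe's actual model: the `embed` and `card_testing_score` names and the hashing scheme are invented for this sketch.

```python
import hashlib
import math

def embed(txn: dict, dim: int = 32) -> list[float]:
    # Toy stand-in for the learned foundation model: hash each
    # key=value signal into a dense vector, so transactions that
    # share signals (issuer BIN, email, ...) land close together.
    vec = [0.0] * dim
    for key, value in sorted(txn.items()):
        digest = hashlib.sha256(f"{key}={value}".encode()).digest()
        for i in range(dim):
            vec[i] += digest[i] / 255.0 - 0.5
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Dot product of unit vectors = cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def card_testing_score(txns: list[dict]) -> float:
    # Sequence-level signal: a burst of near-identical small charges
    # (the card-testing signature) yields high mutual similarity.
    embs = [embed(t) for t in txns]
    pairs = [(i, j) for i in range(len(embs)) for j in range(i + 1, len(embs))]
    return sum(cosine(embs[i], embs[j]) for i, j in pairs) / len(pairs)

if __name__ == "__main__":
    burst = [{"bin": "424242", "email": "x@test.com", "amount": a} for a in (1, 2, 3)]
    mixed = [{"bin": "411111", "email": "a@x.com", "amount": 40},
             {"bin": "550000", "email": "b@y.com", "amount": 9},
             {"bin": "370000", "email": "c@z.com", "amount": 75}]
    print(card_testing_score(burst), card_testing_score(mixed))
```

In the real system the hash is replaced by a transformer trained self-supervised over billions of transactions, and the pairwise-similarity score by a learned sequence classifier; the cluster-then-classify shape is the same.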
175 replies · 553 reposts · 5.2K likes · 1.3M views
Utsav Shah retweeted
Daniel Nguyen@daniel_nguyenx·
"Cursor, please fix this small bug" Cursor:
359 replies · 1.5K reposts · 23.4K likes · 3.4M views