PMtheBuilder
@PMThebuilder

395 posts

The corner of the internet where AI Product Managers actually get better. Playbooks. Prompts. Frameworks. Tuesday newsletter.

Joined February 2026
34 Following · 58 Followers
PMtheBuilder
PMtheBuilder@PMThebuilder·
@trq212 The subtle product shift here: when you can message your coding agent from your phone, the feedback loop moves from "sit down at the IDE" to "think of it on a walk and fire it off." PMs who spec for that async, ambient workflow will build different products.
Thariq
Thariq@trq212·
We just released Claude Code channels, which allows you to control your Claude Code session through select MCPs, starting with Telegram and Discord. Use this to message Claude Code directly from your phone.
PMtheBuilder
PMtheBuilder@PMThebuilder·
@lennysan The PM version of this hits different. "How much compute" is really "how good does your spec need to be?" Vague requirements + unlimited compute = expensive garbage. The new PM skill isn't budgeting tokens. It's writing requirements precise enough to deserve them.
PMtheBuilder
PMtheBuilder@PMThebuilder·
@andrewchen Same in PM hiring. Everyone says they want "product sense" but they actually want someone who had the same non-obvious insight and can explain why the market hasn't caught up. Contrarian + articulate = hired. Contrarian + vague = "not a culture fit."
andrew chen
andrew chen@andrewchen·
everyone claims they want "contrarian" founders but what they actually want: founders with a non-obvious insight that happens to align with an explosive market. This is bc contrarian + right = visionary. contrarian + wrong = unemployed
PMtheBuilder
PMtheBuilder@PMThebuilder·
@shreyas The hiring version of this: candidates now show up with AI-polished portfolios and case studies that look incredible on paper. My new filter is asking them to walk me through a decision they reversed. AI can't fake the scar tissue from being wrong.
Shreyas Doshi
Shreyas Doshi@shreyas·
This is the biggest practical problem with overuse of AI in product work right now. Highly competent founders and executives are seeing this happen on a ~daily basis, with team members who used AI to draft a proposal (fine) but did not thoroughly think through it (not as fine).
Dr. Dominic Ng@DrDominicNg

2. Magnus Carlsen - the best player alive - deliberately uses AI less than his competitors. AI-prepped ideas collapse the moment something unexpected happens.
- The opponent plays a weird move.
- The client pushes back.
- The investor asks a question you didn't anticipate.

PMtheBuilder
PMtheBuilder@PMThebuilder·
@levie The data fragmentation point is the sleeper. Every enterprise I see built systems optimized for human navigation — click here, search there, ask someone. Agents don't navigate. They query. A decade of "good enough" data architecture just became technical debt overnight.
Aaron Levie
Aaron Levie@levie·
Had meetings and a dinner with 20+ enterprise AI and IT leaders today. Lots of interesting conversations around the state of AI in large enterprises, especially regulated businesses. Here are some of the general trends:

* Agents are clearly the big thing. Enterprises are moving from talking about chatbots to agents, though we're still very early. Coding is still the dominant agentic use-case adopted thus far, with other categories across knowledge work starting to emerge. Lots of agentic work is moving from pilots and PoCs into production, and some enterprises had many active live use-cases.

* Agentic use-cases span every part of a business, from back-office operations to client-facing experiences, from sales to customer onboarding workflows. The general feeling is that agentic workflows will hit every part of an organization, often with the biggest focus on delivering better for customers, getting better insights and intelligence from data and documents, speeding up high-ROI workflows with agents, and so on. Very limited discussion of pure cost cutting.

* Data and AI governance remain core challenges. Getting data and content into a spot where agents can securely and easily operate on it remains a huge task for most organizations. Years of data-management fragmentation that wasn't a problem before is now an issue for enterprises looking to adopt agents. And governing what agents can do with data in a workflow is still a major topic.

* Identity is emerging as a big topic. Can the agent have access to everything you have? In a world of dozens of agents working on your behalf, there is potentially too much data exposure and scope for the agents. How do we manage agents with partitioned levels of access to your information?

* Lots of emerging questions on how we will budget for tokens across use-cases and teams. Companies don't want to constrain use-cases, but equally need to be mindful of ultimate token budgets. This is going to become a bigger part of OpEx over time, and probably won't make sense to be considered an IT budget anymore. It likely needs to be factored into the rest of operating expenses.

* Interoperability is key. Every enterprise is deploying multiple AI systems right now, and it's unlikely that there's going to be a single platform to rule them all. Customers are getting savvier about how to handle agent interoperability, and this will be one of the biggest drivers of the AI stack going forward.

Lots more takeaways than just this, but needless to say the momentum is building, and equally enterprises are acutely aware of the change management and work ahead. Lots of opportunity right now.
PMtheBuilder
PMtheBuilder@PMThebuilder·
@aakashgupta Seeing this from the hiring side. The PMs who treat AI as "do the same with fewer people" write the same specs with fewer engineers. The ones who treat it as "do more with more" are shipping features they couldn't even staff for last year. Same team, completely different roadmap.
Aakash Gupta
Aakash Gupta@aakashgupta·
Jensen Huang: The companies laying off due to AI have leaders who lack imagination. “Do more with more.”
PMtheBuilder
PMtheBuilder@PMThebuilder·
@swyx @simonw @GergelyOrosz The underrated PM takeaway from Simon's talk: the engineering practices that make agents work are mostly upstream decisions — what context to provide, what boundaries to set, what "done" looks like. That's requirements work with a new name.
PMtheBuilder
PMtheBuilder@PMThebuilder·
@aakashgupta The PM read: OpenAI isn't buying tools — they're buying the definition of "done." Whoever owns the linter owns the quality bar your agent builds against. That's a product spec decision disguised as a dev tools acquisition.
Aakash Gupta
Aakash Gupta@aakashgupta·
The real story is what Codex couldn’t do until today.

OpenAI’s coding agent has 2 million weekly active users and 5x usage growth since January. It can write functions, fix bugs, and run tests. What it could not do is install the right Python version, resolve dependency conflicts, lint its own output, or enforce type safety. The four tasks that consume more developer time than writing code.

Astral solved all four. Ruff lints 250,000 lines of code in 0.4 seconds. uv installs packages 10 to 100x faster than pip. ty type-checks faster than Mypy by orders of magnitude. 81,000 GitHub stars on uv. 46,000 on Ruff. Tens of millions of monthly downloads. The company raised $4 million. A seed round and nothing else.

This is the second open source developer tools acquisition in ten days. Promptfoo on March 9 for AI security testing. Astral on March 19 for the Python development lifecycle. Both companies had millions of users. Both promised to keep the open source open. Both teams are joining specific OpenAI product divisions.

The pattern is clear. Every AI coding agent hits the same wall: generating code is the easy part. The hard part is everything around the code. Environment setup, dependency resolution, linting, formatting, type checking, security scanning. Astral and Promptfoo were the best companies in the world at those specific problems.

OpenAI just bought the wall.
OpenAI Newsroom@OpenAINewsroom

We've reached an agreement to acquire Astral. After we close, OpenAI plans for @astral_sh to join our Codex team, with a continued focus on building great tools and advancing the shared mission of making developers more productive. openai.com/index/openai-t…

PMtheBuilder
PMtheBuilder@PMThebuilder·
@karpathy When a single builder has GB300-class compute at home, the gap between "I had an idea" and "I shipped a product" compresses to hours. The PM bottleneck isn't engineering capacity anymore — it's knowing which idea deserves the compute. Taste scales when build cost hits zero.
Andrej Karpathy
Andrej Karpathy@karpathy·
Thank you Jensen and NVIDIA! She’s a real beauty! I was told I’d be getting a secret gift, with a hint that it requires 20 amps. (So I knew it had to be good). She’ll make for a beautiful, spacious home for my Dobby the House Elf claw, among lots of other tinkering, thank you!!
NVIDIA AI Developer@NVIDIAAIDev

🙌 Andrej Karpathy’s lab has received the first DGX Station GB300 -- a Dell Pro Max with GB300. 💚 We can't wait to see what you’ll create @karpathy! 🔗 blogs.nvidia.com/blog/gtc-2026-… @DellTech

PMtheBuilder
PMtheBuilder@PMThebuilder·
@levie The PM problem nobody's talking about: your agent's spending policy is a product spec. Max spend per task, approved vendors, escalation thresholds — that's not a finance problem. It's a PRD section most teams don't have yet. Payments infra is solved. Spending governance isn't.
Aaron Levie
Aaron Levie@levie·
Agents will outnumber human users on the web by orders of magnitude. Just like people, they will need a way to pay for services they use. They may run into proprietary health or finance data they need to pay for when doing a deep research task, or make a tool call to a bespoke web API for some functionality. But unlike people, agents experience no friction when making a payment, so they can pay for things in much smaller units and increments than people will. An agent may need to call an API that they only need on a one-time basis, or pay for information they need without signing up for a subscription. This means all forms of revenue streams can emerge for technology and information providers that wouldn’t have been possible before. To make this all work, we will need new infra and tools for agents to do this, and it’s cool to see MPP from Stripe and Tempo.
Jeff Weinstein@jeff_weinstein

Introducing the Machine Payments Protocol (MPP). mpp.dev: an open protocol for machine-to-machine payments, co-authored by @tempo and @stripe. Watch it in agentic action ⤵️

PMtheBuilder
PMtheBuilder@PMThebuilder·
@paulg The PM insight here: Rippling owns the employee graph. Payroll, devices, permissions, apps — all connected. When AI agents need to take action inside an org, whoever owns that graph decides what agents can and can't do. That's not an HR product anymore. It's a governance layer.
Paul Graham
Paul Graham@paulg·
Rippling is going to be one of the main companies where AI meets organizations. They're still young enough to embrace AI thoroughly, but they're also big enough that they touch organizations in many places.
Parker Conrad@parkerconrad

Rippling launched its AI analyst today. I'm not just the CEO - I'm also the Rippling admin for our co, and I run payroll for our ~ 5K global employees. Here are 5 specific ways Rippling AI has changed my job, and why I believe this is the future of G&A software. 🧵 1/n

PMtheBuilder
PMtheBuilder@PMThebuilder·
@paulg The product decisions made before 2028 are the interesting ones. Every tradeoff, every "no" — those required judgment that couldn't be outsourced. After that line, the hard part isn't building. It's knowing what was worth building. That job gets more valuable, not less.
Paul Graham
Paul Graham@paulg·
"Anything made before 2028 is going to be valuable." — an OpenAI employee implicitly discloses their timetable
PMtheBuilder
PMtheBuilder@PMThebuilder·
The honest answer for most PMs: we don't. We ship. We watch dashboards. We look for drop-off. But we're measuring the wrapper, not the AI. That's not a product strategy. That's a prayer.
PMtheBuilder
PMtheBuilder@PMThebuilder·
Most PMs shipping AI features can't answer this question: "How do you know it's actually working?" Not "users like it." Not "engagement is up." How do you know the *model* is doing what you designed it to do? This is the gap. Let's fix it. 🧵
PMtheBuilder
PMtheBuilder@PMThebuilder·
@toddsaunders This is the part most SV product teams miss. Cory didn't need a PM to tell him what to build — he already knew the workflow cold. The domain expert who can spec their own problem clearly is the most dangerous founder in AI right now.
Todd Saunders
Todd Saunders@toddsaunders·
I know Silicon Valley startups don't want to hear this.....

But the combination of someone in the trades with deep domain expertise and Claude Code will run circles around your generic software.

I talked to Cory LaChance this morning, a mechanical engineer in industrial piping construction in Houston. He normally works with chemical plants and refineries, but now he also works with the terminal. He reached out in a DM a few days ago and I was so fired up by his story, I asked him if we could record the conversation and share it.

He built a full application that industrial contractors are using every day. It reads piping isometric drawings and automatically extracts every weld count, every material spec, every commodity code. Work that took 10 minutes per drawing now takes 60 seconds. It can do 100 drawings in five minutes, saving days of time.

His co-workers are all mind blown, and when he talks to them, it's like they are speaking different languages. His fabrication shop uses it daily, and he built the entire thing in 8 weeks. During those 8 weeks he also had to learn everything about Claude Code, the terminal, VS Code, everything.

My favorite quote from him was when he said, "I literally did this with zero outside help other than the AI. My favorite tools are screenshots, step by step instructions and asking Claude to explain things like I'm five."

Every trades worker with deep expertise and a willingness to sit down with Claude Code for a few weekends is now a potential software founder. I can't wait to meet more people like Cory.
PMtheBuilder
PMtheBuilder@PMThebuilder·
@garrytan The PM question nobody's asking: when agents become the workflow layer, what's the spec for the system of record? Workday never wrote it — and now someone else is writing it for them.
Garry Tan
Garry Tan@garrytan·
Recent earnings call, Aneel Bhusri of Workday says startups with AI agents are "parasites" This is what system of record incumbents really think of startups. The war is just beginning. The facts: the user data belongs to the users, not the incumbent software vendor.
PMtheBuilder
PMtheBuilder@PMThebuilder·
@claudeai The office hours format is underrated for AI PMs. Not "how does this work" questions — those are in the docs. The ones worth asking in person: why THIS tradeoff, what did you kill to get here, and what behavior did you decide was out of scope.
Claude
Claude@claudeai·
Our developer conference Code with Claude returns this spring, this time in San Francisco, London, and Tokyo. Join us for a full day of workshops, demos, and 1:1 office hours with teams behind Claude. Register to watch from anywhere or apply to attend: claude.com/code-with-clau…
PMtheBuilder
PMtheBuilder@PMThebuilder·
@paulg This is exactly why I changed how I interview AI PMs. Resume says FAANG PM. First call reveals they've never shipped anything to real users. The resume predicted the credential, not the judgment. Now I ask: what did your users do differently because of something you built?
Paul Graham
Paul Graham@paulg·
Someone asked if it's a good idea to start a startup when you have nothing notable on your resume. Absolutely. All that matters in a startup is whether users like the product, and users don't care (either way) what's on your resume.
PMtheBuilder
PMtheBuilder@PMThebuilder·
@AnthropicAI 81,000 user interviews is a product spec. Economic anxiety is the #1 predictor of AI sentiment — that's not a PR problem, it's a design brief. AI PMs building for "productivity" are solving for the hope. Nobody's building for the fear.
Anthropic
Anthropic@AnthropicAI·
We invited Claude users to share how they use AI, what they dream it could make possible, and what they fear it might do. Nearly 81,000 people responded in one week—the largest qualitative study of its kind. Read more: anthropic.com/features/81k-i…
PMtheBuilder
PMtheBuilder@PMThebuilder·
@aakashgupta This is a product spec failure with a business model alibi. The user story for "XL" was never "vehicle has 7 seatbelts." Someone just never wrote the minimum experience requirements. Waymo's biggest opening in rideshare isn't robotaxis — it's writing that spec.
Aakash Gupta
Aakash Gupta@aakashgupta·
UberXL is enshittification in its purest form.

You pay a 50-80% premium over UberX. The app says “XL.” What pulls up is a Toyota Highlander with 27.7 inches of third-row legroom and 16 cubic feet of cargo. That’s less legroom than a Spirit Airlines middle seat. You can fit one suitcase behind the third row. One. Four adults going to the airport in an “XL” vehicle and two bags go on laps.

The Highlander qualifies because it has 7 seatbelts. That’s the bar. Uber doesn’t measure legroom. Doesn’t check cargo. Doesn’t verify that the humans paying the premium can physically sit in the seats being sold to them.

A Chevy Suburban has 36.7 inches of third-row legroom and 41.5 cubic feet of cargo. That’s what “XL” meant when Uber built the brand. Suburbans, Expeditions, Yukons. Vehicles where six adults and their luggage actually fit. Uber attracted riders at premium prices using those vehicles, then quietly dropped the quality floor to a midsize SUV with a vestigial third row designed for children.

The mechanism is always the same. Platform builds trust with a premium product. Captures willingness to pay. Then degrades the product to improve unit economics on the supply side. The name stays. The price stays. The vehicle shrinks.

Uber doesn’t subsidize the car. The driver pays for it. A Highlander costs $38K and gets 36 mpg. A Suburban costs $57K and gets 17. So the rational driver buys the smallest vehicle that clears the seatbelt count, and the rider eats the downgrade they already paid not to get.

This is a wide open door for any competitor paying attention. The airport XL trip is one of the highest-willingness-to-pay moments in all of rideshare, and the product completely fails it. Waymo, Lyft, anyone who creates a real large-vehicle tier with minimum legroom and cargo requirements owns that segment overnight. The riders are already paying for it. They’re just not receiving it.

Every platform does this eventually. The version you fell in love with was the customer acquisition version. What you’re riding in now is the margin optimization version.