Serrah Linares

9.8K posts


@SerrahL

Inventor. GTM Leader. Forbes contributor. Building at Stealth. Following #tech #growthmarkets #startups. Views my own.

Seattle, WA · Joined July 2009
14.6K Following · 17.4K Followers
Serrah Linares reposted
Aakash Gupta (@aakashgupta)
I don't think most PMs realize the PRD is becoming obsolete. For the last decade, the PM's core artifact was a qualitative spec: clear requirements, user stories, acceptance criteria. The engineering team interpreted it, built something close, and the PM spent two weeks reconciling what shipped with what they wrote.

The best AI companies replaced that entire loop with evals. A set of inputs your product needs to handle. A task that generates outputs. A scoring function that produces a number between 0 and 1. No ambiguity. No interpretation gap.

Ankur Goyal built the eval platform behind Vercel, Replit, Ramp, Notion, and Airtable. An $800M company. He walked through building an eval from zero on this episode, and the score went from 0 to 0.75 in under 20 minutes. That's a PM shipping a measurable quality bar before a single line of product code exists.

Here's the part that changes the PM role permanently. When the product passes the eval and users still hate it, the eval is wrong. That's on the PM. Evals make PM judgment quantifiable in a way PRDs never did. You can't hide behind "the spec was ambiguous." There's a number now.

Six months ago, PM interviews asked "how do you use AI in your workflow?" The next wave of interviews is going to ask you to write an eval. The PMs who can encode user intent as a scoring function are building the one skill that survives every model change, every framework swap, every agent rewrite.

Write the eval.
Aakash Gupta (@aakashgupta)

Evals are the new PRD. The companies building AI products that actually work are running 12.8 eval experiments per day. Here is the playbook with @ankrgyl, Founder and CEO of @braintrust ($800M valuation, behind Vercel, Replit, Ramp, Zapier, Notion, Airtable):

⏱ 1:43 Why vibe checks stop scaling
⏱ 6:35 Evals are the new PRD
⏱ 8:45 The Claude Code evals controversy
⏱ 18:48 Building an eval live from zero
⏱ 29:51 Connecting Linear MCP and iterating
⏱ 39:12 Why you need evals that fail
⏱ 43:36 Offline vs online evals
⏱ 47:40 Three mistakes killing eval culture

The core framework: every eval is exactly three things. A set of inputs your product needs to handle. A task that takes those inputs and generates outputs. A scoring function that produces a number between 0 and 1. We built one from scratch on camera. Score went from 0 to 0.75 in under 20 minutes.
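The three-part structure the thread describes (inputs, a task, a scorer returning a number between 0 and 1) can be sketched in a few lines of plain Python. Everything below is an illustrative stand-in, not Braintrust's actual API: a toy dataset, a stubbed task, and an exact-match scorer.

```python
dataset = [  # inputs the product needs to handle, with expected outputs
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
    {"input": "3*3", "expected": "9"},
]

def task(input_text: str) -> str:
    """Stand-in for the model or product call being evaluated."""
    canned = {"2+2": "4", "capital of France": "Paris", "3*3": "nine"}
    return canned.get(input_text, "")

def score(output: str, expected: str) -> float:
    """Scoring function: 1.0 on an exact match, 0.0 otherwise."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def run_eval(dataset) -> float:
    """Aggregate score across the dataset, always between 0 and 1."""
    results = [score(task(case["input"]), case["expected"]) for case in dataset]
    return sum(results) / len(results)

print(round(run_eval(dataset), 2))  # two of three cases pass -> 0.67
```

Iterating then means fixing either the task (so "3*3" comes back as "9") or the scorer, and watching the number move — the 0-to-0.75 loop described in the episode.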

Serrah Linares reposted
ELON CLIPS (@ElonClipsX)
Jensen Huang: I love the fact that Elon is present at the point of action. He goes where the problem is.

"Elon is deep in so many different topics, yet he's also a really good systems thinker. And so he's able to think through multiple disciplines. And he obviously pushes things, questions everything: number one, is it necessary? Number two, does it have to be done this way? And in other words, does it have to take this long? And so he has the ability to question everything to the point where everything is down to its minimal amount. You can't take anything else out, and yet the necessary capabilities of the product are retained. And so he is as minimalist as you could possibly imagine. And he does it at a system scale.

I also love the fact that he is present at the point of action. He'll just go there. If there's a problem, he'll just go there and ask to see the problem. When you do all of this in combination, you overcome a lot. When you act personally with so much urgency, it causes everybody else to act with urgency."

From: Lex Fridman Podcast, March 23, 2026
Serrah Linares reposted
Nozz (@NoahEpstein_)
Just got off a client call. Within 20 minutes the entire project was scoped, planned, and ready to build. No human touched it. Here's how the backend of my agency actually runs:

1. Client call happens. AI transcribes the full thing in real time.
2. Transcription gets fed to an analysis agent. It breaks down exactly what the client needs, what's feasible, what the blockers are. Not generic meeting notes. Structured analysis against our best practices.
3. From that analysis, another agent generates a full phased delivery plan. Timelines, technical specs, sequencing. Ready to send to the client.
4. Client approves. The AI dev team takes over and builds the entire solution. Workflows, integrations, logic, error handling. The full thing.
5. My human devs step in. They don't build from scratch. They review, catch edge cases, and add the creative nuances that make it exceptional.
6. Polish agent cleans it up. Client ready. Delivered.

From call to scoped project in minutes. From scope to built product in days instead of weeks.

7 people. 5 devs and 2 PMs. The AI handles the repetitive 80%. The humans handle the 20% that needs real expertise and creative thinking. The devs aren't burned out grinding through builds anymore. Every project gets their best work. Quality went up. Speed went up. Margins went through the roof.

This is what people mean when they say AI agents are changing business. Not chatbots answering support tickets. Full end-to-end operations running autonomously with human oversight where it counts.
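The handoff chain the post describes can be sketched as a pipeline of plain functions. Each "agent" below is a stub returning canned data; in a real setup each would call an LLM or a workflow tool. All function names and fields here are illustrative, not the author's actual stack.

```python
def transcribe(call_audio: str) -> str:
    """Step 1: real-time transcription of the client call (stubbed)."""
    return f"transcript of {call_audio}"

def analyze(transcript: str) -> dict:
    """Step 2: structured analysis, not free-form notes: needs,
    feasibility, blockers checked against best practices."""
    return {"needs": ["CRM sync", "reporting dashboard"],
            "feasible": True,
            "blockers": []}

def plan(analysis: dict) -> dict:
    """Step 3: phased delivery plan, ready to send to the client."""
    return {"phases": [{"name": "Phase 1",
                        "scope": analysis["needs"],
                        "timeline_weeks": 2}]}

def run_pipeline(call_audio: str) -> dict:
    """Call recording in, scoped plan out. Steps 4-6 (build, human
    review, polish) happen after client approval, outside this sketch."""
    return plan(analyze(transcribe(call_audio)))

proposal = run_pipeline("client_call.wav")
print(proposal["phases"][0]["scope"])
```

The design point is the interface between stages: each agent consumes the previous agent's structured output, which is what lets humans slot in at step 5 without rebuilding anything.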
Serrah Linares reposted
Antonio Grasso (@antgrasso)
AI is now a CEO-level responsibility. With investment rising and ROI expectations now explicit, managers must identify where AI removes friction, set a baseline, and launch a deployment with measurable impact. Source @BCG Link on.bcg.com/40aGbKS via @antgrasso
Antonio Grasso tweet media
Serrah Linares reposted
Allie K. Miller (@alliekmiller)
Yesterday, I met with Anthropic and OpenAI and Google. (Separately, of course.) And while the conversations were largely confidential, I do want to share some aggregated reflections on the day as well as general SF takeaways. ⬇️

1) Competitive advantage as a solo practitioner really does come from taking action and finding an area with a bit of friction and doubling down. Ex: memory management right now isn't perfect, but allocating an hour to improving that system gives you a ton of leverage over others.

2) SF continues to be the number one place for AI work. I know that's not surprising. I would put New York at a healthy second place. SF tends to be more about crazy agent experiments for the thrill of capability and discovery, and NYC tends to be more about kinda crazy agent experiments to find new ways to make money. Not saying either is better. But I met several people renting two apartments to straddle these worlds. You want the frontier of SF and the enterprise insights of NYC. It's one reason I travel between them so much.

3) All AI labs want to hear more from people. All of them. What are you using it for, what do you like, what do you hate, what do you need. Users have a TON of power over the direction of these tools. Keep testing and tweeting at them!!

4) There is very clearly a third customer cohort that is bubbling and underserved. It's not developers… it's not the business-professional basic users… it's builders. Everyone can build now. It's marketing and sales folks vibe coding. It's legal folks building complex skills. It's a finance expert building a side project. This is a really undertapped customer base. They feel the Cursors of the world are too complex and the doc-summarization tools of the world are too basic.

5) Not sure if it was just sample size, but far fewer people were wearing tech gear compared to when I lived in SF. Everyone was still dressed casually, but I used to see Splunk and Optimizely and Slack and VC gear everywhere. People seem more in stealth swag now.

6) We may soon have our world model moment.

7) Speed of iteration and shipping is faster than I've ever seen. We see the nonstop drops from Anthropic. We see that because of scale, providers can get a much faster feedback loop on products or features that aren't hitting. A lot of 2025 was experimentation, but ever since the OpenClaw moment over the holidays, the releases from all three labs have been more concentrated on… things that sorta look and feel like OpenClaw.

8) Small teams can pull off more than ever before. Small teams are the powerhouses of innovation right now. This means that finding new ways to share knowledge, break silos, and remove duplicate work is going to be even more important. AI agents functioning as actual teammates that support an entire system is key.

9) Build more Skills. Build better Skills.

10) Misinformation on AI tools and leaks spreads FAST. I've seen so many fake stories on these AI labs. Your company needs to actually TEST these tools on your actual use cases to know which models and tools are best, and you need to not make large-scale snap decisions based on a rumor of a rumor of a rumor. We will see more volatility. Plan for it.

11) You can feel the seriousness of this moment. Even during random conversations I had in line at a cafe. Lots of folks worried about job loss and lack of meaning.

12) Mac minis were sold out ;)
Serrah Linares reposted
Luke Pierce (@lukepierceops)
Anthropic and OpenAI are both building PE-backed consulting arms to deploy AI inside companies. Let that sink in for a second. The two companies building the most powerful AI on earth looked at the market and said "businesses can't figure out how to use this. We need to go in and do it for them."

They are literally telling you where the gap is. Companies have access to the best AI models ever built. And most of them are still running on spreadsheets, disconnected tools, and manual processes because nobody showed them how to actually implement it.

That's the whole game right now. Not building better models (obviously) or shipping new features. IMPLEMENTATION. Getting AI inside real workflows. Mapping the processes, building the systems, and making it stick.

I've been doing exactly this for 4 years and have worked with 80+ companies at this point. It started with automation and naturally flowed into AI. And every single engagement starts the same way. Not with AI or automation but with a process map. Because AI alone won't fix broken operations. Companies now understand that. They have not yet seen true ROI from AI.

You have to understand how the business actually runs before you touch a single tool. Where does the data live? Where are the bottlenecks? What's manual that shouldn't be? What breaks when volume goes up? That's the work, and that's what Anthropic and OpenAI just told the entire market is worth billions.

Every company is going AI-first over the next 3-5 years. The demand for people who can actually make that happen is about to be unlike anything we've seen. The labs told you where the gaps are. Now go fill them.
Luke Pierce tweet media
Serrah Linares reposted
The Icahnist (@TheIcahnist)
AI dealmaking is exploding across consulting. Since 2023, the Big 4 and MBBs have inked dozens of AI partnerships, investments, and acquisitions.

Three takeaways:
1. Everyone is racing to own the AI stack via ecosystems.
2. Enterprise agents are being embedded directly into tools like Salesforce, ServiceNow, and Workday.
3. Data + workflow automation are becoming the real differentiators in consulting.
The Icahnist tweet media
Serrah Linares reposted
Alex Vacca (@itsalexvacca)
Most founders say they have an ICP. What they actually have: 1 vague profile.

What they need: 1 ICP. 3 segments. 2 personas. 3 copy variations per targeting combination.

That's 6 lists and 18 tests. That's all you need to find what actually converts. I broke down this entire system in detail. Feel free to copy it.
Alex Vacca (@itsalexvacca)

x.com/i/article/2031…
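The arithmetic behind "6 lists and 18 tests" is a cross product: 3 segments × 2 personas = 6 targeting lists, and 3 copy variations per list = 18 tests. A quick sketch (segment, persona, and copy names are placeholders):

```python
from itertools import product

segments = ["segment_a", "segment_b", "segment_c"]    # 3 segments
personas = ["persona_1", "persona_2"]                 # 2 personas
copy_variations = ["copy_a", "copy_b", "copy_c"]      # 3 copy variations

# Each (segment, persona) pair is one targeting list: 3 * 2 = 6.
lists = list(product(segments, personas))

# Each list gets every copy variation: 6 * 3 = 18 tests.
tests = list(product(segments, personas, copy_variations))

print(len(lists), len(tests))  # 6 18
```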

Serrah Linares reposted
Alex Vacca (@itsalexvacca)
We built 12 Claude Skill files that run our entire GTM operation inside Clay (and I'm giving it all away). Prompts give you generic output. These skill files, on the other hand, are built from hundreds of Clay tables across 80+ B2B clients at $7M ARR. Each one does a specific job:

→ Company Research Agent
→ Personalization Writer
→ ICP Scorer
→ LinkedIn Profile Analyzer
→ Data Cleaner & Normalizer
→ Objection Handler
→ Email Sequence Writer
→ Competitor Analyzer
→ Job Posting Analyzer
→ Technographic Qualifier
→ News & Signal Synthesizer
→ Account Brief Generator

How it works: drop it into Clay → map your columns → run. No prompt engineering. No switching tools. Just output.

Giving the full pack away free. Reply "SKILLS" and I'll send it.
Alex Vacca tweet media
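The post lists only the skill names; the files themselves aren't shown. For readers unfamiliar with the format: a Claude Skill is a folder whose SKILL.md pairs YAML front matter (name, description) with plain-language instructions. A hypothetical sketch of what the "ICP Scorer" skill above might look like — every field and value here is illustrative, not from the actual pack:

```markdown
---
name: icp-scorer
description: Scores a company row against our ICP criteria and returns a 0-100 fit score with reasoning.
---

# ICP Scorer

Given a company's industry, headcount, tech stack, and funding stage:

1. Compare each attribute against the ICP definition below.
2. Weight the attributes: industry 40%, headcount 25%, tech stack 20%, funding 15%.
3. Return a 0-100 score plus a one-sentence justification per attribute.

## ICP definition (example)

- Industry: B2B SaaS
- Headcount: 50-500
- Tech stack: uses a modern CRM (e.g., HubSpot, Salesforce)
- Funding: Seed through Series C
```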
Serrah Linares reposted
Jason ✨👾SaaStr.Ai✨ Lemkin
Per-seat pricing is dying. Usage-based and outcome-based models are winning. AI is accelerating that shift faster than anyone predicted. At SaaStr AI Annual 2026, we break down the new pricing playbooks. May 12-14. SF Bay.
Serrah Linares reposted
Sales Talent Agency (@bestsalesroles)
Most CSMs are effective across 30–50 accounts. Centralized AI can push that to 100s. In his latest editorial for Topline Media @kylecnorton (CRO, @Owner) breaks down why post-sales is the biggest AI opportunity too few GTM teams are talking about: topline.beehiiv.com/p/the-key-ai-d…
Serrah Linares reposted
Bilawal Sidhu (@bilawalsidhu)
Probably the most current look at Palantir's Maven Smart System software. Here's the DoW's Chief AI Officer showing how it works:
Serrah Linares (@SerrahL)
Where Americans moved in 2025
Serrah Linares tweet media
Serrah Linares reposted
Dustin (@r0ck3t23)
Former Google CEO Eric Schmidt just revealed what one programmer does from 7 PM to 4 AM. Wakes up. Eats breakfast. Reviews what got invented overnight. Schmidt: "It's mind boggling."

The concept of the 10x programmer has always existed. That multiplier just became infinite. No longer writing code. Directing autonomous systems that write it for them.

Schmidt: "He said, I write the spec of what I want, and then I write a test function, an evaluation function, and then I turn it on at 7:00 in the evening. When does it finish? Oh, four in the morning. And then he gets up, has breakfast, and then he sees what's been invented."

Absolute deletion of the biological bottleneck. Elite programmers who can architect, parallelize, and control these autonomous systems become infinitely more valuable. Not writing syntax. Directing the machine. Everyone else in the execution loop is now mathematically replaceable.

Schmidt: "It's always been true that the very top programmers were worth ten times more than the ones right below. Those people will become more valuable, not less valuable, because these systems need to be controlled by humans."

And here's the real prediction: what happens to the shape of the entire economy. Schmidt: "You're going to have a relatively small number of very large companies, and a very large number of very small companies, because you don't need as many people."

The middle disappears. AI compresses the headcount. Math eliminates the need. This is the Barbell Economy. The traditional mid-sized enterprise becomes a structural liability overnight. A board dominated by massive hyperscalers providing planetary compute. And millions of hyper-lean, three-person startups using AI agents to generate billion-dollar outputs. Does your company rely on mass human headcount to justify its valuation? You're standing on the collapsing middle of the bridge.

No longer predicting displacement of the junior knowledge worker. Actively measuring it. Stanford research confirms a 20 percent drop in hiring for early-career developers since late 2022. Not a temporary hiring freeze. AI actively writing 70 to 90 percent of a company's product code? The unit economics of the entire engineering department are permanently inverted. Teams that historically required ten junior engineers now run with two senior architects and an AI agent. The entry-level tier is automated out of existence.

Schmidt has been warning about this for two years. The difference now? The numbers are catching up to the prediction. Hiring falling at entry level. Headcount shrinking across white-collar sectors. Organizations that aggressively adopt this ratio monopolize their sector. Ones that move too slowly don't get a second chance to adapt. And by the time the hiring data goes public, the window is already closed.
Serrah Linares reposted
Rohan Paul (@rohanpaul_ai)
New Harvard Business Review research reveals that excessive interaction with AI is causing a specific type of mental exhaustion (or "AI brain fry"), which is particularly hitting high performers who use the tech to push past their normal limits.

A survey of 1,500 workers reveals that AI is intensifying workloads rather than reducing them, leading to a new form of mental fog. While AI is generally supposed to lighten the load, it often forces users into constant task-switching and intense oversight that actually clutters the mind. This mental static happens because you aren't just doing your job anymore; you are managing multiple digital agents and double-checking their work, which creates a massive cognitive burden.

The study found that 14% of full-time workers already feel this fog, with the highest impact seen in technical fields like software development, IT, and finance. High oversight is the biggest culprit, as supervising multiple AI outputs leads to a 12% increase in mental fatigue and a 33% jump in decision fatigue.

This isn't just a personal health issue; it directly impacts companies, because exhausted employees are 10% more likely to quit. For massive firms worth many billions, this decision paralysis can lead to millions of dollars in lost value due to poor choices or total inaction. Essentially, we are working harder to manage our tools than we are to solve the actual problems they were meant to fix.

hbr.org/2026/03/when-using-ai-leads-to-brain-fry
Rohan Paul tweet media