Peter Farago
@peter_farago
1.8K posts

@RunLLM @Hacker0x01 @Acompli @FlurryMobile. Cooking relaxes.

San Francisco Bay Area · Joined November 2008
482 Following · 783 Followers
Peter Farago reposted
Vikram Sreekanti @vsreekanti
Ask Claude to build you a financial model in Excel. You'll get back reasonable structure, plausible assumptions, formulas that link together correctly. Now you have to check it. Do you open every cell and inspect every formula? If you do that, you might as well have built it yourself. If you don't, you're trusting a junior employee who works at superhuman speed but might have encoded some very strange assumptions that didn't stand out at first glance.

Validating agent-generated work is the problem nobody is talking about. Agents have made creation cheap. They haven't made it any easier to know whether what was created is actually right. The bottleneck used to be writing the code, building the model, drafting the document. Now it's checking the output. And our tools — spreadsheets, code review, document editors — were all designed for a world where humans did the creating. None of them are built for the volume or the speed agents produce at.

@profjoeyg and I wrote about this, and what we think validation actually has to look like going forward: open.substack.com/pub/frontierai…
2 replies · 9 reposts · 47 likes · 13.5K views
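The validation problem above can be partially automated: rather than opening every cell, a checker can surface the formulas most likely to hide assumptions, such as hardcoded numeric constants buried inside formulas. A minimal, hypothetical sketch in Python — the regexes and the toy model dict are illustrative, not from the post:

```python
import re

def flag_hardcoded_constants(cells):
    """Flag formula cells that embed numeric literals instead of
    referencing other cells -- a common home for hidden assumptions."""
    flagged = {}
    for ref, formula in cells.items():
        if not formula.startswith("="):
            continue  # plain value, not a formula
        # Strip cell references (e.g. B12, $C$3) so their digits
        # aren't mistaken for hardcoded numbers.
        body = re.sub(r"\$?[A-Z]{1,3}\$?\d+", "", formula)
        constants = re.findall(r"\d+\.?\d*", body)
        if constants:
            flagged[ref] = constants
    return flagged

# Toy "model" as a dict of cell -> formula (invented for illustration).
model = {
    "B2": "=B1*1.07",          # hardcoded 7% growth assumption
    "B3": "=B2-C2",            # pure cell references: fine
    "B4": "=SUM(B1:B3)*0.21",  # hardcoded 21% tax rate
}
print(flag_hardcoded_constants(model))  # {'B2': ['1.07'], 'B4': ['0.21']}
```

A check like this doesn't prove the model is right, but it narrows human review to the cells where strange assumptions tend to live.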
Peter Farago reposted
Vikram Sreekanti @vsreekanti
AI agents shouldn't have a job title.

The entire AI industry is racing to build "AI SDRs," "AI SREs," and "AI SOC analysts." You can't walk through SF without seeing a billboard for one. We get why — customers search for these terms, and if your site doesn't speak their language, you lose the SEO battle before you make your pitch.

But here's the problem: when you name your agent after a job title, you're promising it can do everything that person does. Including the stuff that never made it into the job description. The result is mismatched expectations, eroded trust, and products that underdeliver on their own marketing.

Meanwhile, the agent category with the deepest adoption, the strongest data flywheels, and the most widespread quality? Coding agents. And none of them called themselves an "AI software engineer." That's not a coincidence.

The full post explains why job title thinking constrains what an agent can actually do: open.substack.com/pub/frontierai…
0 replies · 2 reposts · 4 likes · 135 views
Peter Farago reposted
Vikram Sreekanti @vsreekanti
We all know we live in the AI bubble, but the bubble is smaller than you might think. Even at some of the most innovative companies in the world, AI adoption is a big hurdle. Right now, you might be tempted to focus on the people who are excited to adopt — but that might not be a sustainable long-term strategy. Here's why you're going to have to break out of the bubble 👇
2 replies · 3 reposts · 4 likes · 625 views
Peter Farago reposted
Vikram Sreekanti @vsreekanti
The idea that a startup will build an agent to help understand all your enterprise data is really appealing — unfortunately, it's incredibly difficult to defend. Enterprises have data everywhere, and a single front door that helps you find and analyze what you need at the right time is the holy grail. With LLMs, many startups are promising this future. The reality, however, is that these products are indefensible. The frontier model labs are desperately competing for enterprise attention, and they have all the advantages. The full post breaks down why 👇
2 replies · 3 reposts · 8 likes · 746 views
Peter Farago reposted
Vikram Sreekanti @vsreekanti
Predicting the doom of SaaS companies is all the rage right now, but... are they actually going to die? Maybe! Some SaaS companies are very likely to be disrupted. Others have more defensibility. Where do you fall on the spectrum? @profjoeyg and I put together a SaaS extinction test 👇
1 reply · 2 reposts · 2 likes · 113 views
Peter Farago reposted
RunLLM @RunLLM
Reliability expert Heinrich Hartmann thinks most AI SRE tools are solving the wrong problem. The Senior Principal SRE and SREcon EMEA Chair argues the real on-call problem isn't diagnosis. It's that the engineer who gets paged at 3 a.m. has never touched the service that's down. They don't know how it works, what broke last time, or what fixed it. AI can close that gap. Not by replacing the on-call engineer, but by making sure they have the right context before the pager fires. New post on the RunLLM blog: tinyurl.com/4e5z3s67
0 replies · 1 repost · 2 likes · 145 views
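The "right context before the pager fires" idea above amounts to an aggregation step: before the alert reaches a human, gather what broke before and what fixed it for the affected service. A minimal sketch — the data shapes, field names, and `build_briefing` helper are all hypothetical illustration, not RunLLM's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Briefing:
    """Context handed to the on-call engineer alongside the page."""
    service: str
    past_incidents: list = field(default_factory=list)
    runbooks: list = field(default_factory=list)

def build_briefing(service, incident_log, runbook_index):
    """Collect prior incidents and runbooks for one service."""
    past = [i for i in incident_log if i["service"] == service]
    books = runbook_index.get(service, [])
    return Briefing(service, past, books)

# Invented example data standing in for real incident/runbook stores.
incident_log = [
    {"service": "checkout", "cause": "connection-pool exhaustion",
     "fix": "raise pool size, restart workers"},
    {"service": "search", "cause": "stale index", "fix": "reindex"},
]
runbook_index = {"checkout": ["runbooks/checkout-oncall.md"]}

b = build_briefing("checkout", incident_log, runbook_index)
print(len(b.past_incidents), b.runbooks)
```

The point is the shape, not the plumbing: the engineer paged at 3 a.m. starts with the service's history instead of a blank terminal.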
Peter Farago reposted
Vikram Sreekanti @vsreekanti
Engineering leaders are staring at spreadsheets of AI agents, each claiming a 90% score on a different benchmark. But if they’re so good on paper, why do they fail in your actual codebase? Benchmarks measure SAT scores — but agents require on-the-job competency. A sterile, zero-shot score tells you nothing about how an AI handles your specific legacy debt or infrastructure quirks. Next time a vendor shows you a leaderboard, ignore it. Hand them your last outage report instead and ask: How would your agent have helped us here? Hire the specialist for your stack, not the smartest generalist.
2 replies · 3 reposts · 4 likes · 601 views
Peter Farago reposted
Vikram Sreekanti @vsreekanti
Your agents should be doing lots of useless work, because it maximizes the chances that one of the things they do is the right thing. This is the opposite of how people work today, but we're already seeing coding agents let people try lots of things and pick what works. That changes the economics of work — a single task is no longer that expensive or valuable — and those economics are going to extend into the way agents work. @profjoeyg and I break it down 👇
1 reply · 3 reposts · 5 likes · 441 views
Peter Farago reposted
Vikram Sreekanti @vsreekanti
Building an agent in a space that pairs a difficult-to-solve problem with a hard-to-adopt product sounds like setting yourself up for failure. But if you can do it right, it will be a huge win. @profjoeyg and I break down why hard-hard AI agents are some of the most interesting and most defensible products to build 👇 open.substack.com/pub/frontierai…
0 replies · 4 reposts · 4 likes · 962 views
Peter Farago reposted
Vikram Sreekanti @vsreekanti
Our last blog post of the year looks back at how quickly things changed this year — less crazy than previous years, but still pretty fast. We reviewed our predictions for 2025, many of which were accurate but no longer particularly relevant, and shared some lessons learned. Back after the holidays! 👇
1 reply · 2 reposts · 2 likes · 149 views
Peter Farago reposted
Vikram Sreekanti @vsreekanti
Looking back on 2025, lots of things changed in AI, but relatively little shocked us. AI will change a lot, but it might turn out to be a regular, old platform shift (like cloud, mobile, etc.). @profjoeyg and I on why 👇
1 reply · 3 reposts · 3 likes · 472 views
Peter Farago @peter_farago
@sama We’ll see, Sam—we’ll—see.
0 replies · 0 reposts · 1 like · 11 views
Sam Altman @sama
Small-but-happy win: If you tell ChatGPT not to use em-dashes in your custom instructions, it finally does what it's supposed to do!
3.2K replies · 1.1K reposts · 29.1K likes · 7M views
Peter Farago reposted
Vikram Sreekanti @vsreekanti
Trillion dollar data center buildouts are all the rage. Why is all of this kicking off at once? The infrastructure investment we're seeing tells us a lot about the future of inference and the economics of intelligence. @profjoeyg and I break down why intelligence might not be zero marginal cost looking ahead 👇
1 reply · 3 reposts · 5 likes · 547 views
Peter Farago reposted
Vikram Sreekanti @vsreekanti
"Build opinionated products" is not new advice, but it's more important than ever. If you're not careful, your agents can be everything to everyone. That might sound wonderful at first, but it's going to cause you headaches later. Here's why 👇
1 reply · 2 reposts · 4 likes · 1.2K views
Peter Farago reposted
RunLLM @RunLLM
🗓️ ICYMI: Run of the Week | Oct 27, 2025 | AI Tech Debt + AI Code vs Reliability + AI for AI

📌 Can AI Agents Clean Up Our Tech Debt?
As soon as we learn to build better software, we create debt. And not only in code, but also in knowledge, processes, and communication. Vikram Sreekanti and Joseph Gonzalez argue that the next generation of AI agents could quietly clean up these messes in the background, tackling our knowledge debt the same way coding agents refactor code.
👉 Read: tinyurl.com/3u3hmnma

📌 Is AI Code Tanking Site Reliability?
AI coding tools have doubled developer throughput—but they've also flooded production with code that no one truly owns. When 70% of incidents come from changes, accelerating change means accelerating risk. As engineers click Accept faster than they can reason, assumptions rot, context fades, and operational debt piles up. The irony? The teams adopting AI the fastest may be the first to break under the weight of their own success.
👉 Read: tinyurl.com/2sfwnah6

📌 What Happens When AI Serves Other AI? | Ep 48 | LLMs on the Run
"The dynamic between products and their customers is changing. As AIs start interacting with AIs, we have to rethink how we build, optimize, and support technology in an AI-centric world." — Prof Joey Gonzalez
👉 Watch: tinyurl.com/d598ubjb

#RunLLM #LLMsOnTheRun #JosephGonzalez #VikramSreekanti #AgenticAI #AISRE #AIEngineering #AICoding #TechDebt #ReliabilityEngineering #AIUX #AIOps
0 replies · 1 repost · 3 likes · 147 views
Peter Farago reposted
Vikram Sreekanti @vsreekanti
For the most part, everyone's use of AI today is synchronous and interactive... but it doesn't have to be that way. As agents proliferate, we'll see more and more agents working in the background, doing things for us that we didn't want to bother doing ourselves. The most obvious form of this will be knowledge debt: fixing tech debt in your codebase, finding and updating outdated messaging, etc. @profjoeyg and I cover why agents are well-suited to this work and what's left to be done 👇
1 reply · 4 reposts · 5 likes · 862 views
Peter Farago reposted
RunLLM @RunLLM
What happens when your customers aren't people, but other AI? The next generation of successful software won't just use AI. It will serve it.

In Ep 48 of LLMs on the Run, @profjoeyg (Joey) looks at what happens when AI becomes both the provider and the customer — and how that changes everything from system design to support operations.

✅ Support that handles thousands of AI-generated tickets per minute
✅ Infrastructure built for irregular, high-velocity interactions
✅ UX evolving into AIX — experiences built for AI users, not humans

"The dynamic between products and their customers is changing. As AIs start interacting with AIs, we have to rethink how we build, optimize, and support technology in an AI-centric world." — Prof Joey Gonzalez

#LLMsOnTheRun #RunLLM #JosephGonzalez #AISRE #AgenticAI #AIUX #AIX #AIEngineering #AIInfrastructure
0 replies · 1 repost · 2 likes · 144 views
Peter Farago reposted
RunLLM @RunLLM
Your SRE team is about to go bankrupt—and AI coding tools are why.

Every CTO celebrates the productivity gains: 2× throughput, 50% faster development. But AI-generated code enters production with zero ownership. Reading code is not the same as writing it.

The hidden costs:
✅ 70% of production incidents come from changes—and AI just multiplied your change velocity
✅ Engineers click "Accept" without the cognitive weight that comes from actually writing code
✅ Six months later, nobody remembers why it was written that way or what will break when assumptions change
✅ The teams using AI most aggressively will be the first to collapse under operational load
✅ AI adoption doesn't eliminate responsibility; it redistributes it.

👉 Read the full post: tinyurl.com/2sfwnah6

#EngineeringLeadership #SRE #TechnicalDebt #AIcoding #DevOps #CTO
0 replies · 1 repost · 4 likes · 934 views
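The "70% of incidents come from changes" figure above supports a quick back-of-envelope calculation: if change-driven incidents scale with change velocity while the rest stay flat, doubling velocity grows total incident volume by 1.7x. A sketch — the 70% and 2× numbers come from the post; everything else is illustrative arithmetic:

```python
change_driven_share = 0.70   # fraction of incidents caused by changes (per the post)
velocity_multiplier = 2.0    # "2x throughput" (per the post)

# Change-driven incidents scale with velocity; the remainder stays flat.
growth = (1 - change_driven_share) + change_driven_share * velocity_multiplier
print(f"total incident volume grows {growth:.2f}x")  # 1.70x
```

This is why the productivity win and the reliability cost arrive together: the same multiplier applies to both.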