RunLLM

544 posts

@RunLLM

The AI SRE for mission-critical systems that provides transparent investigations, evidence-backed root cause analysis, and continuous runbook improvements.

Joined April 2022
14 Following · 783 Followers
RunLLM retweeted
Vikram Sreekanti @vsreekanti
The idea that a startup will build an agent to help understand all your enterprise data is really appealing — unfortunately, it's incredibly difficult to defend. Enterprises have data everywhere, and a single front door that helps you find and analyze what you need at the right time is the holy grail. With LLMs, many startups are promising this future. The reality, however, is that these products are indefensible. The frontier model labs are desperately competing for enterprise attention, and they have all the advantages. The full post breaks down why 👇
RunLLM retweeted
Vikram Sreekanti @vsreekanti
Predicting the doom of SaaS companies is all the rage right now, but... are they actually going to die? Maybe! Some SaaS companies are very likely to be disrupted. Others have more defensibility. Where do you fall on the spectrum? @profjoeyg and I put together a SaaS extinction test 👇
RunLLM @RunLLM
Reliability expert Heinrich Hartmann thinks most AI SRE tools are solving the wrong problem. The Senior Principal SRE and SREcon EMEA Chair argues the real on-call problem isn't diagnosis. It's that the engineer who gets paged at 3 a.m. has never touched the service that's down. They don't know how it works, what broke last time, or what fixed it. AI can close that gap. Not by replacing the on-call engineer, but by making sure they have the right context before the pager fires. New post on the RunLLM blog: tinyurl.com/4e5z3s67
RunLLM retweeted
Vikram Sreekanti @vsreekanti
Engineering leaders are staring at spreadsheets of AI agents, each claiming a 90% score on a different benchmark. But if they’re so good on paper, why do they fail in your actual codebase? Benchmarks measure SAT scores — but agents require on-the-job competency. A sterile, zero-shot score tells you nothing about how an AI handles your specific legacy debt or infrastructure quirks. Next time a vendor shows you a leaderboard, ignore it. Hand them your last outage report instead and ask: How would your agent have helped us here? Hire the specialist for your stack, not the smartest generalist.
RunLLM retweeted
Vikram Sreekanti @vsreekanti
Your agents should be doing lots of useless work, because it maximizes the chances that one of the things they do is the right thing. This is the opposite of how people work today, but we're already seeing coding agents let people try lots of things and pick what works. That changes the economics of work — a single task is no longer that expensive or valuable — and those economics are going to extend into the way agents work. @profjoeyg and I break it down 👇
RunLLM retweeted
Vikram Sreekanti @vsreekanti
Every agent product needs to be thinking about how to build trust. You might think that the answer is to own more responsibility. But if you look at the company that has done this the best — @cursor_ai — they built trust by minimizing the unit of work, and as a result maximizing the feedback loop. @profjoeyg and I break down Cursor's unfair UX advantage — and how other agents can recreate it 👇
RunLLM retweeted
Vikram Sreekanti @vsreekanti
Building an agent in a space with a difficult-to-solve problem and a hard-to-adopt product sounds like setting yourself up for failure. But if you can do it right, it will be a huge win. @profjoeyg and I break down why hard-hard AI agents are some of the most interesting and most defensible products to build 👇 open.substack.com/pub/frontierai…
RunLLM retweeted
Vikram Sreekanti @vsreekanti
Our last blog post of the year looks back at how quickly things changed this year — less crazy than previous years, but still pretty fast. We reviewed our predictions for 2025, many of which were accurate but no longer particularly relevant, and shared some lessons learned. Back after the holidays! 👇
RunLLM retweeted
Vikram Sreekanti @vsreekanti
Looking back on 2025, lots of things changed in AI, but relatively little shocked us. AI will change a lot, but it might turn out to be a regular, old platform shift (like cloud, mobile, etc.). @profjoeyg and I on why 👇
RunLLM retweeted
Vikram Sreekanti @vsreekanti
We believe in hard work at @RunLLM. We also believe that burning yourself out trying to work 72 hours a week is crazy. On the blog this week, we wrote about the 9-9-6 trap — why you should do things other than work and how you build a company for the long term:
RunLLM retweeted
Vikram Sreekanti @vsreekanti
We wrote about trillion-dollar datacenter buildouts last week — but what's driving that? Increased token demand. This week, @profjoeyg and I wrote about the more important part of the inference economy: skyrocketing demand for tokens. What's driving that, and how do you manage your token use? Check out the post! 👇
RunLLM retweeted
Vikram Sreekanti @vsreekanti
Trillion dollar data center buildouts are all the rage. Why is all of this kicking off at once? The infrastructure investment we're seeing tells us a lot about the future of inference and the economics of intelligence. @profjoeyg and I break down why intelligence might not be zero marginal cost looking ahead 👇
RunLLM retweeted
Vikram Sreekanti @vsreekanti
"Build opinionated products" is not new advice, but it's more important than ever. If you're not careful, your agents can be everything to everyone. That might sound wonderful at first, but it's going to cause you headaches later. Here's why 👇
RunLLM @RunLLM
🗓️ ICYMI: Run of the Week | Oct 27, 2025
AI Tech Debt + AI Code vs Reliability + AI for AI

📌 Can AI Agents Clean Up Our Tech Debt?
As soon as we learn to build better software, we create debt. And not only in code, but also knowledge, processes, and communication. Vikram Sreekanti and Joseph Gonzalez argue that the next generation of AI agents could quietly clean up these messes in the background, tackling our knowledge debt the same way coding agents refactor code.
👉 Read: tinyurl.com/3u3hmnma

📌 Is AI Code Tanking Site Reliability?
AI coding tools have doubled developer throughput—but they’ve also flooded production with code that no one truly owns. When 70% of incidents come from changes, accelerating change means accelerating risk. As engineers click Accept faster than they can reason, assumptions rot, context fades, and operational debt piles up. The irony? The teams adopting AI the fastest may be the first to break under the weight of their own success.
👉 Read: tinyurl.com/2sfwnah6

📌 What Happens When AI Serves Other AI? | Ep 48 | LLMs on the Run
“The dynamic between products and their customers is changing. As AIs start interacting with AIs, we have to rethink how we build, optimize, and support technology in an AI-centric world.” — Prof Joey Gonzalez
👉 Watch: tinyurl.com/d598ubjb

#RunLLM #LLMsOnTheRun #JosephGonzalez #VikramSreekanti #AgenticAI #AISRE #AIEngineering #AICoding #TechDebt #ReliabilityEngineering #AIUX #AIOps
RunLLM retweeted
Vikram Sreekanti @vsreekanti
For the most part, everyone's use of AI today is synchronous and interactive... but it doesn't have to be that way. As agents proliferate, we'll see more and more agents working in the background, doing things for us that we didn't want to bother doing ourselves. The most obvious form of this will be knowledge debt: fixing tech debt in your codebase, finding and updating outdated messaging, etc. @profjoeyg and I cover why agents are well-suited to this work and what's left to be done 👇
RunLLM @RunLLM
What happens when your customers aren’t people, but other AI? The next generation of successful software won’t just use AI. It will serve it.

In Ep 48 of LLMs on the Run, @profjoeyg (Joey) looks at what happens when AI becomes both the provider and the customer — and how that changes everything from system design to support operations.

✅ Support that handles thousands of AI-generated tickets per minute
✅ Infrastructure built for irregular, high-velocity interactions
✅ UX evolving into AIX — experiences built for AI users, not humans

“The dynamic between products and their customers is changing. As AIs start interacting with AIs, we have to rethink how we build, optimize, and support technology in an AI-centric world.” — Prof Joey Gonzalez

#LLMsOnTheRun #RunLLM #JosephGonzalez #AISRE #AgenticAI #AIUX #AIX #AIEngineering #AIInfrastructure
RunLLM @RunLLM
Your SRE team is about to go bankrupt—and AI coding tools are why.

Every CTO celebrates the productivity gains: 2× throughput, 50% faster development. But AI-generated code enters production with zero ownership. Reading code is not the same as writing it.

The hidden costs:
✅ 70% of production incidents come from changes—and AI just multiplied your change velocity
✅ Engineers click "Accept" without the cognitive weight that comes from actually writing code
✅ Six months later, nobody remembers why it was written that way or what will break when assumptions change
✅ The teams using AI most aggressively will be the first to collapse under operational load
✅ AI adoption doesn't eliminate responsibility; it redistributes it.

👉 Read the full post: tinyurl.com/2sfwnah6

#EngineeringLeadership #SRE #TechnicalDebt #AIcoding #DevOps #CTO
RunLLM @RunLLM
🗓️ ICYMI: Run of the Week | Oct 20, 2025
AI with depth + AI finds outages before customers do + Humans slow AI down

⚖️ Are Narrowly Scoped Agents More Valuable Than Generic Ones?
@profjoeyg and @vsreekanti argue that general-purpose agents like meeting notetakers may be sliding toward commoditization — while deeply specialized agents, built with stronger expertise and tighter workflow integration, are proving far more valuable.
👉 Read: tinyurl.com/43s5n9u2

🙀 Can AI Spot Outages Before Your Customers Do?
What’s worse than downtime? Hearing about it from your customers first. Dashboards were green. No alerts were firing. And yet users were the ones flagging the issue. Can AI-powered detection systems catch what others miss before customers do?
👉 Read: tinyurl.com/4tyxnmkk

🎬 When Humans Can’t Keep Up with AI | Ep 47 | LLMs on the Run
“As AI advances, many of the roles humans fill today could become the bottleneck. We need AI that can support the engineers — and the AI engineers — of the future.” — Prof Joey Gonzalez
👉 Watch: tinyurl.com/yfzv4dwv

#LLMsOnTheRun #JosephGonzalez #AIUX #AgenticAI #AISRE #Observability #IncidentResponse #AIEngineering