Andy Walters

2.6K posts

Andy Walters

@andywalters

founder & ceo at https://t.co/PqCFXyWUb6 -- AI agents to grow SaaS businesses

Austin, TX · Joined June 2008
2.1K Following · 7.8K Followers
Fred Lambert@FredLambert·
Tesla fans using the “4-second disengagement” as a gotcha are missing the forest for the trees. Yes, the driver was technically in control of the vehicle at the moment of impact. But she was in control because FSD was already failing by driving too fast ahead of this sharp turn — it was heading straight into a concrete barrier at highway speed with no sign of correcting.

Everyone who has frequently used FSD or Autopilot and paints this 4-second disengagement as a “gotcha” moment is being disingenuous, and that includes Elon Musk. I have tens of thousands of miles on FSD, and I’ve experienced the system coming too fast into a turn at least half a dozen times.

We’ve said this before and we’ll keep saying it: the problem with FSD isn’t what happens when the driver is paying attention and the system works. The problem is what happens when the system gives you every reason to trust it, and then suddenly doesn’t work. The driver has to recognize the failure, assess the situation, decide on a correction, and physically execute it, all in less time than the system needs to create the danger.

Musk and Tesla’s propagandists can point to the logs all they want. The video shows what actually matters: FSD approaching a standard highway curve at full speed with zero indication it was going to navigate it. That’s the failure. Everything that happened after, including the panicked disengagement, is a consequence of that failure.

The framing that this was “manual driving, not FSD” is technically true for the final 4 seconds and deeply dishonest about the full sequence of events. It’s exactly the kind of liability shell game that courts are increasingly rejecting, as that $243 million verdict makes clear. Tesla created the system, sold it as “Full Self-Driving,” and profits from the ambiguity. At some point, it has to own the consequences.
Electrek.co@ElectrekCo

Tesla says FSD was off before Cybertruck crash — but the video tells a different story electrek.co/2026/03/18/tes… by @fredlambert

209 replies · 332 reposts · 3.5K likes · 238.8K views
Andy Walters@andywalters·
@jasonlk The critical question at scale becomes how well these agents understand and tell your value story across all touchpoints. If the agents aren’t singing from the same hymnal, you’re going to get fewer deals with higher churn because the wrong promises are being made.
0 replies · 0 reposts · 0 likes · 138 views
Jason ✨👾SaaStr.Ai✨ Lemkin
If you're just starting with AI agents: one for outbound, one for inbound qualification, one for customer support. Connect them through your CRM. That's it. Don't orchestrate 20 agents in month one. The compounding effect only kicks in after each agent is actually working well. That will take many months.
17 replies · 6 reposts · 66 likes · 7K views
Christos Tzamos@ChristosTzamos·
1/4 LLMs solve research-grade math problems but struggle with basic calculations. We bridge this gap by turning them into computers. We built a computer INSIDE a transformer that can run programs for millions of steps in seconds, solving even the hardest Sudokus with 100% accuracy.
239 replies · 787 reposts · 5.9K likes · 1.6M views
Andy Walters@andywalters·
@Chris_Orlob Yes! This is why we created an AI business case generator that quantifies the cost of inaction, so reps can easily sell on value. Would love to get your feedback. agent.press
0 replies · 0 reposts · 0 likes · 205 views
Chris Orlob@Chris_Orlob·
Most AEs lose deals because they can't build urgency. They find pain. They demo features. They quote price. But they never answer the million-dollar question: "What happens if we do nothing?" Here's how to build the cost of inaction (and close more deals):
7 replies · 4 reposts · 130 likes · 12.9K views
Dustin@r0ck3t23·
Anthropic just said no to the Pentagon. Then their biggest rival backed them up.

The Department of War gave Anthropic a 5:01 PM Friday deadline: drop the safeguards against mass surveillance and fully autonomous weapons, or lose the $200 million contract and get labeled a supply chain risk.

Amodei: “These threats do not change our position. We cannot in good conscience accede to their request.”

“Supply chain risk” is a designation typically stamped on foreign adversaries. It would have derailed every critical partnership Anthropic has. He held the line anyway.

Then Sam Altman went on CNBC. Altman: “I don’t personally think the Pentagon should be threatening DPA against these companies.”

The two fiercest rivals in AI just drew the same red line in public. Simultaneously. No coordination. No joint statement. Just two competitors independently concluding that some lines cannot be crossed.

Altman: “For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety.”

Altman and Amodei declined to clasp hands in a group photo at India’s AI summit last week. Today Altman defended him on live television. 70 OpenAI employees signed an open letter titled “We Will Not Be Divided.” Google engineers voiced support. The industry unified in hours. Trump responded on Truth Social with a six-month federal phaseout of Anthropic’s products.

Here is what this moment actually is. The two companies building the most powerful technology in human history just told the government there are uses of that technology they will not permit. Not for $200 million. Not under threat of the Defense Production Act. Not under any pressure the government can apply.

Mass surveillance of Americans. Fully autonomous weapons operating without human oversight. These are the lines. The architects of superintelligence just declared they answer to something beyond the contract. That has never happened before.
319 replies · 608 reposts · 3.7K likes · 953.7K views
Andy Walters@andywalters·
if you really believe this you are mentally handicapping yourself and your team. the best programmers in the world are not reviewing every line of code their agents write. they are not using it as a copilot. they are, in fact, generating entire features in one shot. what you imagine is impossible even in 5 years is not even 5 years away. just look at the rate of progress.
1 reply · 0 reposts · 0 likes · 145 views
Andy Walters@andywalters·
@russ98593 Doesn’t change our direction, we are building software that’s high stakes and customer facing :)
0 replies · 0 reposts · 2 likes · 19 views
Russell Joshua@russ98593·
@andywalters How does this change your direction? Still provide specific solutions, or enable customers to explore/build for themselves?
1 reply · 0 reposts · 1 like · 21 views
Andy Walters@andywalters·
A non-technical CEO and customer of ours built a churn early warning system using Claude Code. It analyzes his entire user base daily, identifies at-risk customers, and sends Slack notifications to his team. He did it in a weekend. He didn’t pay thousands to a consultant. He didn’t fork over hundreds a month to ChurnZero or another one-size-fits-all solution. He built a solution perfectly fit to his company’s problems.

We’re witnessing a turning point on par with the “ChatGPT Moment” in the AI adoption timeline. The “Claude Code Moment” will be remembered as the moment agents entered the public consciousness, but more importantly as the moment the buy vs. build calculus was irrevocably tipped in favor of BUILD.

No, this does not mean companies should rush to vibe code a Salesforce or Sage replacement. What it does mean is that for critical business problems—where existing market solutions are a poor fit, have high implementation times, or have high costs—rolling your own solution is now a credible alternative.

No, I do not think SaaS is dead. But I do think most SaaS businesses will no longer be competing against other SaaS firms. They will be competing against what an employee with Claude Code can vibe code in two weeks. That employee is brimming with highly idiosyncratic domain expertise, business processes, change management realities, and systems knowledge. This raises the bar tremendously for SaaS companies. Your solution has to be 10x better than what this employee could offer, and Claude Code is getting better by the day.

So what can SaaS companies offer that vibe coders can’t?

- Guaranteed outcomes. The vibe coder cannot guarantee their solution, or point to case studies. A competitive SaaS can sell metrics it can guarantee will go up and to the right.
- Reliability at scale. For high-stakes use cases, e.g. customer-facing ones or those that touch transactions, any business in their right mind will pay a premium for proven software.
- Constant updates. An implicit value of a SaaS contract to a customer is that the solution gets better over time, without the customer having to worry about it. Now more than ever, SaaS companies should make their shipping velocity a major selling point.

None of this is easy for SaaS incumbents. Every one of them should be on high alert, commissioning their team to understand how they are going to compete and win in this new wild west. As I’ve said before, software ate the world, but AI eats software. And AI is just sitting down to dine!
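The churn early-warning workflow described in that post maps to a surprisingly small amount of code. Here is a minimal sketch in Python, with invented usage fields, invented risk weights, and a hypothetical Slack incoming webhook — none of this is the customer's actual implementation, just an illustration of the shape of the thing:

```python
# Hypothetical sketch only: field names, weights, and thresholds are invented.
import json
from urllib import request

def churn_risk_score(user):
    """Score a user 0-100 from simple usage signals (weights are illustrative)."""
    score = 0
    if user["days_since_last_login"] > 14:
        score += 40  # gone quiet
    if user["weekly_sessions"] < 2:
        score += 30  # low engagement
    if user["open_support_tickets"] >= 2:
        score += 30  # mounting frustration
    return score

def at_risk_users(users, threshold=60):
    """Users whose score crosses the alert threshold."""
    return [u for u in users if churn_risk_score(u) >= threshold]

def notify_slack(webhook_url, users):
    """Post a summary of at-risk accounts to a Slack incoming webhook."""
    text = "At-risk accounts today:\n" + "\n".join(
        f"- {u['email']} (score {churn_risk_score(u)})" for u in users
    )
    req = request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)

# In the real workflow this would query the live user base on a daily cron job.
users = [
    {"email": "a@example.com", "days_since_last_login": 20,
     "weekly_sessions": 0, "open_support_tickets": 3},
    {"email": "b@example.com", "days_since_last_login": 1,
     "weekly_sessions": 9, "open_support_tickets": 0},
]
print([u["email"] for u in at_risk_users(users)])  # → ['a@example.com']
# notify_slack("https://hooks.slack.com/services/<your-webhook>", at_risk_users(users))
```

Swap the hardcoded list for a query against the production database and schedule it daily, and you have the whole system.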
3 replies · 1 repost · 14 likes · 831 views
Ethan@EZebroni·
Hey @elonmusk, can you make us a full size SUV? I am trying to convince my wife to have a fourth kid. A full-size SUV will fix the “four kids won’t fit in our Tesla” problem. Help!
702 replies · 278 reposts · 9.4K likes · 998.9K views
Andy Walters@andywalters·
@OnlyCFO We're building an AI agent to conduct conversational post-churn exit interviews & save accounts -- way better than the industry standard form. Would you have any interest in exploring?
0 replies · 0 reposts · 0 likes · 21 views
OnlyCFO@OnlyCFO·
The true cost of churn is brutal. Most people just think about the ARR lost ($150k in this example). But the total cost is MUCH higher. In the example below the cost is $1M if they churn after year 1.
- Didn’t pay back CAC
- Missed expansion opportunities
- 8 yrs of missed profits
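That breakdown reduces to back-of-envelope arithmetic. The $150k ARR and 8-year horizon come from the post; the CAC, margin, and expansion rate below are invented for illustration, so the result only roughly approximates the quoted $1M:

```python
# Illustrative churn-cost math; only the $150k ARR and 8-year horizon
# come from the post. CAC, margin, and expansion rate are assumptions.
arr = 150_000          # annual recurring revenue lost at churn
cac = 150_000          # unrecovered customer acquisition cost (assumed)
profit_margin = 0.50   # gross margin on that revenue (assumed)
expansion_rate = 0.10  # annual expansion had the customer stayed (assumed)
years = 8

missed_profit = 0.0
revenue = arr
for _ in range(years):
    missed_profit += revenue * profit_margin
    revenue *= 1 + expansion_rate  # compounding expansion each year

total_cost = cac + missed_profit
print(f"${total_cost:,.0f}")  # → $1,007,692
```

The point survives any reasonable choice of inputs: compounding missed profit on top of unrecovered CAC dwarfs the headline ARR figure.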
10 replies · 4 reposts · 64 likes · 4.9K views
Hubert Thieblot@hthieblot·
Explain your product in one sentence. Be clear about what it does. No buzzwords. If you can do that, I’ll consider investing. Hit me.
1.2K replies · 23 reposts · 860 likes · 111.9K views
Andy Walters@andywalters·
@dmytring @hthieblot We’re building a single AI agent with “skills” for each phase of the SaaS customer lifecycle:
- Inbound
- Onboarding
- Self-serve support
- Renewals / expansions
Point solutions for these lead to a fragmented CX, and a unified agent is the only path for the coming agentic web.
1 reply · 0 reposts · 0 likes · 30 views
Sawyer Merritt@SawyerMerritt·
Anyone have self-driving stats higher than this on their Tesla?
• Miles driven by the car on its own: 2,013
• Miles driven manually: 0
15% city, 85% highway. Not a single disengagement. A follower sent me this of his Model 3 on FSD V14.
283 replies · 114 reposts · 2.4K likes · 132.2K views
Marc Benioff@Benioff·
I agree Gavin, the NYT distortion of @DavidSacks’ leadership in AI, Crypto & Quantum isn’t journalism—it’s almost strategic sabotage. While Americans bicker, our rivals are studying David’s every move. I’ve known David for decades, and I’ve never seen him sharper or more necessary. We need to remember, America wins the century by elevating builders, not tearing them down. David, let’s win this. 🇺🇸🚀 #Ohana
Gavin Baker@GavinSBaker

Deeply strange @nytimes article about @DavidSacks. Leading in AI is good for America. And there is no way for America to lead in AI without American investors in AI doing well. Irrespective of whether those investors are David’s friends or his enemies. And like everyone who has been in Silicon Valley for a long time, David has enemies in Silicon Valley who are also doing well by investing in AI.

The most disappointing part of the article is that there is an interesting debate to be had about the wisdom of selling deprecated GPUs to China that are 18 months ahead of Chinese domestic alternatives and roughly 15 months behind our state of the art. As someone who is an active investor in national defense and super patriotic, I think this is a good idea, but reasonable minds can disagree, and zero attempt was made to engage with the relevant issues.

From a conflict of interest perspective, I think they are being appropriately managed, and this has been to David’s economic detriment. His defamation attorneys’ letter to the NYTimes makes it clear that an exhaustive, good faith effort was made to divest from all potential conflicts. But it is quasi-impossible for David to fully divest from *every* company he and/or Craft has invested in that might *conceivably* benefit from good AI policy making. At the limit, theoretically every company in America and the American government itself (i.e. government bonds) benefit from good AI policy making. I would guess that most of David’s assets are in private companies - if he were to leave the private sector entirely and put his assets into a blind trust, he would still know what he owns, as they are not liquid. Even if he were to do some dog and pony show of full divestment and a blind trust, does any reasonable person think he would not be able to walk back into Craft with his current economics intact? And everyone who is even remotely qualified to shape AI policy has the same theoretical conflicts of interest.

I am 100% ok with talented citizens being able to have a dual role in the government and the private sector. That is actually the entire point of the SGE program. I think there is an argument to be made that it promotes and incentivizes ethical behavior. The downside of malfeasance for David is enormous, and there is minimal upside relative to what he already has.

Separately, the @nytimes urgently needs to provide remedial math education for these journalists and their editors. The idea that 500,000 GPUs sold to the UAE could generate anywhere near $200 billion in revenue to Nvidia is ridiculous. I look forward to the correction that will be assiduously posted to the @NYTimesPR account, which has 90k followers vs. the main account with 52.8m followers.

I should note that while I do not know David well, we have many good friends in common and I like him personally. More importantly, I am grateful for his service, which has unquestionably cost him a vast amount of money. And my superstar sister-in-law is a partner at Craft, for which David is lucky.

201 replies · 305 reposts · 4.4K likes · 621.5K views
Andy Walters@andywalters·
@kobyjconrad How I spent my Thanksgiving (cold emailing angels) :D
1 reply · 0 reposts · 2 likes · 45 views
Koby Conrad 🌻@kobyjconrad·
Are you taking Thanksgiving off or are you signing SAFEs & wiring funds to the homies? 🥹 Not a shitpost - I’m thankful to have such incredible friends & family. I’ve angel invested $228k in the last 4y, now marked up to over $1.8M. All my returns come from my closest connections.
25 replies · 2 reposts · 134 likes · 10.5K views
Andy Walters@andywalters·
FSD almost certainly saved my family from an accident yesterday: it swerved off-road at 50 mph to avoid a sideswipe.
2 replies · 1 repost · 15 likes · 1.7K views