Larridin

153 posts


@Larridin

Measure, optimize, and maximize the value of AI across your organization.

San Francisco, CA · Joined October 2024
216 Following · 145 Followers
Larridin@Larridin·
Successful AI initiatives and rollouts aren't purely top-down mandates, nor are they solely bottom-up experimentation. They're both.

The top-down piece matters because alignment has to start somewhere. Executive teams need to look at the landscape, make the call that this is the direction, and cascade the message clearly: "This is where things are going. The business needs to adapt."

But mandates alone don't drive adoption. You can tell people to use a tool. You can't make them use it well.

That's where the bottom-up energy comes in. Every organization has people who are already experimenting: building their own workflows, testing new approaches, pushing boundaries before anyone asked them to. They're staying up late because they're curious, not because they were told to.

That energy is an accelerant. The trick is recognizing and channeling it without squashing it under heavy-handed governance, and without ignoring it because it doesn't fit the official roadmap.

Top-down alignment sets direction. Bottom-up enthusiasm provides velocity. One without the other stalls.
Larridin@Larridin·
Morgan Stanley's latest CIO survey asked what categories of software executives most want to consolidate. Application software was number one.

CIOs are holding massive budgets and looking at sprawling tool stacks that grew organically over years. Point solutions multiplied. Now nobody knows what's actually in use or whether it's worth keeping.

The underlying technology is accessible to everyone now. The SaaS incumbents are shipping new features faster than at any point in history. The same capabilities that a startup demo'd last month are rolling out from your existing vendors next quarter.

A point solution has to offer something genuinely new: real capability that your existing platforms can't match. Otherwise, why add another vendor? Another integration? Another security review? Another line item to manage?

This consolidation is a recognition that managing 200 vendors across a limited range of functionality isn't sustainable (and that most of those vendors are building similar tools).
Larridin@Larridin·
New episode of the Larridin AI Impact podcast. Russ sat down with Bask Iyer (former Global CIO at Johnson & Johnson, Honeywell, VMware, and Dell, now CEO of BaskMind) to talk about what AI accountability actually requires at the enterprise level.

What they cover:
→ Why most enterprises don't know which AI agents are running or who authorized them
→ The framework that cuts nine-month POCs down to one month
→ Why having the same tech stack as your competitors isn't a strategy
→ The CFO test: if AI saves time but the bottom line doesn't move, something's wrong

Full episode out now. Link in comments 👇
Larridin@Larridin·
Work sprawl leads to context sprawl.

When your team collaborates across 17 different tools (project trackers, docs, communication channels), your data is fragmented. Your processes are inconsistent. Your institutional knowledge is scattered.

Any system trying to synthesize information, answer questions, or automate workflows needs context to work accurately. If your context is scattered across a dozen tools with inconsistent data quality, the outputs will be wrong. Confidently, plausibly wrong. Garbage in, garbage out. Same as it ever was.

This is why consolidation matters. Fragmented context produces fragmented results.
Larridin@Larridin·
New episode of the Larridin AI Impact podcast. Russ sat down with Dan Zhang, CFO of ClickUp, to talk about how she evaluates AI vendors, measures ROI across her finance team, and converts skeptics into believers, without hype or slide decks.

Dan oversees finance, accounting, legal, HR, and operations for a global team of more than 1,000 employees. In this conversation, she breaks down:
→ The 2x2 framework she uses to sort AI vendors by autonomy and attribution
→ Why she gives teams $100 to experiment with unproven tools
→ How corporate math on sales quotas and team ratios is getting rewritten
→ The two hiring profiles she's actively building her team around
→ Why "boiling the ocean" is back on the menu for finance teams

One of our best episodes yet.
Larridin@Larridin·
We see usage for 1,000+ AI tools, and @Glean stands out. It's one of the tools we get asked about the most. There's a big reason why: in some enterprises, AI usage scaled up 20x after deploying Glean. Once people start using it, they become more sophisticated AI users across the board.

When we looked specifically at sellers, three things jumped out:
- They were doing more sophisticated work.
- They were getting it done faster.
- They were doing it with less help from others, and less waiting on others.

From a pure productivity standpoint, Glean is clearly working. Hats off to @jainarvind and the team.

Clip from our webinar. Link in comments.
Larridin@Larridin·
How do you evaluate whether a tool is worth the spend? One framework we've seen work: a simple 2x2 matrix built on two questions.

How autonomous is the tool? Can it complete tasks on its own, or does it require constant human involvement?

How clearly can you attribute value? Can you point to a specific outcome and say "this tool did that," or is the value diffuse and hard to measure?

The quadrants shake out like this:

High autonomy + High attribution: The holy grail. A support agent that closes tickets on its own. Either it solved the problem or it didn't. Easy to measure, easy to justify.

Low autonomy + High attribution: Co-pilot territory. Coding assistants, note-takers, writing tools. Clearly helpful, but hard to draw a straight line to revenue or cost savings. Worth monitoring closely; allocate a percentage of payroll budget and watch the outcomes.

High autonomy + Low attribution: Platform and security investments. Necessary infrastructure, but the value is spread across everything. Hard to isolate ROI, but you can't operate without it.

Low autonomy + Low attribution: The sandbox. Questionable on both axes. This is where shiny demos live. Give teams a small budget to experiment. If a tool in the sandbox graduates into one of the other quadrants, have a real budget conversation.
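For readers who think in code, the quadrant lookup above can be sketched in a few lines. This is purely illustrative: the boolean inputs and quadrant labels are our shorthand, and in practice both axes are judgment calls, not scores.

```python
# Hypothetical sketch of the 2x2 vendor-evaluation matrix described above.
# Autonomy and attribution are reduced to booleans here for clarity.

def classify_tool(high_autonomy: bool, high_attribution: bool) -> str:
    """Map a tool onto the 2x2 matrix of autonomy vs. value attribution."""
    if high_autonomy and high_attribution:
        return "holy grail"      # e.g. a support agent that closes tickets alone
    if not high_autonomy and high_attribution:
        return "co-pilot"        # coding assistants, note-takers, writing tools
    if high_autonomy and not high_attribution:
        return "infrastructure"  # platform and security investments
    return "sandbox"             # shiny demos; fund with a small experiment budget

print(classify_tool(True, True))    # holy grail
print(classify_tool(False, False))  # sandbox
```

The point of the exercise: once a tool's quadrant is named, the budget conversation it deserves follows almost mechanically.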
Larridin@Larridin·
Enterprise AEs traditionally carry $1.2M to $1.5M in annual quota. That number wasn't arbitrary. It was built on human bandwidth:
- How many meetings one person can book
- How many proposals they can write
- How much admin work eats into selling time

Dan Zhang, CFO of ClickUp, says that math is breaking down fast. With tools handling admin work and accelerating pipeline, some reps in her CFO circles are now closing a million dollars per quarter.

The old ratio assumed reps spent significant time on non-selling activities such as updating Salesforce, writing follow-up emails, and prepping proposals. If that time goes to zero, the ceiling on what one person can close goes way up.

And it doesn't stop at sales. The same ripple effects hit everywhere. Sales engineers traditionally had ratios like 2:1 or 3:1 to salespeople, depending on product complexity. But if demo tools can spin up customized environments in minutes, that ratio changes. CSMs. Back office. Support.

Every accepted ratio, the ones we've used for decades to plan headcount and budgets, is up for revision.
Larridin@Larridin·
How do you evaluate a shiny AI demo that looks impressive but has no clear ROI?

One approach we've seen work: give teams $100 to play with it. Not $10,000. Not a formal pilot. Just a hundred bucks to see if it's real.

If it graduates from "interesting" to "actually useful" — if it moves into a quadrant where you can measure autonomy or attribute value — then you have a real budget conversation.

This keeps experimentation alive without letting spending spiral. It also forces clarity: is this tool actually doing something, or does it just demo well?

Most tools never leave the sandbox. That's okay; experiment, then try something else.
Larridin@Larridin·
You wouldn't measure a startup the same way you measure a mature company. So why measure early-phase AI investment the same way you'd measure mature deployment?

Early phase: cost avoidance, risk reduction, foundation building. The value is in what you prevented, not what you created.

Mid phase: acceleration, capability building, proficiency gains. The value is in what's compounding.

Mature phase: competitive separation, revenue capture, defensible advantage. The value is in what competitors can't replicate.

Match the measurement to the maturity.
Larridin@Larridin·
OutSystems found 49% of organizations describe their agentic AI capabilities as "advanced or expert." The same research found 94% are concerned about sprawl, complexity, and risk.

Those numbers don't add up. If half of organizations are "advanced," why are nearly all of them worried?

Maybe their self-assessment doesn't match reality. People think they're further along than they are. They've deployed agents without building governance. They've moved fast without building controls.

The confidence is high. The infrastructure isn't there. That gap is where the problems will show up.
Larridin@Larridin·
Is the C-Suite overly optimistic about AI adoption?

Our 2026 State of Enterprise AI Report surveyed almost 400 tech and finance leaders. What we found: executives believe they know how their companies are using AI. The people below them aren't so sure.

Join CEO Russ Fradin on today's webinar as he walks through:
→ Which industries are above 80% adoption — and which are below 50%
→ Which functions are applying AI well while others lag
→ Why organizations with strong governance have 2x the ROI expectations

Can't make it? Sign up anyway and we'll send the recording.
Larridin@Larridin·
We asked senior leaders to estimate their organization's AI adoption rate. Then we measured it.

The gap between the estimates and the measurements was massive. Not "within range." Executives were overestimating by factors of 2x, 3x, sometimes more. And they were (falsely) confident in their estimates.

This disparity is a symptom of a visibility gap that exists in almost every enterprise. The data flowing up doesn't capture what's actually happening at the workflow level.

Our April 28th webinar hits this straight on: "What the C-Suite Gets Wrong About AI." We'll cover where the blind spots are and what to do about them. Link to sign up is in the comments.
Larridin@Larridin·
When you had redundant, non-AI project management tools, you wasted money. Now, when you have redundant AI tools, you waste money AND create data exposure you can't see.

Every AI tool employees interact with is a tool they might feed proprietary data into. Customer records. Financial projections. Source code. Strategic plans. Even using the same tool on a personal account versus an enterprise account creates different risk profiles, based on training data policies.

The governance challenge includes answering this question: "Where is data flowing that we don't know about?" That's a harder question, one most organizations can't answer.
Larridin@Larridin·
Developers brought Cursor to work. Solo builders discovered Claude Code before IT departments knew it existed. Enterprise licenses followed habits that were already formed.

Individual adoption without organizational capture is just distributed spend. The person using a tool on their laptop generates value for themselves. The organization capturing that usage - measuring impact, encoding expertise into shareable workflows, scaling what works - that's where value compounds.

Seeing what individuals are doing is the first step. Building systems to capture lessons learned, and share them, is the second.