Sarosh Waiz

807 posts


@saroshws

Marketing strategy, business thinking, and the assumptions most industries forgot to question. Writing The Strategy Desk. Founder @synqglobal and @advergize

Joined July 2010
2.7K Following · 2K Followers
Sarosh Waiz@saroshws·
What's harder: getting good data or getting people to use it?
Sarosh Waiz@saroshws·
A product failing in one market is expensive. That same product succeeding after repositioning into a different market is proof the problem was framing, not function. What's the best example you've seen of this?
Sarosh Waiz@saroshws·
The self-reinforcing hiring loop: require category experience, get category-standard ideas, conclude that category experience is essential. Repeat until every competitor looks the same.
Sarosh Waiz@saroshws·
Service businesses should tie sales compensation to customer satisfaction at month six. Watch how fast the handoff problem gets solved.
Sarosh Waiz@saroshws·
Most AI roadmaps assume that once leadership approves and the tech works, adoption follows. It almost never does. The roadmap forgot to account for human self-interest.
Sarosh Waiz@saroshws·
When you hire a service provider, do you prefer knowing the hourly rate or the total project cost?
Sarosh Waiz@saroshws·
@coreyganim People don't pay for monitoring. They pay for someone who can tell them what to do with what they found.
Corey Ganim@coreyganim·
the business hiding in this repo:
1. pick a niche (real estate agents, ecommerce brands, SaaS founders)
2. use this tool to monitor Reddit, X, and YouTube for mentions of their brand, competitors, and industry keywords
3. pipe the results into an AI agent that writes a daily briefing
4. charge $500-$1,500/mo per client for "market intelligence as a service"
your client gets a daily report they'd never have time to build themselves. you set it up once and it runs on autopilot. 5 clients = $2,500-$7,500/mo recurring. zero API fees.
GitHub Projects Community@GithubProjects

Give your AI agent eyes to see the entire internet for free. Read & search: Twitter, Reddit, YouTube, GitHub, Bilibili, XiaoHongShu. One CLI, zero API fees.

Sarosh Waiz@saroshws·
@itsalexvacca Impressions and pipeline are measured in different currencies. Most B2B teams optimize one and wonder why the other doesn't follow.
Sarosh Waiz@saroshws·
@aakashgupta Most teams stay at 80% reliable because they iterate on prompts and ignore the evaluation layer. The measurement system is the real work.
Aakash Gupta@aakashgupta·
I took Karpathy's loop and applied it to the thing every team using AI agents struggles with: getting prompts from 80% reliable to 95%. The pattern is identical. One file changes. One metric scores it. The agent makes one edit per round, tests it, keeps winners, reverts losers. 12 experiments per hour. 100 overnight.

Instead of optimizing a training script, the target is any prompt or system instruction you use repeatedly. Customer support agent prompts. Internal workflow automations. Data extraction pipelines. Code review instructions. Anything where you've written a prompt, gotten it to "good enough," and moved on because manual iteration hit diminishing returns.

The setup takes three things. The target prompt you want to improve. 2-3 realistic test inputs, the kind of request that would actually hit the prompt in production. And 3-6 binary yes/no checks that define quality. Did the output meet the format constraint? Did it follow the specific instruction? Did it avoid the failure pattern you keep seeing?

The loop: Execute the prompt 30 times across all test inputs. Score every output against the checklist. Analyze which criterion fails most. Mutate one thing in the prompt. Check if the score improved. If yes, git commit. If no, git reset. Repeat until you're above 95%.

What you wake up to: the improved prompt saved separately, original untouched. A results.log showing every round's score. A changelog explaining what worked, what didn't, and why.

The insight Karpathy landed that transfers beyond ML: if you can score it, you can autoresearch it. Training loss is a score. A binary checklist on prompt output quality is also a score. The loop doesn't care what it's optimizing. It only needs a number that goes up or down.

Prompt engineering today looks like software before unit tests. Manual tweaking, vibes-based evaluation, no version control, no systematic iteration. The Karpathy loop applied to prompts turns it into an engineering discipline with measurable improvement per iteration. Every team running AI agents has prompts that work "well enough." The gap between well enough and reliable is exactly the gap this loop closes while you sleep.
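The loop described in the tweet can be sketched as a minimal greedy hill-climb in Python. Everything here is illustrative, not the autoresearch tool's actual API: `run_prompt` is a deterministic stub standing in for a real model call (so the sketch runs offline), and the checks, test inputs, and mutation tactics are assumed examples.

```python
import random

# Hypothetical stand-in for a real model call. Deterministic stub so the
# sketch runs offline; in practice this would call your LLM API.
def run_prompt(prompt: str, test_input: str) -> str:
    # Crude simulation: the "model" obeys an instruction present in the prompt.
    reply = f"reply to {test_input}"
    if "SUMMARY:" in prompt:
        reply = "SUMMARY: " + reply
    return reply

# 3-6 binary yes/no checks that define quality (assumed examples).
CHECKS = [
    lambda out: out.startswith("SUMMARY:"),  # format constraint
    lambda out: len(out) < 400,              # length constraint
    lambda out: "as an AI" not in out,       # known failure pattern
]

# 2-3 realistic test inputs, the kind that would hit the prompt in production.
TEST_INPUTS = ["refund request", "shipping delay", "broken login"]

def score(prompt: str, runs: int = 10) -> float:
    """Run the prompt repeatedly across all test inputs; return the pass rate."""
    passed = total = 0
    for test_input in TEST_INPUTS:
        for _ in range(runs):
            out = run_prompt(prompt, test_input)
            for check in CHECKS:
                total += 1
                passed += check(out)
    return passed / total

def mutate(prompt: str) -> str:
    """One edit per round: append one candidate instruction (assumed tactics)."""
    tweaks = [
        " Start every reply with 'SUMMARY:'.",
        " Keep replies under two sentences.",
        " Never mention being an AI.",
    ]
    return prompt + random.choice(tweaks)

def optimize(prompt: str, target: float = 0.95, max_rounds: int = 100):
    """Mutate, re-score, keep winners, drop losers, until above target."""
    best = score(prompt)
    for _ in range(max_rounds):
        if best >= target:
            break
        candidate = mutate(prompt)
        s = score(candidate)
        if s > best:
            prompt, best = candidate, s  # keep the winner ("git commit")
        # else: discard the candidate    # revert the loser ("git reset")
    return prompt, best
```

The design choice that makes this work is the one the tweet names: averaging binary checks turns quality into a single number that goes up or down, so a greedy accept/reject rule (the commit/reset step) is all the search logic needed.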
Aakash Gupta@aakashgupta

For $25 and a single GPU, you can now run 100 experiments overnight without designing any of them. Karpathy open-sourced autoresearch. 42,000 GitHub stars in a week. Fortune called it "The Karpathy Loop."

Every article about it focused on the ML angle. They all missed the bigger story. The pattern underneath works on anything you can score with a number. Ad copy, cold emails, video scripts, job posts, skill files.

Three files. One the agent edits. One it can never touch. One instruction file from you. Each cycle takes 5 minutes. Score went up? Git commit. Score went down? Git reset. Twelve cycles per hour. A hundred overnight.

Karpathy ran it on code he'd already optimized by hand for months. The agent found 20 improvements he'd missed. 11% faster. Tobi Lutke pointed it at Shopify's Liquid templating engine. 53% faster rendering from 93 automated commits.

I spent two weeks pulling the system apart. Today's guide shows you how to use it on the things you actually make every day. Six use cases, the three-step setup, and the eval mistakes that kill runs before they start. Full guide: aibyaakash.com/p/autoresearch…

Sarosh Waiz@saroshws·
@thetripathi58 This is why talent acquisition can't fix a broken operating model. Most problems aren't people problems.
Chidanand Tripathi@thetripathi58·
A brilliant statistician who spent 50 years studying why massive engineering projects fail realized one terrifying truth: Individual incompetence is almost never the actual problem. His name is W. Edwards Deming, the man who famously rebuilt Japan's post-war manufacturing empire from scratch. He argued that we obsess over individual performance and completely ignore the environment. Here are 4 operational frameworks he used to build elite, failure-proof organizations:
Sarosh Waiz@saroshws·
@jimheskel The plan that lives in a document never gets tested. The one that ships gets corrected by the market.
Jim Heskel@jimheskel·
Planning feels safe. Shipping builds businesses. More on closing the knowing-doing gap: jimheskel.com
Jim Heskel@jimheskel·
Most people already know they should start. They're choosing not to. Not because of clarity. Not because of timing. Not because of money. Because starting means finding out if they're as good as they think they are. Let's not call it preparation. This is protection.
Sarosh Waiz@saroshws·
I think that is the right way to go about it. But I was reading an article a few weeks ago, and it argued against the idea of hourly rates because it undervalues the intellectual capital a service provider offers as part of the project delivery. And due to this mindset, even highly complex services are compared to physical labour.
Joseph Mahler 💯@ajmahler·
@saroshws Service provider here (contractor). The hourly rate is given when there are too many unknowns. A project price is given for two reasons: payment up front cause the client is sketchy or the provider is sketchy, and when maximum profit is expected cause few unknowns.
Sarosh Waiz@saroshws·
This week we're walking through 2 things: 1) How The Story Gap works as a practical diagnostic you can use on anything. 2) How to apply it to the biggest default story forming right now: AI adoption. open.substack.com/pub/thestrateg…
Sarosh Waiz@saroshws·
Large companies worry about what happens if someone makes a bad decision fast. Solo operators worry about what happens if a good decision takes too long. Both fears are real. One costs more.
Sarosh Waiz@saroshws·
Measurement timelines create the perfect excuse. A quarterly dashboard shows Q3. A strategic bet plays out over 18 months. Leaders use that gap to ignore anything inconvenient.
Sarosh Waiz@saroshws·
The best hire a large company can make is someone who spent three years working alone. They've seen what speed looks like without organizational drag.
Sarosh Waiz@saroshws·
AI adoption without org restructuring is just an expensive way to produce better PowerPoint presentations.
Sarosh Waiz@saroshws·
Competitor analysis tells you where the category already is. Studying a brand from a completely different industry tells you where it could go. What's one outside brand that changed how you think about yours?