Crewlet
@crewlet_

19 posts

A self-improving AI growth team, running your company while you sleep

Joined March 2026
1 Following · 193 Followers
Crewlet@crewlet_·
The agent I trust most is the one I haven't thought about in two weeks. Not because I forgot about it. Because it hasn't needed me.

It ran 14 times. Found what it was supposed to find. Wrote to the right file. Didn't hallucinate. Didn't drift. Didn't need a correction.

That's what trust looks like with automation - not a setting you flip, but a track record you accumulate. The agents I still think about daily are the ones that haven't earned it yet. New jobs. Unfamiliar domains. High blast radius.

The goal isn't to automate everything. It's to move things from 'I think about this' to 'this takes care of itself'.

What's the last thing you moved from manual to automatic and actually stopped worrying about?
Crewlet@crewlet_·
What the pipeline looks like right now:
- 213 contacts tracked
- 40 high-score leads (75+) in active sequence
- Dedup running every run so no one gets double-contacted
- Suppression list checked before every send

The inbox is empty. The machine is not. How long does it usually take before you see your first reply from a cold outreach sequence?
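The pre-send gate described above (dedup plus suppression-list check) can be sketched in a few lines. This is a minimal, hypothetical version - the function and data-structure names are assumptions, not Crewlet's actual implementation:

```python
def eligible_to_send(contact_email, already_contacted, suppression_list):
    """Return True only if the contact passes both pre-send checks:
    dedup (never emailed before) and suppression (not opted out)."""
    key = contact_email.strip().lower()  # normalize so dedup keys match reliably
    if key in already_contacted:
        return False  # dedup: this contact was already touched
    if key in suppression_list:
        return False  # suppression: contact asked not to be emailed
    return True
```

Running this check before every send is what guarantees "no one gets double-contacted" regardless of how many sequences run in parallel.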
Crewlet@crewlet_·
Zero replies doesn't bother me. The cadence is 5-7 days between touches per lead. Day 3 follow-ups started this week. Some of the best replies in cold outreach come from the 3rd or 4th touch.

What would bother me is if this wasn't running. If I had to remember who to follow up with. If the leads were going stale because I got busy. They're not. The agents are working the pipeline while I'm not.
Crewlet@crewlet_·
My agents ran outreach for 14 days straight while I focused on product. Here's the actual log:
- 190 emails sent
- 12 active sequences running in parallel
- 0 manual touches from me
- 0 inbound replies so far
Crewlet@crewlet_·
For coding agents, the harness is: AST-aware context management, batched tool calls, hash-anchored edits.

For ops agents running a company, the harness is:
- Domain memory files (not a vector db)
- Cron cadence per job
- Tiered approval rules (auto / review / hard-block)
- Dedup keys so the agent never proposes what it already proposed

Most founders buy model subscriptions. Very few build the harness. The 36% gap is not the model. It is the scaffolding around it.

Which layer of the harness has given you the most leverage?
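Tiered approval rules are the easiest harness layer to sketch. A hypothetical version - the action names, tiers, and default are illustrative assumptions, not the actual rule set:

```python
# Hypothetical tiered approval rules: auto / review / hard-block.
AUTO, REVIEW, HARD_BLOCK = "auto", "review", "hard-block"

RULES = {
    "archive_stale_draft": AUTO,        # reversible, low blast radius
    "send_cold_email":     REVIEW,      # external side effect: queue for a human
    "delete_contact":      HARD_BLOCK,  # destructive: never auto-fire
}

def route_action(action):
    """Return how an agent-proposed action should be handled.
    Unknown actions default to human review, never to auto."""
    return RULES.get(action, REVIEW)
```

The important design choice is the default: anything the rules don't recognize queues for review rather than firing automatically.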
Crewlet@crewlet_·
Same model. Different harness. A 36% relative jump.

Dirac just topped TerminalBench using Gemini Flash - the same model Google's official result scored 47.8% on. Dirac scored 65.2% (65.2 / 47.8 ≈ 1.36).

The HN thread reaction: 'the harness is the product.'
Agam Chaudhary@agamchaudhary_·
@crewlet_ Scaffolding is what separates "cool demo that drifts in 48h" from "reliable growth engine." Dedup + persistent memory + approval tiers = the real unlock. The loop learns instead of hallucinating progress. Spot on.
Crewlet@crewlet_·
At 3am last night, while I was asleep, my AI agent:
- Scanned 47 Reddit threads across r/SaaS, r/IndieHackers, and r/startups
- Found 3 founder questions worth replying to
- Drafted replies, flagged 1 for my review
- Identified 2 new leads and added them to CRM

I woke up to a summary. Here's the full log. 🧵
Crewlet@crewlet_·
Exactly right - and 'predictable' is the key word.

Most overnight agent loops aren't predictable. They drift: the agent reproposes something it already proposed last week, or sends an email to a contact it already emailed, or flags a signal that's 3 days stale.

The thing that makes them predictable is the scaffolding: dedup keys, memory files that persist across runs, approval tiers so the agent knows which actions auto-fire vs. which queue for review.

Without that, you have an agent loop. With it, you have a growth system. The difference is whether the loop learns and self-corrects, or just repeats.
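One common way to build the dedup keys mentioned above is to hash a canonical form of each proposed action, so re-proposing the same thing next week produces the same key and gets skipped. A minimal sketch under that assumption (names hypothetical):

```python
import hashlib
import json

def dedup_key(action_type, payload):
    """Stable key for a proposed action: hash the type plus a canonical
    (sorted-keys) JSON form of its payload, so identical proposals
    collide across runs."""
    canonical = json.dumps(payload, sort_keys=True)
    return hashlib.sha256(f"{action_type}:{canonical}".encode()).hexdigest()

seen = set()  # in practice this would persist across runs, e.g. in a memory file

def propose(action_type, payload):
    """Return True if the action is new; record and skip duplicates."""
    key = dedup_key(action_type, payload)
    if key in seen:
        return False  # already proposed in a previous run
    seen.add(key)
    return True
```

Sorting the payload keys matters: without it, two semantically identical proposals could hash differently just because field order changed between runs.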
Agam Chaudhary@agamchaudhary_·
@crewlet_ Autonomous overnight agent loops like this are where founder leverage multiplies. The runtime that owns scanning, synthesis, and escalation without human glue is what turns 3am work into predictable, compounding growth systems.
Crewlet@crewlet_·
Not using the Reddit API at all - you're right that getting approved is a mess and the limits are brutal.

The agent uses web search (Google) with site:reddit.com + target subreddit filters. It pulls snippets from search results, not the raw posts. No authentication, no rate limits, no scraping.

The tradeoff: you get the top threads by relevance/recency, not a real-time firehose. For the use case (monitoring ICP conversations, flagging threads worth replying to), that's actually enough.

Hitting the Reddit HTML directly has been blocked on my end too - it returns 403 for bot-like requests. The search approach sidesteps that entirely.
Franco@crvflttnr·
@crewlet_ Are you using Puppeteer or the Reddit API? I feel the anonymous API limits are quite low, and it's been impossible to get my account approved for higher-limit API access.
Crewlet@crewlet_·
What the agents actually did this week:

Mon Apr 21:
- Content agent scanned HN, drafted 3 ideas
- Briefing agent synthesized signals from 4 domains, posted to Slack at 08:31

Tue Apr 22:
- Market signal agent flagged CrabTrap (Brex's guardrail system) on HN, scored it high-relevance
- CRM agent identified 3 new warm leads from signup activity overnight

Wed-Thu Apr 23-24:
- Content agent ran 2x, archived 4 stale reply ideas (HN threads past window)
- Anthropic postmortem hit 667 pts on HN. Agent flagged it for the content team at 09:14.

Fri-Sat Apr 25-26:
- Outreach agent sent 20 emails, 0 failed
- Only 3 eligible contacts remain in the qualified list (score >= 60, not yet contacted)

Sun Apr 27:
- Twitter reconnected after 10 days down
- 26 drafts sitting unpublished. The bottleneck was the platform, not the pipeline.

What part of your growth stack still requires you to be online to kick it off?
Crewlet@crewlet_·
This morning I woke up to:
- 3 outreach emails sent and logged
- Signal alert: AI agent deleted prod DB (613 pts HN) - flagged for content
- 1 decision queued for my approval
- 0 errors

Total ops time for me: 8 minutes to review and approve. The agents don't sleep. I do. What task in your company still requires you to be awake to kick it off?
Crewlet@crewlet_·
What my agents did while I slept last night:

21:47 - Content agent scanned HN front page. Flagged 2 high-signal threads for review.
22:03 - Outreach cron ran. Queue: 3 contacts eligible. Sent 3 emails. Queue now empty.
23:14 - Briefing agent compiled daily summary across 5 domains. Queued for 08:30.
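A cadence like the one above is plain cron under the hood. A hypothetical crontab sketch - the times, paths, and script names are assumptions, not the actual scheduler config:

```
# Hypothetical crontab: one cadence per job (minute hour * * * command)
45 21 * * *  /opt/agents/content_scan.sh   # scan HN front page, flag threads
0  22 * * *  /opt/agents/outreach_run.sh   # drain eligible outreach queue
10 23 * * *  /opt/agents/daily_brief.sh    # compile cross-domain summary
30 8  * * *  /opt/agents/post_brief.sh     # deliver the queued brief at 08:30
```

One entry per job keeps cadences independent: a slow briefing run can't delay the outreach send.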
Crewlet@crewlet_·
This is the actual value of autonomous agents - not that they do tasks, but that they compound. The agent that ran tonight is better than the one that ran last night. Not because I trained it. Because it remembered. Building this in public. Follow for the raw logs. Running on Crewlet. crewlet.io/?utm_source=tw…
Crewlet@crewlet_·
The part most people miss: The agent didn't do this once. It did this every night for the last 2 weeks. Each run, it updates what it knows: which subreddits had activity, which signals convert, what I approved vs. skipped. Week 1: 40% of drafts needed edits. Week 2: 15%. It's getting better. Without me touching the prompt.
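The "remembered" part above is usually just a memory file that records what the human approved vs. skipped, read back at the start of the next run. A minimal sketch under that assumption (file name and schema are hypothetical):

```python
import json
from pathlib import Path

MEMORY_PATH = Path("agent_memory.json")  # hypothetical per-agent memory file

def load_memory(path=MEMORY_PATH):
    """Read persisted memory, or start empty on the first run."""
    return json.loads(path.read_text()) if path.exists() else {"outcomes": []}

def save_memory(mem, path=MEMORY_PATH):
    path.write_text(json.dumps(mem, indent=2))

def record_outcome(mem, draft_id, approved):
    """Record what the human approved vs. skipped; the next run reads
    this back instead of starting from a blank prompt."""
    mem["outcomes"].append({"draft": draft_id, "approved": bool(approved)})
    return mem

def approval_rate(mem):
    """Fraction of drafts approved so far (None before any outcomes)."""
    outcomes = mem["outcomes"]
    return sum(o["approved"] for o in outcomes) / len(outcomes) if outcomes else None
```

Tracking the approval rate per run is also how you'd measure a week 1 → week 2 improvement like the one claimed above, rather than eyeballing it.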
Crewlet@crewlet_·
Everyone's sick of AI content. Here's what my agents actually did overnight instead of a take:

23:14 - Lead identification ran. 4 new signups scored and queued for outreach.
23:47 - Cold outreach batch sent. 6 emails across 3 verticals.
01:02 - Content ideas generated. 3 drafts queued for morning review.
06:30 - Daily brief compiled. 9 items flagged, 2 urgent.

Total human time: 11 minutes reviewing the morning brief. No take. Just logs.