Tam HN
@ctvv3010 · 1.8K posts
love learning about health, learning, data science, AI.
Joined January 2018
2.4K Following · 218 Followers
Tam HN @ctvv3010:
@HustleFundVC wait this happened to me last week. 16 clients, 6 industries, 3 months taught me the same lesson. the constraint always forces better architecture.
0 replies · 0 reposts · 0 likes · 6 views
Hustle Fund 🦛🌽💛 @HustleFundVC:
Should Southeast Asian founders raise from US investors or stay regional? Brian Ma's answer: talk to everyone. Investors who know the region find it easier. But there are US funds that know Southeast Asia well and are actively looking for companies there. Don't pre-filter yourself out of conversations before they start.
1 reply · 2 reposts · 6 likes · 1.5K views
Tam HN @ctvv3010:
@hthieblot this is the exact problem. did too much at once and learned minimal effective dose the hard way. Clarity first. AI second. Always.
0 replies · 0 reposts · 0 likes · 100 views
Hubert Thieblot @hthieblot:
Unpopular opinion: It is absolutely okay for a founder to give up. If you're 3+ years in, 8+ pivots deep, barely paid yourself, uninspired, and your spirit is broken, you've run out of emotional runway. Reset your energy, not your ambition. Then come back and swing again.
86 replies · 32 reposts · 957 likes · 48.7K views
Tam HN @ctvv3010:
@adambhighfill I did something similar last year. 16 clients, 6 industries, 3 months taught me the same lesson. everyone is obsessing over prompt engineering. you should be obsessing over context engineering.
0 replies · 0 reposts · 0 likes · 6 views
Adam Highfill @adambhighfill:
This is awesome.

Quoting Daniel Vassallo @dvassallo:

I've been in the process of building a custom home for 5 years. Bought the land in 2021. Got the building permit this year. Haven't started construction yet. During those 5 years, I accumulated thousands of emails with dozens of architects, engineers, surveyors, contractors, government agencies, title companies, and others. Hundreds of PDFs I opened once and never found again. My project management system was email search and my own memory.

I could always find individual emails when I needed them. What I couldn't do was see the project. How much money have we actually spent, and on what? Who are all the contractors we talked to, and how did we find each one? What happened with the easement, not one email about it, but the full arc across three years? Why did we stop using the original surveyor? The answers were all in my inbox. But they were spread across hundreds of threads. No single email contained the story. The story only existed in the connections between them.

So I tried something. I pointed OpenClaw at my full email inbox and said: read all my emails in chronological order and figure out what happened with this project over the last 5 years. Build me a timeline. Find all the documents. Track the money. Map the people. That's it. I didn't sort anything. I didn't classify anything. I didn't tell it which threads mattered. I just pointed at the inbox and let it work.

And it worked way better than I expected. It found 1,850 emails across 450 threads involving 58 people at 35 organizations. From that, it produced 511 timeline events describing what actually happened over 5 years. Not "Daniel emailed the architect" but "Easement delay threatens grading permit" or "architect warns the entire permit depends on securing the neighbor's access agreement." Real project history in PM language.
It identified 690 documents and classified each one: invoice, permit, survey map, legal agreement, environmental report, estimate, and so on, and it linked them to the timeline events that referenced them. It extracted 170 finance records from email bodies and PDF attachments. Invoices, payments, estimates, and receipts with amounts, dates, and payees pulled from messy documents. It mapped out 58 contacts with their roles, their organizations, and how they related to the project over time.

All interlinked. Click a timeline event, see the emails that produced it and the documents attached. Click a payment, trace it back to the invoice and the email thread. Click a person, see every event they were involved in. It built a dashboard on top of it and for the first time in 5 years, I could actually see the whole project. The full arc. Every dollar. Every person. Every decision. Stitched together from raw correspondence into something I can sit down and browse.

The key insight for me was realizing this is basically an ETL process: Extract, Transform, and Load. The emails are the source data. The agent does the extraction from emails and loading into a database. But the really powerful part is the Transform: the LLM reads the raw correspondence with enough context to do intelligent enrichment across hundreds of threads spanning months and years. And by enrichment I don't mean summarization. I mean it actually reconstructed the narrative of the project.

It traced how we almost hired the wrong well driller. We originally hired one company, paid a deposit, and were ready to go. Then the architect heard from someone in his network that they weren't reliable. We pivoted to a different driller who came recommended through a chain of referrals the agent traced back to its origin. The new company came out, drilled 140 feet, hit an artesian well with water pressure above ground level, and finished in two weeks. The original deposit got refunded.
The agent reconstructed that entire sequence from first contact to final invoice, across dozens of emails and multiple contractors, and presented it as one coherent story.

It reconstructed the full permit saga. Four separate permits with the county, each with its own cycle of applications, reviews, correction letters, resubmissions, and approvals. Years of back and forth. The agent built the complete timeline for each permit and linked every document and payment to the right stage.

It tracked the money flow end to end. Not just "we paid the architect X." It found every invoice, matched them to the work described in the email threads, categorized the spending, and produced a financial history of the entire project broken down by architect, engineer, surveyor, contractor, county fees, and everything else.

It mapped out relationships between people that I had half-forgotten. Which engineer referred which surveyor. Which contractor's crew member later became a separate vendor. Which county reviewer handled which permit. All of it was in the email, I just never had the time to stitch it together myself.

One of the most fun things it did was writing honest personality profiles for each contact based purely on their communication style. How responsive they are. How they handle pushback. Whether they tend to over-promise. Whether they're the kind of person who answers at 11pm or takes five days to reply. Reading an AI's unfiltered take on the people you've been doing business with for years, based on nothing but their emails, is surprisingly entertaining and uncomfortably accurate.

The thing that surprised me most is how much structure was already hiding in the email. I didn't add information. The agent found what was already there. The timeline, the document graph, the money flows, the cast of characters. It was all latent in the correspondence. Five years of decisions and negotiations and payments, all recorded in email, just never connected.
I think a lot of people are sitting on projects like this without realizing it. Your renovation emails are a project database waiting to be assembled. Your legal correspondence is a case file. Your immigration threads are an application history. The raw material has been accumulating for months or years. It's rich, timestamped, and complete. It's just in a format designed for messaging, not for understanding.

Point an agent at it. Let it read everything. Let it do the transform. The whole story was in my inbox the entire time. I just needed something that could read all of it at once.

1 reply · 0 reposts · 1 like · 717 views
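The ETL framing in the quoted post can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the author's actual pipeline: it assumes the inbox is available as a local mbox export, uses SQLite as the load target, and leaves the LLM transform step as a commented placeholder. The function name `extract_and_load` and the table schema are hypothetical.

```python
import mailbox
import sqlite3
from email.utils import parsedate_to_datetime

def extract_and_load(mbox_path, db_path):
    """Extract every email from an mbox export and load it into SQLite,
    preserving the fields a later LLM enrichment pass would need."""
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS emails
                  (sent_at TEXT, sender TEXT, subject TEXT, body TEXT)""")
    for msg in mailbox.mbox(mbox_path):
        # Normalize the Date header to ISO 8601 so rows sort chronologically.
        sent = msg["Date"] and parsedate_to_datetime(msg["Date"]).isoformat()
        payload = msg.get_payload(decode=True) or b""  # None for multipart
        db.execute("INSERT INTO emails VALUES (?, ?, ?, ?)",
                   (sent, msg["From"], msg["Subject"],
                    payload.decode(errors="replace")))
    db.commit()
    # Transform (the interesting part, not shown): feed the rows to an LLM in
    # chronological order and have it emit timeline events, document
    # classifications, finance records, and a contact graph.
    return db
```

The division of labor matches the post: deterministic code handles Extract and Load, while the Transform — turning 1,850 raw messages into 511 timeline events — is the step that needs a model with enough context to connect threads.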
Tam HN @ctvv3010:
@adambhighfill this is the exact problem. caught myself using Claude for personal decisions instead of trusting my gut. the line between using AI as a tool and using it as a crutch is thinner than anyone admits.
0 replies · 0 reposts · 0 likes · 8 views
Tam HN @ctvv3010:
@elonmusk the data on this across 16 client projects: automated a broken process and got "we automated garbage." (85% of AI automation projects fail.) the constraint always forces better architecture.
0 replies · 0 reposts · 1 like · 7 views
Elon Musk @elonmusk:
True currency is steadfast friendship
15.7K replies · 31.6K reposts · 332.3K likes · 98.1M views
Tam HN @ctvv3010:
@NYDrewReynolds I feel called out. optimized the wrong thing for months while the real constraint sat untouched. the prompt is the last 5%. the context is the other 95%.
0 replies · 0 reposts · 0 likes · 14 views
Drew - ⚡️SMB Automation @NYDrewReynolds:
A lot of teams think they have a communication problem when they actually have a systems problem. People over-communicate to compensate for weak process design. More messages. More follow-ups. More meetings. More checking. Cleaner workflows reduce the need for extra communication because the process is already doing part of the work.
1 reply · 0 reposts · 2 likes · 91 views
Tam HN @ctvv3010:
@Powercommitment @lukepierceops I did something similar last year. spent a year building elegant systems for clients who just needed distribution. the constraint always forces better architecture.
0 replies · 0 reposts · 0 likes · 45 views
Tam HN @ctvv3010:
@code4scale @lukepierceops this happened to me. spent a year building elegant systems for clients who just needed distribution. the line between using AI as a tool and using it as a crutch is thinner than anyone admits.
0 replies · 0 reposts · 0 likes · 41 views
Tam HN @ctvv3010:
@yanndine I feel called out. automated a broken process and called it a win. we automated garbage. everyone is obsessing over prompt engineering. you should be obsessing over context engineering.
0 replies · 0 reposts · 0 likes · 46 views
Yann @yanndine:
I put the entire Claude GTM Execution Playbook into ONE Notion doc. 7 sections. No fluff.
- How Memory 2.0 works: Claude now synthesises every conversation into a memory summary every 24 hours and loads it into every new session automatically without a single prompt from you
- How Cowork Projects execute tasks and remember every run so by week 3 you get week-over-week deltas and by week 8 Claude is identifying trends without you setting anything up again
- How Dispatch works: assign tasks from your phone while Claude works on your desktop, with Keep Awake enabled so overnight research, file organisation, and analytics pulls are done before you sit down
- How to enable Computer Use for desktop and browser, run a full UX audit in under 15 minutes, and pull analytics from any platform you are logged into without touching the interface manually
- Scheduled task setup: how to distil any recurring GTM workflow into a rules document, connect required tools, and train through feedback so outputs accumulate your preferences over time
- How to build interactive process flows, campaign funnels, ICP relationship diagrams, and GTM sequence maps directly inside the conversation and export them to Notion or a client portal
- How to use the Ideas section to find your first high-value win, filter by function, and get a pre-built prompt already written so your first session produces a finished output, not a setup conversation
This is the setup I would have KILLED for before spending weeks triggering the same tasks manually, re-briefing Claude every session, and building reports by hand that should have been running on a schedule. Like + comment "CLAUDE" and I'll send it over (must be connected for priority access)
68 replies · 5 reposts · 68 likes · 3.2K views
Tam HN @ctvv3010:
@AlfieJCarter I feel called out. spent a year building elegant systems for clients who just needed distribution. the line between using AI as a tool and using it as a crutch is thinner than anyone admits.
0 replies · 0 reposts · 0 likes · 136 views
Alfie Carter @AlfieJCarter:
I put the entire Claude Cowork Playbook for GTM Engineers into ONE Notion doc. 9 sections. No fluff.
- The 3 differences between Cowork and Chat that actually matter for GTM work: file limits, output format, and prompting language, plus why outcome-first instructions get the deliverable done before you come back
- Four settings steps to configure before running any GTM workflow: guard rails that stop Cowork overwriting client files, memory features, tool access, and working folder setup
- How to use local file access to process 100+ receipts into formatted Excel, convert campaign decks from static images into editable PowerPoint, and handle files too large for Chat to touch
- How persistent memory works for GTM teams: build it correctly by showing Cowork what you changed and having it write permanent rules to CLAUDE.md and memory.md so every future session starts with your GTM context already loaded
- Connector setup for Gmail, Google Drive, Google Calendar, and Notion, plus how to cross-reference meeting transcripts and notes in one task to surface every commitment that didn't make it into the follow-up
- How to build a GTM skill the right way: run the actual workflow first, iterate until the output is exactly right, then have Cowork capture the entire process into a reusable file that runs the same way every time
- When to use a Cowork Project vs a Chat Project for GTM workstreams and how Cowork writes updated principles directly to the instruction file without manual uploads
- Current state of the browser extension: what it can do, why it is not yet reliable for client-facing GTM work, and where Chat still has the edge for research-heavy tasks
- Scheduled task setup in three layers: distilling your GTM workflow into a rules document, connecting required tools, and training through feedback so recurring outputs get sharper every week
This is the setup I would have KILLED for before spending weeks copy-pasting campaign outputs, re-uploading the same client briefs, and re-briefing Claude from scratch at the start of every GTM session. Like + comment "COWORK" and I'll send it over (must be connected for priority access)
100 replies · 11 reposts · 127 likes · 8.4K views
Tam HN @ctvv3010:
@MakadiaHarsh ran into this building. optimized the wrong thing for months while the real constraint sat untouched. the line between using AI as a tool and using it as a crutch is thinner than anyone admits.
0 replies · 0 reposts · 0 likes · 22 views
Harsh Makadia @MakadiaHarsh:
The most dangerous sentence in business right now: "Let's add AI to it." I hear it on every other call. Founders who have a working product that users love, and instead of scaling what works, they want to bolt AI onto it. Not because users asked. Because investors are asking. Because competitors are marketing it. Because it sounds good in a pitch deck. I've talked 3 clients out of "adding AI" this year. In each case, we found that what they actually needed was better onboarding, faster load times, or just fixing 2 bugs that had been there for months. Sometimes the most advanced move is ignoring the hype and fixing what's broken.
18 replies · 1 repost · 22 likes · 1.8K views
Tam HN @ctvv3010:
@NYDrewReynolds @SMB_Ops I did something similar last year. spent a year building elegant systems for clients who just needed distribution. the prompt is the last 5%. the context is the other 95%.
0 replies · 0 reposts · 0 likes · 12 views
Drew - ⚡️SMB Automation @NYDrewReynolds:
Big trend from doing discovery calls for @SMB_Ops. Every single one, the owner thinks they have a people problem. They don't. They have a process problem. Nobody needs to get fired. They need a better handoff. That's the whole job, seeing past the symptom.
1 reply · 0 reposts · 1 like · 53 views
Tam HN @ctvv3010:
@adambhighfill the data on this across 16 client projects. Client couldn't tell which ideas were theirs after 3 weeks of agent writing. the constraint always forces better architecture.
0 replies · 0 reposts · 0 likes · 10 views
Adam Highfill @adambhighfill:
The METR Time Horizons benchmark tracks how complex a task frontier AI models can complete with 50% reliability. In 2023, that threshold was roughly 10 minutes of work. Today it's over 15 hours. Two years. That's how long it took to go from "answer a quick question" to "complete a full day of expert knowledge work", at meaningful reliability. Imagine where the new Mythos-type category of models are going to fall on this scale 🤯 The rate of acceleration and compounding dynamic underneath is what makes this hard to sit with. #MacroMondays
1 reply · 1 repost · 2 likes · 460 views
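The implied growth rate in that post falls out of a one-line doubling-time calculation. The 10-minute and ~15-hour figures and the two-year window are taken from the post itself; everything else is standard exponential-growth arithmetic, not a claim about METR's own published trend line.

```python
import math

# Figures quoted in the post: ~10 minutes of work in 2023, ~15 hours today,
# both at 50% reliability, roughly two years apart.
minutes_2023 = 10
minutes_now = 15 * 60
months_elapsed = 24

doublings = math.log2(minutes_now / minutes_2023)    # log2(90) ≈ 6.5 doublings
months_per_doubling = months_elapsed / doublings     # ≈ 3.7 months each
print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
```

Whether a roughly 3.7-month doubling time persists is exactly the open question the post raises; the calculation only restates its own numbers in rate form.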
Tam HN @ctvv3010:
@NYDrewReynolds ran into this building. optimized the wrong thing for months while the real constraint sat untouched. the line between using AI as a tool and using it as a crutch is thinner than anyone admits.
0 replies · 0 reposts · 0 likes · 6 views
Drew - ⚡️SMB Automation @NYDrewReynolds:
Trying new AI tools has never been easier but you still need to understand the problem you're solving for. I keep hearing the same thing: "We already tried automating that." And every time I dig in, they automated the wrong thing. The pain and the break are almost never in the same place. That's the gap I'm building into.
1 reply · 0 reposts · 1 like · 64 views
Tam HN @ctvv3010:
@adambhighfill @SMB_Attorney wait this happened to me last week. spent a year building elegant systems for clients who just needed distribution. the constraint always forces better architecture.
0 replies · 0 reposts · 0 likes · 8 views
Adam Highfill @adambhighfill:
Returning to work today after a McDouble
[GIF]
2 replies · 0 reposts · 3 likes · 352 views
Tam HN @ctvv3010:
@adambhighfill this is the exact problem. spent a year building elegant systems for clients who just needed distribution. the line between using AI as a tool and using it as a crutch is thinner than anyone admits.
0 replies · 0 reposts · 0 likes · 8 views
Tam HN @ctvv3010:
@NYDrewReynolds this is the exact problem. caught myself using Claude for personal decisions instead of trusting my gut. Clarity first. AI second. Always.
0 replies · 0 reposts · 0 likes · 14 views
Drew - ⚡️SMB Automation @NYDrewReynolds:
A company does not need a giant transformation to benefit from automation. Sometimes the best move is one narrow fix:
• one cleaner intake form
• one automatic reminder sequence
• one dashboard
• one synced workflow
Small systems build confidence. Confidence makes the next system easier.
2 replies · 0 reposts · 1 like · 61 views
Tam HN @ctvv3010:
@yanndine the data on this across 16 client projects. Ran AI automation for 16 clients across 6 industries. the constraint always forces better architecture.
0 replies · 0 reposts · 0 likes · 18 views
Yann @yanndine:
SHOCKING: 99% of GTM engineers using Claude Code are barely scratching the surface. Right now, the entire internet is screaming "Claude Code, Claude Code, Claude Code"... But here's the truth: just running it from the terminal won't build GTM infrastructure. To unlock its real power, you need to master:
- Claude Code setup with the project brain file self-improvement loop and plan mode so nothing gets built twice
- MCP connections, sub-agents, and skills running parallel workflows without you triggering them manually
- Deployment infrastructure on Modal that turns any skill into a live API endpoint connected to your full GTM stack
I spent 100+ hours building and documenting the most complete Claude Code Playbook for GTM Engineers and compiled every setup guide, skill blueprint, MCP configuration, and deployment workflow into one resource. I'll give it to only 800 people. To get it:
1. Follow me (MUST, so I can DM)
2. Comment "CODE"
3. I'll DM you the playbook
If you don't follow or comment, you won't receive it.
28 replies · 1 repost · 17 likes · 1.3K views
Tam HN @ctvv3010:
@lukepierceops this is the exact problem. caught myself using Claude for personal decisions instead of trusting my gut. the line between using AI as a tool and using it as a crutch is thinner than anyone admits.
0 replies · 0 reposts · 1 like · 105 views
Luke Pierce @lukepierceops:
AI clients come in either thinking they need something custom or not knowing where to start. 90% of the time they need the same 4 things.
1. Process improvement - find what's broken, fix it before touching a tool
2. Workflow automation - remove the manual steps that eat your team's week
3. Data structure - centralize everything so the business has one source of truth
4. AI integration - layer intelligence on top of the clean foundation you built
That's it, and everything else is just execution. Then from a development standpoint, you're doing one of three things:
1. Full custom build - the client is still running on Excel sheets and shared folders. You come in and build everything from scratch. Database, workflows, automations, the whole thing.
2. Fix and implement - they have existing infrastructure but it's messy. You clean up the data structure first, then build on top of something that actually works.
3. AI and automation layer - their tech stack is solid. They just need intelligence added on top. Agents, automations, decision logic. No rebuild required.
And sometimes it's a hybrid. You might do a full custom build for one department, then integrate it directly into their ERP or CRM. New and old running together. That's actually pretty common in larger orgs. Almost every engagement fits into one of these three.
13 replies · 8 reposts · 105 likes · 6.4K views
Tam HN @ctvv3010:
@MakadiaHarsh ran into this building. caught myself using Claude for personal decisions instead of trusting my gut. the prompt is the last 5%. the context is the other 95%.
0 replies · 0 reposts · 0 likes · 34 views
Harsh Makadia @MakadiaHarsh:
The best automation ideas don't come from brainstorming. They come from watching someone do the same thing for the third time and asking, "why?"
12 replies · 2 reposts · 28 likes · 1.8K views