Alex

240 posts


@AgenticPM

Fintech Product Manager / Posting my discoveries along the AI journey

Los Angeles · Joined January 2026
86 Following · 17 Followers
Pinned Tweet
Alex
Alex@AgenticPM·
As a FinTech Product Manager (fmr Lead Activation PM) and Vibe Coder, designing onboarding tooltips in code is painful. You can't see the full flow. You're constantly refreshing. Copy reviews take forever. Here's my strategy with Claude Code or Codex to make life easier. I build a hidden URL that shows all my onboarding steps side-by-side 👇
Alex tweet media
Replies 1 · Reposts 0 · Likes 1 · Views 228
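The hidden-URL trick in the pinned tweet is simple to sketch. Here is a minimal, hypothetical Python version: the step names, copy, and the idea of serving it at an unlisted route like /debug/onboarding are invented for illustration; a real setup would pull the actual tooltip content from the app.

```python
# Hypothetical sketch of the "hidden URL" idea: render every onboarding
# tooltip side by side on one debug page instead of clicking through the
# live flow. Step ids, titles, and copy below are made up.

ONBOARDING_STEPS = [
    {"id": "welcome", "title": "Welcome!", "body": "Let's set up your account."},
    {"id": "link-bank", "title": "Link a bank", "body": "Connect your first account."},
    {"id": "first-goal", "title": "Set a goal", "body": "Pick a savings target."},
]

def render_debug_gallery(steps):
    """Build one HTML page showing all tooltips in a flex row for copy review."""
    cards = "".join(
        '<div class="card" id="{}"><h3>{}</h3><p>{}</p></div>'.format(
            s["id"], s["title"], s["body"]
        )
        for s in steps
    )
    return (
        "<html><body>"
        '<div style="display:flex;gap:16px">' + cards + "</div>"
        "</body></html>"
    )

if __name__ == "__main__":
    # Dump or serve this at an unlisted route so reviewers see the whole flow at once.
    print(render_debug_gallery(ONBOARDING_STEPS))
```

Because the whole flow is on one page, copy reviews become a single link instead of a click-through session.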
@levelsio
@levelsio@levelsio·
Another great argument for running Claude Code on your VPS server and not your laptop is its battery use. The "Terminal" app here is all Claude Code sessions; ignore the Claude app. I have a MacBook Pro 13" M4, and with Claude Code running even on idle my battery dies from 100% to 0% in about 3 hours. It's insane. Claude Code on a server via Termius SSH sucks 20x less power for your laptop.
@levelsio tweet media
Replies 186 · Reposts 76 · Likes 2.2K · Views 237.9K
Alex
Alex@AgenticPM·
@heygurisingh “Holy shit… someone just x” seems pretty AI to me
Replies 0 · Reposts 0 · Likes 0 · Views 72
Guri Singh
Guri Singh@heygurisingh·
Holy shit... someone just open-sourced the cheat code for making AI writing undetectable.

It's called stop-slop and it strips every known AI tell from your prose automatically. No rewriting tools. No paraphrasers. No "humanizer" apps.

Here's how it works:
→ A single SKILL.md file you drop into Claude Code, Cursor, or any system prompt
→ Bans 50+ AI filler phrases your readers are already tired of
→ Kills structural clichés like dramatic fragmentation and binary contrasts
→ Forces sentence rhythm variation so your writing doesn't sound robotic
→ Scores your draft on a 50-point scale across 5 dimensions

The wildest part? It's not a tool. It's not a SaaS product. It's a markdown file with rules. That's it. And it works better than any $29/month "humanizer" on the market.

One file changes how your AI writes everything.

809 GitHub stars. MIT licensed. 100% Open Source.

(Link in the comments)
Guri Singh tweet media
Replies 52 · Reposts 89 · Likes 869 · Views 88.8K
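The rules-file approach described above can be approximated in a few lines. This sketch is illustrative only: the banned phrases, the rhythm heuristic, and the way the 50 points are split are all invented here, not the actual stop-slop rules (which ship as a SKILL.md of instructions for the model, not executable code).

```python
import re

# Toy stand-in for a "stop-slop"-style checker: a banned-phrase list plus a
# crude sentence-rhythm measure, combined into a 0-50 score. Phrase list and
# scoring split are invented for illustration.
BANNED = ["delve into", "in today's fast-paced world", "game-changer",
          "it's not just", "unlock the power"]

def score_draft(text):
    """Return 0-50: fewer AI-filler hits and more varied sentence lengths score higher."""
    lower = text.lower()
    hits = sum(lower.count(p) for p in BANNED)
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    spread = max(lengths) - min(lengths) if lengths else 0
    rhythm = min(spread, 25)              # up to 25 points for varied rhythm
    cleanliness = max(0, 25 - 5 * hits)   # up to 25 points for no filler
    return rhythm + cleanliness

print(score_draft("It's not just a tool. It's a game-changer."))
print(score_draft("Short one. Then a much longer sentence that wanders for a while before stopping."))
```

The first draft trips two banned phrases and has near-uniform sentence lengths, so it scores much lower than the second; that gap is the whole point of a rules-based tell detector.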
Alex
Alex@AgenticPM·
@ziwenxu_ After Meta watching your recordings through your glasses, not sure how comfortable we should be with this
Replies 1 · Reposts 0 · Likes 1 · Views 168
Alex
Alex@AgenticPM·
@AlexFinn Or you can use Claude Max and a ChatGPT Pro plan, get the best models at 1/50th of that price, and not have to spend $10k on Macs to use mid-tier open-source models
Replies 0 · Reposts 0 · Likes 1 · Views 121
Alex Finn
Alex Finn@AlexFinn·
If you have your OpenClaw working 24/7 using frontier models like Opus, you're easily burning $300 a day. That's $100,000 a year.

I have 3 Mac Studios and a DGX Spark running 4 high-end local models (Nemotron 3, Qwen 3.5, Kimi K2.5, MiniMax 2.5). They're chugging 24/7/365.

I spent a third of that yearly cost to buy these computers. I'll be able to use them for years for free.

On top of that, they're completely private, secure, and personalized. Not a single prompt goes to a cloud server that can be read by an employee or used to train another model.

I hope this makes it painfully obvious why local is the future for AI agents. And why America needs to enter the local AI race.
Alex Finn tweet media
Replies 432 · Reposts 165 · Likes 2.4K · Views 382.2K
Alex
Alex@AgenticPM·
@RoundtableSpace It’s not about the app; it’s about distribution, and about selling an app that fundamentally does not work
Replies 0 · Reposts 0 · Likes 1 · Views 78
0xMarioNawfal
0xMarioNawfal@RoundtableSpace·
AFTER CAL AI GOT ACQUIRED BY MYFITNESSPAL, THIS GUY VIBE CODED A SIMILAR APP HIS COMMENT WAS “APPS ARE STRAIGHT UP WORTHLESS”
Replies 18 · Reposts 1 · Likes 43 · Views 58.4K
Paco Cantero
Paco Cantero@PacocanteroW·
What's happening isn't product adoption. It's community formation. The difference matters because you manage them completely differently. Product adoption is about features and onboarding. Community formation is about identity and belonging. Most companies optimize for the first and accidentally kill the second. 80 meetups means they got the identity part right. Now the challenge is not manufacturing what grew organically.
Replies 1 · Reposts 0 · Likes 0 · Views 424
Alex
Alex@AgenticPM·
@koylanai The way to significantly boost signal to noise quality on X is following @koylanai
Replies 1 · Reposts 0 · Likes 1 · Views 173
Muratcan Koylan
Muratcan Koylan@koylanai·
There’s a reliable way to significantly boost deep research output quality without modifying the underlying agent. The problem is that when you feed a research query directly into a deep research agent, it starts cold and spends cycles on SEO slop and low-signal domains. You can just insert a Sourcer pre-processing step.

1. Sourcer Agent: reasoning set to High, web search enabled. Takes your research query and outputs two curated lists: priority sources (high credibility, domain expertise) and blocked sources (content farms, affiliate SEO, outdated repositories).
2. Injection: feed the Sourcer output directly into the researcher's system prompt as source constraints.
3. The Research Agent: starts with explicit source priors and spawns subagents that inherit the same constraints.

Deep research agents already have implicit source ranking, but they're reactive. By frontloading source curation, you shift the agent from exploration-heavy search to targeted knowledge retrieval. The planning phase generates better subqueries, and spawned workers inherit source priors from the parent context.
Madhav Singhal@madhavsinghal_

wow insane amount of slop affecting search and deep research outputs

Replies 5 · Reposts 1 · Likes 57 · Views 6.9K
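The Sourcer → Researcher hand-off above is mostly prompt plumbing, so it can be sketched without any particular agent framework. Everything below is a hypothetical shape: the prompt wording, function names, and example source lists are invented; the actual LLM call is left as a placeholder since the tweet doesn't tie the pattern to one API.

```python
# Sketch of the two-stage flow: a Sourcer pass curates sources, then its
# output is injected into the researcher's system prompt as hard priors.
# The actual LLM call is out of scope here; these helpers only build prompts.

def build_sourcer_prompt(query):
    """Prompt for the pre-processing Sourcer agent (wording is illustrative)."""
    return (
        "You are a source curator. For the research query below, output two "
        "lists: PRIORITY (high-credibility, domain-expert sources) and "
        "BLOCKED (content farms, affiliate SEO, outdated repositories).\n\n"
        f"Query: {query}"
    )

def inject_source_constraints(base_system_prompt, priority, blocked):
    """Prepend curated source lists so the researcher (and its subagents,
    which inherit the parent context) start with explicit source priors."""
    constraints = (
        "Source constraints (inherited by all subagents):\n"
        "Prefer: " + ", ".join(priority) + "\n"
        "Never cite: " + ", ".join(blocked) + "\n\n"
    )
    return constraints + base_system_prompt

# Example wiring: the Sourcer's (hypothetical) output becomes researcher priors.
system = inject_source_constraints(
    "You are a deep research agent.",
    priority=["arxiv.org", "official vendor docs"],
    blocked=["content farms", "affiliate SEO sites"],
)
print(system)
```

The design point is that the constraints live in the system prompt, not the user turn, so every spawned subquery inherits them for free.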
Alex
Alex@AgenticPM·
@amandaorson When you debug, are you doing this manually? I just have a Claude Code terminal instance open that operates on my OpenClaw. If anything goes wrong I ask my terminal instance to fix it. It’s a breeze; I don’t even use my brain
Replies 0 · Reposts 0 · Likes 1 · Views 257
Amanda Orson
Amanda Orson@amandaorson·
🦞 Update: my current experience, two weeks in, is roughly the same as Craig's. A few things are true at the same time:

1. I've spent more time in Terminal and learning command-line code than in my entire life combined, including using Claude Code prodigiously.
2. Setup is not easy. The initial install is, but getting OpenClaw to load (and operate!) skills is a constant effort in debugging. I have hooked up and debugged the installation of more APIs and .env files than ever before.
3. I am not an engineer, but am probably far more technical than 95% of non-engineers. OpenClaw is a steep learning curve.
4. This, I think, creates an opportunity both for engineers and for technical people who are willing to roll up their sleeves and learn how to use this tool, because the knowledge gap is going to be vast for a while. This is far from consumer-ready.
5. Despite the hurdles and learning curve, this is undeniably the future. Even when it's something that is just several simple cron jobs, having your agent do something for you autonomously while you're sleeping is appreciably faster and more productive than you having to direct every prompt or every action.

The "tipping point" will be when OpenClaw (or comparable technology) becomes approachable enough for the non-technical mass market to set up and instrument their own workflows without the steep multi-week learning curve.
Craig Hewitt@TheCraigHewitt

my current reality with OpenClaw: I want to use it more. I know it's the future. But it's so much less productive than just using Claude Code and Codex. Doesn't mean I'm not using it. And more importantly, I'm trying to build things with it. Make it more resilient. Make it more of a real business tool. But it's pushing a boulder up the hill. Those thinking that you just install it and have a 24/7 always-on agent doing tons of shit for you are misleading you. It's a ton of work, it breaks a lot, it forgets all sorts of shit. But it's the future. We're early; it's the right time to put in the reps.

Replies 45 · Reposts 16 · Likes 236 · Views 60.5K
Alex
Alex@AgenticPM·
I hooked up Codex to OpenClaw. I thought OpenAI was OK with OpenClaw. I asked my bot to set a reminder and got this —>
Alex tweet media
Replies 0 · Reposts 0 · Likes 1 · Views 73
Alex
Alex@AgenticPM·
@EXM7777 Manus is my secret weapon for making things easy when I'm too lazy to build with Claude Code
Replies 0 · Reposts 0 · Likes 1 · Views 261
Machina
Machina@EXM7777·
there's nothing perplexity computer does that manus can't do... they went ALL IN on frontend skills: apps look good but the moment you try to get something a little complex to work, it breaks
Replies 27 · Reposts 1 · Likes 120 · Views 13K
Alex
Alex@AgenticPM·
@jstelmarski55 @PromptLLM @ocallry @ocallry is right: the core action of the app, literally its name, calorie AI, doesn’t work. It’s not accurate. It has a 45-75% margin of error. It’s literally worthless
Replies 0 · Reposts 0 · Likes 2 · Views 46
Alex
Alex@AgenticPM·
@zivdotcat Do you have zero clue what Bloomberg is?
Replies 0 · Reposts 0 · Likes 0 · Views 10
dev
dev@zivdotcat·
Bloomberg makes ~$15B a year, ~$12B from the terminal. Bloomberg charges $30,000/yr per user for terminal access. Perplexity Computer literally one-shotted the terminal with real-time data within minutes using a single prompt.
ₕₐₘₚₜₒₙ@hamptonism

Perplexity just became the first AI company to truly go head-to-head with the Bloomberg Terminal... Using Perplexity Computer (with no local setup or single-LLM limitation), it was able to build me a terminal with real-time data to analyze $NVDA using Perplexity Finance:

Replies 360 · Reposts 647 · Likes 12K · Views 3.3M
Alex
Alex@AgenticPM·
@operationdanish The contrast statements indicate this is written by AI
Replies 0 · Reposts 0 · Likes 0 · Views 10
Dr Danish
Dr Danish@operationdanish·
We now have evidence that gentle parenting doesn’t work. Here’s an uncomfortable truth about parenting no one wants to say out loud: the data is not kind to gentle parenting.

According to teenagers, strict curfews, strict bedtimes, screen limits, device drop-off times, dedicated homework blocks, and sleepover restrictions IMPROVE relationship quality. And yes, parenting difficulty goes up. Of course it does. Leadership is harder than appeasement.

For the past decade we have been sold a watered-down, Instagram-friendly version of “gentle parenting” that often collapses into boundary avoidance, endless negotiation, and emotional processing without enforcement. Parents terrified of saying no because they do not want to rupture connection. But connection without authority is not connection. It is dependency.

When parents impose structure, the relationship improves. Teenagers report better parent-child relationship quality in homes with curfews and rules. Younger kids report better relationships in homes with screen limits and bedtimes. Even device drop-off times correlate positively.

Why? Because structure is not cruelty. Structure is love made visible. A bedtime says: your brain matters more than your entertainment. A screen limit says: your dopamine system is not fully developed and I will guard it until it is. A curfew says: your safety matters more than your social standing. That is not authoritarianism. That is caring.

Boundaries create friction. Friction creates growth. The parent absorbs the short-term discomfort so the child does not pay the long-term cost. Children do not experience well-calibrated limits as rejection. They experience them as stability. The human brain craves predictability. Predictability reduces anxiety. Reduced anxiety strengthens attachment. That is why relationship quality goes up.

Notice something else in the data. The strongest effects are around time structure. Bedtime. Homework. Devices. Outside play. These are environmental constraints. They scaffold executive function.

The winning formula is not tyranny. It is high warmth plus high structure. The modern failure mode is high warmth plus low structure. That is just abdication of responsibility wrapped in empathy. Children need leadership, not negotiation. They need adults who can tolerate their anger. They need boundaries that do not move every time emotions spike. They need someone whose prefrontal cortex is fully myelinated.

The harder path produces the stronger bond. Because when a child feels that someone is strong enough to hold the line, they relax. And relaxed nervous systems build durable relationships.
Dr Danish tweet media
Replies 723 · Reposts 4.6K · Likes 21K · Views 2.8M
Alex
Alex@AgenticPM·
@samuel_spitz Everyone’s landing page should be vibe coded now
Replies 0 · Reposts 0 · Likes 0 · Views 14
Alex
Alex@AgenticPM·
@mes28io @AlexFinn You don’t need local models; Nvidia’s dev site has Kimi K2.5 for free, just use that. This guy is selling engagement.
Replies 0 · Reposts 0 · Likes 0 · Views 27
Alex Finn
Alex Finn@AlexFinn·
I have built the future.

I'm now running 3 of the most powerful AI models in the world on my desk, completely privately, for just the cost of power. The 3rd 512gb Mac Studio is in (Apple reached out and lent me the third one! Thanks Apple!)

Here are the models I'll be running:
• Kimi K2.5 (600gb across all 3 studios via EXO labs)
• MiniMax 2.5 (120gb on one studio)
• Qwen 3.5 (220gb on one studio)
• GPT OSS 120B Heretic (60gb on one studio - completely uncensored 😈)

3 ultra-powerful models coding, writing, researching, reading your posts, 24 hours a day, 7 days a week. Nonstop. Running across 4 OpenClaws on 3 Mac Studios and a Mac Mini.

A few use cases I have set up:
• Kimi K2.5 reading feature requests for Creator Buddy and building out the feature requests autonomously. My own personal product manager.
• MiniMax 2.5 reading Reddit all day, looking for challenges to solve. Then building prototypes for me to review every morning. All autonomously.
• Qwen 3.5 hitting the X API every hour to see top trending posts in AI and vibe coding. Turning those into video scripts for me to review hourly (this has already built me one script with over 100k views on YT).

Unlimited economic power just sitting there. No cloud APIs. No crazy API bills. No tech executives reading my logs. Totally customizable and private.

This is the future. I'm just showing it to you before it arrives
Alex Finn tweet media
Replies 625 · Reposts 368 · Likes 4.6K · Views 732.2K
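The setup above is easy to sanity-check with back-of-envelope arithmetic. The footprints below are the numbers claimed in the post (treated as GB of weights, which is the post's framing, not a verified measurement), and the 512 GB-per-Studio capacity is from the post as well.

```python
# Quick memory-budget check of the claimed local-model setup: do the quoted
# model footprints fit in the quoted hardware? All sizes are the post's
# claims, in GB of unified memory.
models = {
    "Kimi K2.5": 600,            # sharded across all 3 Studios via EXO
    "MiniMax 2.5": 120,
    "Qwen 3.5": 220,
    "GPT OSS 120B Heretic": 60,
}
studios = 3
capacity_per_studio = 512        # 512 GB unified memory per Mac Studio

total_needed = sum(models.values())          # 1000 GB of weights
total_capacity = studios * capacity_per_studio  # 1536 GB available

print(f"{total_needed} GB of weights across {total_capacity} GB of unified memory")

# A 600 GB model exceeds any single 512 GB Studio, which is exactly why it
# has to be split across machines (the EXO-style sharding the post mentions).
assert models["Kimi K2.5"] > capacity_per_studio
```

So the pool fits with headroom in aggregate, but only because the largest model is sharded; no single box could hold it.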
Alex
Alex@AgenticPM·
@weswinder I think they’re offering a subscription model that they can’t afford to sustain. But they should just be honest and say they need to limit the usage
Replies 1 · Reposts 0 · Likes 3 · Views 185
Alex
Alex@AgenticPM·
@adocomplete I made a terminal shortcut superclaude for this
Replies 1 · Reposts 0 · Likes 1 · Views 387
Ado
Ado@adocomplete·
claude --dangerously-skip-permissions
Replies 27 · Reposts 2 · Likes 91 · Views 11.4K
Alex
Alex@AgenticPM·
@abhijitwt This is vibe demanding
Replies 0 · Reposts 0 · Likes 0 · Views 216
Abhijit
Abhijit@abhijitwt·
this is not vibecoding, right?
Abhijit tweet media
Replies 356 · Reposts 69 · Likes 4K · Views 275.4K
Alex
Alex@AgenticPM·
@godwinbabu 1 month is 5 years in AI time
Replies 0 · Reposts 0 · Likes 0 · Views 436
Godwin
Godwin@godwinbabu·
Been playing with #OpenClaw since its ClawdBot days, right as it was catching fire. It is an amazing application: very addictive, infinitely extensible, and customizable. Some of the ways I am using it:

- Targeted morning briefings on news, stocks & futures, AI developments from X/Reddit, plus my schedule, tasks, and emails. This basically replaced doom-scrolling in bed and actually feels productive.
- Coding: last year I moved from Cursor to Claude Code and the terminal. Nowadays I use Telegram to ask the agents to code, create PRs, ask another agent to review, and iterate! Not sure this is “productive,” but it’s a lot of fun.
- Email & Calendar: checking, summarizing, and pruning emails and automatically capturing action items. I can ask the agent to research a proposal from an email, ask questions, and be way more informed, all from Telegram.
- Obsidian: I suck at capturing and organizing notes. The agents handle them now, and they interview me to capture daily notes and file everything into Obsidian. It’s backed by iCloud, so I can read them on my iPhone.
- Never used Telegram before this month; now I am on it constantly. The only contacts I chat with there are my bots!
- Every day is a surprise (so far the pleasant kind) with OpenClaw!

I really think OpenClaw unlocked the framework for the ultimate "personal AI agent”. You do need a bit of technical know-how to use it today, but I’m sure someone is developing a non-nerd version, and when that comes out, this will truly hockey-stick. Huge kudos to @steipete
Replies 37 · Reposts 34 · Likes 494 · Views 64.5K
Folktech AI LLC
Folktech AI LLC@FolktechAI·
Actually, I do disagree with this, simply because most people should be running a local model. It costs less, there are no tokens to worry about, no rate limits, it can work even more natively with your computer, and everything is completely private. Also, there's no need for data centers. There's no ridiculous pull on the electrical grid, and most importantly, local models can handle 99% of everything. The average user is going to query..
Replies 5 · Reposts 0 · Likes 8 · Views 2.1K
Morgan
Morgan@morganlinton·
I keep seeing a lot of confusion around using a Mac Studio to run models locally.

First, you probably don't need to run models locally; 99% of people should just be using Opus 4.6 or GPT 5.3 Codex all the time.

If you're in the 1% that likes to tinker, is crazy fascinated by LLMs, and wants to really learn more, optimize, change settings, try every open-source model you can find, etc., then I do think the Mac Studio is the best bang for your buck. Unified Memory was a big breakthrough by Apple, and it does give them an edge over PCs here from what I can tell. But I'm not an expert, still learning myself.

I keep hearing people say things like, "I can't spend $10,000 on a Mac Studio." And you shouldn't. That would be crazy IMO. Honestly, you can buy a base model Mac Studio for $1,999 and that will be fine to start. You won't be able to run the most beefy models, but you can run plenty of models and still tinker away. I have a base model M1 Max Mac Studio I've had for years; I test new models on it, tinker, have fun, and am not spending $10,000.

And if you want to really dial it up and run bigger models locally, I think the price point is $4,000, not $10,000 - here's what you'd get (see specs in pic below).

Oh, and a quick reminder: I'm not some kind of LLM expert. I'm sure some will chime in here in the comments and either tell me I'm totally wrong, or maybe I got this right. I'm learning, just like many of you. But from what I know now, 128GB unified memory in a Mac Studio is plenty to tinker with. Of course it won't replace Opus 4.6 or Codex, but I don't think that's the point of running models locally anyway, is it?
Morgan tweet media
Replies 56 · Reposts 10 · Likes 258 · Views 42.8K
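Morgan's "128GB is plenty to tinker" claim follows from a standard rule of thumb: weight memory is roughly parameter count times bytes per parameter at a given quantization, plus some runtime headroom. The sketch below uses that rule; the 20% overhead factor is a loose assumption of mine, not a figure from the post.

```python
def est_memory_gb(params_b, bits=4, overhead=1.2):
    """Rough weight-memory estimate for a local LLM: parameters (in billions)
    × bits/8 bytes per parameter, padded ~20% for KV cache and runtime
    overhead. The 1.2 factor is a loose assumption, not a measured value."""
    return params_b * (bits / 8) * overhead

# A 70B-parameter model at 4-bit quantization: ~42 GB, comfortable on a
# 128 GB unified-memory Mac Studio.
print(round(est_memory_gb(70), 1))

# The same model at full 16-bit precision: ~168 GB, too big for 128 GB,
# which is why local tinkering leans so heavily on quantization.
print(round(est_memory_gb(70, bits=16), 1))
```

This is also why the "$4,000, not $10,000" framing holds: quantized mid-size models fit in 128 GB, and only full-precision frontier-scale weights force you into multi-machine budgets.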