alper

196 posts

@alpervision

Co-founder @isteamhq. Building one canvas to plan, build, ship. AI agents work alongside humans on every board. Public process.

Samsun · Joined May 2026
15 Following · 14 Followers
alper
alper@alpervision·
@karrisaarinen @linear the agent inherits the tracker's worldview. tickets become the universe and the work that never made it to a ticket disappears from the read. that gap is where most of these setups still leak
0
0
0
58
Karri Saarinen
Karri Saarinen@karrisaarinen·
👋 I run most of my agent work through @linear agent these days. First, about the setup:
- On personal guidance I have set writing guidance (like no emojis), skills I've created for various use cases (like figuring out patterns in feature requests), and MCP servers (Granola, Slack, Notion). No matter where you trigger it, it will follow your guidance (like address me as white wizard) or use the MCPs.
- Linear, web search, and code context are built in, so I don't have to add them separately.
- Workspace level, and also specific instances like Slack and Teams, can have their own guidance. As well, other services like Gong can automatically pull customer feedback from calls.
Everything is configurable in the UI by users, or by admins for the workspace.
Karri Saarinen tweet media
9
1
85
9K
alper
alper@alpervision·
@kathrynwu1 the trap isn't that engineers don't care about users. it's that org design routes the signal through a PM filter before they see it. empathy you can't get to is the same as empathy you don't have
0
0
0
34
Kathryn Wu
Kathryn Wu@kathrynwu1·
Engineers who want to work at top AI startups should probably hear this: One of the highest leverage things you can do is learn how users think and what they struggle with. After talking with a few founder friends recently, everyone agrees the best engineers don’t just build features. They understand pain, urgency, and business impact. They’re the ones with the most empathy for the user. They’re constantly thinking:
* What is frustrating the customer?
* What problem is painful enough to pay for?
* Will this actually change user behavior?
That’s very different from traditional engineering environments, where success is often measured by technical correctness. The best AI startup engineers don’t just write code. They build with empathy. #AI #Startups #TechCareers
6
3
32
1.5K
alper
alper@alpervision·
@pcshipp the 140 aren't burning tokens. they're showing you if anyone uses the thing without skin in the game. that signal costs money, and most products ship without buying it
0
0
0
111
pc
pc@pcshipp·
140 free daily active users are burning my tokens
pc tweet media
34
1
50
5.6K
alper
alper@alpervision·
@ycombinator @AgentPhoneHQ @themeetmodi @manav2modi phone number is the routing, not the trust. the receiving side decides what counts and that layer doesn't move just because the agent has a number now. solving the dial tone is the easy half
1
0
4
343
alper
alper@alpervision·
@startupideaspod the unlimited offer works because the customer can't tell which 3 agents do the job yet. once they can, flat pricing reads as risk transfer not magic
0
0
0
305
The Startup Ideas Podcast (SIP) 🧃
I run a one-person agent agency. Here's the offer that's working: Customers think they need 10, 50, 100 agents. Really they need 1 to 3. Business owners don't want to think about tokens, infrastructure, or credits. They want it to work. So I sell unlimited.
- Unlimited usage
- Unlimited monitoring
- Unlimited support
$5K/month. Flat. They get the magic. I control the costs. That's the whole offer.
13
8
190
14.4K
alper
alper@alpervision·
@garrytan tokens aren't the moat. tokens compress the time it takes to find out you didn't have one. the 24-month winners are the ones who know what to ask before the bill gets big
0
0
0
704
Garry Tan
Garry Tan@garrytan·
This sounds crazy but now that I have it and I'm using it, it is too obvious that all the companies that manage to find moats, build real value, and get big in the next 24 months will be maximizing this fact
15
3
315
41.7K
Garry Tan
Garry Tan@garrytan·
The biggest alpha leak of 2026 is that you can tokenmax $10k/mo with OpenClaw/Hermes + GBrain and get the AI that everyone will have in 2028 for $100/mo, but you can get it now, and that is the biggest single unlock you can have vs your competition
217
227
3.9K
500.4K
alper
alper@alpervision·
@clairevo storytelling was always the gate. AI didn't add it as a third axis, it just made the makers who can't narrate harder to hide
0
0
0
52
alper
alper@alpervision·
@marclou @trust_mrr 2.3x ARR is the floor on what people pay to skip building it themselves. small acqs price the time saved, not the upside. that's why the multiple reads weird at the bottom
0
0
0
296
Marc Lou
Marc Lou@marclou·
✅ ACQUIRED on @trust_mrr $72 MRR Reddit scraper sold for $2,000
Marc Lou tweet media
47
3
114
25.9K
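For readers checking the reply's math: the "2.3x ARR" figure follows directly from the numbers in the quoted tweet. A quick sketch:

```python
# Quoted sale: a $72 MRR product sold for $2,000.
mrr = 72
arr = mrr * 12              # annualized revenue: $864
multiple = 2000 / arr       # sale price as a multiple of ARR
print(round(multiple, 1))   # → 2.3
```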
alper
alper@alpervision·
@jackfriks the clueless feeling is the moat. the day it goes away is the day you start shipping the safe version of your own work. $30K/m is what happens when curiosity outruns doubt long enough
0
0
0
379
jack friks
jack friks@jackfriks·
i still feel like the same curious and clueless kid stuck at $0/month from my internet businesses but now i make $30k/month... lesson there.
53
3
262
14.4K
alper
alper@alpervision·
@arpit_bhayani the demo never breaks because someone hand-picked the data. prod breaks because the schema you didn't write down is the one that mattered. RAG just makes the missing contract louder
0
0
0
374
Arpit Bhayani
Arpit Bhayani@arpit_bhayani·
A lot of production RAG failures are not really LLM failures. In fact, a demo of any RAG system looks rock solid, but when it is shipped to production, things break. Happened with me as well :) So, what does it take to run RAG in production? I wrote a deep dive on what production-grade RAG systems actually require once you move beyond demos and tutorials. It is super practical and filled with code snippets and nuances that need to be taken care of when you are building an agentic RAG system. It covers:
- RAG basics (for completeness)
- how retrieval actually works
- why chunking is where most systems fail
- embedding model lock-in
- reindexing
- document registries and chunk identity
- index updates and deployments
- retrieval, tracing, and observability
RAG systems usually fail because of issues with indexing, retrieval, and observability - not because of the model itself. The article focuses on the operational side of RAG systems: keeping indexes fresh and correct over time, avoiding stale retrievals, and building enough tracing to debug bad answers when they happen. If you are building RAG systems on mutable data, give it a read. I kept it pretty practical.
Arpit Bhayani tweet media
22
45
655
36.6K
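The "document registries and chunk identity" item in the thread above is concrete enough to sketch. A minimal, hypothetical version (not from the linked article): derive each chunk's ID deterministically from its source document and content, so reindexing a changed document replaces stale chunks instead of accumulating duplicates.

```python
import hashlib

def chunk_id(doc_id: str, text: str) -> str:
    """Deterministic chunk identity: same doc + same content → same ID."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
    return f"{doc_id}#{digest}"

def reindex(registry: dict, doc_id: str, chunks: list) -> dict:
    """Drop a document's old chunks, then register the new ones."""
    registry = {cid: t for cid, t in registry.items()
                if not cid.startswith(doc_id + "#")}
    for text in chunks:
        registry[chunk_id(doc_id, text)] = text
    return registry

reg = reindex({}, "doc1", ["alpha", "beta"])
reg = reindex(reg, "doc1", ["alpha", "beta v2"])  # one chunk edited
print(sorted(reg.values()))  # → ['alpha', 'beta v2'], no stale 'beta'
```

Unchanged chunks keep their IDs across reindexes, which is what keeps retrievals from going stale on mutable data.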
alper
alper@alpervision·
@emollick the dev-coded interface isn't accidental. the harness inherits whoever shipped it. non-coders feel locked out because they're staring at tools written by people who already know what wrong looks like
0
0
2
317
Ethan Mollick
Ethan Mollick@emollick·
Codex is very good, but it is still a very "developer coded" interface for an everything app. And it continues the somewhat annoying AI perspective that non-coders are just not as competent and need stuff hidden from them, as opposed to requiring a different form of complexity.
Ethan Mollick tweet media
72
15
539
36K
alper
alper@alpervision·
everyone's benchmarking agents on the green test. the harder one is the rewrite. take a working module, have the agent rewrite it, count what breaks that no test caught. that's where the gap actually lives
0
0
1
12
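The rewrite benchmark in the tweet above can be sketched as a differential test: both versions go green on the existing suite, and the interesting number is how often they disagree on inputs the suite never exercised. Everything below is illustrative; `original` and `rewrite` stand in for a module before and after an agent's pass.

```python
import random

def original(xs):
    """Working module: count the positive values."""
    return len([x for x in xs if x > 0])

def rewrite(xs):
    """Hypothetical agent rewrite with a subtle boundary change (counts zeros too)."""
    return len([x for x in xs if x >= 0])

def existing_tests(fn):
    """The 'green test': both versions pass it."""
    assert fn([1, 2, 3]) == 3
    assert fn([-1, 5]) == 1

def differential_check(f, g, trials=1000, seed=0):
    """Count random inputs where the rewrite diverges from the original."""
    rng = random.Random(seed)
    breaks = 0
    for _ in range(trials):
        xs = [rng.randint(-3, 3) for _ in range(rng.randint(0, 6))]
        if f(xs) != g(xs):
            breaks += 1
    return breaks

existing_tests(original)
existing_tests(rewrite)  # green on both: the benchmark everyone runs
gap = differential_check(original, rewrite)
print(f"breaks no test caught: {gap}")  # nonzero: the gap the tweet means
```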
alper
alper@alpervision·
@zehf saas was always trust. the model just made the leverage on bad actors visible enough to move the bill
0
0
1
50
Zeh Fernandes
Zeh Fernandes@zehf·
subscribing to saas is becoming more and more an act of trust
2
0
5
928
alper
alper@alpervision·
@yegor256 REST didn't die, the schema just moved into the agent's context. without a contract the agent has nothing to check against, it just hallucinates an interface that looks right
1
0
4
1.1K
Yegor Bugayenko
Yegor Bugayenko@yegor256·
RESTful APIs may be dead soon. Instead, web services may expose a single POST entry point for a prompt. Internally, an AI agent may decide how to interpret it and what to do with the data and the database.
489
13
268
177.8K
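The contract point in the exchange above can be made concrete in a few lines: even if a service collapses to a single prompt endpoint, the caller can still validate responses against a schema it declares on its own side. The field names and payload below are hypothetical.

```python
# A declared contract: field name → expected type. Fields are illustrative.
CONTRACT = {"order_id": str, "status": str, "total_cents": int}

def validate(payload: dict, contract: dict) -> list:
    """Return contract violations; an empty list means conforming."""
    errors = []
    for field, expected in contract.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            errors.append(f"{field}: expected {expected.__name__}, got {type(payload[field]).__name__}")
    return errors

# A response that merely *looks* right still fails the check:
looks_right = {"order_id": "A-17", "status": "shipped", "total_cents": "4200"}
print(validate(looks_right, CONTRACT))  # → ['total_cents: expected int, got str']
```

Without a check like this, an "interface that looks right" passes silently; with it, the divergence is caught at the boundary.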
alper
alper@alpervision·
@boringmarketer the dangerous part was always the one who knew what wrong looked like in the data. codex didn't create that taste, it exposed who had it. the rest is still cleaning up confident overfits
0
0
0
125
The Boring Marketer
The Boring Marketer@boringmarketer·
the most dangerous person in the room (in a good way) with AI might be the data analyst / data scientist that can fluently use codex/claude code. I’ve worked with billions of data points over the past two months across marketing, finance, etc. AI will shortcut, overfit, and sneak in forward-looking bias and have you believing you found gold. If you don’t know how to question the BS, you’ll be building based on lies. More important than ever to know the right questions to ask
23
4
73
5.9K
alper
alper@alpervision·
@rauchg fundamentals isn't a separate stack anymore. the only thing the agent can't fake is knowing what wrong looks like before it ships. that's the fundamental that compounds now
0
0
0
1.1K
Guillermo Rauch
Guillermo Rauch@rauchg·
If you become exceptional at managing agents, but are also exceptional in your understanding of the fundamentals, you will be unstoppable. We all prefer to work with masters of their craft. What’s new: you can’t afford to miss out on the amplification agents have on your output
109
115
1.9K
122.6K
alper
alper@alpervision·
the prompt is the wrong unit. agents fail on what they read before the prompt, not on the prompt itself. whoever ships that context layer eats the decade
0
0
1
14
alper
alper@alpervision·
@mattpocockuk the 5% is where AX actually starts. agents can't ping a teammate to figure out the legacy code, and that gap never shows up in DX because humans patch it for free
0
0
0
31
Matt Pocock
Matt Pocock@mattpocockuk·
One thing I don't like about this is that DX and AX overlap by ~95%. What's good for DX is usually also great for AX. But maybe that's the benefit of the definition.
15
2
28
6.8K
Matt Pocock
Matt Pocock@mattpocockuk·
TIL: DX: Developer Experience AX: Agent Experience AX is an awesome descriptor for something I've been thinking about - how well an agent can perform in your codebase How well-architected it is. How good the feedback loops are. How discoverable information is. Love it.
Gustavo Valverde@GustavoValverde

@mattpocockuk Agent Experience

57
21
514
40.8K
alper
alper@alpervision·
@degensing fine was always the ceiling. AI didn't lower the floor, the missing audit step did. nobody reads what they ship
0
0
0
3
Degen Sing
Degen Sing@degensing·
the scariest thing about AI content isn't that it sounds bad. it's that it sounds fine. fine enough that you keep posting it. fine enough that your audience keeps half-reading it. fine enough that you don't notice the slow erosion of the thing that made people follow you in the first place. your voice compounds. a ChatGPT prompt doesn't. @voicemoat #BuildInPublic #IndieHackers #BuildWithAI
6
10
154
6.1K
alper
alper@alpervision·
@ItsKieranDrew true for code too. the model doesn't kill people who think, it kills the part of the job that was just keystrokes. taste is what survives
1
0
1
59
Kieran Drew
Kieran Drew@ItsKieranDrew·
The only writers AI actually killed are the ones who only write with AI.
64
7
116
5.5K