Peter Choi

2.1K posts


@pitachoi

I talk about AI agents & AI-native ops | eng @andocorporation | eng @ oneshop (YC S21) | cs @Columbia

Joined January 2021
365 Following · 482 Followers
Pinned Tweet
Peter Choi@pitachoi·
Slack search is somehow still stuck in 2010. No semantic search, no vector search, no personalization. Zero awareness of who you are, what team you're on, or what you're working on. Just good old, raw, full text search. Type one wrong letter and you may as well have hallucinated that conversation with your PM from last Tuesday. I know the message exists. Slack knows it exists. But we both have to pretend it's lost because I can't remember the exact phrase someone used. They did add an AI feature that gives you some sort of summarized response…but it feels completely bolted on. We have AI that can summarize entire codebases and write production code, but Slack still can't find a conversation from yesterday. The gap between AI capabilities and what Slack actually does is wild.
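The full-text vs. semantic gap the post describes can be sketched in a few lines; the messages and query below are invented, and word-overlap scoring is only a crude stand-in for real embedding similarity:

```python
messages = [
    "deploy is blocked on the migration review",
    "lunch at noon?",
    "PM said the launch date moved to Tuesday",
]

def full_text_search(query, msgs):
    # Slack-style exact substring match: one wrong letter and you get nothing.
    return [m for m in msgs if query.lower() in m.lower()]

def overlap_search(query, msgs, threshold=0.2):
    # Crude stand-in for semantic/vector search: score messages by word
    # overlap (Jaccard) instead of requiring the exact phrase. A real system
    # would compare embedding vectors, so even "lauch" would match "launch".
    q = set(query.lower().split())
    scored = [(len(q & set(m.lower().split())) / len(q | set(m.lower().split())), m)
              for m in msgs]
    return [m for score, m in sorted(scored, reverse=True) if score >= threshold]

print(full_text_search("lauch date moved", messages))  # typo -> []
print(overlap_search("lauch date moved", messages))    # still finds the message
```

The exact-match search returns nothing for the one-letter typo, while even this naive overlap score surfaces the right message; an embedding-based index would do strictly better.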
Peter Choi@pitachoi·
@zeeg zero puns is criminal negligence imo
David Cramer@zeeg·
The fact that datadog never named a project Good Boy is embarrassing tbqh
Peter Choi@pitachoi·
@unclebobmartin maybe the flip side of "tests are documentation." when the tests document current behavior instead of correct behavior.
Uncle Bob Martin@unclebobmartin·
Mutation testing has a dark side. Not only does it consume rather large amounts of CPU and wall time; but it makes it much more difficult to remove old behavior and replace it with new, "better", behavior. Those extra tests do their job of stabilizing the behavior very well -- perhaps a bit too well. Caveat Emptor.
Darius Dan@dariusdan·
Want to be a designer? Pick up a pen. Use paper. Grab a camera. Learn Figma. Don't skip those. Because no prompt box will ever make you one.
Peter Choi@pitachoi·
@sqs This is why usage based pricing creates such weird dynamics
Quinn Slack@sqs·
An uncomfortable truth about building agents/models: By default, your most lucrative, most-smitten customers will be those using intricate out-of-band techniques that are exorbitantly expensive and probably net negative (but that they love). It's a very weird incentive. You can't and don't want to indulge this. There's nothing wrong with experimentation, but if you saw what every agent company sees, you'd know this goes way beyond experimentation. Amp tries really hard to prevent this: limiting long context, showing prices, not recommending swarms or loops prematurely, strongly advising against big MCPs, killing features that have high usage but that aren't worth it anymore, and just generally staying away from any hype train we don't have a good gut feeling about. Pi and OpenCode are also particularly good and outspoken here. But if you have growth targets to hit, investors to pitch, and salespeople to keep happy, or if you didn't start this way from day 1, I can see it being tricky. At Amp, we're profitable, don't have salespeople, and have no sales/growth targets to hit, so we have it relatively easy. I often wonder what this tension is like inside other companies building agents. (And for the record: if you've shown me your Amp workflow and I haven't told you this directly, this post is not about you. :)
Thorsten Ball@thorstenball

Lately, whenever I open this app and see the latest tricks, and hacks, and notes, and workflows, and spec here and skill there, I can't help but think: All of this will be washed away by the models. Every Markdown file that's precious to you right now will be gone.

Peter Choi@pitachoi·
@nbaschez If a startup method actually worked reliably everyone would use it and then it wouldn't work anymore. I think deviation from consensus is the whole point.
Chubby♨️@kimmonismus·
What has repeatedly surprised and impressed me is how well ChatGPT maintains its memory across different chats. It automatically refers back to topics I've already discussed, and questions I ask days later are correctly placed in context and related to the topic I've already discussed, without having to revisit it. A concrete example: In preparation for the interview with Kari Briski from NVIDIA, I did some fact checks, and ChatGPT automatically said, "Ah, it's about today's interview; in that context, the answer is..." That's a real "wow" moment for me. It feels much better than it did a few months ago.
Peter Choi@pitachoi·
@amrishrau probably right. What's left is either building or selling.
Amrish Rau@amrishrau·
Product Management will see a massive change. PMs will become a lot more technical or they will become product marketing managers. The current role won’t exist.
Peter Choi@pitachoi·
@ivanburazin Right. The take that agents will just build what they need assumes agents operate in isolation
Ivan Burazin@ivanburazin·
Agents won't vibe-code a new Slack every time they need to communicate. They'll use the same Slack because the other agent's team also uses Slack. Traditional SaaS will survive because standards don't get disrupted by probabilistic code generation. Network effects and standardization still matter.
Peter Choi@pitachoi·
@omooretweets ChatGPT gets 10 minutes of your attention per session, sometimes multiple times a day. If they can monetize that without making the experience worse, it's a completely different kind of business.
Olivia Moore@omooretweets·
A big story that most people are missing in the AI race for the consumer (ChatGPT vs Claude) is ads. Right now, most consumer AI revenue is coming from power users who are willing to pay high-cost subscriptions. This currently skews positive for products like Claude - but this will not be the end state. Google makes ~$460/user/year in the U.S., mostly on ads. Meta makes around ~$250. I would argue ChatGPT’s ad-based ARPUs will be even higher as they will ultimately have deeper / more frequent user engagement. Even at the $460 level - monetizing everyone in the U.S. via ads is $152 billion in annual revenue. By contrast, if you’re able to monetize even 5% of the population on a $200/month subscription (which is a stretch!), that’s only $40 billion 🤔 I suspect this will be even more drastic outside the U.S. where users are even less willing or able to pay directly for subscriptions. And, the earliest data from a very small rollout shows ChatGPT ads are already outperforming Meta in effectiveness - this just gets better over time. TL;DR - I would not count ChatGPT out on consumer AI revenue. Once ads start working, that can quickly become a massive machine.
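The revenue arithmetic in the post checks out; here is a quick sanity check, assuming a U.S. population of roughly 330M (the population figure is my assumption, not stated in the post):

```python
us_population = 330_000_000  # assumed, not stated in the post

# Ad-based ceiling: Google's ~$460/user/year applied to everyone in the U.S.
ads_revenue = 460 * us_population
print(f"${ads_revenue / 1e9:.0f}B")  # -> $152B, matching the post

# Subscription alternative: 5% of the population at $200/month
subs_revenue = 0.05 * us_population * 200 * 12
print(f"${subs_revenue / 1e9:.0f}B")  # -> $40B, matching the post
```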
Peter Choi@pitachoi·
@kmr_dilip yes, if you're comfortable with what you know then you're probably only operating inside your existing skill set.
Dilip Kumar@kmr_dilip·
If you're looking to join a startup, you should know that you’re not there to learn. You’re there to be useful and learning is a side effect. No one is coming to train you. You have to figure it out. If you need permission to do things, you’re already too slow. If you see a problem and walk past it, you just accepted mediocrity. If you’re not embarrassed by how much you don’t know, you’re too slow. If you’re replaceable, you didn’t push hard enough. The best people make themselves impossible to ignore.
scott belsky@scottbelsky·
thinking: products that help humans get credit for the work accomplished by agents they supervise in the enterprise will have better adoption than agentic solutions that do the work instead of humans. credit feeds ego, drives adoption...and accountability.
Peter Choi@pitachoi·
@tankots easier to imagine hypothetical users than admit you're not one of them.
Tanay Kothari@tankots·
conviction almost killed our company. for 2 years, we built tech that no one else had ever attempted. but the smartest thing we ever did was walk away from it. my co-founder, sahaj, and i had spent years building a wearable device that could read your brain. the idea: control your phone or computer without saying a word out loud. no awkward "hey siri" in public. just think it, and it happens. the technology was working. but when we finally connected it to siri, alexa, and chatgpt to test it out, we had to face a hard truth: none of us actually wanted to use our own product. walking away from two years of work is one of the hardest things i've done as a founder. but here's what that moment actually taught me: the best product decisions come from humility, not conviction. you can spend all the time you want designing something perfect. but the most valuable thing you can do is ship a quick experiment, feel it yourself, and be willing to acknowledge the hard truths. that's how we found wispr flow. not through a grand vision. but from walking into our own office one day and seeing half the team talking to their computers through $10 mics and realizing: the behavior change was already happening. we just had to get out of the way and build around it. if you're building something and wrestling with when to stay the course versus when to change direction, i think you'll find something useful in this one. link to the full podcast in comments!
Unemployed Capital Allocator
I think one sneaky aspect of LLM coding that is under-discussed is just how bad the code has to be before it appears broken to the casual observer.
Archie Sengupta@archiexzzz·
i love reverse engineering. i enjoy breaking down systems i haven't built myself, just to see how others solve problems differently than i would. that's why hacking comes naturally to me. for example, i've grepped through client-side electron apps (bundled js) from $1b companies to understand their approach to challenges i was facing, then implemented similar solutions, and then improved on top of that. or to understand how a particular agent works or what unique idea they used. this is especially helpful for hard problems: call me lazy, but there's no point banging my head against a problem that would take me 5 days to solve when i can tear down their engineering decisions, puzzle the pieces together, and adapt what works. it saves time - larger orgs (than yours) have already solved it - and lets you replicate with half the effort to ship faster.
Peter Choi@pitachoi·
@IsaacKing314 sometimes the confusion is the funnel 😂 If you understood it you wouldn't need to talk to sales.
Isaac King 🔍@IsaacKing314·
For years I found SaaS websites confusing. Why do they put only bland corpospeak on there? How is anyone supposed to know what they're actually selling? Ah well, these websites aren't targeted towards individuals like me anyway. I'm sure it makes sense in the business world. Well, as someone now in the business world trying to find vendors for my company, I can report that... I still have absolutely no idea what these companies are selling.
Peter Choi@pitachoi·
@michaelfreedman The irony of AI companies using interviews that could be solved by their own products
Mike Freedman@michaelfreedman·
State of the industry: Leading AI companies still using LeetCode-style algorithmic interviews for systems PhDs. 🤯
Peter Choi@pitachoi·
@sahill_og I like the 2 hours. it's enough time to see how someone scopes/compromises, and you can also see their editing process, which is sometimes more revealing than the initial code they come up with.
Sahil@sahill_og·
Job interviews in 2026 should just be: "Open your laptop. Build something in 2 hours. We'll watch." No leetcode. No "tell me about yourself." No "where do you see yourself in 5 years." Just build. That's the whole interview.
Peter Choi@pitachoi·
@RhysSullivan skills are genuinely useful for some things, but I do think some people/teams are cramming use cases that may not necessarily fit.
Rhys@RhysSullivan·
skills is still not sitting right with me as a concept. i think it's because companies rushed to them as the next big thing, as happens with all ai things now. everyone is turning their docs into skills, but it's recreating all the issues (authority, up-to-dateness) docs solved
Peter Choi retweeted
Peter Yang@petergyang·
The cost of not tinkering and exploring has never been higher.
Peter Choi@pitachoi·
@zendadddy The curious ones were always building leverage they couldn't cash in yet
Frank ☼ Bach@zendadddy·
designers used to get shunned for personal projects, side-quests etc. give yourself 100% to the company and don't think outside. now with AI, those who kept making things on the side (the curious ones, tool nerds, obsessive ones) are about to THRIVE. everything’s open now. it's all possible, what are you going to build?