Cody Boyte

2K posts

@codyboyte

Leading a SaaS product team. Prev marketing and sales engineering leader. Plays basketball.

NJ · Joined October 2008
2.8K Following · 905 Followers
Cody Boyte@codyboyte·
@Molson_Hart Didn’t really have an expectation. I grew up playing with a few guys who played pro in Europe and they weren’t much faster than me, they could just shoot much better and see the court better. He would have made them look silly. Better on every metric.
molson 🧠⚙️@Molson_Hart·
We cannot conceive how much better top 10 experts are.

Height is normally distributed, which means that the taller you are relative to the average, the likelihood that someone else is taller than you shrinks not only increasingly, but almost exponentially. Said another way, you've probably met someone who is 1 foot taller than you are, but you've never met someone who is twice your height. 3 times? Get out of here.

Because we walk around on this earth and meet many people, some taller, some shorter, it's pretty easy for us to conceptualize how someone could be significantly taller than us, perhaps 1.2x as tall or 1.3x as tall, etc. But it is not possible for us to conceptualize how much better an NBA player is than we are at basketball. Why? They could be 1000x as good. How is that possible? They're just 1.5x as tall.

It's because their basketball skill is the product of multiple normally distributed traits. Each trait, by itself, concentrates around an average, but basketball skill isn't just about how tall you are. It's about:
- speed
- strength
- work ethic
- experience
- visual pattern recognition
- spatial awareness
- dexterity

The chance that someone is twice as tall as you could be 0.00001%. But if basketball skill is the multiplicative product of 8 traits, then all they need to be is 9% better than you are to be twice as good at basketball, because 1.09^8 ≈ 2. And the chance that someone is 9% better than you at 8 things is way more likely than someone being twice as tall as you, but in basketball world, their effective skill output is the same. (And remember that these traits can be positively correlated, making this even more likely!)

Individual traits are normally distributed, but skills which are the product of many traits are not: they have fat tails, meaning that the chance that someone is 10x or 100x better than you is not vanishingly small. It can happen. Elon Musk probably has 1,000,000x as much money as you do. LeBron James might legitimately be 1,000,000x as good as you are at basketball. We can't wrap our minds around it, but it very well might be true.
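The multiplicative-traits argument is easy to check numerically. A minimal sketch (the trait count, distribution, and spread are illustrative assumptions, not measured data):

```python
import random

random.seed(42)

N_TRAITS = 8        # height, speed, strength, work ethic, etc.
POPULATION = 100_000

def skill() -> float:
    # Each trait ~ Normal(1.0, 0.15), floored at a small positive value;
    # overall skill is the product of all traits.
    s = 1.0
    for _ in range(N_TRAITS):
        s *= max(random.gauss(1.0, 0.15), 0.01)
    return s

skills = sorted(skill() for _ in range(POPULATION))
median, best = skills[POPULATION // 2], skills[-1]

# A single normal trait rarely strays far from average, but the product
# has a fat right tail: the best performer is several times the median.
print(f"median {median:.2f}, best {best:.2f} ({best / median:.1f}x the median)")

# The "9% better at 8 things doubles your skill" claim:
print(f"1.09 ** 8 = {1.09 ** 8:.2f}")
```

The gap between best and median here dwarfs anything a single normally distributed trait produces, which is the fat-tail effect the thread describes.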
Cody Boyte@codyboyte·
@_colemurray Yea, I wrote a personal system that runs on background agents. Ended up with the same shape. There's also a layer above it for coordination/guiding the control plane itself.
cole murray@_colemurray·
Background agent systems all follow the same pattern: a control plane coordinating, persisting state and identity, and a data plane running the actual agent harness. Fairly easy system once you're familiar with the architecture.
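The control-plane/data-plane split described above can be sketched in a few lines. All names here are illustrative stand-ins, not any particular framework's API:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    # The control plane owns identity and persisted state.
    agent_id: str
    state: dict = field(default_factory=dict)
    status: str = "idle"

class ControlPlane:
    """Coordinates agents: identity, state persistence, dispatch."""
    def __init__(self):
        self._agents: dict[str, AgentRecord] = {}

    def register(self) -> str:
        agent_id = str(uuid.uuid4())
        self._agents[agent_id] = AgentRecord(agent_id)
        return agent_id

    def dispatch(self, agent_id: str, task: str, data_plane) -> str:
        record = self._agents[agent_id]
        record.status = "running"
        result = data_plane.run(record, task)   # hand off to the data plane
        record.state["last_result"] = result    # persist the outcome
        record.status = "idle"
        return result

class DataPlane:
    """Runs the actual agent harness; stateless between calls."""
    def run(self, record: AgentRecord, task: str) -> str:
        # A real system would invoke an LLM agent loop here.
        return f"agent {record.agent_id[:8]} completed: {task}"

cp = ControlPlane()
dp = DataPlane()
aid = cp.register()
print(cp.dispatch(aid, "summarize inbox", dp))
```

The key property of the pattern: the data plane can be restarted or scaled out freely because identity and state live only in the control plane.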
Cody Boyte@codyboyte·
@signulll "Network effects" should be talked about as "more people on the network make it better for everyone on the network" not "our app will grow by being shared". Most apps with networks still need a first person value prop that matters even before the network exists.
signüll@signulll·
we need to retire the term “network effects”, mostly cuz this term is now coopted more often by non-builders than ppl actually doing the hard work of creating something from nothing. also it weirdly screams growth hacky.

if products are designed well, if they’re useful, there will naturally exist shareable moments that expand the reach of the product. for skye we focus on natural & organic elements that help surface delightful moments that ppl can’t help but share. e.g. we do not make things by intentionally making them shareable from the start.

you can tell who does this pretty quickly & they make you want to take a cold shower afterwards.
Cody Boyte@codyboyte·
@hammer_mt @danshipper I've been trying to write a skill for this for weeks. I can get decent output, but getting it to use a large corporate template effectively is miserable.
Mike Taylor@hammer_mt·
@danshipper Especially on the latest PowerPoint generation benchmarks (the true test of AGI)
Cody Boyte@codyboyte·
One of the hardest parts of building with AI is how fast the system sprawls. I've been building an assistant using Claude Code Channels (an iOS voice assistant) for ~90 days. The codebase is now ~160k LOC with 90k LOC for tests. It's been refactored twice already.
Nicolas Cole 🚢👻@Nicolascole77·
I just packaged 102 of my best writing templates into 5 Claude Skills. This is everything you need to: • Write scroll-stopping hooks • Create X content for yourself • Ghostwrite content for high-paying clients Comment "social" and I'll send it across ASAP (for free).
Cody Boyte@codyboyte·
@mstockton I wrote a small iOS app that connects to Claude Code Channels for this so I can literally talk to my AI constantly and have it do real things on my computer. It’s so much better than typing.
Matt Stockton@mstockton·
This is still a weird step for most folks to take but it is correct. Talking to your AI is the way. I tell people that I simply yell at my computer all day and I don’t use my keyboard. It’s not a joke, yet most people laugh. Folks - I am not typing things.

A keyboard is like using Morse code for your brain. It’s unnecessary in most cases. You are compressing the signal unnecessarily by typing. Just let the words flow. These models have seen so much in their training data that they can help you make sense of the chaos that comes out of your mouth. Just. Talk.

If you want to try it, truly just record a 10 minute voice memo spilling everything you want AI to help you with. When you think you are done, keep talking. Then paste it into your AI and say: ‘this is my brain dump. Ask me questions exhaustively until you know exactly what you should do next in order to help me’
Andrew Deitrick@dbmigrate

The easiest way to get someone to get more involved in AI development is to encourage them to use voice-to-text. They will naturally switch to a conversational tone and will quickly unlock the power they were previously missing.

Cody Boyte reposted
vas@vasuman·
There is no “AI-enabled services.” AI is services. Every situation is unique.

You could release AGI tomorrow and most enterprise companies would not change whatsoever. Generalist SaaS is useless unless configured properly to its opponent. VCs who did not realize this a year ago are going to scramble.

You do not understand AI simply by using it for your everyday tasks. You understand AI by studying and experimenting, pushing the boundaries and pressure-testing unique business cases. Most don't do this, then wonder why their experiences don't match those at the frontier.
Cody Boyte reposted
Maurizio@themgmtconsult·
A few years ago, one of my consulting clients told my managing partner about me: "that guy is so good he sells without selling". I'm not sure I'm as good as he was portraying, but do I have some sort of secret to "sell without selling"? Well, if there is one secret, it definitely isn't a magic mindset or a personality trait... I have a logical framework I apply. I understand that a relationship is just the compounded result of useful transactions. Here is why the "transaction" always comes first:
Cody Boyte@codyboyte·
@randomrecruiter I once drove 4 hours to a meeting. There were 2 people in the conference room and 23 people on Zoom. I asked if they had a work-from-home policy and one of them said “no, they’re all in the office, they just don’t want to leave their desks.” 🤯
The Random Recruiter@randomrecruiter·
POV: You drove 58 minutes in bumper to bumper traffic to sit in a conference room alone on a Zoom with people who are all working from other offices
Suhail@Suhail·
We seem close to:
- Give an agent access to a competitor app on a computer
- Tell agent: Rebuild this app by using all its features
- Agent tries app -> documents all flows/features/edge cases
- The other agent builds all flows/features
- They iterate trying/testing until done
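That loop is spec extraction plus iterative convergence. A toy sketch of the control flow, where `explore_app`, `build_features`, and `find_gaps` are hypothetical stand-ins for real agent calls:

```python
def explore_app(app: dict) -> list[str]:
    # Explorer agent exercises the app and documents every flow it finds.
    return sorted(app)

def build_features(missing: list[str]) -> dict:
    # Builder agent implements each documented flow.
    return {flow: f"built {flow}" for flow in missing}

def find_gaps(spec: list[str], rebuilt: dict) -> list[str]:
    # Test pass: which documented flows are still missing?
    return [f for f in spec if f not in rebuilt]

# Stand-in for the competitor app's observable feature surface.
competitor = {"login": ..., "search": ..., "checkout": ...}
spec = explore_app(competitor)

rebuilt: dict = {}
while gaps := find_gaps(spec, rebuilt):   # iterate until no gaps remain
    rebuilt.update(build_features(gaps))

print(f"rebuilt flows: {sorted(rebuilt)}")
# rebuilt flows: ['checkout', 'login', 'search']
```

The real systems would replace each function with an agent run; the interesting part is only the outer loop, which terminates when testing finds no remaining gaps against the documented spec.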
Cody Boyte@codyboyte·
@doodlestein @peter_szilagyi The amount of time I spend going back and forth between the UI versions of 5.2 Pro and Opus 4.6 before going to Claude Code and Codex just on planning docs is crazy. Makes plans much better having them review each other aggressively.
Jeffrey Emanuel@doodlestein·
@peter_szilagyi The real lesson is to use all of the frontier models in concert, which is what I’ve been telling anyone who would listen for the past year or more. Even gemini3 is incrementally useful for code review and finding bugs.
Péter Szilágyi@peter_szilagyi·
Had some code written by Codex, had a benchmark, told it to optimise. After half an hour it said this is the max it can do. Gave the code to Claude; 10 mins later it came back with a 17x speed increase. I'll leave the conclusion to you guys.
Cody Boyte@codyboyte·
@ValKatayev @gregisenberg Also their founding team is still involved and has been building using AI from very early on. They'll have a good sense for how/where to apply AI.
Val Katayev@ValKatayev·
@gregisenberg Just don’t think that user base will homebrew or move to a new AI driven system at a significant rate.
Val Katayev@ValKatayev·
Applying AI on an existing SaaS base > starting from 0 with AI.

So I’m calling BS on the SaaS meltdown. Opened a position on HubSpot (down 70% from a year ago).

What are the best opportunities in this space that took a beating?
Cody Boyte@codyboyte·
@LucasHogie I keep coming back to taste as the differentiator. He’s able to harness significant power but doesn’t have much taste, so his output feels like slop. Tbf, the output of affiliate marketers, info marketers, etc. looked like slop before AI, so I’m not sure it’s the models’ fault.
Cody Boyte@codyboyte·
@codeandvibes Different projects all started from scratch. Similar levels of complexity.
SHAWN@codeandvibes·
@codyboyte Is it possible that CC had the benefit of the CGPT work or plan?
Cody Boyte@codyboyte·
I've been building small side projects for my kids the last few years. Before LLMs, it took me 3 months to build a small fantasy football auto-play system. With ChatGPT 3.5 it took me 3 weeks and had extra features. With CC 4.5 I rebuilt something better in 2 days.
Cody Boyte@codyboyte·
@thdxr We use self-hosted models internally but Cline still shows estimated costs. It freaks people out when they first start using it, especially when they start running multi-step agentic development. The numbers can spin fast.
dax@thdxr·
btw we spoke to a company yesterday that's at the scale of 20,000 devs.

they are looking at these numbers and going W T F, and they're moving inference to their own GPU cluster with open source models.

there isn't infinite budget and appetite for this stuff.
dax@thdxr·
every time we talk about how much companies are spending on LLMs per engineer we get a bunch of replies: "wdym claude max is just $200"

when rolling out coding tools to a real team, companies use a proper enterprise control plane attached to a yearly contract. these all have a pay-per-use component (with discounts), so yes, it's not completely extraordinary to see a $2000 bill for a single dev per month.

and no, just because $2000 is less than the salary doesn't mean it's automatically worth it. eventually companies will scrutinize these numbers and understand if they're actually seeing proportional ROI.
Cody Boyte@codyboyte·
@doodlestein I use it constantly and have for months after you shared it. Constantly finds and fixes issues before they’re issues. Thank you.
Jeffrey Emanuel@doodlestein·
Fresh eyes is a massive unlock. This is the stuff that even the labs don’t fully understand. It’s all based on theory of mind of the models and gestalt psychology concepts.
Oussama Sekkat@osekkat

@doodlestein Overall it sounds like you've found a way to extract more IQ out of these clunkers, like a tough piano teacher who gets the best of his students. I was really surprised at how much improvements you can get just by the "fresh eyes" rounds, especially with codex 5.3..

ryan borker@borker·
@doodlestein Do you have an agent write a QA framework against the design or use something like a Figma MCP? Trying both of those out and figma is best so far but slowwwwww
Jeffrey Emanuel@doodlestein·
I’ve mentioned this before, but I think it’s so revealing and important to understand that I want to convey it again:

Suppose you have two images of different people and you want Nano Banana to take the clothing and pose and orientation of the first image but make it look like the face of the second image so that it’s perfectly recognizable. The obvious way to do this, and the conventional wisdom for a long time, was to make some big, detailed prompt that specifies exactly what you want to happen and even include a bunch of things to look out for to prevent known failure modes. You might have some phrases about making sure that the generated image looks “just like” the person in the second image, or that the “facial likeness must be instantly recognizable” or some other formulation. Or conversely, you might specify that the pose and clothing and orientation of the generated image must match that of the first image. And perhaps early testing taught you that there are some failure modes you had to watch out for. As an example, you might include in your prompt that, if the person in the first image has a beard, but the person in the second image doesn’t have a beard, then the generated image should definitely not have a beard. All these things sound reasonable, do they not?

And here’s the weird thing: the more stuff like that you include in the prompt, the worse it will work! Now, in this example, it might “work” insofar as it will be a picture of the person dressed as the other person, but it will look comically bad, like one of those “face-in-hole” apps from 2010. Why?

What’s even stranger is that giving a very short and schematic prompt asking for what you want, like “make the person in the second pic so they’re dressed like the person in the first pic”, might result in a much more pleasing and realistic image, even if you might need to generate it a couple times to get it just right. Again, why?
The answer is that these models are already trained so much to give good results out of the box. But they’re also designed to be very helpful, attentive, and accommodating to every part of your request. In fact, every single word in your prompt is “attended to” by the model and has an impact on the specific activation states that occur in its “brain.” Because this activation weight space is so incomprehensibly vast, you’d be amazed at just how different those activations can be as a result of what might seem to be a minor change in the wording of a prompt. Incidentally, this is why things like my “fresh eyes” code review prompt can be so shockingly effective if you’re not used to that sort of thing: it’s because they’re tapping into some very deep thing in the model’s brain that changes the way it operates, like toggling a create/critique mode gestalt switch. An analogy here is especially informative. Suppose you want to hire a famous and talented chef to prepare a special meal for your party. Great, surely it will be a wonderful meal, right? But then you start giving all these additional requests and tweaks to the chef: “Martin has nut allergies. Oh and Lucy loves duck, be sure to include that. Oh, and our apple trees are ripe, wouldn’t it be so great to use those, too.” And on and on, you give more rules and requirements and constraints. The chef wants to be helpful (assume you’re paying them a lot), but every time you add another one of your rules, you are restricting and circumscribing what they can do. You are dramatically narrowing and constraining their search space and impeding their creative process, because now they keep bumping against your rules. Instead of focusing on what they know best, which is creating incredible dishes and meal experiences, they are forced to waste their cognitive energy on dancing around these constraints. 
If you foist enough of them on the chef, it becomes like those scenes in heist movies where they have all the laser beam motion detectors and you need to dance around them like some kind of ninja acrobat just to get through the other side. Now, if the chef is good enough, will you still end up with a pretty good meal for your guests? Sure, probably. But will it be close to as good as it could have been if you let the chef make all the decisions themselves with maybe just some basic, high-level guidance (e.g., “less seafood, lots of veggies”)? Almost certainly not. The chef is the model, and you are the annoying party planner. Every time you try to tell the model exactly what to do and how, just understand that, although you might end up with something that on the surface conforms with all your requirements, it will be the equivalent of that “face-in-hole” photo that “technically” looks like the person but also looks 2-dimensional and like a bad Photoshop attempt: no artistry, and not likely to fool anyone about it being natural or real. This applies just as much to using these models to generate code. The more you tell them what to do and how, the worse the results will be. That’s why you should try to focus your prompting on your goals, the purpose of the project, the desired end state, the features and functionality you’d like to have (but not in such extreme specificity: again focus on the purpose of the feature, the intent of it, what it’s supposed to help the user do, etc.). The models are now smart enough that, once they understand the high-level goals, they can do a better job planning than you can, at least if the goal is to get a plan that other models/agents are going to implement. Note that what I’m saying here really applies more to the planning stages. Once you have a plan, you can make it quite elaborate in an iterative way, and I usually do. 
And then I turn those plans into extremely detailed beads (epics, tasks, subtasks, etc) so that the agents that are actually implementing the stuff don’t need to understand the big picture and can focus instead on their narrow task, much like a short-order cook in a diner can focus on the ticket in front of them and just make a good pastrami sandwich without worrying about how to bake a pie or whether the people at table 3 have been waiting too long for a water refill.

So, in short: when coming up with your plan, don’t be too prescriptive; give the model flexibility so that you get the best possible plan. But once you’ve figured out what to do, you want to go in the opposite direction and get very detailed and specific, so that you can turn the plan into such detailed marching orders that even a dumber agent could still probably implement them well (but of course, you don’t use a dumb agent, you use a very smart agent that is super overpowered for the task so that it does a phenomenal job).

If you squint, you will also see a connection to my other big advice for working with brownfield projects that already have a ton of code (and also my approach to porting). That advice is that you first need to transform the big existing codebase into a much shorter specification document that details just the interfaces and the behaviors, but none of the internal implementation details. This lets you compress the important parts down into something that can easily fit within the model’s context window, where it can think about everything all at once: the full totality of the project and what you’re trying to do, without getting weighed down by all the minutiae (which wouldn’t even fit in its context anyway, so that it would be forced to look through the equivalent of a very zoomed-in camera lens from far away, scanning just a tiny portion of the scene at a time).
Although there are obvious differences here, the core concept is really analogous: that you want to get out of the way of the model as much as possible so it has more degrees of freedom to explore and solve your problem without having to waste its cognitive powers on dumb, irrelevant details. In the case of coming up with a plan, those would be like the details about all the ingredients you want to use. In the case of a brownfield development project, those details are all the irrelevant internal implementation details of all of the code files. And by the way, you can always wait until after the chef has come up with the plan and then say at the end “Oh, Martin has nut allergies so let’s change that one thing.” That might annoy the chef, but you’ll still end up with a very good meal. Something to keep in mind.
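The "compress a brownfield codebase into an interface spec" idea can be approximated mechanically for Python sources: keep signatures and docstring first lines, drop the bodies. A rough sketch (a real pass would also keep classes, type hints, and module docs):

```python
import ast
import textwrap

def summarize(source: str) -> str:
    """Keep function signatures and docstring first lines; drop bodies."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            doc = (ast.get_docstring(node) or "undocumented").splitlines()[0]
            lines.append(f"def {node.name}({args})  # {doc}")
    return "\n".join(lines)

sample = textwrap.dedent('''
    def transfer(src, dst, amount):
        """Move funds between accounts, raising on overdraft."""
        if src.balance < amount:
            raise ValueError("overdraft")
        src.balance -= amount
        dst.balance += amount
''')

print(summarize(sample))
# def transfer(src, dst, amount)  # Move funds between accounts, raising on overdraft.
```

The summary keeps the behavior-relevant surface (what exists and what it promises) while discarding the implementation details that would otherwise eat the context window.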