cody
@cmcollier
538 posts

Thinking and building ⌁ Always learning :: Making mistakes and moving forward

Texas · Joined December 2021
523 Following · 127 Followers

Pinned Tweet
cody
cody@cmcollier·
Wondering what's happening inside an agent? Take some time this weekend to get hands-on and tinker with a really basic agent. It can be simpler than you might have imagined. You can definitely make your own with a little Python experience and an API key. Start with this minimal example and adapt it for fun. Feel free to message me if you get it to work or have questions.
[image attached]
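In that spirit, here is a minimal sketch of the kind of agent loop the tweet describes: the model proposes a tool call, the program executes it, and the result is fed back until the model answers. The `call_model` stub and the `add` tool are hypothetical placeholders; in a real version you would swap the stub for an actual LLM API call using your key.

```python
# Minimal agent loop sketch. call_model is a hypothetical stand-in for
# a real LLM API call; swap it for a real client to go live.
def call_model(messages):
    # Stub behavior: ask for the "add" tool once, then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The sum is {messages[-1]['content']}"}

TOOLS = {"add": lambda a, b: a + b}  # the agent's available tools

def run_agent(question, max_steps=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "answer" in reply:          # model decided it is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](**reply["args"])  # run the tool
        messages.append({"role": "tool", "content": str(result)})
    return "step limit reached"

print(run_agent("What is 2 + 3?"))  # prints: The sum is 5
```

The whole pattern is just that loop; a real agent adds a system prompt, parsing of the model's reply, and error handling.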
cody
cody@cmcollier·
Again and again I find the psychology and anthropology around AI augmented programming to be even more interesting than my applied work using the tools.
cody
cody@cmcollier·
@jobergum The Unix philosophy
Jo Kristian Bergum
Jo Kristian Bergum@jobergum·
I saw someone write that companies are slowly realizing they can build more tailored workflows on top of primitives instead of buying model wrapper apps, and I think that is a very good observation. You build the custom harness around the models instead of buying a fixed, inflexible harness. I think phi in the coding agent space is in this direction, and we will see this in broader vertical domains too.
cody
cody@cmcollier·
I knew LLMs were good for bridging unstructured and structured data. However, it's always interesting what happens when theory meets practice and real-world messiness.

I'm working on a project that extracts art show events as structured JSON from a variety of website pages. It's interesting finding all the edge cases and iterating through patterns to get the process refined and reasonably consistent. Sometimes I'm improving the LLM side. Many times I'm improving basic data-processing code, such as normalizing titles consistently, or using similarity techniques to identify items that should be merged.

There are still plenty of mistakes being made by the LLM, but I think I'll be able to get things pretty stable. It's interesting that some of the newer OpenAI models no longer expose a temperature parameter in the API.
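The non-LLM side of that pipeline, normalizing titles and merging near-duplicates, can be plain Python. A rough sketch using the stdlib `difflib` for similarity (the threshold and the event shape here are illustrative assumptions, not the project's actual code):

```python
import difflib
import re

def normalize_title(title):
    """Lowercase, strip punctuation, collapse whitespace for comparison."""
    title = re.sub(r"[^\w\s]", "", title.lower())
    return re.sub(r"\s+", " ", title).strip()

def merge_events(events, threshold=0.85):
    """Greedy dedup: keep an event only if its normalized title is not
    too similar to one already kept."""
    kept = []
    for ev in events:
        norm = normalize_title(ev["title"])
        if not any(
            difflib.SequenceMatcher(
                None, norm, normalize_title(k["title"])
            ).ratio() >= threshold
            for k in kept
        ):
            kept.append(ev)
    return kept

events = [
    {"title": "Spring Art Show 2025!"},
    {"title": "spring art show 2025"},
    {"title": "Winter Sculpture Fair"},
]
print([e["title"] for e in merge_events(events)])
# prints: ['Spring Art Show 2025!', 'Winter Sculpture Fair']
```

In practice you would tune the threshold against real duplicates, and likely compare dates and venues as well as titles.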
cody
cody@cmcollier·
Initially with AI, I was excited that I could make more complete software functionality with less effort. Now I'm seeing how it's even more necessary for me to be on guard against unnecessary complexity.
cody
cody@cmcollier·
Old communication advice ftw:
- Tell them what you're going to say.
- Say it.
- Tell them what you said.
BURKOV@burkov

LLMs process text from left to right: each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the context tokens were processed without any awareness of what question was coming. This asymmetry is a basic structural property of how these models work.

The paper asks what happens if you just send the prompt twice in a row, so that every part of the input gets a second pass where it can attend to every other part. The answer is that accuracy goes up across seven different benchmarks and seven different models (from the Gemini, ChatGPT, Claude, and DeepSeek series of LLMs), with no increase in the length of the model's output and no meaningful increase in response time, because processing the input is done in parallel by the hardware anyway. There are no new losses to compute, no finetuning, no clever prompt engineering beyond the repetition itself.

The gap between this technique and doing nothing is sometimes small, sometimes large (one model went from 21% to 97% on a task involving finding a name in a list). If you are thinking about how to get better results from these models without paying for longer outputs or slower responses, that's a fairly concrete and low-effort finding.

Read with AI tutor: chapterpal.com/s/1b15378b/pro…
Get the PDF: arxiv.org/pdf/2512.14982

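The trick in the quoted thread needs no special tooling: you just concatenate the prompt with itself before sending it. A hedged sketch (the message layout and the commented-out API call are illustrative, not taken from the paper):

```python
def duplicate_prompt(context, question):
    """Repeat the full prompt so the second copy can attend to the first."""
    prompt = f"{context}\n\nQuestion: {question}"
    return f"{prompt}\n\n{prompt}"

msg = duplicate_prompt("Alice, Bob, and Carol are on the guest list.",
                       "Is Bob on the list?")
# The doubled string is then sent as an ordinary user message, e.g.:
# client.chat.completions.create(model="...",
#     messages=[{"role": "user", "content": msg}])
print(msg.count("Is Bob on the list?"))  # prints: 2
```

The output length and latency are unchanged because only the input grows, and input tokens are processed in parallel, as the thread notes.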
cody
cody@cmcollier·
Skills, preferences, and activities that seem to benefit me when using AI to program:
- Decomposing work into sequences of tasks
- Knowing good words to express software design ideas
- Preferring encapsulated, composable units of code
- Preferring clean semantic separations into modules
- More frequent refactoring as a part of the dev cycle
- Many small conversations, rarely maxing context
- Careful manual selection of files to add to context
- Smaller prompts with more concision and completeness
cody
cody@cmcollier·
Thinking out loud... My experience with LLM programming is that it will often minimize friction for the next step in a project. Similarly, simpler systems keep the average friction of progress lower (a lower cognitive load, as you mention). I've been recognizing how often I reach for a database when all I really need is a CSV or JSON file. Unnecessary friction and complexity.

Maybe the DSL you ask it to create is a good way to avoid the complexity of implementation options and uncover the most fundamental primitives of a system. These are semantic primitives, not operational primitives like CRUD. Very similar to data-flow and data-first design.
Manuel Odendahl
Manuel Odendahl@ProgramWithAi·
@cmcollier tell me more! you might have seen this one, i'm trying to put my thoughts down in clearer form: substack.com/@gogogolems/no…
cody
cody@cmcollier·
Increasing constraints minimizes the space of possibilities and makes a project simpler and more amenable to leveraging AI code generation successfully. I've been thinking about similar things this week as I work on building a site that has no backend or DB.
Manuel Odendahl@ProgramWithAi

one thing llms don't change is that simplicity is the biggest unlock you can have when building software. and simplicity comes from having a simple cognitive environment as well. how can you come up with simple solutions if you are orchestrating 100s of agents? I don't get it.

cody
cody@cmcollier·
50% off with OpenAI flex mode

I missed this new API option from last year. Just switch to the flex service tier when you're running jobs that don't need a real-time response. It's much cheaper, and most of the time the response is fast anyway. Great for data-processing work.
[image attached]
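For reference, opting in is just a request parameter. A sketch assuming the OpenAI Chat Completions API's `service_tier` field; the helper function and model name here are illustrative, not part of the original post:

```python
# Hypothetical helper: build a chat request, defaulting to the cheaper
# "flex" service tier for jobs that don't need a real-time response.
def build_request(model, prompt, urgent=False):
    req = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if not urgent:
        req["service_tier"] = "flex"  # discounted, may respond slower
    return req

req = build_request("o4-mini", "Summarize these event listings.")
print(req["service_tier"])  # prints: flex
```

The dict would then be passed to something like `client.chat.completions.create(**req)`; for batch jobs it may be worth adding retries, since flex requests trade latency guarantees for price.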
cody
cody@cmcollier·
I racked a lot of servers in the early 2000s. These HP faceplates were the best looking of the bunch.
[image attached]
cody
cody@cmcollier·
There's so much chatter about AI programming. Most of it lacks nuance and much of it is hyperbole. Just try it yourself. It's ok to explore new tech and come to your own conclusions.
cody
cody@cmcollier·
Decreasing friction. Iterative building.

One of the benefits I've experienced with AI-augmented programming is that it helps me decrease friction on projects. It might be through:
- answering questions when I'm oscillating on a decision
- gathering some research on an area where I have less experience
- generating code as a version-0 solution to some problem

Overall, it's about lowering that small mental friction which might be preventing me from taking the next step in a project. This keeps me moving forward and drives a more iterative flow. Less perfection, fewer straight lines, more iteration, and more progress overall.
cody
cody@cmcollier·
Working on a new project, for fun and a bit of service to a local community. I'm iterating toward full automation. It's a great area for AI programming because there's minimal risk and constraint. Here's a social media card that was automatically created as HTML based on the data, then rendered to PNG with html2canvas. I have operational pages that let me quickly generate images and text ready for posting on Instagram.
[image attached]
cody
cody@cmcollier·
Just one more prompt...
cody
cody@cmcollier·
I'm up near the end of my Cursor billing cycle and have plenty of tokens left to burn. It's a very different feel when I'm in experimental mode and have minimal concern for cost.