
Ray Deck @ statechange.ai (@ray_deck) · 6.3K posts
Founder @StateChangeAI. Youtube: https://t.co/6PBjwzkS4q Statechange: https://t.co/pxiIlMM5TD.


I find myself doing a lot better work, being more satisfied, and learning a lot more, faster, when I do *the hard work* and don't outsource it to AI. As in, I'll use AI as a *tool* for subtasks and additional research, but I don't turn off my brain or kick back, assuming it can do the work for me. Every time I hand over the hard-work part to AI and mentally turn off, I either regret it or find myself eventually needing to go back and spend more time on it. I also see slop work coming from people who assume the AI does better work than they would.

What if we built better software, not just more of it?

"SaaS was Software as a Service. I believe it's going to be service as software." @generalcatalyst's Madhu Namburi on the AI roll-up thesis. Services is a $20 trillion market, multiple times the size of software. This isn't the PE playbook of debt and cost-cutting; venture is buying legacy companies and using AI to drive growth.




I tend to agree with this, but then I remember that @steipete quietly built Openclaw in Vienna. There are outliers everywhere.

Sufficiently advanced agentic coding is essentially machine learning: the engineer sets up the optimization goal as well as some constraints on the search space (the spec and its tests), then an optimization process (coding agents) iterates until the goal is reached. The result is a black-box model (the generated codebase): an artifact that performs the task, that you deploy without ever inspecting its internal logic, just as we ignore individual weights in a neural network.

This implies that all the classic issues encountered in ML will soon become problems for agentic coding: overfitting to the spec, Clever Hans shortcuts that don't generalize outside the tests, data leakage, concept drift, etc. I would also ask: what will be the Keras of agentic coding? What will be the optimal set of high-level abstractions that allow humans to steer codebase 'training' with minimal cognitive overhead?
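The analogy above can be sketched as a toy optimization loop. Everything here is hypothetical and simplified: a real coding agent would propose candidate programs with an LLM, not by sampling a constant, but the structure is the same — the spec's tests define the objective, and a search process iterates until the artifact passes them, after which it is deployed as a black box.

```python
import random

# The "spec": test cases the artifact must satisfy (here: double the input).
SPEC_TESTS = [(0, 0), (1, 2), (3, 6), (10, 20)]

def objective(candidate):
    """Fraction of spec tests the candidate passes -- the score to optimize."""
    passed = sum(1 for x, want in SPEC_TESTS if candidate(x) == want)
    return passed / len(SPEC_TESTS)

def propose(rng):
    """Stand-in for the coding agent: proposes a candidate 'program'.

    Here a candidate is just x -> k*x for a random k; a real agent
    searches a vastly larger space of programs.
    """
    k = rng.randint(-5, 5)
    return lambda x, k=k: k * x

def train(seed=0, max_iters=1000):
    """Iterate the agent until the spec's tests all pass."""
    rng = random.Random(seed)
    for _ in range(max_iters):
        candidate = propose(rng)
        if objective(candidate) == 1.0:
            # Deployed as a black box: we never inspect the internals,
            # just as we ignore individual weights in a neural network.
            return candidate
    raise RuntimeError("search budget exhausted")

model = train()
print(model(7))  # prints 14
```

Note that the overfitting risk from the tweet is visible even in this toy: the tests only pin down behavior on four inputs, so a spec with weaker tests (say, only `(0, 0)`) would accept many candidates that fail everywhere else — a Clever Hans artifact.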

I don't remember where I found this, but it's spot on.
