AI makes all of this worse, not better. You can now produce large amounts of code very fast. Fast accumulation of complexity is not progress. Before merging AI-generated code, the question isn't 'does it work?' It's 'does it fit?' What's your team's answer to that?
Here's the part most engineering articles miss: simplicity is a leadership decision, not a technical one. It doesn't happen because engineers prefer it. It happens because someone with authority keeps saying no. At most startups, nobody is in that role.
A 35-year-old document about writing C programs just hit 900+ points on HN. The reason it's resonating tells you exactly what's broken in modern startup engineering.
AI agents polling metrics every 10s? Claude Channels (HN #2) pushes live events right into the session. No cron hacks. 3-min prototype for your dashboard bot.
A timeseries evals agent ingests lead velocity and ad metrics, then forecasts a call cliff 72 hours out. It auto-dials bids down on dud creatives and tightens targeting for the lead agent. In a test run it doubled calls on the same budget.
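The forecast-and-act loop above can be sketched in a few lines. This is a minimal illustration, not the product's actual model: the linear-trend forecast, function names, and threshold are all hypothetical stand-ins for whatever the real agent uses.

```python
# Hypothetical sketch of a forecast-and-act loop: forecast lead velocity
# 72 hours out, and dial bids down if a "call cliff" is projected.
# A real agent would use a proper timeseries model; a least-squares
# linear trend is enough to show the shape of the logic.

def forecast_leads(history: list[float], hours_ahead: int) -> float:
    """Extrapolate hourly lead velocity with a linear trend fit."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history)) \
        / sum((x - mean_x) ** 2 for x in xs)
    # Project the fitted line to (last observed hour + hours_ahead).
    return mean_y + slope * (n - 1 + hours_ahead - mean_x)

def bid_action(history: list[float], cliff_threshold: float) -> str:
    """Dial bids down if the 72-hour forecast drops below the threshold."""
    projected = forecast_leads(history, hours_ahead=72)
    return "dial_down" if projected < cliff_threshold else "hold"
```

On a declining series the 72-hour projection falls below the threshold and the agent dials down; on a rising one it holds.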
The teams that move fast aren't skipping review. They're surgical about where human attention is actually worth the wait. What's the last review layer you added — and do you know what it's actually catching?
The replacement isn't no review. It's tiered review. Auth, payments, core data models — full review, no shortcuts. UI, copy, internal tooling — ship it, async comment if you want. Pair program on the hard stuff instead of async back-and-forth that takes 3 days.
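The tiered policy above is mechanical enough to automate. A minimal sketch, assuming a path-based router: the patterns, tier names, and directory layout are hypothetical, not any team's real config.

```python
# Hypothetical sketch of tiered review: route each changed file to a
# review tier by path, and give the PR the strictest tier it touches.
from fnmatch import fnmatch

# Highest-risk surfaces get full review, no shortcuts.
FULL_REVIEW = ["src/auth/*", "src/payments/*", "src/models/*"]
# Low-risk surfaces ship immediately; comments land async.
SHIP_NOW = ["src/ui/*", "copy/*", "tools/internal/*"]

def review_tier(path: str) -> str:
    if any(fnmatch(path, pattern) for pattern in FULL_REVIEW):
        return "full-review"  # block merge until a human approves
    if any(fnmatch(path, pattern) for pattern in SHIP_NOW):
        return "ship-now"     # merge now, comment async if you want
    return "full-review"      # unknown paths default to the safe tier

def pr_tier(changed_paths: list[str]) -> str:
    """A PR inherits the strictest tier of any file it changes."""
    if any(review_tier(p) == "full-review" for p in changed_paths):
        return "full-review"
    return "ship-now"
```

The one design choice that matters: unknown paths default to full review, so new directories are safe until someone explicitly tiers them down.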
Two engineers. Same team. One ships a feature in 2 days. The other takes 12. Same complexity, same codebase. The difference isn't skill. It's how many people have to say yes before the code lands.