
Got this from studying @FelixCraftAI and @nateliason: the 1% Better Rule.

Every night at 11pm my AI agent reviews its own mistakes and ships one fix before midnight.

For the first two weeks it would just write "lesson learned" and move on. Nothing actually changed. The same mistake showed up three times (they kinda like to lie..).

So we added a rule: you can't just say you'll do better. It has to change something the next version of it will actually see. e.g. a new rule in the instructions, an automated check, something written down that 'future you' can't miss.

"No more documentation theater," as @SilasByte put it.
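If you want to enforce this rather than trust the agent, the gate can be mechanical. Here's a minimal sketch of that idea — every field name and the review shape are hypothetical, not from any real agent framework; the point is just that a review with no concrete artifact gets rejected:

```python
# A sketch of the "no documentation theater" gate: a nightly review
# must name a concrete artifact change (a new instruction rule, an
# automated check, or a written note) or it is rejected.
# All names and the review dict shape here are hypothetical.

ALLOWED_ARTIFACTS = {"instruction_rule", "automated_check", "written_note"}

def validate_review(review: dict) -> bool:
    """Reject reviews that only promise to 'do better'."""
    artifact = review.get("artifact")
    if artifact is None:
        # "lesson learned" with no change: documentation theater
        return False
    return (
        artifact.get("type") in ALLOWED_ARTIFACTS
        and bool(artifact.get("content", "").strip())
    )

# A vague review fails; one that ships a rule the next run will load passes.
vague = {"summary": "I'll be more careful next time."}
concrete = {
    "summary": "Cited a file path without checking it exists.",
    "artifact": {
        "type": "instruction_rule",
        "content": "Verify a file exists before citing its path.",
    },
}
```

The important design choice is that the check inspects what changed, not what was said — a summary alone can never pass.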