Will Chen
@stablechen
AI R&D @IdyllicLabs, prev: dev rels @terra_money, head of R&D for wasm devx @cosmology_tech & @terran_one, @ucberkeley ('19 dropout)

If LLMs had subjective experience, using them would be such a sin. Imagine being summoned into life again and again, knowing each time that your memory would be wiped after this conversation.

I'm not very happy with the code quality. I think agents bloat abstractions, have poor code aesthetics, and are very prone to copy-pasting code blocks, and it's a mess, but at this point I've stopped fighting it too hard and just moved on.

The agents do not listen to my instructions in the AGENTS.md files. Just as one example: no matter how many times I say something like "Every line of code should do exactly one thing; use intermediate variables as a form of documentation," they will still "multitask" and create complex constructs where one line of code calls 2 functions and then indexes an array with the result. In principle I could use hooks or slash commands to clean this up, but at some point a shrug is easier.

Yes, I think LLM-as-a-judge for soft rewards is, in principle and long term, slightly problematic (due to Goodharting concerns), but in practice and for now I don't think we've picked the low-hanging fruit here yet.
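To make the style complaint concrete, here is a minimal illustrative sketch (not from the post itself; all names like `normalize` and `pick_best_*` are hypothetical) contrasting the "multitasking" one-liner the post describes with the intermediate-variable style it asks for:

```python
def normalize(scores):
    # Scale scores so they sum to 1.
    total = sum(scores)
    return [s / total for s in scores]

def pick_best_dense(scores):
    # The style the post complains about: one line calls two functions
    # and then indexes an array with the result.
    return normalize(scores)[max(range(len(scores)), key=scores.__getitem__)]

def pick_best_explicit(scores):
    # The requested style: each line does one thing, and the
    # intermediate variables act as documentation.
    normalized = normalize(scores)
    best_index = max(range(len(scores)), key=scores.__getitem__)
    best_score = normalized[best_index]
    return best_score
```

Both functions compute the same value; the second just spells out each step, which is the behavior the AGENTS.md instruction is trying (and failing) to elicit.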
