Eric S. Raymond @esrtweet
My experience with LLM-assisted coding has been great and I'm a big fan of it, but I've just had a slightly depressing realization. It may almost entirely shut down the development and adoption of new computer languages.
The percentage, and probably the absolute amount, of code handwritten by humans is going to fall a great deal. But for the foreseeable future, LLMs won't be able to write code fluently in a specific language without a large volume of good code in that language already available to train on.
For a new language in 2026 and after, where exactly is that large volume of good training data going to come from?
Probably not from human beings. And what incentive does an LLM handed a vibecoding task have to go looking for an exotic new language to do it in?
I find this slightly depressing, because I enjoy contemplating new-language development the way a more physical tinkerer enjoys salivating over shiny new tools.
Human beings are still going to write new languages occasionally, because that's huge fun (if you have a brain bent anywhere like the way mine is) and still a way to climb some status ladders. But with the barrier to mass adoption getting so much higher, I have to think the level of research and engineering activity put into this is going to drop a lot.
There is one not-unhappy but rather weird way I could be wrong about this. Historically, once the development of compilers reached a certain point, it became clear that designing machine instruction sets to be easily reasoned about by humans was a big mistake. We had to figure out how to design machine instruction sets that were easy for compilers to reason about. Thus, RISC.
It could be that's the future of language design, too. But I have no idea what a new language design optimized for LLM code generation would look like. And I don't think anybody else does, either.
Interesting times, indeed.