Nathan Witkin (@NateWitkin):
"RSI" has always struck me as the classic example of a LessWrong-era coinage that just doesn't make much sense anymore. Either it has the trivial meaning of "AI makes AI developers more productive" or the implausible one of "AI autonomously rewrites its own capabilities with little-to-no human assistance." People used to mean the latter (see the LessWrong wiki definition here: lesswrong.com/w/recursive-se…). I have no clue what they mean now.
Timothy B. Lee (@binarybits):

I sincerely don't understand what people mean when they say this. On the one hand, every AI researcher is already using Claude Code (or its competitors) to help them develop new architectures. On the other hand, AI models do not have bodies, so they can't build data centers.

Nathan Witkin (@NateWitkin):
Clearer definition, but it seems as implausible to me as the old one. I don't see fully functional R&D departments with zero humans as a real possibility within the next decade (at least without some sort of fudging, e.g. if productivity isn't quality-adjusted, or if humans still make the high-level decisions from a "different department"). The predictions of AI R&D and production "supremacy" by 2031-32 (meaning productivity would be higher without any humans than with them) seem particularly absurd.