
Austin Meyer (@austingmeyer):
Honestly, from a philosophical perspective, I have a hard time understanding how LLMs could go much beyond peak human capabilities in any particular field. Ultimately, the training data is human generated/labeled, and it is hard to see how LLMs wouldn't be bounded by that constraint.
Austin Meyer (@austingmeyer):
@AdamRodmanMD Of course the models could achieve mastery in many more fields than a single human, and could do things much faster, but in terms of expanding the universe of knowledge, I'm not sure.
Adam Rodman (@AdamRodmanMD):
@austingmeyer Reward systems that use patient outcomes (for example) instead of expert consensus. It doesn't expand the nature of knowledge, but might lead to intermediate steps that deviate from expert consensus.
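The distinction Rodman draws can be made concrete with a toy simulation. Nothing here is from the thread: the treatments, the expert consensus, and the recovery rates are all hypothetical. It contrasts a supervised-style signal (reward for matching the expert label) with an outcome-based signal (reward for simulated patient recovery); only the latter can prefer an option that deviates from consensus.

```python
import random

# All values below are hypothetical, chosen only to illustrate the idea.
EXPERT_CONSENSUS = "A"                      # the choice expert labels reinforce
TRUE_RECOVERY_RATE = {"A": 0.6, "B": 0.8}   # assumed hidden ground truth

def expert_reward(choice, rng):
    # Supervised-style signal: reward only for agreeing with experts.
    return 1.0 if choice == EXPERT_CONSENSUS else 0.0

def outcome_reward(choice, rng):
    # Outcome-based signal: reward if the simulated patient recovers.
    return 1.0 if rng.random() < TRUE_RECOVERY_RATE[choice] else 0.0

def best_choice(reward_fn, rng, trials=2000):
    # Pick whichever option earns the higher average reward.
    means = {c: sum(reward_fn(c, rng) for _ in range(trials)) / trials
             for c in ("A", "B")}
    return max(means, key=means.get)

rng = random.Random(0)
print(best_choice(expert_reward, rng))   # "A": a consensus-trained model stays at consensus
print(best_choice(outcome_reward, rng))  # "B": an outcome-trained model can deviate from it
```

Under the consensus signal the model is capped at reproducing expert behavior, which is Meyer's bound; under the outcome signal, the ceiling is set by the environment rather than by the human labels.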