

Why we should view LLMs as powerful Cognitive Orthotics rather than as alternatives to human intelligence #SundayHarangue
LLMs are amazing giant external non-veridical memories that can serve as powerful cognitive orthotics for us, if used rightly (cf. x.com/rao2z/status/1…).
The trick, IMHO, is to exploit them without deluding ourselves in the process.
The delusion comes chiefly from our incessant need to mistake them for human intelligence, merrily applying anthropomorphic concepts such as thinking, thoughts, reasoning, and self-critiquing to LLMs (cf. x.com/rao2z/status/1…;
x.com/rao2z/status/1…;
x.com/rao2z/status/1…)
This anthropomorphization is quite futile--and, as shown by some of the current Ersatz Natural Science AI literature, even counterproductive and misleading.
Sure, we didn't quite foresee how impressive the approximate omniscience of these n-gram models on steroids would be, but that doesn't mean we have to assume they do everything humans do.
Unless human-level #AI is your singular goal, you don't necessarily need to think auto-regressive LLMs suck (as @ylecun puts it colorfully).
LLMs can be very effective complementary cognitive orthotics without subsuming human intelligence.
LLMs do some things way, way, way better than humans do (the litmus test, from my perspective, being converting anything to iambic pentameter in seconds 😋 x.com/rao2z/status/1…), and do other things (planning, reasoning, self-critiquing, mental modeling) much worse (cf.
x.com/rao2z/status/1…;
x.com/rao2z/status/1…; x.com/rao2z/status/1…)
If we can manage to tone down the "LLMs are Zero-shot