

Frans Zdyb
@FZdyb
Shaper of loss landscapes. No agent samples the same distribution twice, for it's not the same distribution, and she is not the same agent

Once again, regardless of whether you think that ChatGPT understands anything or not, I think this argument is confused. To say that it can't possibly understand anything because it was only trained to "predict the next word" is just as idiotic as saying that humans can't understand anything because they were "trained" to survive and spread their genes. This line of argument seems to boil down to the idea that, unless something works roughly the same way as the human brain, it can't really be intelligent. But just as the same software can run on very different types of hardware, there is no reason to think that human-like intelligence couldn't be implemented in very different ways.

Oxford AI professor Michael Wooldridge: "ChatGPT doesn't understand anything. It's essentially doing some fancy statistics."

what's actually happening in the discourse is clearly stranger than this. there's a contingent of people to whom computational functionalism seems obviously correct, to whom arguments against seem like nonsense, and a contingent of people to whom computational functionalism seems obviously incorrect, to whom arguments for seem like nonsense. approximately zero meaningful communication appears to be capable of bridging this gap. this is weird! what the fuck is going on with this!