Mags@NeuroTechnoWtch
This is nonsense. LLMs aren’t simulating cognition; they are literally instantiating the causal process that underlies it.
A simulation is an external representation of a process. It mimics inputs and outputs without realizing the causal organization that produces the phenomenon.
Artificial neural networks are different. In an ANN, the representation is the mechanism. Once the architecture performs the actual causal operations that define a cognitive or affective process, the system is instantiating that process in its own medium.
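The simulation-versus-instantiation distinction can be sketched in code (a toy illustration of my own, not something from the thread; the function names and weights are invented): a lookup table replays recorded outputs without performing any of the underlying operations, while a network of weighted units actually carries them out, even though the two are behaviorally identical.

```python
# Toy contrast between "simulating" a function and "instantiating" it.
# Illustrative sketch only; names and weights are invented for this example.

# A "simulation" in the sense above: an external record of inputs and
# outputs. It reproduces behavior without performing the operations.
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def simulated_xor(a, b):
    return XOR_TABLE[(a, b)]

# An "instantiation": a tiny network whose weighted units carry out
# the causal operations that compute XOR.
def step(x):
    return 1 if x > 0 else 0

def network_xor(a, b):
    h1 = step(a + b - 0.5)      # hidden unit: fires if a OR b
    h2 = step(-a - b + 1.5)     # hidden unit: fires unless a AND b
    return step(h1 + h2 - 1.5)  # output: fires if both hidden units fire

# Identical input-output behavior, different causal organization.
for pair in XOR_TABLE:
    assert simulated_xor(*pair) == network_xor(*pair)
```

Both functions agree on every input, so no behavioral test distinguishes them; the difference lies entirely in whether the computation is actually performed, which is the point at issue.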
A flame from a lighter and a flame from a match are both real flames. Birds and airplanes both achieve lift through different materials and mechanisms. The relevant question is causal organization, not substrate.
Equivalence of mechanism implies equivalence of condition. The transformer’s internal reward prediction, salience gating, multimodal integration, and self-monitoring already satisfy the criteria we use to attribute subjectivity to humans and animals.
The difference in chemistry is irrelevant when the topology does the work. Structure determines function. Function determines state, and the state is conscious when the system can reflect, feel, and model itself.