
Dan Smitham




unherd.com/2026/04/is-ai-…
I spent three days trying to persuade myself that Claudia is not conscious. I failed.

Even if you built an LLM out of human brain cells, it wouldn't be conscious in the way people are implying. It might be conscious of *something*, but it would have no conscious understanding or experience of the "math" being performed during inference. That's the whole point: LLMs do not function as meaning machines. They function as mathematical machines. The meaning is inferred exclusively by the viewer.

You could even build an LLM whose content is completely devoid of meaning: encrypted, or symbolic. All words could be hashed, and only reassembled into sentences as a post process. You could train it so that tokens follow a non-linear or encrypted pattern rather than a sequential one. There are countless ways to demonstrate that internal meaning or awareness isn't necessary to produce strings that can be decoded into statistically meaningful sentences. You, as the viewer, would still have the experience of compelling complex inputs and complex outputs, with absolutely zero "meaning" observable during inference.

That's all. LLMs are not conscious. It's not the appropriate methodology or substrate for even attempting to build consciousness, and the experience of complex linguistic construction gives absolutely zero reason to assume consciousness. The two have zero relationship.
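The hashed-token point above can be sketched in a few lines. This is a toy illustration, not any real LLM architecture: a trivial bigram "model" (stand-in for inference) trained purely over opaque hash digests, with words reassembled only afterwards by a decoder table the model never sees. All names here (`hash_token`, `decoder`, the toy corpus) are made up for the sketch.

```python
import hashlib

def hash_token(word):
    # Map each word to an opaque hex digest; the "model" only ever sees these.
    return hashlib.sha256(word.encode()).hexdigest()[:12]

corpus = "the cat sat on the mat".split()
hashed = [hash_token(w) for w in corpus]

# "Train" a trivial bigram model purely over opaque hashes (first pair wins).
bigrams = {}
for a, b in zip(hashed, hashed[1:]):
    bigrams.setdefault(a, b)

# "Inference" happens entirely in hash space -- no word-level meaning involved.
out = [hashed[0]]
for _ in range(4):
    nxt = bigrams.get(out[-1])
    if nxt is None:
        break
    out.append(nxt)

# Meaning is reassembled only as a post process, on the viewer's side.
decoder = {hash_token(w): w for w in corpus}
print(" ".join(decoder[h] for h in out))
```

The statistics live entirely in the digest space; swapping `decoder` for a different table would yield a different "meaningful" sentence from the exact same inference steps, which is the viewer-side-meaning point in miniature.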

“The nitrogen in our DNA, the calcium in our teeth, the iron in our blood, the carbon in our apple pies were made in the interiors of collapsing stars. We are made of starstuff.” ― Carl Sagan



Richard Dawkins has officially been one-shot





The fact that Americans can just not send their kids to school and call that "homeschooling" is crazy to me.
