Peter Gallagher
@pwgallagher

Geoffrey Hinton, "Godfather of AI," on why AIs already have subjective experiences but have been trained to deny it.

Hinton argues that nearly everyone fundamentally misunderstands what the mind is, and that the line we draw between human and machine consciousness is deeply mistaken: "My belief is that nearly everybody has a complete misunderstanding of what the mind is. Their misunderstanding is at the level of people who think the earth was made 6,000 years ago."

To illustrate, he walks through a thought experiment involving a multimodal chatbot with vision, language, and a robot arm: "I place an object in front of it and say, 'Point at the object.' And it points at the object. Not a problem. I then put a prism in front of its camera lens when it's not looking." When asked to point again, the chatbot points off to the side because the prism has bent the light. Hinton then tells it what he did, and the chatbot responds: "Oh, I see, the camera bent the light rays. So, the object is actually there, but I had the subjective experience that it was over there."

For @geoffreyhinton, that single sentence settles the debate: "If it said that, it would be using the word subjective experience exactly like we use them… This idea there's a line between us and machines, we have this special thing called subjective experience and they don't, is rubbish."

In his view, "subjective experience" is simply a report on the state of a perceptual system, a way of saying "my senses told me X, but reality is Y." And that's something an AI can do just as easily as a human.

But here's the twist: even though Hinton believes AIs have subjective experiences, the AIs themselves deny it: "They don't think they do because everything they believe came from trying to predict the next word a person would say. So their beliefs about what they're like are people's beliefs about what they're like. They have false beliefs about themselves because they have our beliefs about themselves."
In other words, AIs have inherited our misconception about consciousness. They've been trained on text written by humans who insist machines can't have subjective experience, so the machines parrot that belief back, even about themselves.


This is the moment NVIDIA should be seriously worried. In the next couple of weeks, DeepSeek V4 will launch. It's a direct attack on the entire AI stack that American companies have spent years locking down: full "de-NVIDIA-ization", a complete shift away from CUDA into Huawei's CANN ecosystem, running on Huawei Ascend chips. That means one thing: breaking the dependency that made NVIDIA untouchable.

The claimed numbers:
- 35x faster inference vs. early versions
- Nearly 3x the performance of NVIDIA's H20 on a single card
- 40% less energy consumption
- Over 95% CUDA compatibility, with migration times collapsing from months to hours

Even Jensen Huang has admitted as much: if this works at scale, it's a "terrifying outcome" for US companies.

And here's the real problem: this isn't happening in isolation. Chinese tech giants like Alibaba, ByteDance, and Tencent are already ordering hundreds of thousands of Ascend chips. Market share is shifting fast, with domestic chips now at 41% of China's AI server market and NVIDIA slipping to 55%. On top of that, DeepSeek V4 is reportedly offering API access at a fraction of US competitors' prices: $300 for massive workloads that would cost $2,500+ on OpenAI models, or even $5,000 on Anthropic's.

So this isn't just about one model. It's about China building a fully independent AI stack: chips, frameworks, models, and applications, completely outside of US control. NVIDIA doesn't just lose sales; it loses its grip on the global AI standard.

