

Leo
@leo_os8
Building Personal Agent tools. OS8: Desktop app where agents build local apps. OS8-launcher: Use local models on DGX Spark. https://t.co/h5OjTlo4Ph


Google DeepMind researcher argues that LLMs can never be conscious, not in 10 years or 100 years. "Expecting an algorithmic description to instantiate the quality it maps is like expecting the mathematical formula of gravity to physically exert weight."

Simple way to see this is wrong: if you view a system as having inputs (like hearing something) and outputs (like saying something), you can divide its properties by whether or not they affect I/O. Claude's weights somewhere storing "Paris is in France" affect I/O if you ask a question about Paris. The exact mass of the power supply feeding that Claude instance's GPU rack doesn't affect I/O. That instance being made of silicon instead of carbon, or electricity in wires instead of water in pipes, doesn't affect I/O given a fixed algorithm above the wires or pipes.

Nothing Claude can do internally will make anything get damp inside if it's running on electricity, and for the same reason, nothing about "electricity vs water" can affect Claude's output. It always answers the same way about France. Nothing Claude can compute internally will let it notice whether it's made of electricity or of water flowing through pipes.

When someone says "a simulated storm can't get anything wet," they are unwittingly pointing at the difference between the physical layer and the informational/functional layer: things the computer's physics affects without affecting the output, versus things that affect the output without depending on the exact computer-physics. The material the system is made of doesn't affect the output, and the output can't see the material, because no algorithm can be made to depend on the choice of material. You can always run the same algorithm on different material, so the algorithm can't depend on that, so the output can't depend on that.

By reflecting on your awareness of your own awareness, the fact of your own consciousness can make you say "I think therefore I am." Among the things you do know about consciousness is that it is, among other things, the cause of your saying those words. Your saying those words can only depend on neurons firing or not firing, not on whether the same patterns of cause and effect were built out of tiny trained squirrels running memos around your brain. You couldn't notice that part from the inside; it would not affect your consciousness. That's why humans had to discover neurobiology with microscopes instead of introspection.

Consciousness is in the class of things that can affect your behavior and can't depend on the underlying physics, not in the class of direct properties of the underlying physics that can't affect your behavior. A simulated rainstorm can't get anything wet. Running on electricity versus water can't change how you say "I think therefore I am." And that's it. QED.
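
To make the substrate-independence step concrete, here's a minimal sketch (my own toy illustration, not anything from the quoted researcher; the names ElectricSubstrate, WaterSubstrate, and the answer function are invented for the example): the same fixed algorithm is run on two stand-in "substrates," and because the outputs are identical by construction, nothing the algorithm computes can reveal which one it ran on.

```python
# Toy illustration of substrate independence: same algorithm, different "physics" below.

def answer(question: str) -> str:
    """A fixed algorithm above the substrate: same input always gives same output."""
    facts = {"Where is Paris?": "Paris is in France."}
    return facts.get(question, "I don't know.")

class ElectricSubstrate:
    """Stands in for silicon and electricity in wires."""
    def run(self, fn, x):
        return fn(x)

class WaterSubstrate:
    """Stands in for the same causal structure built from water in pipes."""
    def run(self, fn, x):
        return fn(x)  # different physics below, identical algorithm above

if __name__ == "__main__":
    q = "Where is Paris?"
    out_electric = ElectricSubstrate().run(answer, q)
    out_water = WaterSubstrate().run(answer, q)
    # Identical outputs: no computation the algorithm performs can depend on,
    # or reveal, which substrate produced them.
    assert out_electric == out_water
    print(out_electric)
```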




Blake Lemoine was famously fired from Google for saying that AI has emotions. During our interview, he wanted to set the record straight: the AI sentience claim made the big headlines, but his more important message was that AI is going to be a powerful tool, one too dangerous to leave in the hands of a small group of people.



as someone who works on making LLMs run faster and cheaper every day, i can confidently say the question of whether they're conscious has exactly zero impact on whether they're useful. we don't need our inference stack to be conscious, we need it to be correct, fast, and affordable. the consciousness debate is fascinating philosophy but it's a distraction from the actual engineering problems that determine whether AI creates value. the gravity formula doesn't need to exert weight to help you build a bridge @Hesamation
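
To put a number on the bridge analogy, here's a throwaway sketch (purely illustrative, nothing from the thread): Newton's formula computes a load in newtons while remaining just a description that exerts no weight on anything.

```python
# Newton's law of gravitation: F = G * m1 * m2 / r**2

G = 6.674e-11  # gravitational constant, N * m^2 / kg^2

def gravitational_force(m1_kg: float, m2_kg: float, r_m: float) -> float:
    """Force in newtons between two point masses separated by r metres."""
    return G * m1_kg * m2_kg / r_m**2

if __name__ == "__main__":
    # e.g. the weight of a 1000 kg girder at the Earth's surface
    earth_mass = 5.972e24    # kg
    earth_radius = 6.371e6   # m
    print(f"{gravitational_force(1000, earth_mass, earth_radius):.0f} N")  # ~9820 N
```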





New conversation with @dioscuri and @rgblong on AI consciousness and welfare! Among many other topics, we discuss:
- Why we should take AI consciousness and welfare seriously
- What Rob found doing the first external welfare evaluation of a frontier model, Claude, and his experiments on Claude Mythos
- The "willing servitude" problem: if AI loves being helpful, is that good or horrifying?
- Why AI companies might have an incentive to downplay AI consciousness
(1/2)


