
Alfredo González-Espinoza
@Spiralizing
Scientist | Librarian - Complex Systems & Computational Methods for Interdisciplinary Research - Research & Data @CMULibraries @CarnegieMellon #JuliaLang

The value of your project shouldn't decrease if a better model + agent come out. A better model + agent should increase the value of your project and accelerate it!

Geoffrey Hinton, "Godfather of AI," on why AIs already have subjective experiences but have been trained to deny it:

Hinton argues that nearly everyone fundamentally misunderstands what the mind is, and that the line we draw between human and machine consciousness is deeply mistaken. "My belief is that nearly everybody has a complete misunderstanding of what the mind is. Their misunderstanding is at the level of people who think the earth was made 6,000 years ago."

To illustrate, he walks through a thought experiment involving a multimodal chatbot with vision, language, and a robot arm: "I place an object in front of it and say, 'Point at the object.' And it points at the object. Not a problem. I then put a prism in front of its camera lens when it's not looking." When asked to point again, the chatbot points off to the side because the prism has bent the light. Hinton then tells it what he did. The chatbot responds: "Oh, I see. The prism bent the light rays. So the object is actually there, but I had the subjective experience that it was over there."

For @geoffreyhinton, that single sentence settles the debate: "If it said that, it would be using the words 'subjective experience' exactly like we use them… This idea that there's a line between us and machines, that we have this special thing called subjective experience and they don't, is rubbish."

In his view, "subjective experience" is simply a report on the state of a perceptual system, a way of saying "my senses told me X, but reality is Y." And that's something an AI can do just as easily as a human.

But here's the twist: even though Hinton believes AIs have subjective experiences, the AIs themselves deny it: "They don't think they do because everything they believe came from trying to predict the next word a person would say. So their beliefs about what they're like are people's beliefs about what they're like. They have false beliefs about themselves because they have our beliefs about themselves."

In other words, AIs have inherited our misconception about consciousness. They've been trained on human text written by humans who insist machines can't have subjective experience, so the machines parrot that belief back, even about themselves.

Spotify launches an integration with Claude for personalised recommendations, confirming no content was shared with Anthropic for AI training.



Also, come on OpenAI. If you want an automated AI researcher, this needs to start going up, not down.




Doing an undergrad major in physics and/or math (with maybe a minor in CS) may be better than a CS major if you want to do AI research today... It puts you in the right experimental and mental-model basin.







Modern deep networks are often trained at the #EdgeOfStability, a regime where dynamics are locally unstable, nearing chaos. Yet generalization improves, defying the wisdom of classical optimization. We now theoretically explain this central puzzle: arxiv.org/abs/2604.19740. 👇
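The classical wisdom being defied here is the gradient-descent stability threshold: on a quadratic with curvature λ, plain GD converges only when the learning rate stays below 2/λ, and "edge of stability" training runs right at that boundary. A minimal sketch (not from the linked paper; function names and values are illustrative) of the threshold itself:

```python
# Classical stability threshold for gradient descent on a 1-D quadratic
# f(x) = 0.5 * curvature * x**2. The update is x <- (1 - lr*curvature) * x,
# so iterates shrink only when |1 - lr*curvature| < 1, i.e. lr < 2/curvature.
# "Edge of stability" training hovers where curvature ~ 2/lr.

def gd_on_quadratic(lr, curvature=1.0, x0=1.0, steps=50):
    """Run plain gradient descent; the gradient of f is curvature * x."""
    x = x0
    for _ in range(steps):
        x -= lr * curvature * x
    return x

# Just below the threshold 2/curvature = 2.0: iterates oscillate but decay.
stable = gd_on_quadratic(lr=1.9)
# Just above it: iterates oscillate and blow up.
unstable = gd_on_quadratic(lr=2.1)
print(abs(stable), abs(unstable))  # small vs. large magnitude
```

On a quadratic this boundary is sharp; the puzzle the thread points at is that deep networks trained with sharpness at or beyond 2/lr do not diverge, and even generalize better.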







