
Consciousness defined. The green dots indicate abilities present in AI. The red dots are still missing in AI.
Matthias Heger ⏩

@modelsarereal
PhD (AI, RL); current interests: ontological pattern realism, ontological narrative realism

Metacognition is the highest form of intelligence: the ability to think about your own thinking.




Best transition ever filmed?



Don’t use a calculator until you can do the math on your own. Don’t vibe code until you can code – and debug and maintain code – on your own. It’s that simple.









AI is the greatest equalizer in human history. It doesn’t care about your zip code, your skin color, your degree, or your last name. It only cares about what you do with it.

It’s lame that the word we chose for an important new form of currency was “tokens,” real failure of imagination.

You can teach a naturally creative mind to focus, but it’s infinitely harder to teach a naturally focused mind to be creative.


The inevitable has happened: Copilot no longer reports to Mustafa Suleyman. theinformation.com/briefings/micr…

My experience so far with LLM fiction writing is that it takes advantage of our assumption that an author writes things for a reason, so we are charitable to a book's quirks and do mental work to assign them real meaning. But the AI doesn't have a reason; it's just bad writing.


The fucking robot stepped on my foot and, to my surprise, it stepped hard and it hurt. I'll look back on it with nostalgia as the first time a robot hurt me when, three years from now, a swarm of flamethrower drones is trying to exterminate me 🫶


In general I've been sensing a new current among deep learning maximalists recently, going from "our models can definitely reason" to "well, our models can't reason, but neither can humans!"

@alethious @SterlingCooley @KarlFristonNews That would be problematic considering the diversity of views on consciousness

Godfather of neuroscience and author of the free energy principle (active inference), Karl Friston @KarlFristonNews , discussed with me his take on AGI, consciousness, why we will never fully understand the brain, and why humans tend to repeat their mistakes.

Key moments:
- You'll know AGI has arrived when the system starts asking you questions out of genuine curiosity, not because it was prompted to
- You cannot hand an intelligent system a value function from the outside; it must learn its own, just as children do (in that sense, RL with assigned reward is the wrong direction)
- The only sustainable universal objective function is adaptive fitness: how well the agent fits and survives within its ecosystem
- Consciousness requires multiple layers: genuine agency, a self-reflective loop, and the ability to recognize your own states of mind
- True sentience may be impossible on standard computer architecture, because memory and processing are separate and cannot self-organize
- Understanding your own brain is philosophically impossible, in the same way a ruler cannot measure itself
- Neuroscience is always "peeking behind" the Markov blanket indirectly, through imaging, electrophysiology, and psychology, never seeing inside directly
- The only way to truly access the brain is to breach that boundary (e.g. neurosurgery), but a breached brain is no longer a normally functioning one

Watch the full interview and let me know what you think. Link below 👇