Milton

232 posts

@miltonllera

Code monkey @RoboEvoArtLab

Copenhagen, Denmark · Joined January 2019
471 Following · 187 Followers
Milton
Milton@miltonllera·
@aran_nayebi If that is because you are collecting data so that you can create a theory that explains the patterns in the data, then sure. But the point is that prediction on its own is not understanding. Does a baseball player understand the physics of ball throwing?
1 reply · 0 reposts · 0 likes · 9 views
Aran Nayebi
Aran Nayebi@aran_nayebi·
@miltonllera In the case of neuroscience, we don't demand that the interventions have to be a priori predictable (the point is that the model generates those predictions, and that's what we test against the data). Whereas with alignment, we would want them to be a priori predictable.
1 reply · 0 reposts · 0 likes · 19 views
Aran Nayebi
Aran Nayebi@aran_nayebi·
I don't see why prediction has to be framed as necessarily at odds with "understanding". The two naturally go hand-in-hand. Prediction is the *minimal* scientific prereq for anything you want to further investigate. We didn't even have successfully predictive systems of large-scale neural population responses in the neurosciences until ML started working.

Furthermore, "understanding" isn't an objective measure -- it's aesthetically in the eye of the beholder. So it's not clear there's a well-defined global notion here to begin with, besides prediction alone. If you ask 10 scientists what they mean by "understanding", you'll get > 10 different answers 🙂

Not to mention, causal manipulations are naturally supported in ANNs because they're mechanistic models by construction: you have the entire network graph available to you to perturb as you choose.

As the saying goes: “Everything should be as simple as it can be, but not simpler.” And it's quite clear there isn't anything simpler than ANNs without losing tons of predictive power. Why bother "understanding" a system that doesn't even predict the scientific phenomenon at hand?
The Transmitter@_TheTransmitter

Neuroscience has become increasingly concerned with prediction, and machine learning with causal explanation, with each field adopting methods from the other, writes @gershbrain. Will this bring us closer to understanding neural systems? thetransmitter.org/the-big-pictur…

4 replies · 7 reposts · 48 likes · 8.8K views
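The claim above, that ANNs support causal manipulation because the whole network graph is available to perturb, can be made concrete with a toy sketch. This is purely illustrative (a hypothetical two-layer NumPy network, not any model discussed in the thread): ablate one hidden unit at a time and measure how the output shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network: 4 inputs -> 8 hidden units -> 3 outputs.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, ablate_unit=None):
    """Run the network, optionally clamping one hidden unit to zero.

    Clamping is a causal intervention: every downstream effect of the
    ablated unit is removed, and the change in the output measures that
    unit's contribution on this particular input.
    """
    h = np.maximum(W1.T @ x, 0.0)          # ReLU hidden layer
    if ablate_unit is not None:
        h[ablate_unit] = 0.0               # the intervention
    return W2.T @ h

x = rng.normal(size=4)
baseline = forward(x)

# Effect size of ablating each hidden unit, one at a time.
effects = [np.linalg.norm(forward(x, ablate_unit=i) - baseline)
           for i in range(8)]
print(effects)
```

The same pattern scales to real models (where the intervention is applied to a layer's activations rather than a hand-built matrix), which is the sense in which every part of an ANN is open to controlled perturbation.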
Milton
Milton@miltonllera·
@aran_nayebi Also, I would wager most scientists would agree that understanding in this context is just the ability to intervene in the system in predictable ways, which is not something we can reliably do for ANNs. Otherwise there wouldn't be so much research on topics like alignment.
1 reply · 0 reposts · 0 likes · 28 views
Milton
Milton@miltonllera·
@aran_nayebi Are people saying they are at odds? Or that Neuroscience has skewed heavily towards relying on one while neglecting the other?
1 reply · 0 reposts · 0 likes · 137 views
Milton reposted
hartl.bene
hartl.bene@BeneHartl·
If you wonder what 9 interdisciplinary researchers did at the #alice2026 workshop last month, here is our novel take on symbiogenesis. It was great fun and an inspiring experience!
Stefano Nichele@stenichele

What can a 70-year-old idea about digital organisms teach us about the future of AI and artificial life? 🧬💡 📄 Read our report: arxiv.org/abs/2603.08463 #ArtificialLife #ComplexSystems #OpenEndedEvolution #ArtificialIntelligence #ALife

0 replies · 2 reposts · 8 likes · 389 views
Milton
Milton@miltonllera·
We are thrilled to be returning to GECCO for a second edition of the Evolving Self-Organisation Workshop. We are now accepting paper submissions. So if you think your work fits, come join us! Check out our website for more info and upcoming announcements …-self-organisation-workshop.github.io/gecco-2026/
1 reply · 7 reposts · 17 likes · 1.1K views
Milton reposted
ALICE Workshop
ALICE Workshop@alice_workshop_·
We draw to a close an amazing week, filled with great discussions and projects; but above all an amazing group of people. Thank you very much to all of you who attended, and to our amazing set of speakers for leading so many inspiring projects. See you all next year in Norway🇳🇴!
1 reply · 3 reposts · 11 likes · 299 views
Milton reposted
ALICE Workshop
ALICE Workshop@alice_workshop_·
Project groups hard at work yesterday. Back today for the second to last day of #ALICE2026
1 reply · 1 repost · 3 likes · 304 views
Milton reposted
ALICE Workshop
ALICE Workshop@alice_workshop_·
Diverse and lively group discussions led by our speakers #ALICE2026
0 replies · 3 reposts · 11 likes · 493 views
Milton reposted
ALICE Workshop
ALICE Workshop@alice_workshop_·
Kicking off the ALICE workshop!
0 replies · 2 reposts · 11 likes · 296 views
Milton reposted
Tim Hwang
Tim Hwang@timhwang·
Important essay dropping today on Dostoevsky's "Demons" and what's happening in AI safety and policy possessedmachines.com
31 replies · 58 reposts · 438 likes · 171.1K views
Milton reposted
Demis Hassabis
Demis Hassabis@demishassabis·
Yann is just plain incorrect here: he's confusing general intelligence with universal intelligence. Brains are the most exquisite and complex phenomena we know of in the universe (so far), and they are in fact extremely general.

Obviously one can't circumvent the no free lunch theorem, so in a practical and finite system there always has to be some degree of specialisation around the target distribution that is being learnt. But the point about generality is that in theory, in the Turing Machine sense, the architecture of such a general system is capable of learning anything computable given enough time and memory (and data), and the human brain (and AI foundation models) are approximate Turing Machines.

Finally, with regards to Yann's comments about chess players, it's amazing that humans could have invented chess in the first place (and all the other aspects of modern civilization, from science to 747s!), let alone get as brilliant at it as someone like Magnus. He may not be strictly optimal (after all, he has finite memory and limited time to make a decision), but it's incredible what he and we can do with our brains given they were evolved for hunter-gathering.
Haider.@slow_developer

Yann LeCun says there is no such thing as general intelligence Human intelligence is super-specialized for the physical world, and our feeling of generality is an illusion We only seem general because we can't imagine the problems we're blind to "the concept is complete BS"

817 replies · 1.2K reposts · 11.7K likes · 13.4M views
Milton reposted
Sebastian Risi
Sebastian Risi@risi1979·
I’m beyond excited to announce our MIT Press book on Neuroevolution! An HTML version is now available for free on neuroevolutionbook.com, with a print edition coming out later in 2026.

Real intelligence is not static; it evolves. For decades, the field of neuroevolution has pursued this necessary adaptability. Our book chronicles its development, from early concepts to its modern integration with deep learning and reinforcement learning, exploring its potential for understanding the origins of intelligence and its real-world applications.

And the companion webpage is more than just a book site! It comes equipped with interactive demos, videos, exercises, and tutorials to allow everyone to experience neuroevolution in action. Check it out and let us know what you think!

It was a pleasure to work on this book over the last 4+ years with David (@hardmaru), Yujin (@yujin_tang), and Risto. We are incredibly proud of the result and look forward to celebrating! We hope to connect with many of you at NeurIPS.

We are very grateful to Melanie Mitchell (@MelMitchell1), who provided a fantastic foreword. To quote her: “The next big thing in AI is coming, and I suspect that neuroevolution will be a major part of it”. We think so too!
24 replies · 167 reposts · 645 likes · 96.4K views
Milton
Milton@miltonllera·
@TheBrunoCortex I’m pretty sure Paul Cisek has made this point several times now
0 replies · 0 reposts · 1 like · 60 views
Randy Bruno
Randy Bruno@TheBrunoCortex·
A provocative alternative to predictive models in Neuroscience
Carlos E. Perez@IntuitMachine

We've become obsessed with the idea that the brain is a "Prediction Machine." The dominant theory in neuroscience says we're constantly simulating the future, calculating probabilities to guess what happens next. A new paper argues this is a complete illusion. The reality is simpler, and strangely, much more powerful. Here is the argument for Perceptual Control:

The "Prediction Illusion" starts with a mistake in observation. When we see someone successfully handle a chaotic environment (like catching a fly ball), it *looks* like they predicted the future trajectory of the ball. But observing prediction isn't the same as implementing it.

The authors use the perfect analogy: the Watt steam governor. In the 19th century, this device kept steam engines running at a constant speed. If pressure surged, it slowed the engine. If load increased, it sped up. To an observer, it looked like the machine was "predicting" pressure surges and pre-empting them. But the governor has no brain. It has no model of the future. It’s a mechanical negative feedback loop. It measures the *current* speed, compares it to the *desired* speed, and adjusts the valve immediately. It doesn't predict; it controls.

This brings us to the "hello" experiment, which broke my brain a little. Researchers asked people to keep a computer cursor on a target. The computer applied a "disturbance" (forces pushing the cursor away) that the person had to fight against with their mouse. Here's the twist: the disturbance wasn't random. It was an invisible force field shaped like the word "hello" (written upside down and mirrored). The participants fought the force, keeping the cursor steady. When researchers looked at the participants' hand movements, they had perfectly written the word "hello". Crucially, the participants had NO idea they were writing words. If the brain were a "prediction machine," it would have needed to model the force to predict the hand movement.
But the participants wrote a legible word purely by reacting to immediate error signals—instantaneously correcting the cursor's position. This is **Perceptual Control Theory (PCT)**. The theory suggests the nervous system isn't a linear pipeline (Input → Compute → Output). It’s a closed loop. We act to keep our *perception* of the world matching our internal *reference value*. [Image of Perceptual Control Theory negative feedback loop diagram]

Think about catching a baseball. If you were a "prediction machine," you’d calculate the ball's trajectory, wind speed, and gravity, then run to where the ball *will* be. But that’s computationally expensive and error-prone. In reality, fielders just run in a way that keeps the "optical velocity" of the ball constant in their vision. If the ball looks like it's rising too fast, they move back. Dropping? They move forward. No physics calculus required. Just maintaining a visual constant.

This solves the "noise" problem. In predictive models, small jitters in your movement are considered "noise" or errors to be filtered out. In PCT, that jitter isn't an error at all: it’s the system "feeling out" the environment to maintain control.

This has huge implications for AI and robotics. We are currently building robots with massive compute power to "predict" stability. But robots built on PCT principles—like inverted pendulums that just react to maintain verticality—are often more robust and stable than the predictive ones.

Why does this matter for you? It changes how we view "agency." We often think we need to predict the outcome of our actions to be effective. But the most efficient systems don't predict the outcome—they specify the goal and let the feedback loop handle the rest.

The "Prediction Illusion" suggests we aren't prophets simulating the future. We are controllers, surfing the present. We don't need to know what the wave will do in 10 seconds. We just need to keep the board steady right now.
If you want to dig into the paper, it’s "The prediction illusion: perceptual control mechanisms that fool the observer" by Mansell, Gulrez, and Landman (2025). It’s a dense read, but it completely reframes the "Bayesian Brain" debate. One final thought: Next time you're doing something skilled—driving, typing, sports—notice the difference. Are you calculating what comes next? Or are you just managing the gap between *what you see* and *what you want*? You might find you're doing a lot less "thinking" than you assumed.

4 replies · 3 reposts · 29 likes · 4.7K views
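The closed-loop idea in the thread above can be sketched in a few lines. This is a hypothetical toy simulation (not code from the Mansell et al. paper, and the gain and disturbance are arbitrary choices): a controller that only corrects the *current* cursor error, with no model of the disturbance, nonetheless produces an action trace that mirrors the disturbance it never observes — the same effect as the "hello" experiment.

```python
# Minimal perceptual-control sketch: the controller sees only the current
# error between cursor and target, yet its accumulated action ends up
# approximately cancelling (mirroring) the hidden disturbance.
def simulate(disturbance, gain=2.0, dt=0.1):
    cursor, action = 0.0, 0.0
    actions = []
    for d in disturbance:
        error = 0.0 - cursor            # reference (target) minus perception
        action += gain * error * dt     # act on present error only: no model
        cursor = action + d             # world: action plus hidden disturbance
        actions.append(action)
    return actions

# Hidden "force field": a slow drift the controller never observes directly.
disturbance = [0.01 * t for t in range(200)]
actions = simulate(disturbance)

# The action trace tracks the negative of the disturbance, writing its mirror.
print(actions[-1], -disturbance[-1])
```

Replace the ramp with a force field traced along any shape and the action trace reproduces that shape's mirror, which is the sense in which the participants "wrote" a word while only ever correcting present error.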