Michael Levin

9.8K posts

@drmichaellevin

Scientist at Tufts University; my lab studies anatomical and behavioral decision-making at multiple scales of biological, artificial, and hybrid systems.

Joined May 2013
2.9K Following · 76.7K Followers
Pinned Tweet
Michael Levin @drmichaellevin
Wordpress site is up - thoughtforms.life. Register to be notified of book progress, specific events, news, and new posts with photography, essays, interviews, & more. Unlike at drmichaellevin.org, here I will post ideas not fully baked yet, & academic-adjacent content.
S.A. Senchal @samsenchal
Too much on reading list - I have like two books (Bahir, @yudapearl) and too many papers to get through before @CIMCAI conference. And... I need to slay the devil inside me that wants to spend my talk doing computational theology instead of Observer Theory 😂 and actually draft the slides. Also really looking forward to meeting one of the theory of mind GOATs on Tuesday. Which I also need to prep for... Oh yeah and I need to move the draft Platonic Space Lexicography paper on, and finish an article for Substack, and try to break the back of the Epiplexity adaptation to Observer Theory (this is going better than I thought, but I need another mathematician to check my proofs - if you're keen, DM) because it's so obviously the best measure we have for anything approaching Tononi's (and in my view more adaptable: doesn't need PID to work, doesn't need me to get every atom in the universe to compute it, fits perfectly with the bounded-observer framing from last May). And I'm also building a virtual networked brain with @dw_stein. And I'm in the middle of a live acquisition. This is good. You can just do ALL the things.
Michael Levin @drmichaellevin

Awesome new theme issue of Philosophical Transactions: ‘World models in natural and artificial intelligence’ Thank you @adamsafron! royalsocietypublishing.org/rsta/issue/384…

Ryota Kanai @kanair
What consciousness science lacks is anomalies. Particle physics made progress when experiments revealed phenomena that existing theories could not explain. Consciousness research needs the same kind of empirical pressure to develop new theories.
Merary Rodriguez @Kabuki91178
@addyman_michael @drmichaellevin Murugan found a sub-1 Hz photonic signature in human brains - metabolic, not electrical. If sleep is about compression, not architecture, that signal might show up in xenobots too.
Michael Levin @drmichaellevin
Oh and as I outline in those papers: the metrics (some of which are captured by causal emergence math, and which can be detected with behavioral science assays) have to do with whether the larger scale has knowledge and goals that none of its parts have, and actively works to align those parts (deforming their own action space) toward navigating new problem spaces that the parts don't have access to. mdpi.com/1099-4300/24/6…
Michael Levin @drmichaellevin
New #preprint, @PigozziFederico: arxiv.org/abs/2605.06746 "The Causally Emergent Alignment Hypothesis: Causal Emergence Aligns with and Predicts Final Reward in Reinforcement Learning Agents"

"A hallmark of life on Earth is the ability of agents to exert causal power and be drivers of subsequent events. This is key to cognition at all scales. Causal emergence, measuring the degree to which an agent exerts unique predictive power on its future, is one consequence of causal power. Indeed, recent discoveries have shown that biological agents, even minimal ones, increase their causal emergence after learning new memories. However, there is a major knowledge gap regarding how causally emergent artificial agents are. We focused on Reinforcement Learning (RL) of neural-network agents across an array of environmental conditions, encompassing different algorithms, agent architectures, and six environments arranged on a complexity spectrum. For consistency, we computed the causal emergence of their latent-space representations over their lifetimes. We used the recently proposed ΦID to estimate causal emergence and tested how it related to learning performance. Our results suggested a Causally Emergent Alignment Hypothesis: successful agents exhibited causal emergence that was consistently predictive of final reward early in training and whose representational dynamics aligned with reward improvement in most tasks. This idea suggests that causal emergence may be a previously undisclosed axis of reorganization of neural representations in RL agents, with the potential to establish causal relationships and interventions that will lead to better RL agents. Our work also highlights the alignment between causal emergence and learning as another way biological and artificial creatures compare."
Michael Levin retweeted
Josh Bongard @DoctorJosh
The final lecture of evolutionary robotics course: Self-replicating Xenobots. So long, and thanks for all the fish! youtu.be/_V9XFNvw3a4
Michael Levin @drmichaellevin
@AlisonbobEth @DoctorJosh 🙏 I’ve been crazy busy. I’ll be back, there’s a crop of new work to talk about coming soon. That last 5% of getting things out the door is…
Michael Levin @drmichaellevin
Sure, and some people think you don't have goals either, because everything in the brain will someday be explained by quantum mechanics. If your psychiatrist, your developmental biologist, your roboticist, or your HVAC technician doesn't believe in systems with goals, fire them immediately. And if you think they don't have goals but you do, then you must have a story about embryology and evolution that you should detail, because we were all oocytes once - little blobs of chemicals. And then what lightning flash happened?
L.E.D.P. @PT4n1
@EdohAyao @drmichaellevin My point, I guess, is that we can explain this without using the term "goal". Example: why does system S, a thermostat, hold the temperature at X degrees? Explaining "holding X" by pointing to "holding X" being a goal is empty – it doesn't explain anything.
L.E.D.P. @PT4n1
Dear @drmichaellevin, I think you are close to uncovering a great deal of knowledge about our world. However, I’m struggling with your use of the term “goal.” IMO, it would be better to use the term “instructions” – since the former term carries too much anthropocentric bias.
tejaKrasek @tejaKrasek
My artwork on the front and back cover of the Symmetry: Culture and Science journal. Ambigram 'Symmetry' by the one and only Dr. Douglas Hofstadter (the author of "Gödel, Escher, Bach: An Eternal Golden Braid"). tinyurl.com/SymmetryJourna…
Michael Levin @drmichaellevin
Since the 1940s, we've had a science of minimal systems with goals - cybernetics. It's not anthropocentric, because goals are not specific to humans. What is anthropocentric is to think that talk of goals only applies to humans. I address that here: frontiersin.org/articles/10.33… and many other people have written about this as well.
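The cybernetic sense of "goal" in the exchange above can be made concrete without any anthropomorphism: a goal is just a setpoint that a feedback loop actively defends against perturbations. A minimal sketch (all values hypothetical, chosen only for illustration) of the thermostat both sides use as their example:

```python
# A bang-bang thermostat as a minimal cybernetic goal-directed system.
# The "goal" is nothing mysterious: it is the setpoint the feedback
# loop works to restore whenever the environment pushes temperature away.

def thermostat_step(temp, setpoint, heater_on, hysteresis=0.5):
    """One control step: switch the heater based on the error signal."""
    if temp < setpoint - hysteresis:
        return True     # too cold -> heat
    if temp > setpoint + hysteresis:
        return False    # too warm -> stop heating
    return heater_on    # inside the dead band -> keep current state

def simulate(setpoint=20.0, temp=10.0, steps=200):
    """Room leaks heat toward a 5-degree exterior; heater pushes back."""
    heater_on = False
    for _ in range(steps):
        heater_on = thermostat_step(temp, setpoint, heater_on)
        temp += 2.0 if heater_on else 0.0   # heating input per step
        temp += 0.1 * (5.0 - temp)          # leak toward the outside
    return temp

print(f"temperature after 200 steps: {simulate():.1f}")
```

Starting far from the setpoint, the loop drives the temperature up and then holds it in a narrow band around 20 degrees despite the constant leak - which is exactly the observable, substrate-independent behavior cybernetics labels "having a goal."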