Christian Internò

57 posts


@ChrisInterno

Researcher at @unibielefeld (@HammerLabML) and @Honda Research Institute EU, Visiting Researcher at @CSHL in the Department of Computational Neuroscience.

New York, US · Joined September 2024
1.1K Following · 164 Followers
Pinned Tweet
Christian Internò @ChrisInterno
1/10) We found an observer effect in world models: invasive adaptation can corrupt latent physics! 🌍 New paper: arxiv.org/abs/2602.12218 The observer effect: the phenomenon where the act of measuring a system unavoidably alters its state or properties (Heisenberg, 1927; Sassoli de Bianchi, 2013).
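The paper's actual probe isn't excerpted in this thread; as a hedged illustration of the invasive-vs-non-invasive distinction it describes, here is a toy numpy sketch (the linear setup and every name below are my assumptions, not the paper's method): a frozen linear readout leaves the latents untouched, while jointly adapting the encoder for the readout objective measurably shifts them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world model" encoder: a fixed linear map from observations to latents.
X = rng.normal(size=(200, 8))          # observations
W0 = rng.normal(size=(8, 4))           # pretrained encoder weights
y = X @ rng.normal(size=8)             # hidden physical quantity to decode

# Non-invasive probe: encoder stays frozen, latents are untouched.
Z_frozen = X @ W0
v, *_ = np.linalg.lstsq(Z_frozen, y, rcond=None)   # linear readout only
drift_frozen = np.linalg.norm(X @ W0 - Z_frozen)   # exactly 0

# "Invasive" adaptation: gradient steps on BOTH encoder and readout.
W, u, lr = W0.copy(), np.zeros(4), 1e-3
for _ in range(200):
    Z = X @ W
    err = Z @ u - y                            # residual of the probe objective
    u -= lr * Z.T @ err / len(y)               # update readout
    W -= lr * X.T @ np.outer(err, u) / len(y)  # update encoder too
drift_invasive = np.linalg.norm(X @ W - Z_frozen)

print(drift_frozen, drift_invasive)
```

The two drift numbers make the point: zero when the encoder is frozen, strictly positive once the encoder itself is optimized for the readout.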
Christian Internò retweeted
David Klindt @klindt_david
MechInterp's SAE paradigm has recently gone through its first three crises: 1) SAEs don't learn the same features on different seeds tinyurl.com/saesdontidenti… 2) SAEs don't work out of distribution tinyurl.com/saesdontgenera… 3) SAEs are bad for interventions tinyurl.com/saesdontsteer @_shruti_joshi_ @rpatrik96 et al. did a wonderful job explaining all of these shortcomings and how to fix them through the lens of causality 🤓
Shruti Joshi @_shruti_joshi_

Mechanistic interpretability aims to understand models — and the more superhuman or incoherent they become, the more we need that understanding to be reliable. We propose a framework for this, drawing on established tools from causal reasoning and statistical identifiability: 🧵

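The linked crisis papers aren't reproduced here, but the first one (seed-dependence of SAE features) suggests a diagnostic that is easy to sketch: match two feature dictionaries one-to-one on cosine similarity and inspect the matched values. A toy numpy version, with random unit directions standing in for two SAE decoder dictionaries (everything below is illustrative, not from the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_rows(A):
    return A / np.linalg.norm(A, axis=1, keepdims=True)

# Stand-ins for two SAE decoder dictionaries (n_features x d_model)
# trained from different seeds; here just random unit directions.
D1 = unit_rows(rng.normal(size=(64, 32)))
D2 = unit_rows(rng.normal(size=(64, 32)))

# Greedy one-to-one matching on cosine similarity.
sim = D1 @ D2.T
work = sim.copy()
matched = []
for _ in range(len(D1)):
    i, j = np.unravel_index(np.argmax(work), work.shape)
    matched.append(sim[i, j])
    work[i, :] = -np.inf     # each feature matched at most once
    work[:, j] = -np.inf
matched = np.array(matched)

# Identical dictionaries would give matched similarities near 1;
# unrelated ones give values far below that.
print(f"mean matched cosine: {matched.mean():.3f}")
```

Running the same comparison on two real SAEs trained from different seeds is exactly the kind of check the first crisis is about.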
Ying Wang @yingwww_
What is a good latent space for world modeling and planning? 🤔 Inspired by the perceptual straightening hypothesis in human vision, we introduce temporal straightening to improve representation learning for latent planning. 📑: agenticlearning.ai/temporal-strai…
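Perceptual straightening is usually quantified as the average angle between successive displacement vectors along a representation trajectory; a straighter latent path has lower mean curvature. A minimal numpy sketch of that metric (the metric only, not the paper's training objective):

```python
import numpy as np

def mean_curvature_deg(traj):
    """Average angle (degrees) between consecutive displacement
    vectors of a trajectory with shape (T, d)."""
    v = np.diff(traj, axis=0)                       # displacements
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    cos = np.clip((v[:-1] * v[1:]).sum(axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

t = np.linspace(0, 1, 20)[:, None]
straight = t * np.array([1.0, 2.0])                 # a straight latent path
curved = np.hstack([np.cos(6 * t), np.sin(6 * t)])  # an arc through latent space

print(mean_curvature_deg(straight), mean_curvature_deg(curved))
```

A "temporal straightening" objective would, in these terms, push trajectories toward the first case.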
Christian Internò @ChrisInterno
Amazing work! Congrats @yingwww_ :) Happy that we had the chance to chat about perceptual straightening at the @worldmodel_conf! I'm very curious about what comes next in this line of work! Here's our contribution (NeurIPS 2025): arxiv.org/abs/2507.00583
Christian Internò retweeted
AMI Labs @amilabs
Advanced Machine Intelligence (AMI) is building a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe.

We've raised a $1.03B (~€890M) round from global investors who believe in our vision of universally intelligent systems centered on world models. This round is co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, along with other investors and angels across the world.

We are a growing team of researchers and builders, operating in Paris, New York, Montreal and Singapore from day one.

Read more: amilabs.xyz

AMI - Real world. Real intelligence.
Christian Internò retweeted
David Klindt @klindt_david
Wow, I did not expect that DINOv3's global [CLS] token linearly represents the continuous geometric latents of dSprites (size & X/Y position) 🤯 It only took me 3.5 years to finally run this experiment 😂 I'm looking to do more of this MechInterp work, dissecting foundation models like biological artifacts and building theory. If you want to collaborate (especially students looking for a fun project) reach out! 🔬🤖
David Klindt @klindt_david

If there were an image input, I would be curious to show it some DSprites examples and ask: what are the independent factors of variation in that data 🤓

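The claim is that continuous generative factors (size, X/Y position) are linearly decodable from the global [CLS] embedding. The generic check for that is a linear probe with an R² score; a self-contained numpy sketch, with synthetic features standing in for DINOv3 (nothing below touches the actual model, and the feature construction is purely hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth continuous factors (size, x, y), as in dSprites.
n = 500
factors = rng.uniform(size=(n, 3))

# Stand-in "foundation model" features: a fixed random linear
# embedding of the factors plus noise (hypothetical, not DINOv3).
A = rng.normal(size=(3, 128))
feats = factors @ A + 0.05 * rng.normal(size=(n, 128))

# Linear probe: ridge regression from features back to the factors.
lam = 1e-2
F = np.hstack([feats, np.ones((n, 1))])          # add a bias column
W = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ factors)
pred = F @ W

ss_res = ((factors - pred) ** 2).sum(axis=0)
ss_tot = ((factors - factors.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1 - ss_res / ss_tot
print("per-factor R^2:", np.round(r2, 3))
```

High R² per factor is what "linearly represents the continuous geometric latents" cashes out to; swapping in real [CLS] features and held-out data would make it a proper experiment.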
Christian Internò @ChrisInterno
I like this. Nature is the way, as it always has been.
Bo Wang @BoWang87

This week feels like AGI meets sci-fi.

First: @CorticalLabs ships the CL1 - real human neurons grown on silicon, running DOOM. A biological computer you can buy. (x.com/mastronomers/s…)

Second: Eon Systems uploads a fruit fly. They took FlyWire's complete connectome - 130,000 neurons, ~50M synapses - simulated it with a biologically realistic neuron model, and connected it to a MuJoCo body. The simulated fly avoids toxic compounds. Sensory input to physical behavior, loop closed. (x.com/michaelandregg…)

Same week. Two completely different paths to the same place.

Cortical Labs (bottom-up): plate real neurons on a multielectrode array, give them a closed-loop environment, apply the free energy principle. No backprop. No gradient descent. They learn. The CL1 has ~800K neurons. The human brain has 86 billion. But the same principles apply. (pubmed.ncbi.nlm.nih.gov/36228614/)

Eon Systems (top-down): take the connectome - the wiring diagram - and run it as a simulation. No real cells. Just the map, instantiated in silicon, driven by spiking neuron dynamics from real electrophysiology. Wire the motor outputs to a physics engine. Watch it move. (FlyWire: doi.org/10.1038/s41586… | Shiu et al.: doi.org/10.1038/s41586…)

One starts with biology and asks what it can compute. The other starts with the map and asks how to run it. Both are making the same bet: that the structure of biological circuits - not their wetware substrate - is where intelligence lives.

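The "biologically realistic neuron model" in simulations like these is typically some flavor of spiking dynamics; the textbook minimal case is the leaky integrate-and-fire neuron. A generic LIF sketch (not Eon's or Shiu et al.'s actual model, and the parameter values are standard illustrative choices, not theirs):

```python
import numpy as np

def simulate_lif(i_ext, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, r_m=1e8):
    """Leaky integrate-and-fire: tau dV/dt = -(V - v_rest) + R*I.
    Returns spike times (s) for a constant input current i_ext (A)."""
    v, spikes = v_rest, []
    for step in range(int(0.5 / dt)):          # simulate 500 ms
        v += dt / tau * (-(v - v_rest) + r_m * i_ext)
        if v >= v_thresh:                      # threshold crossing
            spikes.append(step * dt)
            v = v_reset                        # reset after a spike
    return spikes

sub = simulate_lif(1.0e-10)    # subthreshold current: no spikes
supra = simulate_lif(3.0e-10)  # suprathreshold: regular spiking
print(len(sub), len(supra))
```

Connect 130,000 of these (with synaptic currents instead of a constant drive) to a wiring diagram and a physics engine, and you have the shape of the top-down approach described above.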
Christian Internò retweeted
Cold Spring Harbor Laboratory
Think a 280-character limit is small? CSHL's Ben Cowley, in collaboration with @PrincetonNeuro and @CarnegieMellon, has recreated the visual system of the macaque - a species of monkey whose brain is much closer to a human's - using an AI model that fits in an email. Punch approves! (And who knows? It could help explain what he sees in that stuffed orangutan!) What if the key to understanding the brain wasn't bigger AI, but smaller AI? cshl.edu/ai-monkey-brai… #Neuroscience #AI #BrainResearch #InspiringInnovation #ScienceMakesLifeBetter
Christian Internò retweeted
William Gilpin @wgilpin0
How do time series foundation models forecast unseen dynamical systems? In new experiments, we find that small transformers learn to approximate transfer operators in-context. (1/N) arxiv.org/abs/2602.18679
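A transfer (Koopman) operator propagates observables of a dynamical system one step forward; in the linear, finite-dimensional case its classical estimate is a least-squares fit between time-shifted snapshot matrices (the dynamic mode decomposition baseline). The claim that transformers approximate this in-context is the paper's; the sketch below is only that classical estimator, on a toy linear system:

```python
import numpy as np

rng = np.random.default_rng(0)

# A linear dynamical system x_{t+1} = A x_t (rotation + mild decay).
theta = 0.3
A_true = 0.98 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])

# Roll out one trajectory (the "context" a forecaster would see).
T = 60
X = np.empty((T, 2))
X[0] = rng.normal(size=2)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t]

# Least-squares transfer-operator estimate from snapshot pairs:
# find A_hat minimizing || X[1:] - X[:-1] @ A_hat.T ||_F
A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

err = np.linalg.norm(A_hat - A_true)
print(f"operator recovery error: {err:.2e}")
```

For noiseless linear dynamics the estimator recovers the operator essentially exactly; the interesting question in the paper is how a transformer, given only the context window, ends up behaving like this fit.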
Christian Internò @ChrisInterno
10/10) This work was recently presented at the @worldmodel_conf 2026 at @Mila_Quebec together with @klindt_david. I’m really excited about the growing ML × physics community and what it could do for scientific discovery. If you’re working on world models / interpretability / scientific ML, let’s connect :)
Christian Internò @ChrisInterno
Amazing work! We’ve been exploring a similar angle with PhyIP, a non-invasive probe for physics world models. We tested it on fluid dynamics and the same orbital mechanics dataset from @keyonV, and found that invasive adaptation or high-capacity probes can actually corrupt latent physics. Would love to discuss this with you! :) @ZimingLiu11 x.com/ChrisInterno/s… @klindt_david
Ziming Liu @ZimingLiu11
🚨Transformers don't learn Newton's laws? They learn Kepler's laws! Like us, transformers don't predict a flying ball via a differential equation, but by fitting a curve. Moreover, reducing context length steers a transformer from Keplerian to Newtonian. Compression in play.
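The Kepler-vs-Newton contrast above is between fitting the orbit's shape (a conic section) and integrating the force law. A tiny numpy sketch of the "Keplerian" route: recover an ellipse from orbit samples by homogeneous least squares, with no force law anywhere (illustrative only, not the paper's experimental setup):

```python
import numpy as np

# Sample points on a Keplerian orbit (an ellipse): x = 2 cos t, y = sin t.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x, y = 2 * np.cos(t), np.sin(t)

# "Kepler" route: fit the general conic a x^2 + b xy + c y^2 + d x + e y + f = 0
# by taking the null space of the design matrix (curve fitting, no dynamics).
M = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
_, _, Vt = np.linalg.svd(M)
a, b, c, d, e, f = Vt[-1]              # smallest-singular-vector solution

residual = np.abs(M @ Vt[-1]).max()    # how well the conic explains the data
discriminant = b * b - 4 * a * c       # < 0  =>  the fitted conic is an ellipse
print(residual, discriminant)
```

The "Newtonian" route would instead integrate the inverse-square law step by step; the tweet's point is that a transformer with a long context appears to do the former, not the latter.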