Imran Thobani
@cogphilosopher

67 posts

Neuroscience postdoc @Stanford, previously philosophy of neuroscience PhD. Building large-scale brain models using deep learning.

Stanford, CA · Joined August 2008
617 Following · 269 Followers
Pinned Tweet
Imran Thobani @cogphilosopher
1/x Our new method, the Inter-Animal Transform Class (IATC), is a principled way to compare neural network models to the brain. It's the first to ensure both accurate brain activity predictions and specific identification of neural mechanisms. Preprint: arxiv.org/abs/2510.02523
[image]
3 replies · 14 reposts · 46 likes · 13.4K views
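A minimal sketch of the generic recipe this line of work builds on (synthetic data, ridge regression as a placeholder transform class; not the IATC method or any code from the preprint): fit an assumed transform class from model activations to recorded neural responses, then score its predictions on held-out stimuli.

```python
# Minimal sketch: transform-based model-to-brain comparison with synthetic data.
# The transform class here (ridge regression) is a placeholder assumption; the
# IATC preprint is about which transform class is principled to use.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
model_features = rng.standard_normal((500, 128))    # 500 stimuli x 128 model units (synthetic)
neural_responses = rng.standard_normal((500, 40))   # 500 stimuli x 40 recorded neurons (synthetic)

X_train, X_test, y_train, y_test = train_test_split(
    model_features, neural_responses, test_size=0.2, random_state=0)

transform = Ridge(alpha=1.0).fit(X_train, y_train)  # fit the assumed transform class
preds = transform.predict(X_test)

# Score: per-neuron correlation between predicted and held-out responses
scores = [np.corrcoef(preds[:, i], y_test[:, i])[0, 1] for i in range(y_test.shape[1])]
print(f"median held-out correlation: {np.median(scores):.3f}")
```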
Imran Thobani Retweeted
Aran Nayebi @aran_nayebi
I don't see why prediction has to be framed as necessarily at odds with "understanding". The two naturally go hand-in-hand. Prediction is the *minimal* scientific prereq for anything you want to further investigate. We didn't even have successfully predictive systems of large-scale neural population responses in the neurosciences until ML started working.

Furthermore, "understanding" isn't an objective measure -- it's aesthetically in the eye of the beholder. So it's not clear there's a well-defined global notion here to begin with, besides prediction alone. If you ask 10 scientists what they mean by "understanding", you'll get > 10 different answers 🙂

Not to mention, causal manipulations are naturally supported in ANNs because they're mechanistic models by construction: you have the entire network graph available to you to perturb as you choose.

As the saying goes: “Everything should be as simple as it can be, but not simpler.” And it's quite clear there isn't anything simpler than ANNs without losing tons of predictive power. Why bother "understanding" a system that doesn't even predict the scientific phenomenon at hand?
The Transmitter@_TheTransmitter

Neuroscience has become increasingly concerned with prediction, and machine learning with causal explanation, with each field adopting methods from the other, writes @gershbrain. Will this bring us closer to understanding neural systems? thetransmitter.org/the-big-pictur…

4 replies · 6 reposts · 45 likes · 8.4K views
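A minimal sketch of the "perturb the network graph" point above (a toy network and an arbitrary unit chosen for illustration, not any published method): register a forward hook that silences one hidden unit and measure how much the output changes.

```python
# Minimal sketch: causal manipulation of an ANN by silencing one hidden unit.
# The network, input, and unit index are arbitrary choices for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))
x = torch.randn(1, 10)

baseline = net(x)

def silence_unit(module, inputs, output):
    out = output.clone()
    out[:, 7] = 0.0          # "lesion" hidden unit 7
    return out               # returning a tensor replaces the layer's output

handle = net[1].register_forward_hook(silence_unit)
perturbed = net(x)
handle.remove()

print("max output change from silencing one unit:",
      (perturbed - baseline).abs().max().item())
```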
Imran Thobani Retweeted
Klemen Kotar @KlemenKotar
1/ A good world model should be promptable like an LLM, offering flexible control and zero-shot answers to many questions. Language models have benefited greatly from this fact, but it's been slow to come to vision. We introduce PSI: a path to truly interactive visual world models 🧵
[image]
3 replies · 35 reposts · 131 likes · 45.2K views
Imran Thobani @cogphilosopher
Sunrise over Prague
[image]
0 replies · 1 repost · 4 likes · 194 views
Imran Thobani @cogphilosopher
Devoured these pastries from the oldest bakery in Copenhagen today, Skt. Peders Bageri.
[image]
0 replies · 1 repost · 4 likes · 220 views
Imran Thobani Retweeted
Rahul Venkatesh @Rahul_Venkatesh
AI models segment scenes based on how things appear, but babies segment based on what moves together. We utilize a visual world model that our lab has been developing to capture this concept, and what's cool is that it beats SOTA models on zero-shot segmentation and physical manipulation tasks.

So is segmentation solved, or are we only beginning to learn how to define it in a way that supports downstream applications? Find out how causal probes on world models help address this definitional challenge.

Check out our preprint on “Discovering and Using Spelke segments”: neuroailab.github.io/spelke_net

Get ready for 🤿 1/n
9 replies · 14 reposts · 54 likes · 105.9K views
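A minimal sketch of the "what moves together" (common-fate) intuition the tweet describes, not the SpelkeNet/world-model approach itself: cluster per-pixel motion vectors so that a coherently moving region falls into one segment. The flow field below is synthetic and the two-cluster choice is an assumption for illustration.

```python
# Minimal sketch: group pixels by shared motion (common fate), with a synthetic flow field.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, w = 32, 32
flow = np.zeros((h, w, 2))
flow[8:20, 8:20] = [3.0, 0.0]                   # an "object" translating to the right
flow += 0.1 * rng.standard_normal(flow.shape)   # small background jitter

# Cluster per-pixel motion vectors; pixels that move together land in one segment
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    flow.reshape(-1, 2)).reshape(h, w)

print("pixels grouped with the moving object:", int((labels == labels[12, 12]).sum()))
```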