Haim Sompolinsky
@HSompolinsky
57 posts
@Harvard Professor of MCB & Physics and Director of Swartz Program in Theoretical Neuroscience; @HebrewU Professor of Physics and Neuroscience (Emeritus)
Joined July 2018
21 Following · 5K Followers
Markus Meister@mameister4·
No neuronal fMRI after all. Impressive work from Jasanoff group debunking the recent glam report on "DIANA". Traced the claim down to a subtle artifact of pulse timing in MRI. On the plus side: DIANA is a cool acronym, now available again for other uses. science.org/doi/10.1126/sc…
Haim Sompolinsky@HSompolinsky·
the title of LeCun's slide says it all.
[image attached]
Haim Sompolinsky@HSompolinsky·
Thanks, Yann for the great inspiring talk at Harvard.
[image attached]
Haim Sompolinsky retweeted
יצחק הרצוג Isaac Herzog
With great Israeli pride, I congratulated Professor Haim Sompolinsky of the Hebrew University of Jerusalem today on winning the prestigious Lundbeck Prize, the most important prize in the world in the field of brain research. This is a groundbreaking Israeli achievement that reflects the excellence of Israeli academia in research and beyond. I am confident that it will be followed by further honors and prizes recognizing the contributions of Israelis to the global scientific community, and I hope, and am working to ensure, that we will know how to show gratitude, as a society and as a state, to the giants of our academia and research.
Haim Sompolinsky@HSompolinsky·
Topics include principles of early sensory processing; unsupervised and supervised learning; attractors, memory, and spatial functions in cortical circuits; noise, chaos, and neural coding; learning, representations, and cognition in deep neural networks in brains and AI. 3/3
Haim Sompolinsky@HSompolinsky·
The course introduces analytical and numerical tools from information theory, dynamical systems, statistics, statistical physics, AI, and machine learning for the study of neural computation. 2/3
Haim Sompolinsky@HSompolinsky·
My Harvard/Neuro 231, 2024 Edition begins soon. It explores contemporary brain theory spanning local neuronal circuits as well as deep neural networks. It examines the relationship between network structure, dynamics, and computation. 1/3
Haim Sompolinsky@HSompolinsky·
The theory unites NTK and NNGP as two limits of the same underlying process. We introduce the Neural Dynamical Kernel (NDK), derive equations for the dynamics of the mean predictor of the network, and discuss implications for the problem of representational drift in neuroscience.
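For readers unfamiliar with the NNGP limit the thread refers to: in an infinitely wide one-hidden-layer ReLU network with i.i.d. standard Gaussian weights, the network prior is a Gaussian process whose covariance is the arc-cosine kernel. A minimal numerical check of that closed form against a Monte-Carlo average over random hidden units (an illustration only, not the paper's NDK):

```python
import numpy as np

def nngp_relu_kernel(x, y):
    """NNGP kernel of an infinitely wide one-hidden-layer ReLU network:
    K(x, y) = ||x|| ||y|| (sin t + (pi - t) cos t) / (2 pi),
    where t is the angle between x and y (the arc-cosine kernel)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    t = np.arccos(cos_t)
    return nx * ny * (np.sin(t) + (np.pi - t) * cos_t) / (2 * np.pi)

# Monte-Carlo estimate of E[relu(w.x) relu(w.y)] over w ~ N(0, I),
# i.e. the covariance contributed by one random hidden unit
rng = np.random.default_rng(0)
x, y = np.array([1.0, 0.0]), np.array([1.0, 1.0])
W = rng.standard_normal((200_000, 2))
mc = np.mean(np.maximum(W @ x, 0) * np.maximum(W @ y, 0))
print(nngp_relu_kernel(x, y), mc)  # the two estimates should agree closely
```

The NTK limit adds a second, gradient-based term on top of this static kernel; the tweet's NDK interpolates between the two regimes.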
Haim Sompolinsky@HSompolinsky·
I am excited to announce recent work by Yehonatan Avidan and Qianyi Li arxiv.org/abs/2309.04522 presenting an analytical theory of learning dynamics in infinitely wide neural networks.
Haim Sompolinsky@HSompolinsky·
JetBlue just entered the Guinness Book of World Records for greediness. It cancelled my flight due to weather conditions but refused to fully refund me; they charged me a cancellation fee! Never underestimate the creative ways companies such as JetBlue chase your money! Any advice?
Haim Sompolinsky@HSompolinsky·
@ylecun When DCNNs said 'cat' in response to an image, or when VAEs drew an image of a cat, they were met with nothing but praise. Nobody said 'they make stuff up'. ChatGPT is a milestone in AI, and big companies should swallow their pride and start working, otherwise they will be sidelined.
Yann LeCun@ylecun·
Excellent WaPo article about large language models and chatbots that corroborates what I've been posting recently: they are useful, but they make stuff up. It details the reasons why large tech companies have been hesitant to release such things for public use. washingtonpost.com/technology/202…
Haim Sompolinsky@HSompolinsky·
Our model proposes a novel scheme for associative memory of temporal sequences. In contrast to using sequence attractors (Sompolinsky and Kanter, 1988), here entire sequences are stored holistically as fixed points, a scheme that is robust to overlap between sequences.
[image attached]
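A minimal sketch of the generic fixed-point storage idea these tweets build on (a classic Hopfield network with Hebbian weights; an illustration only, not the paper's model): patterns stored in the weights become fixed points of the dynamics, so a corrupted cue relaxes back to the stored memory.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3                       # neurons, stored patterns
xi = rng.choice([-1, 1], size=(P, N))

# Hebbian weights: each stored pattern becomes a fixed point
W = (xi.T @ xi) / N
np.fill_diagonal(W, 0.0)

def retrieve(s, steps=20):
    """Synchronous sign updates; converge to a nearby fixed point."""
    for _ in range(steps):
        s_new = np.sign(W @ s)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

cue = xi[0].copy()
flip = rng.choice(N, size=15, replace=False)
cue[flip] *= -1                     # corrupt 15 of the 200 bits
out = retrieve(cue)
overlap = (out @ xi[0]) / N         # near 1 when the pattern is recovered
print(overlap)
```

In the paper's scheme, each fixed point encodes an entire structured sequence rather than a single unstructured pattern, with a separate decoding stage recovering the linked items.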
Haim Sompolinsky@HSompolinsky·
and then store multiple compressed vectors as fixed points in an RNN. Retrieval of a fixed point is followed by decoding of its individual linked components. Thus, the model consists of two memory systems: a 'dictionary' for items and an 'episodic' memory for their linked structures.
[image attached]
Haim Sompolinsky@HSompolinsky·
I am excited to announce the publication of the wonderful paper by Julia Steinberg on associative memory of structured knowledge in Scientific Reports nature.com/articles/s4159…. Most neural models of associative memory store structureless knowledge as simple random patterns in RNNs
Haim Sompolinsky retweeted
Surya Ganguli@SuryaGanguli·
Our new paper in @PNASNews: "Neural representational geometry underlies few-shot concept learning," led by Ben Sorscher and @HSompolinsky: a quantitative theory of neural geometry & few-shot learning, tested in both deep networks & monkey IT pnas.org/doi/abs/10.107…
[image attached]
Haim Sompolinsky@HSompolinsky·
Remarkably, we find that object manifolds in DNN visual feature layers also support zero-shot learning of concepts from linguistic descriptors, revealing a geometric alignment between semantic features in Word2Vec and high-level visual features of the same concepts.
[image attached]
Haim Sompolinsky@HSompolinsky·
And we show that they support highly accurate few-shot learning of novel visual concepts, and that the variability in performance across concepts closely follows the predictions of our manifold geometric theory.
[image attached]
Haim Sompolinsky@HSompolinsky·
Our work on the manifold geometric theory underlying fast learning of novel concepts, led by Ben Sorscher of the Ganguli Lab, is out. pnas.org/doi/pdf/10.107… We apply our theory to object manifold representations in deep neural networks (DNNs) and in macaque IT cortex.
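A toy sketch of the few-shot setting this thread studies (synthetic Gaussian "manifolds" standing in for DNN or IT feature clouds; an illustration only, not the paper's analysis): average m example feature vectors into a class prototype, then classify new examples by the nearest prototype.

```python
import numpy as np

rng = np.random.default_rng(1)
D, m = 50, 5                        # feature dimension, shots per class

# Two hypothetical concept manifolds: Gaussian clouds around random centers
mu_a, mu_b = rng.standard_normal(D), rng.standard_normal(D)
train_a = mu_a + 0.5 * rng.standard_normal((m, D))
train_b = mu_b + 0.5 * rng.standard_normal((m, D))

# Prototype rule: average the m shots, classify by nearest prototype
proto_a, proto_b = train_a.mean(0), train_b.mean(0)

test = mu_a + 0.5 * rng.standard_normal((100, D))   # novel examples of concept A
pred_a = (np.linalg.norm(test - proto_a, axis=1)
          < np.linalg.norm(test - proto_b, axis=1))
print(pred_a.mean())                # fraction correctly assigned to concept A
```

In the paper, the error of this kind of prototype rule is predicted from a few geometric descriptors of the object manifolds (radius, dimension, and the separation of their centers).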