Bill Podlaski
@Bill_P_

548 posts

Passively absorbing the neuro twitterverse...

Lisbon, Portugal · Joined June 2016
607 Following · 322 Followers

Pinned Tweet
Bill Podlaski @Bill_P_
🚨 New paper alert! Have you ever suspected that spikes, Dale's law, and E/I balance might be more than just biological constraints, but rather fundamental to how brains compute? Check out my latest work with Christian Machens @Neuro_CF: tinyurl.com/mpwkkubd 🧵 (1/5)
4 replies · 30 reposts · 92 likes · 11.7K views
Mohamady El-Gaby @GabyMohamady
Delighted to share that I’ll be joining @OxExpPsy as a group leader/Faculty next spring. My group will investigate how our cells build models of the world and of our internal goals, and how this goes wrong in disorders like schizophrenia.
21 replies · 9 reposts · 128 likes · 12.9K views
Bill Podlaski retweeted
Dan Goodman @neuralreckoning
Missed the #SNUFA24 spiking neural network workshop last week? No worries, we uploaded all 11 talks and the flash talks to YouTube. Check it out, along with talks from every year since we started in 2020 📺 71 videos, 36k views and 3.4k hours watched so far. youtube.com/playlist?list=…
0 replies · 21 reposts · 76 likes · 5.2K views
Bill Podlaski retweeted
Mohamady El-Gaby @GabyMohamady
“Where was I again?” Our study published today nature.com/articles/s4158… reveals brain cells can form a coordinate system for our behaviours. Instead of locating where we are in the world, this coordinate system tells us “where we are” in a sequence of behaviours: 🧵below:
[image]
20 replies · 129 reposts · 418 likes · 56.3K views
Bill Podlaski retweeted
Dan Goodman @neuralreckoning
#SNUFA24 starting at 14:00 CET (in just over 2 hours). Say no to doomscrolling, and yes to a free online workshop on spiking neural networks. snufa.net/2024
[image]
Quoted tweet from Dan Goodman @neuralreckoning:
The #SNUFA24 final program is out and the event is next Tue-Wed! If you love spiking neural networks, click on the link below to check it out and register (free). snufa.net/2024/
0 replies · 6 reposts · 36 likes · 2.3K views
Bill Podlaski retweeted
Dan Goodman @neuralreckoning
The #SNUFA24 final program is out and the event is next Tue-Wed! If you love spiking neural networks, click on the link below to check it out and register (free). snufa.net/2024/
[image]
2 replies · 27 reposts · 63 likes · 12K views
Bill Podlaski retweeted
Sebastian Seung @SebastianSeung
🧵 on Japan's underrated contributions to neural nets. Shun-ichi Amari @UTokyo_News_en @riken_en is another one of my heroes. His 1972 paper on associative memory modeled Hebbian plasticity using an outer-product weight matrix.
[image]
5 replies · 175 reposts · 650 likes · 98.4K views
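The outer-product rule mentioned in the tweet above is compact enough to sketch in a few lines: stored patterns are summed as outer products into a weight matrix, and recall iterates a thresholded update until the network settles on the nearest stored pattern. This is only an illustrative sketch of the general technique, not code from Amari's paper; the function names and the example patterns are invented for the demo.

```python
import numpy as np

def store(patterns):
    # Outer-product (Hebbian) weight matrix: W = sum_p x_p x_p^T
    W = sum(np.outer(x, x) for x in patterns).astype(float)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, probe, steps=5):
    # Iterated sign-thresholding pulls the probe toward a stored pattern
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Store two orthogonal ±1 patterns, then recall from a corrupted probe
p1 = np.array([1, -1, 1, -1, 1, -1, 1, -1])
p2 = np.array([1, 1, -1, -1, 1, 1, -1, -1])
W = store([p1, p2])
noisy = p1.copy()
noisy[0] = -1          # flip one bit
restored = recall(W, noisy)  # recovers p1
```

With orthogonal patterns, a single thresholded pass already corrects the flipped bit, which is what makes the outer-product construction attractive as a memory.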
Bill Podlaski retweeted
matteo saponati @matteosaponati
I am excited to announce our workshop at the upcoming Bernstein Conference 2024 @BernsteinNeuro How much biological inspiration makes a system "neuromorphic"? What can we learn from Neuroscience breakthroughs and Machine Learning success? link: tinyurl.com/5yzskcpd 1/n
[image]
2 replies · 23 reposts · 103 likes · 10.4K views
Bill Podlaski retweeted
Dan Goodman @neuralreckoning
SPIKING NEURAL NETWORKS! If you love them, join us at SNUFA24. Free, online workshop, Nov 5-6 (2-6pm CET). Usually ~700 participants. Invited speakers: Chiara Bartolozzi, David Kappel, Anna Levina, Christian Machens. Posters + 8 contributed talks selected by participant vote.
[image]
1 reply · 42 reposts · 161 likes · 25.9K views
Bill Podlaski retweeted
Michael Lohse @LohseNeuro
Proud to share our paper in @Nature that Andrei and I co-first authored with @TFlogel @SWC_Neuro. We recorded ~15,000 neurons across the brain while mice made perceptual decisions, revealing how sensory evidence controls actions through global neural dynamics. nature.com/articles/s4158…
3 replies · 59 reposts · 226 likes · 22.1K views
Bill Podlaski retweeted
Friedemann Zenke @hisspikeness
We're hiring! Come build models of how the brain learns and simulates a world model. We have several openings at PhD and postdoc levels, including a collab with @georg98keller lab on designing regulatory elements to target distinct neuronal cell types. zenkelab.org/jobs
[image]
4 replies · 72 reposts · 193 likes · 20K views
Bill Podlaski retweeted
Dániel Barabási @bdanubius
it's fellowship szn! I spent *a lot* of time applying for postdocs + independent positions last year, and I want to share my notes. In this doc, I list deadlines, pay, and other 🔑 info on bio/theory positions I considered. DM me for advice anytime. docs.google.com/spreadsheets/d…
11 replies · 292 reposts · 1.5K likes · 130.9K views
Bill Podlaski retweeted
Richard Gao @_rdgao
My #AI4Neuro magnum opus: Discovery of spiking network model parameters constrained by neural recordings, using simulation-based inference & generative “AI”. (aka the answer to “how the f did you end up in Tübingen?”) Here's what we have in store: biorxiv.org/content/10.110…
10 replies · 53 reposts · 209 likes · 32.2K views
Bill Podlaski retweeted
Carolina Rezaval @crezaval
Sex or survival: what's more important? Excited to share our @Nature paper on how flies resolve this conflict. We found a dopamine-based filter that reduces threat perception, helping flies focus on courtship when close to mating. nature.com/articles/s4158…
38 replies · 120 reposts · 461 likes · 53.8K views
Bill Podlaski retweeted
Michele Nardin (he/him) @michnard_
📢📢 preprint alert! 🥳🤓 Hippocampal representation hierarchies are cool! And reshape in a smart way during learning! Great job Heloisa Chiossi!! w/ @michnard_, Gašper Tkačik and Jozsef Csicsvari. biorxiv.org/content/10.110…
[image]
1 reply · 21 reposts · 94 likes · 7.2K views
Bill Podlaski retweeted
Timoleon (Timos) Moraitis @timos_m
AI hardware systems such as @nvidia's GPUs are a true marvel of technology: extremely complex structures that also harness enormous amounts of energy. That is because modern AI algorithms involve multiple functions, and GPUs implement those across multiple physical components. This requires careful orchestration and heavy communication among the components, through wires spanning entire chips and even multiple chips.

In new work with @ETH_en published in @NatureComms, we demonstrate that six key functions of modern AI algorithms can in fact be included in a single nanometre-scale device. Namely, Chris Weilenmann and team, under the guidance of Dr Alexandros Emboras and Prof. Mathieu Luisier, show an individual memristor that provides not only the ability to store a memory of the network structure (i.e. a synaptic connection weight) and to transmit information between neurons (i.e. a weighting operation), but also working memory (i.e. recurrency), learning (i.e. synaptic plasticity), selective context retention/forgetting (i.e. short-term plasticity), and meta-learning (i.e. learning how to learn and forget). These are arguably the key functions that have brought modern AI models such as LLMs to their impressive performance.

To reach this result, we previously took inspiration from the brain to devise a new algorithm (the STPN, Short-Term Plasticity Neuron) that is suitable for such #neuromorphic hardware implementations and reaches high performance in complex tasks. We started this line of work on STP at @IBMResearch with @abuseb Abu Sebastian and Evangelos Eleftheriou (IJCNN, IEEE Nano, arXiv etc., 2017-2020); it continued at @Huawei with @hector_grhv (ICML 2022), and culminated in this latest paper, where we finally demonstrate a device that realizes the model physically.

Moreover, we use measurements from such devices in a neural network that plays an Atari video game not only better than a human, but also with a 100x reduction in energy consumption compared to a GPU. This approach improves AI not only in comparison to GPUs, but also compared with other in-memory computing (#IMC) hardware, and with other neural network algorithms. Often, IMC for AI implies memory devices with "just" the two functions of weight storage and input-weight multiplication, which is already a significant improvement over the (von Neumann) architecture of GPUs, which separates memory from computation. Our work pushes the envelope of IMC further by including more of the key computational operations within the memory. As a result, our model (STPN) improves the efficiency of memristive hardware (see ICML 2022). Moreover, our algorithm is not only more efficient but also performs better than other models such as LSTM, even on GPU, as we showed in our previous work.

Sincere thanks to Chris, Alex, Mathieu, and team for realizing this vision and for involving me.
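The split between a slow learned weight and a fast, decaying Hebbian component described in the thread above can be sketched in a few lines. This is a rough illustration of the short-term-plasticity idea, not the authors' implementation: the variable names, the tanh nonlinearity, and the fixed decay/plasticity constants are all assumptions for the demo (in the STPN paper these rates are themselves learned).

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 4, 3

# Slow, learned weights (held fixed here, as during inference)
W = rng.normal(scale=0.5, size=(n_out, n_in))
# Per-synapse decay and plasticity rates (illustrative constants)
lam = np.full((n_out, n_in), 0.9)   # how much fast state survives each step
gam = np.full((n_out, n_in), 0.1)   # how strongly recent activity is written

F = np.zeros((n_out, n_in))  # fast component: the synapse's working memory
h = np.zeros(n_out)

for t in range(10):
    x = rng.normal(size=n_in)
    G = W + F                           # effective weight = slow + fast part
    h = np.tanh(G @ x)                  # neuron output
    F = lam * F + gam * np.outer(h, x)  # decaying Hebbian short-term update
```

Because `F` is an outer product of recent pre- and post-synaptic activity that decays geometrically, the synapse itself carries context between inputs, which is the recurrency/short-term-plasticity role the thread attributes to a single memristive device.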
[4 images]
2 replies · 9 reposts · 30 likes · 3.1K views