Patrick Mineault
@patrickmineault

NeuroAI researcher @ Amaranth Foundation, safety, open science. Previously engineer @ Google, Meta, Mila.

New York City · Joined April 2011
2.5K Following · 23K Followers

Pinned Tweet
Patrick Mineault @patrickmineault
Excited to release what we’ve been working on at Amaranth Foundation, our latest whitepaper, NeuroAI for AI safety! A detailed, ambitious roadmap for how neuroscience research can help build safer AI systems while accelerating both virtual neuroscience and neurotech. 1/N
Patrick Mineault retweeted
Fatih Dinc @fatihdin4en
For decades, two revolutions in neuroscience ran in parallel:
- 🧠 In vivo imaging — watch neurons fire in living animals
- 🧬 Spatial transcriptomics — read each cell's molecular identity
Meet TRU-FACT, a graph-based method that matches cells between these datasets at scale 🧵
Patrick Mineault retweeted
Dwarkesh Patel @dwarkesh_sp
There's a quadrillion-dollar question at the heart of AI: why are humans so much more sample-efficient than LLMs?

There are three possible answers:
1. Architecture and hyperparameters (aka transformer vs whatever ‘algo’ cortical columns are implementing)
2. Learning rule (backprop vs whatever the brain is doing)
3. Reward function

@AdamMarblestone believes the answer is the reward function. ML likes to use pretty simple loss functions, like cross-entropy. These are easy to work with. But they might be too simple for sample-efficient learning. Adam thinks that, in humans, the large number of highly specialized cells in the ‘lizard brain’ might actually be encoding information for sophisticated loss functions, used for ‘training’ the more sophisticated areas like the cortex and amygdala.

Like: the human genome is barely 3 gigabytes (compare that to the TBs of parameters that encode frontier LLM weights). So how can it include all the information necessary to build highly intelligent learners? Well, if the key to sample-efficient learning resides in the loss function, even very complicated loss functions can still be expressed in a couple hundred lines of Python code.
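The closing claim, that even elaborate loss functions compress into little code, can be sketched. Everything below (the function name, the particular terms, the weights) is a hypothetical illustration, not Marblestone's actual proposal:

```python
import numpy as np

# Hypothetical sketch: a reward function combining several innate drives.
# Each term is one line, yet together they specify a much richer training
# signal than a bare supervised loss.
def composite_loss(pred, target, novelty, social_proximity,
                   w_pred=1.0, w_nov=0.1, w_soc=0.5):
    prediction_error = np.mean((pred - target) ** 2)  # basic supervised term
    curiosity_bonus = -w_nov * np.mean(novelty)       # reward surprising inputs
    social_term = -w_soc * np.mean(social_proximity)  # reward prosocial states
    return w_pred * prediction_error + curiosity_bonus + social_term
```

On this view, each additional innate drive costs a line or two, so a few hundred lines could plausibly encode a very sophisticated training signal.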
Patrick Mineault retweeted
Quanta Magazine @QuantaMagazine
Recently, neuroscientist Jeffrey Magee explored the unsung role that dendrites play in memory formation. The findings rewrite our understanding of the brain’s neuroplasticity. “That made it even more interesting, of course, and a little bit intimidating, because then we were going to be facing up to nearly 100 years’ worth of dogma,” he said. quantamagazine.org/a-new-type-of-…
Patrick Mineault retweeted
Marcelo Mattar @marcelomattar
New Annual Review with @nathanieldaw. We argue that the planning machinery of the brain is mostly used for learning from simulated experience, and that thinking prospectively at decision time is just one special case of this more general process. annualreviews.org/content/journa…
Patrick Mineault retweeted
A. Sophia Koepke @ASophiaKoepke
New paper: Back into Plato’s Cave
Are vision and language models converging to the same representation of reality? The Platonic Representation Hypothesis says yes. BUT we find the evidence for this is more fragile than it looks.
Project page: akoepke.github.io/cave_umwelten/ 1/9
Patrick Mineault retweeted
Yujin GOTO @eugene_vrain
It is well known that motor cortex is already active during movement planning. So why is no muscle output produced? This is a review by the authors, who have explained this not at the single-neuron level but as a population-level dynamical system, using the concept of a null space orthogonal to the movement space. nature.com/articles/s4158… #日本神経科学学会ニューロナビゲータ
Patrick Mineault retweeted
Agnès Landemard @agnesldm
How does blood flow relate to brain activity? We discovered that it reflects two neural populations affected oppositely by arousal. Together, they explain neurovascular coupling in all brain regions and brain states! Out in Nature: rdcu.be/fdC2A @UCLBrainScience
Patrick Mineault retweeted
Jean-Rémi King @JeanRemiKing
🧠 The Digital Brain Project is now live: $5M total · up to $500k per selected team. Let's open-source the modeling of human brain activity! ➡️ Apply at: digitalbrainproject.org
Patrick Mineault retweeted
Doris Tsao @doristsao
This is the strongest ephys evidence so far for a generative model in the brain that I know of. Congratulations @WadiaVarun! Wonderful collaboration with @UeliRutishauser on science that could only be done in humans. And please check out Fig. 5FG. This is new since biorxiv and really surprised me: the mean response to imagery and viewing is actually the same & there are many cells that respond only during imagery--challenging the idea that signal strength is what distinguishes reality from imagination.
Varun Wadia @WadiaVarun

1/8 Our preprint is now a peer-reviewed paper :) Big thanks to our reviewers who pushed us to examine our results more carefully and Olivier Wyart (headquarter.paris) for the exquisite visual. science.org/doi/10.1126/sc…

Patrick Mineault retweeted
Tianhao Lei @TH_Alec_Lei
🎉 Excited to share our new paper in Nature: “Active Dissociation of Intracortical Spiking and High Gamma Activity.” 🧠 Huge thanks to my advisors first: @SlutzkyLab @joshuaiglaser Paper link: nature.com/articles/s4158… Here are some digests that walk you through the results 👇🏼
Patrick Mineault retweeted
Patrick Mineault @patrickmineault
Fair question. It comes down to the conditioning of the A matrix (the sampling matrix from x, the high-dimensional signal, to y, the measured signal). With ultrasound, you get ~200 um voxels. The instrument is high CNR and the reconstruction problem is trivial, which is great. But what you're measuring is the hemodynamics, so ultimately the area of integration is larger than that (it depends on the drainage area of the arterioles and venules). What it effectively does is average over (a nonlinear function of) the receptive fields of tens of thousands of neurons.

If you have ordered receptive fields (topography) across the cortical surface (e.g. in motor cortex and in visual cortex), the averaging process leaves the uv dimensions intact (in V1, retinotopy; in M1, somatotopy). So you can do decoding with that, etc. But if what you're after is the latents, then the averaging kills the latent signal you're looking for, and you can only measure the true latents with very low CNR. Here, randomly subsampling is much better at preserving manifold distances than averaging.
Bogdan Ionut Cirstea @BogdanIonutCir2
@patrickmineault (my [non-specialist, not-super-confident] understanding is that the transfer function from neuron firings to the hemodynamics measured by fUSI should also be a [perhaps noisy] linear projection, at least based on a very coarse, quick reading of some literature)
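The averaging-versus-subsampling point can be illustrated with a toy simulation. Everything here (the Gaussian mixing matrix, the population sizes, the seed) is an illustrative assumption, not from the thread: when unordered receptive fields are averaged into one coarse voxel, the random mixing weights cancel and the latent signal mostly vanishes, while randomly subsampled individual channels keep it intact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 5000 neurons are random linear mixtures of 5 latent factors.
# The mixing matrix plays the role of unordered receptive fields (no topography).
n_neurons, n_latents, n_time = 5000, 5, 200
latents = rng.standard_normal((n_latents, n_time))
mixing = rng.standard_normal((n_neurons, n_latents))
activity = mixing @ latents                      # (n_neurons, n_time)

# One coarse "voxel" averages all neurons: the random mixing weights
# average toward zero, so almost no latent variance survives.
voxel_average = activity.mean(axis=0)

# Randomly subsampling 50 individual neurons keeps each channel's latent
# content, so manifold structure stays measurable.
subsample = activity[rng.choice(n_neurons, size=50, replace=False)]

print(np.var(voxel_average))   # tiny: latent signal mostly cancelled
print(np.var(subsample))       # roughly n_latents per channel
```

In this toy setting the per-channel variance of the subsample exceeds that of the averaged voxel by orders of magnitude, which is the sense in which random subsampling preserves the latents that averaging destroys.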
Patrick Mineault @patrickmineault
How can neuroscience help AI safety? Neuroscience is so slow! We have a plan: 1) accelerate neuroscience, 2) work on neuroscience that drives toward AI safety within AI timelines. We explain why data-driven representational approaches ("distillation") are the way.
James Fickel @jamesfickel

Towards Magnanimous AGI Before we build extremely powerful alien minds, we must understand our own minds and the mechanisms behind prosocial behavior. After years of investigating brain-based AI safety, here’s what we found and the teams we're backing: blog.amaranth.foundation/p/towards-magn…
