Andreas Tolias Lab @ Stanford University

1.2K posts


@AToliasLab

to understand intelligence and develop technologies by combining neuroscience and AI

Palo Alto, CA · Joined May 2017
808 Following · 5K Followers
Pinned Tweet
Andreas Tolias Lab @ Stanford University
Excited to participate in this @StanfordHAI and @StanfordData AI+Science conference organized by @SuryaGanguli and @RisaWechsler. I’ll bring a #NeuroAI perspective: how #AI can accelerate our understanding of the brain and mind and ultimately help treat and cure brain disorders.
Surya Ganguli@SuryaGanguli

Excited to co-organize with @RisaWechsler this @StanfordHAI and @StanfordData conference on AI+Science: Accelerating Scientific Discovery, on Tuesday May 5th. Anyone can register for the livestream: hai.stanford.edu/events/ai-scie…
We have world-leading speakers talking about:
• AI for Life: molecular biology to brains
• AI for Earth: weather, climate, geophysics and oceans
• AI for Universe: particle physics, cosmology & math
Additionally, we will have @dariogila give a keynote about America's Genesis Mission to accelerate AI for Science. And, interestingly, we will have a panel discussion on the nature and role of human understanding in the future of AI for Science, informed by scientists, AI researchers, and sociologists of science and AI. Excited to develop a highly interdisciplinary and global view of the opportunities and challenges of accelerating science with AI.

Andreas Tolias Lab @ Stanford University retweeted
Demis Hassabis@demishassabis·
I’ve always believed the No.1 application of AI should be to improve human health. That work started with AlphaFold, and now at @IsomorphicLabs with the mission to reimagine drug discovery and one day solve all disease! We are turbocharging that goal with $2.1B in new funding.
Andreas Tolias Lab @ Stanford University retweeted
Goodfire@GoodfireAI·
Neural networks might speak English, but they think in shapes. Understanding their rich *neural geometry* is key to understanding how they work – and to debugging and controlling them with precision. Starting today, we’re releasing a series of posts on this research agenda. 🧵
Andreas Tolias Lab @ Stanford University retweeted
Patrick Mineault@patrickmineault·
Underlying all neurodegenerative diseases is the general process of aging. We must strike at the root! In the short term, we should restore the health of the support systems of the brain. In the long term, we must build discovery platforms that fully capture human biology.
Andreas Tolias Lab @ Stanford University retweeted
Roan@RohOnChain·
Anthropic pays $750,000+ a year for engineers who can build LLM architectures from scratch. Stanford taught the entire thing in a 1-hour lecture & released it for free. Bookmark & watch this today before someone takes it down.
Andreas Tolias Lab @ Stanford University retweeted
Katrin Franke@kfrankelab·
Super excited to see OmniMouse 🐭 released 🎉 A single model of mouse visual cortex, queryable for many different questions:
• Predicting neural activity from video 📹 👉 🧠
• Predicting neural activity in one population from activity in another 🧠 👉 🧠
• Forecasting neural activity forward in time, given video, past activity, or both 📹 & 🧠 👉 🧠
• Decoding behavior (gaze, pupil size, running speed) from neural activity 🧠 👉 🐁
• Predicting neural responses conditioned on behavior 📹 & 🐁 👉 🧠
This kind of multi-modal & multi-task flexibility is an exciting approach for systems neuroscience, enabling systematic in silico exploration of hypotheses about single-neuron and population coding.
The dataset comprises >2 million neurons across 78 mice and 328 sessions of mouse visual cortex, with naturalistic and parametric stimuli alongside behavior. Both the dataset and the pretrained models are available on @huggingface 🙌
Congratulations to @KonstantinWille @pollytur1 @alexrgil14, and the wider team across the @AToliasLab, @alxecker, and @sinzlab labs and the Enigma Project w/ @naturecomputes. Paper, code, data, and models in the original post 👇
Konstantin Willeke@KonstantinWille

🧠Introducing OmniMouse: One of the largest datasets in neuroscience ever assembled along with a systematic study of scaling properties of brain models Co-led with🤩@pollytur1 @alexrgil14 3M neurons, >150B tokens from @AToliasLab @stanford, @alxecker @sinzlab @uniGoettingen 🧵
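The multi-task flexibility described above can be pictured as one shared video core feeding several task-specific readouts. Below is a minimal PyTorch sketch of that pattern; every module, dimension, and name is a hypothetical illustration, not the released OmniMouse architecture (see the paper and the @huggingface release for the real thing).

```python
# Minimal sketch of a shared-core / multi-readout model in the spirit of
# OmniMouse. All names, shapes, and the architecture itself are hypothetical.
import torch
import torch.nn as nn

class SharedCore(nn.Module):
    """Encodes a video clip into a latent feature sequence."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.conv = nn.Conv3d(3, latent_dim, kernel_size=(3, 7, 7), padding=(1, 3, 3))
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))  # keep time, pool space

    def forward(self, video):                      # (B, 3, T, H, W)
        z = torch.relu(self.conv(video))
        z = self.pool(z).squeeze(-1).squeeze(-1)   # (B, latent_dim, T)
        return z.transpose(1, 2)                   # (B, T, latent_dim)

class MultiTaskModel(nn.Module):
    """One core, several readouts: neural prediction, forecasting, behavior."""
    def __init__(self, n_neurons=1000, latent_dim=256):
        super().__init__()
        self.core = SharedCore(latent_dim)
        self.neural_readout = nn.Linear(latent_dim, n_neurons)  # video -> neural
        self.behavior_readout = nn.Linear(latent_dim, 3)        # -> gaze x/y, pupil
        self.forecaster = nn.GRU(latent_dim, latent_dim, batch_first=True)

    def forward(self, video):
        z = self.core(video)
        rates = torch.relu(self.neural_readout(z))  # predicted firing rates
        behavior = self.behavior_readout(z)         # decoded behavioral variables
        future, _ = self.forecaster(z)              # latent forecast of future activity
        return rates, behavior, future

model = MultiTaskModel()
video = torch.randn(2, 3, 16, 36, 64)              # batch of 2 short clips
rates, behavior, future = model(video)
print(rates.shape, behavior.shape)                 # (2, 16, 1000) and (2, 16, 3)
```

The design point is simply that one shared representation amortizes across tasks; the actual model's readouts, conditioning on behavior, and scaling are described in the paper.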

Andreas Tolias Lab @ Stanford University retweeted
Katrin Franke@kfrankelab·
Every experimental neuroscientist knows the feeling: you have a hypothesis, but running the experiment takes months 👩‍🔬
In our new preprint @biorxiv_neursci, we present an openly available functional 'digital twin' of the retinal input to the mouse superior colliculus that lets you test hypotheses in the model first. Try it out yourself using the link below 🧠
We combined chronic two-photon imaging of >200k retinal ganglion cell axonal boutons in the mouse superior colliculus (SC) with deep dynamic models that predict neural responses to parametric light stimuli and natural movies.
Key findings ⚡️
▸ Retinal inputs to the SC form functionally distinct, laminar-organized response types, identified via Gaussian mixture model clustering
▸ The functional diversity of retinal output matches that of retinal input to the SC. We show this by aligning our dataset with a retinal reference dataset using a variational autoencoder with adversarial training
▸ Our deep dynamic digital twin learns stimulus–response transformations and generalizes to stimuli it was never trained on, including the parametric stimuli used for cell type identification
The model functions as a virtual lab bench: feed in any stimulus you're curious about and generate predicted neural responses. As a proof of concept, we fed in a looming stimulus, known to trigger defensive behavior in mice, and identified putative response types selective for this stimulus.
Try it in our Colab notebook with your own stimulus and see what the model predicts
📄 Preprint: biorxiv.org/content/10.648…
💻 Colab: colab.research.google.com/drive/1k9411tL…
📂 Code: github.com/yongrong-qiu/r…
Huge thanks to an incredible cross-institutional team spanning @StanfordMed, @uktuebingen, @uniGoettingen, @bcm_neurosci & many more: @YongrongQ, @lisa_schmors, Na Zhou, Mels Akhmetali, Dominic Gonschorek, Cameron Smith, Anton Sumser, Marie Vallens, @crcadwell, Fabrizio Gabbiani, Maximilian Joesch, @AToliasLab, Philipp Berens, Thomas Euler, @sinzlab, @viajake 🙌
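The response-type identification step mentioned above (Gaussian mixture model clustering of functional features) can be sketched in a few lines. This is a generic illustration with made-up data, feature dimensions, and cluster-count range, not the preprint's actual pipeline:

```python
# Sketch of response-type clustering: fit a Gaussian mixture model to
# per-bouton response features and pick the number of clusters by BIC.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 60))   # e.g. 5000 boutons x 60 response features

# Reduce dimensionality before clustering, as is common for GMMs.
low_d = PCA(n_components=10).fit_transform(features)

best_gmm, best_bic = None, np.inf
for k in range(2, 15):
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(low_d)
    bic = gmm.bic(low_d)
    if bic < best_bic:
        best_gmm, best_bic = gmm, bic

labels = best_gmm.predict(low_d)         # functional type assignment per bouton
print(f"{best_gmm.n_components} response types; counts:", np.bincount(labels))
```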
Andreas Tolias Lab @ Stanford University retweeted
Surya Ganguli@SuryaGanguli·
Tracey Burns and I were recently interviewed by @sarojacoelho at @CBCRadioCanada. We had a fun conversation about AI, brains and education: cbc.ca/listen/live-ra…
My take: AI for education is a dual-use technology. It has the immense potential to deliver powerful educational experiences at scale across the globe if done correctly, but it also has the capacity to dull the human mind if used incorrectly.
The key to powering education with AI is the development of human-AI interfaces that encourage human exploration, provide only targeted hints, and automatically generate related challenges, but never just give the answer. Giving the answer too early is detrimental. Using AI to directly solve your homework is as pointless as using a robot to lift your weights at the gym. The human struggle is where the growth lies, in both mind and body.
The second key, ironically, is that to prevent students from using AI to do their homework, we should evaluate younger students without AI, through closed-book, in-class written exams, especially for fundamental subjects like writing, mathematics and the sciences. Knowing they will be evaluated this way will ensure they can solve problems on their own as they first encounter new concepts. For older students, once they have mastered the concepts, we can teach them how to use AI to superpower their creativity and productivity with those concepts.
We have already been following such a best practice for years: for example, when we teach 1st graders arithmetic, we do not immediately hand them a superhuman calculator; we make sure they master it, and only years later do they use calculators.
In any case, I'm excited about how education can transform for the better with AI, but not all old-school approaches should be abandoned. In the age of AI, we should not take away the gift of struggle from the next generation.
Andreas Tolias Lab @ Stanford University retweeted
Patrick Mineault@patrickmineault·
How can neuroscience help AI safety? Neuroscience is so slow! We have a plan: 1) accelerate neuroscience, 2) work on neuroscience that drives toward AI safety within AI timelines. We explain why data-driven representational approaches ("distillation") are the way.
James Fickel@jamesfickel

Towards Magnanimous AGI Before we build extremely powerful alien minds, we must understand our own minds and the mechanisms behind prosocial behavior. After years of investigating brain-based AI safety, here’s what we found and the teams we're backing: blog.amaranth.foundation/p/towards-magn…

Andreas Tolias Lab @ Stanford University retweeted
James Fickel@jamesfickel·
Towards Magnanimous AGI Before we build extremely powerful alien minds, we must understand our own minds and the mechanisms behind prosocial behavior. After years of investigating brain-based AI safety, here’s what we found and the teams we're backing: blog.amaranth.foundation/p/towards-magn…
Andreas Tolias Lab @ Stanford University
What are deep neural predictive models actually good for? We use them as digital twins of visual cortex. In our inception loop paradigm (large-scale natural data → data-driven #DeepLearning predictive models → in silico experiments → in vivo verification), we characterize center-surround interactions in a fully non-parametric way.
Key result: surrounds can be excitatory when they complete the center in ways consistent with natural scene statistics, and suppressive when they disrupt it. These results generalize across mouse and macaque V1.
We then formalize this as a Bayesian normative model in which neuronal activity for preferred center features reflects posterior beliefs about likely center-surround configurations. A step toward using #AI + large-scale neural data not just to predict the brain, but to discover its principles.
Katrin Franke@kfrankelab

New paper out in @NeuroCellPress 🎉 What determines contextual modulation in primary visual cortex (V1)?
The key result ⚡ V1 neurons are facilitated by surrounds that complete their optimal center feature according to natural scene statistics, and suppressed by surrounds that disrupt it, a principle explained by hierarchical Bayesian inference and conserved across mouse and macaque.
These results converge with Deveau et al. in @NeuroCellPress (cell.com/neuron/fulltex…) from the lab of @HistedLab, who show that recurrent circuits in V1 filter temporal input sequences to selectively boost natural dynamics, and Lange et al. in @ScienceMagazine (science.org/doi/10.1126/sc…) from the lab of @haefnerlab, who show that perceptual learning increases population redundancy as predicted by generative inference. A consistent picture is emerging: (early) visual cortex actively infers the statistical structure of the natural world.
Amazing collaboration with @AToliasLab @haefnerlab @sinzlab Antolik Lab & many more, led by @jiakunfu, with co-authors @suhas_shrinivas & @LuchinoBaroni & many more. The paper is open-access and available here: doi.org/10.1016/j.neur…
More detailed approach: We trained CNN digital twins on large-scale two-photon recordings from mouse V1 and used them to synthesize, for each neuron individually, the surrounds that most strongly facilitate or suppress its response to its optimal center stimulus. Closed-loop in vivo inception loop experiments confirmed the predictions.
Key qualitative finding:
• Surrounds that complete the optimal center feature under natural scene statistics → facilitation
• Surrounds that disrupt it → suppression
We verified this with an independent generative diffusion model (blind to our CNN): statistically likely continuations of the optimal center feature were significantly more similar to facilitatory surrounds in V1 representational space. The same principle holds in macaque V1 despite major differences in receptive field organization.
We formalize these results in a hierarchical Bayesian inference model, in which V1 neurons represent posterior beliefs about local features and feedback from higher areas encodes global scene structure, and find like-to-like excitatory connectivity in the MICrONS dataset as a candidate circuit mechanism.
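The surround-synthesis step of the inception loop described above can be sketched as gradient ascent on the pixels outside a neuron's receptive-field center, holding the optimal center fixed. A minimal sketch, assuming a trained digital-twin `model` mapping stimuli to per-neuron responses; the function, its parameters, and the contrast constraint are illustrative, not the paper's code:

```python
# Sketch of in-silico surround synthesis: optimize only the surround pixels to
# facilitate (or, with the sign flipped, suppress) a model neuron's response.
# `model`, stimulus sizes, and the norm budget are hypothetical placeholders.
import torch

def synthesize_surround(model, neuron_idx, center, center_mask,
                        steps=200, lr=0.05, norm_budget=10.0):
    """center: (1, 1, H, W) optimal center stimulus; center_mask: 1 inside the RF center."""
    surround = torch.zeros_like(center, requires_grad=True)
    opt = torch.optim.Adam([surround], lr=lr)
    for _ in range(steps):
        stim = center * center_mask + surround * (1 - center_mask)
        response = model(stim)[0, neuron_idx]   # model output assumed (1, n_neurons)
        loss = -response                        # ascend response -> facilitating surround
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                   # keep the surround within a contrast budget
            n = surround.norm()
            if n > norm_budget:
                surround.mul_(norm_budget / n)
    return (center * center_mask + surround * (1 - center_mask)).detach()
```

The resulting stimuli are what the closed-loop in vivo experiments then verify back in the brain.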

Andreas Tolias Lab @ Stanford University retweeted
Patrick Mineault@patrickmineault·
.@bingbrunton presenting a worm connectome controlling a fly body was a top three conference moment for me. The point: it's pretty easy to fool yourself into thinking that plausible-looking behavior is a meaningful sim youtube.com/clip/Ugkxr8N0Z…
Andreas Tolias Lab @ Stanford University retweeted
Mengye Ren@mengyer·
Nice work on V-JEPA 2.1 from Meta. Our team has also been exploring dense and hierarchical video SSL for a long time (e.g., FlowE, PooDLe, and Midway). Glad to see it works at a larger scale.
Turing Post@TheTuringPost

A new paper from @ylecun and others – V-JEPA 2.1
It changes the recipe of V-JEPA so the model learns both:
• Global semantics – what is happening in the scene
• Dense spatio-temporal structure – where things are and how they move
The idea is to supervise not just the masked tokens but the visible ones too.
There are 4 key ingredients for V-JEPA 2.1:
- Dense prediction loss on both masked and visible tokens
- Deep self-supervision across intermediate layers
- Modality-specific tokenizers (2D for images, 3D for videos) within a shared encoder
- Model + data scaling
The workflow turns into: masked image/video → encode visible tokens → predict latent representations for both masked and visible tokens → supervise at multiple layers
Here are the details:
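The dense objective sketched in the quoted thread (supervise predicted latents for visible tokens as well as masked ones) might look roughly like this; the encoders, predictor, and shapes are placeholders, not V-JEPA 2.1's actual implementation:

```python
# Sketch of a dense latent-prediction loss: a context encoder sees only the
# visible tokens, a predictor regresses target-encoder features for both
# masked and visible positions. All modules and dimensions are hypothetical.
import torch
import torch.nn.functional as F

def dense_jepa_loss(context_encoder, target_encoder, predictor, tokens, mask):
    """tokens: (B, N, D) patch embeddings; mask: (B, N) bool, True = masked."""
    with torch.no_grad():                      # target encoder is not trained directly
        targets = target_encoder(tokens)       # (B, N, D) latent targets for all tokens

    visible = tokens * (~mask).unsqueeze(-1)   # hide masked tokens from the context
    context = context_encoder(visible)         # (B, N, D)
    preds = predictor(context)                 # predict latents for every position

    masked_loss = F.l1_loss(preds[mask], targets[mask])      # classic JEPA term
    visible_loss = F.l1_loss(preds[~mask], targets[~mask])   # the "dense" addition
    return masked_loss + visible_loss
```

Deep self-supervision, per the thread, would apply the same regression at several intermediate layers rather than only at the output.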

David Sussillo@SussilloDavid·
Me and Michelle holdin' it down in the airport bookstore!! 🤩🤩
Andreas Tolias Lab @ Stanford University retweeted
Surya Ganguli@SuryaGanguli·
Our new paper: "Solving adversarial examples requires solving exponential misalignment", expertly led by @AleSalvatore00 w/ @stanislavfort arxiv.org/abs/2603.03507
Key idea: We all want to align AI systems to human values and intentions. We connect adversarial examples to AI alignment by showing they are a prototypical but exponentially severe form of misalignment at the level of perception. The fact that adversarial examples have remained unsolved for over a decade thus serves as a cautionary tale for AI alignment, and provides new impetus for revisiting them.
We shed light on why adversarial examples exist and why they are so hard to remove by asking a basic question: what is the dimensionality of neural network concepts in image space? For ResNets and CLIP models, we show that neural network concepts (the space of images the network confidently labels as a concept) fill up almost the ENTIRE space of images (~135,000 dimensions out of ~150,000 for ImageNet & ~3,000 out of 3,072 for CIFAR10). In contrast, natural image concepts are only ~20 dimensional.
This indicates exponential misalignment between brain and machine perception (neural networks perceive exponentially many images as belonging to a concept that humans never would). It also explains why adversarial examples exist: if a concept fills up almost all of image space, ANY image will be close to that concept manifold.
We further run experiments across >20 networks showing that adversarial robustness is inversely related to concept dimensionality, though even the most robust networks do not completely align machine and human perception.
Overall, the curse of dimensionality rears its ugly head as an impediment to solving both adversarial examples and alignment: it can be difficult to get AI systems to behave in accordance with human intentions, values, or perceptions over an exponentially large space of inputs.
See @AleSalvatore00's excellent thread for more details: x.com/AleSalvatore00…
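The underlying phenomenon is easy to reproduce: a one-step gradient-sign (FGSM) perturbation, tiny in pixel terms, typically lands an image in a nearby high-confidence region of a different concept. A standard sketch with torchvision; the epsilon and the random stand-in image (a real pipeline would apply the weights' preprocessing transforms) are illustrative:

```python
# Sketch of the basic adversarial-example phenomenon via one-step FGSM.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)      # stand-in for a preprocessed image
image.requires_grad_(True)
logits = model(image)
label = logits.argmax(dim=1)            # the model's confident "concept"

# Increase the loss of the current prediction and step in the gradient's sign.
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 2.0 / 255                     # imperceptibly small max-norm budget
adversarial = (image + epsilon * image.grad.sign()).detach().clamp(0, 1)

print("clean:", label.item(),
      "adversarial:", model(adversarial).argmax(dim=1).item())
```

The paper's point is the geometric reason this keeps working: when a concept's high-confidence region fills nearly all of image space, such a nearby misclassified point almost always exists.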
Andreas Tolias Lab @ Stanford University retweeted
Kenneth Hayworth@KennethHayworth·
So, some people are asking me why this EON fly video doesn't show real 'uploading', since it does simulate a real connectome. The most important reason is that the functional parameters that define the dynamic behavior of individual neuron and synapse types in the connectome are unknown. Instead, they used an existing model (nature.com/articles/s4158…) which substitutes these with guessed parameters and grossly simplified dynamics. As made clear in that older paper, these are not sufficient to recreate the activity patterns that would be seen in the real fly. The simplified dynamics would not, for example, be able to choreograph the timing of leg muscles during walking or grooming, or the dynamics of the compass neurons encoding the fly's heading direction, or the myriad other neuronal dynamics that make up the fly 'mind'. So not an 'upload' by any reasonable definition.
In fact, the simplified dynamics they used have only been demonstrated to approximate gross correlations along major sensory-motor pathways for a handful of neurons. For example: activating a sugar-sensing neuron causes gross downstream activation that elevates the activity of feeding neurons. It is this handful of very, very crude and basic correlations in the simulated connectome that is being used to drive the EON simulated fly.
If they had said that from the start, then I would have had no issue. But instead, they made the bold claim that they had "uploaded a fly" and presented a video of said fly walking over a landscape with highly articulate legs, visually navigating through the terrain to a food source, grooming its antenna with eerily fly-like leg motions, etc. Any reasonable layperson would assume that these visually exciting articulations are the ones being controlled by the simulated brain's dynamics, instead of being faked by computational add-on routines. There are now many secondary reports of this on YouTube and all of them seem to make this reasonable assumption (e.g. youtube.com/shorts/Z7NNP1Z…). And who could blame them? Many neuroscientists also made that assumption before EON started to spell out what was really behind the video, millions of views and over a day later.
To make clearer just how misleading EON Systems' video is and how outlandishly laughable their 'uploading' claim is, below is an imagined back-and-forth between a [Reasonable Layperson] and a [Neuroscientist] trying to explain what is really behind the video:
[Reasonable Layperson] "Look at the complicated leg motions as the fly walks… the timing of all those dozens of individual muscles being controlled by the dynamics of the simulated neurons… and they say that they used no reinforcement learning to tune parameters, just the connectome… that is really impressive!"
[Neuroscientist] "Well actually no… those leg movements are actually coming from a program unrelated to the connectome. The connectome used didn't even include the central pattern generator circuits in the ventral nerve cord responsible for controlling leg muscles."
[Reasonable Layperson] "Oh… so in what sense is the simulated connectome controlling walking?"
[Neuroscientist] "It looks like they just found a few neurons in the brain connectome that are correlated with right/left/forward motion and used these to 'steer' the pretend walking routine."
[Reasonable Layperson] "Oh… But the activations of those 'steering' neurons are reflecting the complicated dynamics of tens of thousands of simulated neurons in the fly visual system as it moves through the virtual world, avoiding objects and heading toward its visual goal, right?"
[Neuroscientist] "Well actually no… The visual system and virtual world are essentially 'decoration'… the flashing dynamic neural responses as the fly moves through the virtual environment are designed to give the viewer the impression that the simulated fly is actually seeing the world and making walking decisions based on those visual responses. But, in fact, they could turn off the lights and the fly would behave identically."
[Reasonable Layperson] "Oh… so how does the fly walk toward the food then?"
[Neuroscientist] "Well… it looks like they simply imposed an odor gradient in the virtual environment that is centered on the virtual food. The fly has two sets of odor receptors (right and left) that sense this gradient, and the activation of these in the connectome is correlated with the activation of the 'steering' neurons. So if the left odor neuron activates more than the right, then the fly steers left."
[Reasonable Layperson] "Oh… so it is like one of those toy cars that moves toward a light because it has right and left light sensors cross-connected to right and left motors… Gee, I thought a fly was more complicated than that."
[Neuroscientist] "Well actually a real fly is. Real flies have dozens of behavioral states that allow intelligent behavior in a complicated visual and sensory environment. In fact, a real fly contains a set of neurons which act as an internal compass updated by the visual environment and the fly's walking."
[Reasonable Layperson] "Oh… and their connectome has those internal compass neurons?"
[Neuroscientist] "Yes. They used the full brain connectome that contains those compass neurons."
[Reasonable Layperson] "…And their compass neuron activations are tracking the visual environment just like in the real fly?"
[Neuroscientist] "Oh sweet summer child… those compass neurons exist in their connectome simulation, but no one knows enough about their functional parameters (synaptic weights, time constants, etc.) to simulate them accurately. They light up in pretty patterns totally unrelated to how they would in a real fly walking through that visual world."
[Reasonable Layperson] "Oh… and the complicated leg movements it shows during antenna grooming… is that also just a faked recording?"
[Neuroscientist] "Yes. All the complicated leg motions shown during grooming are faked by a hard-coded program. But they turn that fake routine on or off by looking at some neurons in the connectome that are correlated with actual grooming behavior triggered by dust accumulation on the antenna… well, really they fake the dust too, by just activating a set of neurons after a delay."
[Reasonable Layperson] "And what did EON Systems do? Did they acquire the connectome? Did they determine the neurotransmitter types? Did they do the calcium imaging experiments to determine the steering and grooming neurons? Did they make the mechanical fly model?"
[Neuroscientist] "No. Those were all done by real labs who were kind enough to carefully write up their results in open journals and to post their results and code openly online… It looks like EON Systems just took their code and put it together with a virtual environment designed specifically to trick viewers by triggering behaviors in misleading ways."
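The toy-car comparison in the dialogue is essentially a Braitenberg vehicle, and it really does take only a few lines to reproduce the food-approach behavior with no brain model at all. A sketch with made-up numbers:

```python
# Two odor sensors cross-coupled to steering reproduce "walking to the food"
# with no nervous system whatsoever. All quantities are invented for
# illustration of the dialogue's point.
import numpy as np

food = np.array([5.0, 5.0])
pos, heading = np.array([0.0, 0.0]), 0.0

def odor(p):
    return -np.linalg.norm(p - food)   # odor increases toward the food

for _ in range(200):
    # Left/right sensors sit slightly ahead of the agent, angled off the heading.
    left = pos + 0.1 * np.array([np.cos(heading + 0.5), np.sin(heading + 0.5)])
    right = pos + 0.1 * np.array([np.cos(heading - 0.5), np.sin(heading - 0.5)])
    heading += 5.0 * (odor(left) - odor(right))   # steer toward the stronger odor
    pos = pos + 0.05 * np.array([np.cos(heading), np.sin(heading)])

print("final distance to food:", np.linalg.norm(pos - food))
```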