Andreas Tolias Lab @ Stanford University

1.2K posts


@AToliasLab

to understand intelligence and develop technologies by combining neuroscience and AI

Palo Alto, CA · Joined May 2017
808 Following · 4.9K Followers
Pinned Tweet
Andreas Tolias Lab @ Stanford University retweeted
Mengye Ren @mengyer
Nice work on V-JEPA 2.1 from Meta. Our team has also been exploring dense and hierarchical video SSL for a long time (e.g. FlowE, PooDLe, and Midway). Glad to see it work at a larger scale.
Ksenia_TuringPost@TheTuringPost

A new paper from @ylecun and others – V-JEPA 2.1

It changes the recipe of V-JEPA so the model learns both:
• Global semantics – what is happening in the scene
• Dense spatio-temporal structure – where things are and how they move

The idea is to supervise not just the masked tokens but the visible ones too.

There are 4 key ingredients in V-JEPA 2.1:
- Dense prediction loss on both masked and visible tokens
- Deep self-supervision across intermediate layers
- Modality-specific tokenizers (2D for images, 3D for videos) within a shared encoder
- Model + data scaling

The workflow becomes: masked image/video → encode visible tokens → predict latent representations for both masked and visible tokens → supervise at multiple layers.

Here are the details:
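To make the recipe above concrete, here is a minimal PyTorch sketch of the training step it describes, under stated assumptions: a toy online encoder sees only the visible tokens, a frozen copy stands in for the EMA target encoder, and a predictor is supervised on the latents of both masked and visible tokens at several intermediate layers. All names here (TinyEncoder, dense_jepa_loss, taps) are illustrative, not Meta's actual implementation.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy stand-in for the video encoder (the real model is a ViT)."""
    def __init__(self, dim=64, depth=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU()) for _ in range(depth))

    def forward(self, x):
        feats = []  # keep every layer's output for deep supervision
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)
        return feats

def dense_jepa_loss(encoder, target_encoder, predictor, tokens, mask, taps=(1, 3)):
    """Dense loss: predict latent targets for masked AND visible tokens,
    supervised at multiple intermediate layers."""
    with torch.no_grad():  # the target encoder receives no gradients
        target_feats = target_encoder(tokens)
    visible = tokens * (~mask).unsqueeze(-1).float()  # hide the masked tokens
    online_feats = encoder(visible)
    loss = 0.0
    for l in taps:  # deep self-supervision across intermediate layers
        pred = predictor(online_feats[l])
        loss = loss + ((pred - target_feats[l]) ** 2).mean()  # all tokens, not just masked
    return loss / len(taps)

dim = 64
enc, tgt = TinyEncoder(dim), TinyEncoder(dim)
tgt.load_state_dict(enc.state_dict())  # real training would update tgt by EMA
predictor = nn.Linear(dim, dim)
tokens = torch.randn(2, 16, dim)       # (batch, tokens, dim), already tokenized
mask = torch.rand(2, 16) < 0.5         # which tokens are hidden from the online encoder
dense_jepa_loss(enc, tgt, predictor, tokens, mask).backward()
```

The point of difference from the original V-JEPA recipe shows up in the loss: it averages over every token position rather than only the masked ones, and it is taken at several depths rather than just the final layer.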

David Sussillo @SussilloDavid
Me and Michelle holdin' it down in the airport bookstore!! 🤩🤩
Andreas Tolias Lab @ Stanford University retweeted
Surya Ganguli @SuryaGanguli
Our new paper: "Solving adversarial examples requires solving exponential misalignment", expertly led by @AleSalvatore00 w/ @stanislavfort arxiv.org/abs/2603.03507

Key idea: We all want to align AI systems to human values and intentions. We connect adversarial examples to AI alignment by showing they are a prototypical but exponentially severe form of misalignment at the level of perception. The fact that adversarial examples have remained unsolved for over a decade thus serves as a cautionary tale for AI alignment, and provides new impetus for revisiting them.

We shed light on why adversarial examples exist and why they are so hard to remove by asking a basic question: what is the dimensionality of neural network concepts in image space?

For ResNets and CLIP models, we show that neural network concepts (the space of images the network confidently labels as a concept) fill up almost the ENTIRE space of images (~135,000 dimensions out of ~150,000 for ImageNet & ~3,000 out of 3,072 for CIFAR10). In contrast, natural image concepts are only ~20 dimensional. This indicates exponential misalignment between brain and machine perception (neural networks perceive exponentially many images as belonging to a concept that humans never would). This also explains why adversarial examples exist: if a concept fills up almost all of image space, ANY image will be close to that concept manifold.

We further run experiments across >20 networks showing that adversarial robustness is inversely related to concept dimensionality, though even the most robust networks do not completely align machine and human perception.

Overall, the curse of dimensionality rears its ugly head as an impediment to both solving adversarial examples and achieving alignment: it can be difficult to get AI systems to behave in accordance with human intentions, values, or perceptions over an exponentially large space of inputs.

See @AleSalvatore00's excellent thread for more details: x.com/AleSalvatore00…
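As a rough illustration of the dimensionality question (not the paper's actual estimator), one can probe how many random orthogonal directions around a confidently labeled image preserve the network's confidence. A minimal sketch, where concept_dim_estimate, step, and thresh are all hypothetical choices:

```python
import torch
import torch.nn as nn

def concept_dim_estimate(model, x, label, n_dirs=200, step=2.0, thresh=0.5):
    """Crude probe: fraction of random orthonormal directions along which a
    perturbation of size `step` keeps the model confident in `label`,
    extrapolated to the full input dimensionality."""
    d = x.numel()
    q, _ = torch.linalg.qr(torch.randn(d, n_dirs))  # orthonormal columns
    kept = 0
    with torch.no_grad():
        for v in q.T:  # test one direction at a time
            logits = model((x + step * v.view_as(x)).unsqueeze(0))
            if torch.softmax(logits, -1)[0, label] > thresh:
                kept += 1
    return kept / n_dirs * d

# Toy usage on a random linear classifier over CIFAR10-sized inputs (3,072 dims).
# The printed number is meaningless for an untrained model; this only shows the mechanics.
clf = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(3, 32, 32)
label = clf(x.unsqueeze(0)).argmax().item()
print(concept_dim_estimate(clf, x, label))
```

Under this reading, the paper's numbers say that for real ResNets and CLIP models such a probe stays confident along almost every direction (~135,000 of ~150,000 for ImageNet), whereas a human concept would survive along only ~20.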
Andreas Tolias Lab @ Stanford University retweeted
Kenneth Hayworth @KennethHayworth
So, some people are asking me why this EON fly video doesn't show real 'uploading', since it does simulate a real connectome. The most important reason is that the functional parameters that define the dynamic behavior of individual neuron and synapse types in the connectome are unknown. Instead, they used an existing model (nature.com/articles/s4158…) which substitutes these with guessed parameters and grossly simplified dynamics. As made clear in that older paper, these are not sufficient to recreate the activity patterns that would be seen in the real fly. The simplified dynamics would not, for example, be able to choreograph the timing of leg muscles during walking or grooming, or the dynamics of the compass neurons encoding the fly's heading direction, or the myriad other neuronal dynamics that make up the fly 'mind'. So not an 'upload' by any reasonable definition.

In fact, the simplified dynamics they used have only been demonstrated to approximate gross correlations along major sensory-motor pathways for a handful of neurons. For example: activating a sugar-sensing neuron causes gross downstream activation that elevates the activity of feeding neurons. It is this handful of very, very crude and basic correlations in the simulated connectome that is being used to drive the EON simulated fly.

If they had said that from the start, then I would have had no issue. But instead, they made the bold claim that they had "uploaded a fly" and presented a video of said fly walking over a landscape with highly articulate legs, visually navigating through the terrain to a food source, grooming its antenna with eerily fly-like leg motions, etc. Any reasonable layperson would assume that these visually exciting articulations are the ones being controlled by the simulated brain's dynamics, rather than being faked by computational add-on routines. There are now many secondary reports of this on YouTube, and all of them seem to make this reasonable assumption (e.g. youtube.com/shorts/Z7NNP1Z…). And who could blame them? Many neuroscientists also made that assumption before EON started to spell out what was really behind the video, millions of views and more than a day later.

To make clearer just how misleading EON Systems' video is, and how outlandishly laughable their 'uploading' claim is, below is an imagined back-and-forth between a [Reasonable Layperson] and a [Neuroscientist] trying to explain what is really behind the video:

[Reasonable Layperson] "Look at the complicated leg motions as the fly walks… the timing of all those dozens of individual muscles being controlled by the dynamics of the simulated neurons… and they say that they used no reinforcement learning to tune parameters, just the connectome… that is really impressive!"

[Neuroscientist] "Well, actually, no… those leg movements are coming from a program unrelated to the connectome. The connectome used didn't even include the central pattern generator circuits in the ventral nerve cord responsible for controlling leg muscles."

[Reasonable Layperson] "Oh… so in what sense is the simulated connectome controlling walking?"

[Neuroscientist] "It looks like they just found a few neurons in the brain connectome that are correlated with right/left/forward motion and used these to 'steer' the pretend walking routine."

[Reasonable Layperson] "Oh… But the activations of those 'steering' neurons are reflecting the complicated dynamics of tens of thousands of simulated neurons in the fly visual system as it moves through the virtual world, avoiding objects and heading toward its visual goal, right?"

[Neuroscientist] "Well, actually, no… The visual system and virtual world are essentially 'decoration'… the flashing dynamic neural responses as the fly moves through the virtual environment are designed to give the viewer the impression that the simulated fly is actually seeing the world and making walking decisions based on those visual responses. But, in fact, they could turn off the lights and the fly would behave identically."

[Reasonable Layperson] "Oh… so how does the fly walk toward the food then?"

[Neuroscientist] "Well… it looks like they simply imposed an odor gradient in the virtual environment that is centered on the virtual food. The fly has two sets of odor receptors (right and left) that sense this gradient, and the activation of these in the connectome is correlated with the activation of the 'steering' neurons. So if the left odor neuron activates more than the right, then the fly steers left."

[Reasonable Layperson] "Oh… so it is like one of those toy cars that moves toward a light because it has right and left light sensors cross-connected to right and left motors… Gee, I thought a fly was more complicated than that."

[Neuroscientist] "Well, actually, a real fly is. Real flies have dozens of behavioral states that allow intelligent behavior in a complicated visual and sensory environment. In fact, a real fly contains a set of neurons which act as an internal compass, updated by the visual environment and the fly's walking."

[Reasonable Layperson] "Oh… and their connectome has those internal compass neurons?"

[Neuroscientist] "Yes. They used the full brain connectome that contains those compass neurons."

[Reasonable Layperson] "…And their compass neuron activations are tracking the visual environment just like in the real fly?"

[Neuroscientist] "Oh, sweet summer child… those compass neurons exist in their connectome simulation, but no one knows enough about their functional parameters (synaptic weights, time constants, etc.) to simulate them accurately. They light up in pretty patterns totally unrelated to how they would in a real fly walking through that visual world."

[Reasonable Layperson] "Oh… and the complicated leg movements it shows during antenna grooming… is that also just a faked recording?"

[Neuroscientist] "Yes. All the complicated leg motions shown during grooming are faked by a hard-coded program. But they turn that fake routine on or off by looking at some neurons in the connectome that are correlated with actual grooming behavior triggered by dust accumulation on the antenna… well, really, they fake the dust too, by just activating a set of neurons after a delay."

[Reasonable Layperson] "And what did EON Systems do? Did they acquire the connectome? Did they determine the neurotransmitter types? Did they do the calcium imaging experiments to determine the steering and grooming neurons? Did they make the mechanical fly model?"

[Neuroscientist] "No. Those were all done by real labs who were kind enough to carefully write up their results in open journals and to post their results and code openly online… It looks like EON Systems just took their code and put it together with a virtual environment designed specifically to trick viewers by triggering behaviors in misleading ways."
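The "toy car" the layperson describes is a Braitenberg vehicle, and the steering scheme the dialogue attributes to the simulation reduces to exactly that. A minimal sketch of the idea; every function and constant here (odor, step, antenna, gain) is hypothetical and purely illustrative:

```python
import math

def odor(pos, food=(0.0, 0.0)):
    """Odor concentration falls off with squared distance from the food."""
    dx, dy = pos[0] - food[0], pos[1] - food[1]
    return 1.0 / (1.0 + dx * dx + dy * dy)

def step(x, y, heading, antenna=0.2, gain=30.0, speed=0.02):
    """Sense the gradient with two offset receptors, turn toward the
    stronger side, then move forward."""
    left = (x + antenna * math.cos(heading + 0.5),
            y + antenna * math.sin(heading + 0.5))
    right = (x + antenna * math.cos(heading - 0.5),
             y + antenna * math.sin(heading - 0.5))
    turn = gain * (odor(left) - odor(right))  # left stronger -> steer left
    heading += max(-0.3, min(0.3, turn))      # clamp to keep steering stable
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

x, y, h = 2.0, 1.5, 0.0
for _ in range(2000):
    x, y, h = step(x, y, h)
print(round(x, 2), round(y, 2))  # should end up near the food at (0, 0)
```

Two cross-connected sensors and a turn rule are enough to produce convincing "goal-directed" navigation, which is precisely why the behavior in the video says so little about the simulated connectome.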
Andreas Tolias Lab @ Stanford University retweeted
Palli Thordarson @PalliThordarson
Proud, with @UNSWRNA, to have been involved in making the mRNA-LNP for Rosie. There are nuances here that the thread below misses, but nevertheless the intersection of RNA technology, genomics & AI poses an opportunity to change the way we do medicine and make access more equitable 1/8
Greg Brockman@gdb

How AI empowered Paul Conyngham to create a custom mRNA vaccine to cure his dog’s cancer when she had only months to live. The first personalized cancer vaccine designed for a dog:

Andreas Tolias Lab @ Stanford University
Important reminder: structure alone is not enough. Understanding neural computation requires measuring the dynamics of neural circuits during natural behavior.
Kenneth Hayworth@KennethHayworth

My statement regarding the misleading EON Systems "fly upload" video:

The hundreds of researchers who make up the Drosophila neuroscience community are making good progress toward eventually understanding how the intelligent behaviors of a fruit fly are produced by computations in its neural circuits. Obtaining the structural connectome of the fly brain and ventral nerve cord was a significant milestone in that quest, as was obtaining an estimate of neurotransmitter types for each cell type. What is currently most lacking is a catalog of the precise electrophysiological and molecular dynamics of each neuron and synapse type. Dozens of ongoing electrophysiological, genetic, and behavioral experiments are beginning to fill in those details. But completing that task will likely take many years, possibly decades, of further research.

At the end of that long road, I have no doubt, there will be a detailed paper, published in a high-quality journal with full details and carefully peer-reviewed, which will at long last make the true statement "we've uploaded a fruit fly". And that future paper will have a supplementary video much like the EON Systems one, showing a fly navigating a virtual environment. But, unlike the misleading EON Systems video, that future video will be real… all 100,000+ neurons displaying dynamics that reflect those that would occur in the real fly engaged in the same sensory-motor behaviors. That paper will represent the crowning achievement of a successful Drosophila neuroscience field.

What EON Systems' misleading video and claim have done today is try to steal that future victory and take its valor for their own, all in the hopes of raising some cash from naive investors who think they might get to human uploads soon, and all while riding a tide of hype they generated in the gullible public. The result has been a wave of secondary reporting that grossly mischaracterizes the current state of neuroscience progress, implying that it is much further along than it currently is. As a member of the Drosophila research community, and as a long-term advocate of brain preservation for eventual mind uploading, I feel it is my responsibility to call out this reprehensible behavior.

Neuroscience technology is progressing fast enough that we are now able to obtain structural connectomes of small organisms like the fruit fly. But neuroscience understanding is progressing much more slowly. True uploading, even for a fruit fly, is likely years to decades away. Even obtaining a mouse connectome seems likely to be a decade or more away. Human uploading is simply not on any reasonable research or investment timeline, unless such a timeline includes many decades of methodical basic neuroscience research.

Of course, we can preserve human brains today using aldehyde fixatives, as is done in all of today's connectomics studies. But we will not be able to upload a human brain for many decades, perhaps centuries, to come. Please do not let today's real scientific progress in connectomics and brain preservation be drowned out by misleading hype.

-Kenneth Hayworth

Demis Hassabis @demishassabis
London has incredible talent & entrepreneurial spirit. Thrilled to deepen @GoogleDeepMind’s roots here with our spectacular new building Platform 37 - a nod to AlphaGo’s legendary Move 37. It’s a tribute to Science & AI, and an inspirational space for our next big breakthroughs!
Andreas Tolias Lab @ Stanford University retweetledi
Surya Ganguli @SuryaGanguli
@doristsao Definitely - if you seek to understand function, study function.
Andreas Tolias Lab @ Stanford University
@jamesfickel is an inspiring, visionary philanthropist and investor. We are honored to have the backing of James and the entire Amaranth team, and we share his enthusiasm for a future that benefits all of humanity. We’re ready for the adventure ahead, James! 🦁
James Fickel@jamesfickel

The Foundations of Tomorrow The transition to AGI needs to go well. We’ve deployed $350M+ to neuroAI, longevity, and more with the belief that the brain is the key to better, safer AGI. This is the first of many overview posts on our thinking. blog.amaranth.foundation/p/the-foundati…

Andreas Tolias Lab @ Stanford University retweeted
a16z @a16z
World Labs CEO Fei-Fei Li: Language alone is a lossy representation of the physical world.

"Just a simple meal of making pasta... one could imagine using language to describe, let's say, about 15 minutes or 20 minutes of that process. But it's still a lossy representation."

"The nuance of how you cook the sauce, how you put the pasta in the water, what the pasta [does] in the water is impossible to use language alone to describe."

"So much of the physical world's process... is beyond the description of language."

@drfeifei @theworldlabs
Andreas Tolias Lab @ Stanford University retweeted
Alex Wa @_djdumpling
new blog! What methodologies do labs use to train frontier models? The blog distills 7 open-weight model reports from frontier labs, covering architecture, stability, optimizers, data curation, pre/mid/post-training + RL, and behaviors/safety djdumpling.github.io/2026/01/31/fro…
Demis Hassabis @demishassabis
Excited to launch Gemini 3.1 Pro! Major improvements across the board including in core reasoning and problem solving. For example scoring 77.1% on the ARC-AGI-2 benchmark - more than 2x the performance of 3 Pro. Rolling out today in @GeminiApp, @antigravity and more - enjoy!