Maximilian Schons

50 posts

@mxschons

Physician working at the intersection of biotech and AI.

Planet Earth · Joined September 2021
80 Following · 277 Followers
Pinned Tweet
Maximilian Schons @mxschons ·
State of Brain Emulation Report 2025 is officially out. Standing on the shoulders of giants, we created a 175-page report, 24 datasets and 38+ figures. A one-year project with over 45 expert contributors from MIT, UC Berkeley, Allen Institute, Harvard, Fudan University, Google and other institutions. A few months back we shared a pre-print. Now everything is open access and beautifully presented at: brainemulation.mxschons.com
7 replies · 50 reposts · 267 likes · 32.1K views
Maximilian Schons @mxschons ·
But at the scale of organisms under 1 million neurons (fruit flies, small fish, bees, or mosquitoes), capturing all aspects of the brain faithfully is increasingly plausible. This means brain emulation models for these organisms could arrive within the decade. A sub-million-neuron brain emulation project would likely cost in the low $100Ms. Such a project would help address critical unknowns: How much data, at what quality, improves computational models by how much? Which structures are essential, and which are nice-to-have? These questions must be answered before scaling to mammalian brains.

A mouse brain has 500 times more neurons than a fruit fly and 10,000x the brain volume; a human brain has about a million times more neurons and is roughly 30 million times larger. Both cases raise challenges beyond sheer scale. Physical limits on whole-brain recording will likely require extrapolation from partial data. And ethical constraints around animal and human welfare grow in complexity and relevance.

Today, everyone worldwide focused specifically on brain emulation could fit in a single workshop room. Total global funding for basic neuroscience over the past 20 years was roughly $0.5 billion per year, about 1% of the NIH’s annual budget, and fragmented across many small academic grants. Any individual or funder entering this field can have outsized impact.
0 replies · 0 reposts · 2 likes · 197 views
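A quick sanity check on the scale factors quoted in the post above. The fly neuron count of roughly 140,000 is an assumption from outside the post; the 500x/10,000x and 1,000,000x/30,000,000x ratios are the post's own figures.

```python
# Back-of-envelope check of the scale factors in the post.
# Assumption (not stated in the post): an adult fruit fly brain
# has roughly 140,000 neurons.
FLY_NEURONS = 140_000

mouse_neurons = FLY_NEURONS * 500        # ~70 million
human_neurons = FLY_NEURONS * 1_000_000  # ~1.4e11, right order of magnitude

# Volume grows faster than neuron count (bigger neurons, far more
# wiring), which is one reason imaging cost explodes with scale:
mouse_vol_per_neuron = 10_000 / 500            # 20x more volume per neuron than a fly
human_vol_per_neuron = 30_000_000 / 1_000_000  # 30x more volume per neuron than a fly

print(mouse_neurons, mouse_vol_per_neuron, human_vol_per_neuron)
```

The per-neuron volume ratios are why "500x more neurons" understates the imaging problem: each mouse or human neuron also occupies far more tissue to be traced.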
Maximilian Schons @mxschons ·
A key insight of the report: the main barrier to better brain emulation models is not hardware or algorithms, but more and higher-quality experimental data. Although data acquisition capabilities have grown roughly fivefold per decade since the 1980s (see figures), we haven’t yet recorded the full brain of any organism at single-cell resolution. Even for small organisms like the worm C. elegans or the fruit fly, available data is scarce and incomplete. Neuroscientists would like to record a library of hour-long color movies, so to speak, but currently have access only to a few minutes of blurry, stuttering, monochrome footage that barely covers one percent of the desired scene.

Scale compounds the problem: an accurate map of the mouse brain at the resolution needed to trace neurons is gigantic, similar in scale to a high-resolution reconstruction of Earth. A human brain is about 1,000 times larger still. Capturing the brain’s many physical dimensions at once is extraordinarily hard. Think of a 10-sided Rubik’s cube: progress on one face reshuffles the others. The faster you record, the smaller the image; the more you capture, the lower the resolution.
4 replies · 0 reposts · 5 likes · 238 views
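The fivefold-per-decade growth figure above implies a rough timeline. A sketch, assuming the "one percent of the scene" in the movie analogy translates to a 100x shortfall (my illustrative reading, not a number the post states):

```python
import math

# If acquisition capability compounds at ~5x per decade (the report's
# figure), how long until a 100x shortfall is closed?
GROWTH_PER_DECADE = 5.0
shortfall = 100.0  # illustrative: ~1% coverage today vs. full coverage

decades = math.log(shortfall) / math.log(GROWTH_PER_DECADE)
print(f"{decades:.1f} decades, ~{decades * 10:.0f} years")  # ~2.9 decades
```

At historical rates, closing a two-order-of-magnitude data gap takes roughly three decades, which is why the report treats faster data acquisition, not compute, as the lever that matters.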
Maximilian Schons @mxschons ·
Six weeks ago we released a report ecosystem detailing the state of brain emulation. We have now added what is arguably the most important piece: a summary of the state of brain emulation that anyone can read in under 5 minutes to get the takeaways of thousands of hours of research. If you want to build intuitions about how close we are to running brains on computers, our at-a-glance summary is the place to start. For convenience I added the text in the thread below; please check out the full PDF for the figures. Accessibility was a key objective of our project, and I think we delivered strongly with this one! Hopefully it pulls you deeper into our full report, the Asimov Press companion article, our public data repository, and an online guesstimator for predicting the time and resources needed. Enjoy! brainemulation.mxschons.com
3 replies · 31 reposts · 138 likes · 8.8K views
Maximilian Schons @mxschons ·
Agree with @KennethHayworth’s points here. While there is obviously interesting science and unique contributions from the EON team post Phil’s Nature publication, the claim that EON uploaded a fruit fly is confusing at best.
Kenneth Hayworth @KennethHayworth

So, some people are asking me why this EON fly video doesn’t show real ‘uploading’ since it does simulate a real connectome. The most important reason is that the functional parameters that define the dynamic behavior of individual neuron and synapse types in the connectome are unknown. Instead, they used an existing model (nature.com/articles/s4158…) which substitutes these with guessed parameters and grossly simplified dynamics. As made clear in that older paper, these are not sufficient to recreate the activity patterns that would be seen in the real fly. The simplified dynamics would not, for example, be able to choreograph the timing of leg muscles during walking or grooming, or the dynamics of the compass neurons encoding the fly’s heading direction, or the myriad other neuronal dynamics that make up the fly ‘mind’. So not an ‘upload’ by any reasonable definition.

In fact, the simplified dynamics they used have only been demonstrated to approximate gross correlations along major sensory-motor pathways for a handful of neurons. For example: activating a sugar-sensing neuron causes gross downstream activation that elevates the activity of feeding neurons. It is this handful of very, very crude and basic correlations in the simulated connectome that are being used to drive the EON simulated fly. If they had said that from the start, then I would have had no issue. But instead, they made the bold claim that they had “uploaded a fly” and presented a video of said fly walking over a landscape with highly articulate legs, visually navigating through the terrain to a food source, grooming its antenna with eerily fly-like leg motions, etc. Any reasonable layperson would assume that these visually exciting articulations are the ones being controlled by the simulated brain’s dynamics instead of being faked by computational add-on routines. There are now many secondary reports of this on YouTube, and all of them seem to make this reasonable assumption (e.g. youtube.com/shorts/Z7NNP1Z…). And who could blame them? Many neuroscientists also made that assumption before EON started to spell out, millions of views and over a day later, what was really behind the video.

To make clearer just how misleading EON Systems’ video is and how outlandishly laughable their ‘uploading’ claim is, below is an imagined back-and-forth between a [Reasonable Layperson] and a [Neuroscientist] trying to explain what is really behind the video:

[Reasonable Layperson] “Look at the complicated leg motions as the fly walks… the timing of all those dozens of individual muscles being controlled by the dynamics of the simulated neurons… and they say that they used no reinforcement learning to tune parameters, just the connectome… that is really impressive!”

[Neuroscientist] “Well actually no… those leg movements are coming from a program unrelated to the connectome. The connectome used didn’t even include the central pattern generator circuits in the ventral nerve cord responsible for controlling leg muscles.”

[Reasonable Layperson] “Oh… so in what sense is the simulated connectome controlling walking?”

[Neuroscientist] “It looks like they just found a few neurons in the brain connectome that are correlated with right/left/forward motion and used these to ‘steer’ the pretend walking routine.”

[Reasonable Layperson] “Oh… But the activations of those ‘steering’ neurons are reflecting the complicated dynamics of tens of thousands of simulated neurons in the fly visual system as it moves through the virtual world, avoiding objects and heading toward its visual goal, right?”

[Neuroscientist] “Well actually no… The visual system and virtual world are essentially ‘decoration’… the flashing dynamic neural responses as the fly moves through the virtual environment are designed to give the viewer the impression that the simulated fly is actually seeing the world and making walking decisions based on those visual responses. But, in fact, they could turn off the lights and the fly would behave identically.”

[Reasonable Layperson] “Oh… so how does the fly walk toward the food then?”

[Neuroscientist] “Well… it looks like they simply imposed an odor gradient in the virtual environment that is centered on the virtual food. The fly has two sets of odor receptors (right and left) that sense this gradient, and the activation of these in the connectome is correlated with the activation of the ‘steering’ neurons. So if the left odor neuron activates more than the right, then the fly steers left.”

[Reasonable Layperson] “Oh… so it is like one of those toy cars that moves toward a light because it has right and left light sensors cross-connected to right and left motors… Gee, I thought a fly was more complicated than that.”

[Neuroscientist] “Well actually, a real fly is. Real flies have dozens of behavioral states that allow intelligent behavior in a complicated visual and sensory environment. In fact, a real fly contains a set of neurons which act as an internal compass updated by the visual environment and the fly’s walking.”

[Reasonable Layperson] “Oh… and their connectome has those internal compass neurons?”

[Neuroscientist] “Yes. They used the full brain connectome that contains those compass neurons.”

[Reasonable Layperson] “…And their compass neuron activations are tracking the visual environment just like in the real fly?”

[Neuroscientist] “Oh sweet summer child… those compass neurons exist in their connectome simulation, but no one knows enough about their functional parameters (synaptic weights, time constants, etc.) to simulate them accurately. They light up in pretty patterns totally unrelated to how they would in a real fly walking through that visual world.”

[Reasonable Layperson] “Oh… and the complicated leg movements it shows during antenna grooming… is that also just a faked recording?”

[Neuroscientist] “Yes. All the complicated leg motions shown during grooming are faked by a hard-coded program. But they turn that fake routine on or off by looking at some neurons in the connectome that are correlated with actual grooming behavior triggered by dust accumulation on the antenna… well, really they fake the dust too by just activating a set of neurons after a delay.”

[Reasonable Layperson] “And what did EON Systems do? Did they acquire the connectome? Did they determine the neurotransmitter types? Did they do the calcium imaging experiments to determine the steering and grooming neurons? Did they make the mechanical fly model?”

[Neuroscientist] “No. Those were all done by real labs who were kind enough to carefully write up their results in open journals and to post their results and code openly online… It looks like EON Systems just took their code and put it together with a virtual environment designed specifically to trick viewers by triggering behaviors in misleading ways.”

0 replies · 0 reposts · 10 likes · 1.5K views
Maximilian Schons reposted
Active Site @ActiveSiteBio ·
We ran a randomized controlled trial to see if LLMs can help novices perform molecular biology in a wet lab. The results: LLMs may help in some aspects, but we found no significant improvement on the core tasks end-to-end. That's lower than what experts predicted. Our findings 🧵
[image]
8 replies · 45 reposts · 149 likes · 34.2K views
Maximilian Schons @mxschons ·
Math: an adult fruit fly's lifetime oxygen consumption corresponds to about 60 J (0.014 kcal); modern GPUs draw about 700 J/s, so roughly 0.1 s. This gives roughly 100 tokens, which is about 75 words. Ref: pubmed.ncbi.nlm.nih.gov/15288688/
0 replies · 0 reposts · 1 like · 86 views
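The arithmetic in the post above, spelled out. The 0.014 kcal lifetime energy and 700 W GPU draw come from the post; the ~1,000 tokens/s throughput is my assumption to recover the ~100-token figure, and is not stated in the post.

```python
# Reproducing the post's energy arithmetic.
KCAL_TO_J = 4184  # joules per kilocalorie

fly_lifetime_j = 0.014 * KCAL_TO_J          # ~59 J, i.e. "about 60 J"
gpu_power_w = 700                           # GPU draw in J/s (from the post)
gpu_seconds = fly_lifetime_j / gpu_power_w  # ~0.08 s, roughly 0.1 s

# Assumed throughput of ~1,000 tokens/s (my assumption, not the post's)
# yields a figure in the same ballpark as the post's ~100 tokens.
tokens = gpu_seconds * 1000
print(round(fly_lifetime_j), round(gpu_seconds, 3), round(tokens))
```

So an entire fly lifetime of metabolism buys a modern GPU well under a tenth of a second of operation, which is the comparison the quoted paragraph in the next post builds on.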
Maximilian Schons @mxschons ·
“Fruit flies walk, fly, see, and smell. I learned they even sing (!), fight, and socialize. All this with an energy consumption of 0.014 kcal across their entire lifetime. Sixty days of fruit fly behavior is equivalent to 0.1 seconds of energy expenditure of modern AI models, approximately the amount of text in this paragraph.”

To help everyone grasp the topic of brain emulation we produced introductory material. The State of Brain Emulation Report 2025 is a dense 200-page document; I’m the first to admit that it is hard to parse. Our Asimov Press article is already out. This quote comes from an additional non-expert guide we will release soon. Expect more gems like this!

Asimov piece: press.asimov.com/articles/brains
Report website: brainemulation.mxschons.com
Singing flies: youtube.com/watch?v=fbEMVi…
Math for the calculation: see thread
1 reply · 1 repost · 5 likes · 188 views
Maximilian Schons @mxschons ·
Most people assume brain emulation is a compute problem. It's not. We have enough processing power. What we don't have is the data. Yesterday I shared that our report concluded that no experiment has achieved whole-brain recording at single-neuron resolution in any organism. Crucially, even the data we do have exist only in scarce quantities. The first OpenAI video models were trained on 10,000 hours. One of our collaborators @qsimeon98 worked on a paper that aggregated all the available C. elegans recordings; not even 100 hours are available. The central bottleneck isn't running the simulation - it's acquiring the measurements to build it. More detail in our @AsimovPress introduction article “Building Brains on a Computer”, a collaboration with the wonderful @NikoMcCarty and @AlexandraBalwit: asimov.press/p/brains/
[image]
11 replies · 12 reposts · 61 likes · 4.3K views
Maximilian Schons @mxschons ·
The State of Brain Emulation Report 2025 launched yesterday. Over the next two weeks, we'll share key insights from 175 pages of research. First up: single-neuron recording capabilities. Neuroscience has mapped every synapse in a fruit fly brain, but by any rigorous standard, whole-brain recording capabilities lag behind. Part of the problem: no one had defined "whole-brain recording" precisely. We therefore propose a standard: 95%+ of neurons across 95%+ of brain volume, at single-neuron, single-spike resolution. By that bar, no experiment has hit the mark in any organism during any behavior. Closest: larval zebrafish (~80% coverage) and C. elegans (~50% of neurons), both with major caveats: slow temporal resolution, short durations, head-fixed animals. This is one of many reasons why today's computational brain models struggle. Just one of many insights from the full report. Learn more at: brainemulation.mxschons.com @nc_znc @AntonSArkhipov @isaakfreeman @Philip_Shiu
[image]
2 replies · 1 repost · 24 likes · 938 views
Maximilian Schons @mxschons ·
@isaakfreeman @nc_znc @AdamMarblestone @anderssandberg "This is the material for anyone seeking to develop a computational approach integrating microscopic (neurotransmitters), mesoscopic (neuronal activity), and macroscopic (imaging, behavioral, and environmental) data." — Prof. Jianfeng Feng, Warwick & Fudan Universities
0 replies · 0 reposts · 3 likes · 113 views