Weinan Sun

894 posts

@sunw37

Neuroscience, Artificial Intelligence, and Beyond. Assistant professor, Neurobiology and Behavior @CornellNBB

Joined February 2016
755 Following · 801 Followers
Weinan Sun @sunw37 ·
Everything is open-source and designed as building blocks, not just for our lab. We're relicensing under Apache 2.0 in the coming weeks to make adoption even easier. We built this for 2-photon imaging + VR behavior in mice, but the patterns should transfer to any hardware-intensive lab. If your science hits a wall at the hardware layer, this is for you. Paper: biorxiv.org/content/10.648… Code: github.com/Sun-Lab-NBB/at…
Weinan Sun @sunw37 ·
Key design choice: AI helps at configuration time only. During actual experiments, everything runs deterministically with no AI in the loop. Network goes down? API rate limited? Doesn't matter - your experiment keeps running.
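The split described above (AI at configuration time, deterministic execution at run time) can be sketched as follows. This is a minimal hypothetical illustration, not the actual Ataraxis API; all names (`configure`, `run_experiment`, `frame_rate_hz`) are made up for the example.

```python
# Hypothetical sketch of the "AI at config time only" pattern:
# an assistant may help build the configuration, which is then frozen;
# the experiment loop is a pure function of that frozen config, with
# no network or model calls, so outages cannot affect a running session.
import json


def configure(use_ai_assistant: bool = True) -> dict:
    """Configuration time: an AI assistant may propose or validate settings."""
    config = {"frame_rate_hz": 30, "session_minutes": 60}
    if use_ai_assistant:
        # A coding assistant could suggest parameter values here, before
        # the session starts. Nothing from this step runs during acquisition.
        pass
    return config


def run_experiment(config: dict) -> list:
    """Experiment time: deterministic, no AI in the loop."""
    frames = []
    for i in range(3):  # stand-in for the real acquisition loop
        frames.append({"frame": i, "rate": config["frame_rate_hz"]})
    return frames


# Freeze the config (e.g. round-trip through JSON, as if written to disk),
# then run the session entirely from the frozen copy.
frozen = json.loads(json.dumps(configure()))
data = run_experiment(frozen)
```

The JSON round-trip stands in for persisting the validated configuration before the session, so the run-time path never depends on the assistant being reachable.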
Weinan Sun @sunw37 ·
Closed-loop AI scientists will need to talk to lab hardware. Towards that goal, we released Ataraxis - an open-source framework that gives AI coding assistants direct access to physical instruments. Built by our talented Ph.D. student @InkarosEng @CornellNBB @Cornell. Paper: biorxiv.org/content/10.648… Code: github.com/Sun-Lab-NBB/at… Demo videos: 🎥 Pre-session validation: youtu.be/Ui2AEvFkCoE 🎥 Hardware troubleshooting: youtu.be/KBgv4zgwwKw 🎥 AI-guided hardware integration: youtu.be/iemcuTz1_iM
Weinan Sun retweeted
Victor M @victormustar ·
I just made my own Star Wars clip using AI (text-to-video)!
Weinan Sun retweeted
Toon Van de Maele @toonvdm ·
(1/n) How do we generalize knowledge across similar experiences? In our new preprint, we introduce S-HAI: a hierarchical active inference model that captures "schemas" used by humans and animals to generalize task abstractions. arxiv.org/abs/2601.18946 🧵
Weinan Sun retweeted
Alice Ting @aliceyting ·
Janelia is hiring Group Leaders! Work alongside superstars like Luke Lavis, @JiefuBiol, @mengwang939, @JLS_Lab, @ERSchreiter. If you have ideas that are perhaps too ambitious or crazy for a traditional academic setting, that is exactly who they are looking for. Tool-builders at all career stages with original, transformative ideas for experimental approaches in imaging, molecular engineering, protein chemistry, mass spectrometry, and methods that don’t yet exist: Apply by Feb. 3, 2026: janelia.org/groupleader
Weinan Sun retweeted
Niels Rogge @NielsRogge ·
"Before Transformers, RNNs were the thing. These were a big breakthrough. Suddenly, everyone started to work on improving RNNs. But the results were always these slight modifications on the same architecture, like putting the gate in a different spot, with improvements to 1.26, 1.25 bits per character on language modeling."

"After the Transformer, when we applied very deep decoder-only Transformers to the same task, we immediately got 1.1 bits per character. So all that research on RNNs suddenly seemed a waste of time."

"We're currently in the same situation where a lot of papers are taking the same architecture (Transformer) and making these endless tweaks, in a local minimum, and we might be wasting time in exactly the same way."

- Llion Jones, co-author of the Transformer, on @MLStreetTalk
Weinan Sun retweeted
Dileep George @dileeplearning ·
If no one builds it, you're never born. ...not building AGI is a risky thing.

Wilbur Wright, inventor of airplanes, died in 1912 at the age of 45 from typhoid fever because antibiotics didn’t exist then. Just imagine how our world would be without the medical and technological advancements of the past century! Actually, you wouldn’t have to imagine, because you wouldn’t exist! Inventing technology and advancing science is how we overcame our challenges and managed to support 8 billion human souls on this planet, escaping the Malthusian trap of famines, diseases, and conflicts.

Automation of knowledge acquisition and thought is the next step, and the best tool humanity can build. The risk of not building AGI is that we won’t be prepared for the challenges the world throws at us, some of which would be challenges that our own existence creates.

A(G)I safety is important, and here are my thoughts about it.

1. Scaling up current techniques is not going to lead to AGI. It will lead to powerful AI systems, but these will be supported by a lot of engineered scaffolding. In these cases, making AI work usefully is almost exactly the same as making AI safe. Since scaling has already proven to be useful, we are naturally on the path to exploiting it to the maximum, and we should.

2. We will eventually figure out how to build and scale AI that uses principles of human intelligence. These systems will learn causal structure and reliable world models that can be used for counterfactual thinking. This will lead to much more capable AI systems and AGI. But for these kinds of systems, increasing capability can also come with increasing controllability.

Where powerful AI and AGI is going to help us:

Earthquakes, wildfires, hurricanes, floods: Despite all the technological advances, we are still at the mercy of nature when it comes to these disasters. Where is the army of robots helping to dig out people from collapsed buildings? Where are the ones to manage fires and help people? Having AGI means we will have robots that will help us in these cases to save lives and to recover faster.

Health: Antibiotic resistance, pandemics, …, we don’t know what challenges we will face in the future and it would be great to have powerful tools. In general, having a much better understanding of how our bodies and minds work, and curing diseases.

Flora and Fauna: Instead of conforming to the requirements of dumb machines, we might finally be able to do more organic multi-crop agriculture, reduce the amount of pesticides we use, and abolish factory farming. Intelligent machines will free us from the economic necessity of these.

Climate change, Energy, Materials, Education, Transportation, Space… examples like these abound in each of those areas. Things that we accomplished crudely with dumb machinery will be done with more finesse with intelligent machines, and that will be important for humanity to thrive at scale.

Balancing the risks…

Of course the title is a play on the Yudkowsky and Soares book “If anyone builds it, everyone dies”. While I disagree with many things that the book asserts, their work has brought attention to the important problem of AI safety. Smart people working on AI safety is a good thing. It is important to continue that work, even if the specific x-risk scenarios in the book can be taken apart. In the midst of all the talk about the risks of AGI it is important to realize that not building AGI has risks as well.
Weinan Sun @sunw37 ·
You should interview @dileeplearning. I can’t think of anyone else who connects neuroscience and AI as well as he does. From co-founding Numenta and Vicarious to getting acquired by Alphabet, he has had an interesting journey building neuroscience-inspired AI. Our recent Nature paper was heavily influenced by his work.
Dwarkesh Patel @dwarkesh_sp ·
Looking for a neuroscientist to interview on my podcast. Keen for someone who can draw ML analogies for how the brain works (what's the architecture & loss/reward function of different parts, why can we generalize so well, how important is the particular hardware, etc).
Weinan Sun @sunw37 ·
hi! Based on my testing on various file formats with different sizes, Kosmos could analyze all kinds of files (see below). Claude 4.5 could deal with some of them, but was often constrained by the upload size limit, and for some files the reasoning pushed up against the max token limit. This reading ability could be a major unlock for science, as some of these data had been buried for more than 10 years after their software licenses expired. The analysis isn't perfect yet, but this feels like a major step in the right direction.
Dr. Novo 🍌 @novocrypto ·
@sunw37 @SGRodriques Couldn’t Claude 4.5 do the same but more beautiful? Just curious! Mathematical neuroscience here, hi 👋
Sam Rodriques @SGRodriques ·
Today, we’re announcing Kosmos, our newest AI Scientist, available to use now. Users estimate Kosmos does 6 months of work in a single day. One run can read 1,500 papers and write 42,000 lines of code. At least 79% of its findings are reproducible.

Kosmos has made 7 discoveries so far, which we are releasing today, in areas ranging from neuroscience to material science and clinical genetics, in collaboration with our academic beta testers. Three of these discoveries reproduced unpublished findings; four are net new, validated contributions to the scientific literature. AI-accelerated science is here.

Our core innovation in Kosmos is the use of a structured, continuously-updated world model. As described in our technical report, Kosmos’ world model allows it to process orders of magnitude more information than could fit into the context of even the longest-context language models, allowing it to synthesize more information and pursue coherent goals over longer time horizons than Robin or any of our other prior agents. In this respect, we believe Kosmos is the most compute-intensive language agent released so far in any field, and by far the most capable AI Scientist available today.

The use of a persistent world model also enables single Kosmos trajectories to produce highly complex outputs that require multiple significant logical leaps. As with all of our systems, Kosmos is designed with transparency and verifiability in mind: every conclusion in a Kosmos report can be traced through our platform to the specific lines of code or the specific passages in the scientific literature that inspired it, ensuring that Kosmos’ findings are fully auditable at all times.

We are also using this opportunity to announce the launch of Edison Scientific, a new commercial spinout of FutureHouse, which will be focused on commercializing our agents and applying them to automate scientific research in drug discovery and beyond.

Edison will be taking over management of the FutureHouse platform, where you can access Kosmos alongside our Literature, Molecules, and Precedent agents (previously Crow, Phoenix, and Owl). Edison will continue to offer free tier usage for casual users and academics, while also offering higher rate limits and additional features for users who need them. You can read more about this spinout on our blog, below.

A few important notes if you’re going to try Kosmos. Firstly, Kosmos is different from many other AI tools you might have played with, including our other agents. It is more similar to a Deep Research tool than it is to a chatbot: it takes some time to figure out how to prompt it effectively, and we have tried to include guidelines on this to help (see below).

It costs $200/run right now (200 credits per run, and $1/credit), with some free tier usage for academics. This is heavily discounted; people who sign up for Founding Subscriptions now can lock in the $1/credit price indefinitely, but the price ultimately will probably be higher. Again, this is less chatbot and more research tool, something you run on high-value targets as needed.

Some caveats are also warranted. Firstly, we find that 80% of Kosmos findings are reproducible, which also means 20% are not -- some things it says will be wrong. Also, Kosmos certainly does produce outputs that are the equivalent to several months of human labor, but it also often goes down rabbit holes or chases statistically significant yet scientifically irrelevant findings. We often run Kosmos multiple times on the same objective in order to sample the various research avenues it can take. There are still a bunch of rough edges on the UI and such, which we are working on.

Finally, we are aware that the 6 month figure is much greater than estimates by other AI labs, like METR, about the length of tasks that AI Agents can currently perform. You can read discussion about this in our blog post.

Huge congratulations to our team that put this together, led by @ludomitch and @michaelathinks: Angela Yiu, @benjamin0chang, @sidn137, Edwin Melville-Green, Albert Bou, @arvissulovari, Oz Wassie, @jonmlaurent. A particular shout out to @m_skarlinski and his team that rebuilt the platform for this launch, especially Andy Cai @notAndyCai, Richard Magness, Remo Storni, Tyler Nadolski @_tnadolski, Mayk Caldas @maykcaldas, Sam Cox @samcox822 and more.

This work would not have been possible without significant contributions from academic collaborators @mathieubourdenx, @EricLandsness, @bdanubius, @physicistnevans, Tonio Buonassisi, @BGomes_1905, Shriya Reddy, @marthafoiani, and @RandallBateman3. We also want to thank our numerous supporters, especially @ericschmidt, who has been a tremendous ally. We will have more to say about our supporters soon!
Weinan Sun @sunw37 ·
@SGRodriques wow, congrats Sam and your team on this milestone! super impressive. just dumped an old whole-cell recording file into it, and got this beautiful summary:
Dileep George @dileeplearning ·
.@RichardSSutton’s Bitter Lesson essay is popular in Silicon Valley, for reasons Rich didn’t intend. Why? One poor word choice in Rich’s essay!! Had he used a different word in one place in the essay the world wouldn’t have thought LLMs are bitter-lesson pilled. Want to make a guess which word it is? Or you can wait till tomorrow and I’ll tell you…..
Weinan Sun retweeted
Johan Winnubst @JohanWinn ·
Incredibly proud to announce that today @E11BIO is releasing our first preprint together with accompanying open data and methods🚀 Here we show how our PRISM technology addresses the biggest bottlenecks in connectomics: tracing and sample fragility.
Andrew Payne@Andrew_C_Payne

@E11BIO is excited to unveil PRISM technology for mapping brain wiring with simple light microscopes. Today, brain mapping in humans and other mammals is bottlenecked by accurate neuron tracing. PRISM uses molecular ID codes and AI to help neurons trace themselves.

We discovered a new cell barcoding approach exceeding comparable methods by more than 750x. This is the heart of PRISM. We integrated this capability with microscopy and AI image analysis to automatically trace neurons at high resolution and annotate them with molecular features.

This is a key advance towards economically viable brain mapping - 95% of costs stem from neuron tracing. It is also an important step towards democratizing neuron tracing for everyday neuroscience. Solving these problems is critical for curing brain disorders, building safer and human-like AI, and even simulating brain function.

In our first pilot study, we acquired a unique dataset in mouse hippocampus. Barcodes improved the accuracy of tracing genetically labelled neurons by 8x – with a clear path to 100x or more. They also permit tracing across spatial gaps – essential for mitigating tissue section loss in whole-brain scaling.

Using molecular annotation, we uncover an intriguing feature of synaptic organization, demonstrating how PRISM can be used for systematic discovery 🧵
