Smrithi Sunil

251 posts

@_SmrithiSunil

Microscopy. Neuroscience. Cryo-EM scientist at UW-Madison. Fellow @rootsofprogress. Writing at https://t.co/qB3hIM3M8y

Madison, WI · Joined January 2011
733 Following · 244 Followers
Smrithi Sunil retweeted
Niko McCarty @NikoMcCarty
I think this is one of the most important articles we've published at @AsimovPress. If you read carefully, there are at least 3-4 ideas in here that *should* be large, well-funded research programs.

The article begins by arguing that existing AI models are good at predicting things *within* an existing framework, but are not good at building new frameworks (and thus cannot do paradigm-shifting science). As AI models become more widespread in science, they therefore risk "hypernormal science," meaning we will have fewer actual breakthroughs and more incremental discoveries.

The author (Alvin Djajadikerta) supports this argument with several examples, one of which comes from germ theory:

"In the mid-nineteenth century, doctors thought that illness was caused by noxious air, and kept meticulous records accordingly. The physician William Farr mapped cholera deaths across London and found they correlated strongly with low elevation, which he thought was because noxious vapors accumulated in low-lying areas. He was actually picking up a real signal: low-lying districts were closer to the contaminated Thames River. But because his data was organized around air quality, he could not find the true cause..."

"An AI trained on Farr’s records could have found even subtler correlations, and would have been genuinely useful for predicting which neighborhoods would be hit hardest in the next outbreak. But it would not be able to derive the concept of a waterborne microorganism, as this was not a variable anyone had yet recorded."

After giving other examples of this, Alvin begins mapping out ideas to solve this problem and create AIs that are "visionary" rather than "merely predictive." My favorite of his ideas is to use AI agents as a model organism for metascience. The gist is that many paradigm shifts seem to happen under particular conditions.

"Bell Labs, Xerox PARC, and the early Laboratory of Molecular Biology at Cambridge all produced extraordinary concentrations of paradigm-shifting work," Alvin writes, "mostly because they were small groups with enough institutional protection to pursue ideas that looked unproductive by conventional measures."

Alvin continues: "We have never been able to run controlled experiments on scientific institutions; it is impossible to create labs that differ in only one respect and compare the results. But we could run AI agents in parallel populations under different research conditions, and analyze the results... In this sense, AI scientists may give metascience its first model organism."

"For instance, one could test how group structure shapes discovery: do small, isolated teams produce more conceptual reorganization than large, well-connected ones? Do flat hierarchies outperform rigid ones? One could run AI agent populations that vary these factors independently and measure the results — something that is impractical to do with real institutions..."

This essay is excellent throughout and I hope you'll read it.
Smrithi Sunil retweeted
Niko McCarty @NikoMcCarty
My weekly blog is back. And my first essay is about the fallout at eLife, the scientific journal.

Two years ago, Michael Eisen was fired from his job as editor-in-chief after retweeting a satirical article (from The Onion) about the war in Gaza. Except... that's not really why he was fired. Tensions had already been growing between eLife’s leadership team and its editors and readers. The journal had spent years reforming scientific publishing, and many people were upset about it. First, eLife required authors to publish preprints before submitting to the journal. Then, it got rid of accept-reject decisions entirely. But Eisen increasingly found these policies to be at odds with the norms of the scientific community he was trying to reform. So when Eisen sent out his tweet, the board had an excuse to get rid of him.

This is that story. I hope you'll read it.

P.S. This story is not really about eLife or Eisen or his firing or free speech or anything else. It is about what happens to those who try to change the incentive structures of science. eLife is just a journal — one journal of thousands — in a sea of other journals. Its rise, fall, and continued existence are arbitrary, as is so much else about the way we do science.

Blog: nikomc.com/2026/03/05/eli…
Smrithi Sunil retweeted
Speculative Technologies @Spec__Tech
We're excited to introduce the 2026 cohort of Brains fellows! These ambitious scientists and technologists are working on coordinated research programs to create everything from electric noses to bacteria-fighting viruses to atom-tracking cameras, and more. 🧵
Smrithi Sunil @_SmrithiSunil
@GordonBrianR That’s a good point. Even more important to innovate on fundamental capabilities that can help drive future data generation.
Brian Gordon @GordonBrianR
@_SmrithiSunil This is a critical question, especially given that each generation/epoch of autonomous labs is going to require new insights.
Smrithi Sunil @_SmrithiSunil
I wonder, with the rise of autonomous labs, how much the gap will widen between the two modes of discovery.

On one side, data-driven discovery from atlases, virtual cells, and various emulations, which can be scaled, parallelized, and run 24/7. On the other, "insight"-driven discovery that relies on serendipity, or on observing the unexpected combined with expert judgment: observing mold to discover penicillin, a glowing screen to discover X-rays, or a secondary green substance to discover GFP.

Both modes are needed, and AI, at least as of now, is much better positioned to accelerate the former over the latter. I guess the ideal case is that we set up AI to do more of the former, brute-force data generation, freeing up our time to do more of the latter. Could this be the return of the renaissance scientist, but now with a parallel science engine running 24/7?
Smrithi Sunil @_SmrithiSunil
@namankatyal14 It certainly could. And of course each one feeds into the other. I'm using "insight" here to convey something closer to creativity, surprise, weird observations... outlier-type research.
Naman Katyal, PhD @namankatyal14
@_SmrithiSunil Data-driven discovery is also insight-driven in autonomous labs, right? Using intelligence on the fly, autonomous labs learn from data, which yields insights much like how a scientist works in the lab, in my opinion.
Smrithi Sunil retweeted
Emily Oster @ProfEmilyOster
What's going on with measles? There were more measles cases in a single week in January 2026 than all but 5 yearly totals over the past two decades. parentdata.org/kids/whats-goi…
Smrithi Sunil retweeted
Martin Borch Jensen @MartinBJensen
@owl_posting’s great essay on lab robotics, plus the obvious AI macro, has a lot of us talking about automation and whether it’s the missing piece for AI to greatly accelerate breakthroughs in medicine. It is not.

A lot of the pushback is that “biology is hard”, but this doesn’t explain why another 50 IQ points or 20h of task length can’t solve this kind of “hard”. I’ll try to be specific: what we call biology spans multiple physical layers of organization with different behaviors and iteration speeds.

Layer 1 = Molecular. E.g. drug molecules binding to proteins, which we need to make medicines. This is AlphaFold etc. territory. For DNA, RNA, and proteins at least, we're good at simulating biophysics, and experiments can happen as fast as arms can move things around.

Layer 2 = Cellular. What's the response to a protein getting blocked? Some activity stops, the cell changes gene expression to compensate, and ends up in a new steady state (or maybe dies) minutes or hours later. New tech like CRISPR and single-cell sequencing applies here.

Layer 3 = Organs. What happens to an organ when cells change? Often other cells get involved, and the new steady state might be destructive or ineffective. But this happens as gradual changes and responses over weeks to years. Many of the diseases we hope to cure live at this layer: heart failure, dementia, aging...

The key thing to understand, especially for non-biologists building AI for bio, is that layer 3 and beyond is emergent. The body is not the sum of all cells, thanks to feedback loops within and between cells. And with emergent complexity, scaling layer 1 and/or 2 alone does not mean layer 3 gets solved.

To give a software analogy: layer 1 is optimizing individual functions, where you can benchmark and test edge cases rapidly. Layer 2 is integration testing with simulated load, still controlled enough to iterate. Layer 3 is what actual users and competitors start doing after product launch, and how that affects your business. You can probably simulate users with agents now, but the real world still can't be predicted with confidence.

AI thinking about biology is useful, for sure. And speeding up experiments in layers 1 and 2 helps disease research, for sure. Some diseases live at layer 2: infections, single-gene disorders, some cancers. And indeed, the automation and scaling of layers 1 and 2 we’ve been doing for decades has given us decent answers to those already.

But until we understand disease progression, any attempt to accelerate it (e.g. 'this protein is the problem, just put a lot of it in the brain') risks creating a false goal. Neither more intelligence nor speeding up layers 1 and 2 can bypass the feedback loops of layer 3, which is where most diseases we care about actually live. This is the most important reason why we aren’t currently slated to double lifespan even if we get a ‘country worth of geniuses’.
Sebastian S. Cocioba🪄🌷 @ATinyGreenCell
One of the several things holding me back from being excited about AI is that I actually *need* to have a functional mental model of the experiments I conduct, and offloading the thinking feels so deeply unsatisfying. A big part of why I science is for the brain feel of science.
Smrithi Sunil @_SmrithiSunil
"The availability of devices based on metahuman science gave rise to artefact hermeneutics. Scientists began attempting to ‘reverse engineer’ these artefacts, their goal being not to manufacture competing products, but simply to understand the physical principles underlying their operation."
Smrithi Sunil @_SmrithiSunil
Just came across this lovely science fiction short from 2000 by Ted Chiang. "In the face of metahuman science, humans have become metascientists." nature.com/articles/35014…
Smrithi Sunil @_SmrithiSunil
Biology needs to become prospective: “It moves the field from a retrospective audit of what's been learned to a prospective calculation of what should be measured next, ensuring that future data collection isn't just extensive, but intentional.” thestacks.org/publications/i…
Smrithi Sunil retweeted
Charles Yang @charlesxjyang
I’m excited to share a Call To Action I organized with @RenPhilanthropy: "On the Need for Autonomous Science Instruments". Signed by 25 leading researchers across the U.S., U.K., Canada, and Japan, it calls for a new generation of autonomous science instruments based on three core pillars: ⚙️ Open Data & Software APIs 🤖 Design-for-Automation 🧩 Instrument Modularity. We also published a press release supporting the Call To Action, which includes endorsing quotes from AI & science leaders: @Kevinweil (OpenAI), @AndyHickl (Allen Institute), @jrkelly (Gingko), @teresasmeyer (Carnegie Mellon), @smc_ (Acceleration Consortium), and Michael Brenner (Harvard/DeepMind).
Renaissance Philanthropy @RenPhilanthropy

Autonomous - or self-driving - labs are quickly becoming an emerging pillar of modern R&D. But they exist in an ecosystem constrained by legacy scientific instruments. An article published today by one of our new Fellows, @charlesxjyang, and 25 prominent researchers and national lab scientists in the U.K., U.S., Canada, and Japan argues that a redesign of instrumentation is the only way for scientific discovery to keep pace with rapid advances in AI. Read "On the Need for Autonomous Science Instruments" here chemrxiv.org/doi/full/10.26… 1/2
