Simon Batzner

620 posts


@simonbatzner

Research Scientist at DeepMind. Prev: Google Brain, Harvard, MIT

San Francisco, CA · Joined November 2017

2.8K Following · 6.4K Followers
Markus J. Buehler@ProfBuehlerMIT·
Scientific discovery is reaching the limits of human capacity: too much data, too many disconnected fields, and too few ways to connect ideas fast enough to matter. The next breakthroughs in materials, medicine, energy, and beyond will not come from scaling today's AI paradigm alone or from relying on serendipity alone. They will require a new kind of AI for knowledge discovery that not only models the world but shapes what it could become.

At Unreasonable Labs, we are building superintelligence for knowledge discovery: systems that reason across disciplines, generate novel hypotheses, test them through simulation and experimentation, and help guide real-world discovery. Our AI engine is not confined to what it has seen in training. It creates new data, builds new tools, and maintains a persistent world model that grows more powerful as it reasons.

Why now? Even today's most powerful AI models face a core limitation: they are trained on what we already know. True discovery begins when a system encounters something its current model cannot explain. This is why you cannot train your way to a discovery - a system has to reason through new problems, update its beliefs, and revise its understanding of the world as it thinks.

Another critical insight is that rich knowledge already exists but is not yet applied to solve pressing problems. It sits in millions of papers, patents, and datasets, trapped in isolated silos, often in legacy data vaults. What's missing is a way to connect it, scale it, unlock its potential, and synthesize genuinely novel predictions. The time is now to build a system that enables practitioners to design, explore, and direct discovery, whether through human guidance or full automation, while capturing the tacit insight that domain experts bring.

Steerable reasoning

That is why we built an operating system for scientific discovery - one that replaces chance with steerable reasoning. Rather than retrieving static facts, our AI builds and continuously updates a living world model - a representation of knowledge the system can actively reason over, question, and revise.

A concrete example: say you want to create "smart concrete" that can flex - a concept that doesn't exist yet. Our AI maps relationships across domains, finds a path from morphable smart materials to concrete, and identifies the most efficient way to bridge those concepts. It then autonomously writes simulations, tests the hypothesis, and refines the idea. Then it interacts with hardware to produce a physical artifact, and the loop expands into the real world, where the machine becomes world-shaping.

Our AI gives users full visibility into how the system arrived at a conclusion. It delineates which existing patents and papers it drew upon versus what is genuinely new - protecting IP and addressing competitive concerns from the start, and offering deep compositional insight into technology advances.

It takes unreasonable people to make progress

Our team reflects the interdisciplinary expertise required to build this next breakthrough - my co-founder Yuan Cao @caoyuan33 (formerly DeepMind) and Andrew Lew, @HaiqianYang, Matt Insler, Jennifer Kang, and Julia McLaughlin. We are backed by $13.5M in seed funding led by @PlaygroundVC with participation from @aixventures, @e14fund, and MS&AD. We are guided by advisors including Robert Langer (1,000+ patents), Kostya Novoselov (Nobel Prize in Physics), and @Thom_Wolf (co-founder of Hugging Face).

We already have multiple pilot programs underway with leading industrial partners in materials science and engineering, with additional engagements developing across energy, logistics, bioengineering, and other strategic domains. The biggest challenges of our time - fusion energy, sustainable materials, new medicines - demand exponentially more innovation than humans alone can produce. We are not replacing scientists; we are making every scientist capable of leading their own team of AI-powered researchers. Abundant innovation leads to abundant prosperity.

Watch our launch video below to see what we're building @unreasonable_ai 👇
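[Editor's note: for readers who want the shape of the propose-simulate-refine-fabricate loop described above in concrete terms, here is a minimal pseudocode sketch. It is purely illustrative; every name in it is hypothetical and none of it is Unreasonable Labs' actual API.]

```python
# Minimal sketch of the loop described above: bridge concepts, test in
# simulation, update the world model, and (if supported) fabricate.
# All names are hypothetical; this is not Unreasonable Labs' actual system.

def discovery_loop(goal, world_model, max_iters=10):
    """Iteratively propose, test, and refine hypotheses toward a design goal."""
    for _ in range(max_iters):
        # e.g. find a path from morphable smart materials to concrete
        hypothesis = world_model.propose_hypothesis(goal)
        # autonomously write and run a simulation to test the idea in silico
        result = world_model.simulate(hypothesis)
        # the living world model revises its beliefs as it reasons
        world_model.update(hypothesis, result)
        if result.supports_hypothesis:
            # close the loop in the real world via hardware
            return world_model.fabricate(hypothesis)
    return None  # no supported design found within the iteration budget
```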
Simon Batzner@simonbatzner·
every day in deep learning is a lesson that if you do the simple thing with extreme care, wonders happen
bilal@bilaltwovec·
very belated update: i joined @isomorphiclabs right after graduating early last year - it's been really incredible getting to work on pushing the frontier of scaling ai for science on very, very difficult problems (solve all disease) in the real world!
Isomorphic Labs@IsomorphicLabs

Today we share a technical report demonstrating how our drug design engine achieves a step-change in accuracy for predicting biomolecular structures, more than doubling the performance of AlphaFold 3 on key benchmarks and unlocking rational drug design even for examples it has never seen before. Head to the comments to read our blog.

marcel ⊙@marceldotsci·
@simonbatzner exciting! i assume these are in person either in mountain view or london?
Simon Batzner@simonbatzner·
Our team at @GoogleDeepMind is hiring for multiple roles in AI+materials. 🚀 We're pursuing an ambitious research program in materials and are looking for the world's best:
- semiconductor experts
- ML+materials researchers
- lab automation engineers
Vibes are great, join us!
Saurabh Shah@saurabh_shah2·
I've joined humans&! My last blog post explains why I think a human-centric approach is the missing piece in modern AI systems. I'm super psyched about the technical direction of the company. Perhaps even more important, though, is the team: the humans at humans&. My coworkers are completely and wholly wonderful. They're brilliant, yes, but they're also kind, funny, focused, and just about every other good adjective I can think of. Put simply: vibes are goooood. We're bringing together wonderful people united by a much-needed mission to build something truly different. If that excites you, I'd love to chat.
humans&@humansand

Today we introduce humans&, a human-centric frontier AI lab. We believe AI can be reimagined, centered on people and their relationships with each other. At its best, AI should serve as deeper connective tissue that strengthens organizations and communities.

Rohan Pandey@khoomeik·
proposal: all of sf flies to san diego for a week to hang out with sf ppl (in san diego)
rohan anil@_arohan_·
I think I found the perfect beans for espresso.
Charlie Snell@sea_snell·
Downloading for the flight
Simon Batzner@simonbatzner·
@pfau david out of hot takes before gta-6
David Pfau@pfau·
I'm all out of hot takes. The current thing? It's bad. Or it's good. I just don't know any more.
Tanya Marwah@__tm__157·
Few things bring me as much joy as good science with a great team! Check out Walrus, a new foundation model for continuum dynamics built by @PolymathicAI! The future of AI for Science is very bright and PDEs will play a major role in it. Hope you use our model to push the boundaries even further! Check out the details in @mikemccabe210's thread below!
Mike McCabe@mikemccabe210

1/ Today with my colleagues @PolymathicAI, I'm excited to release our latest project, Walrus, a cross-domain foundation model for physical dynamics, into the world.
Blog: polymathic-ai.org/blog/walrus/
Paper: arxiv.org/abs/2511.15684
Git: github.com/PolymathicAI/w…
HF: huggingface.co/polymathic-ai/…

Simon Batzner reposted
Janosh@jrib_·
my thoughts exactly! there's been an overallocation of attention (pun/ambiguity intended) to MLIP architecture development, even though a virtually infinite number of higher-impact projects you could do with the architectures we already developed is staring us all in the face! and if you really want to work on model architecture (rather than generating data in underserved regions of materials space or extracting value from existing checkpoints by using them for actual materials science), go add new physical degrees of freedom (charges, spins, ...) to the existing architectures. that expands the science you can do with MLIPs way more than saturating a stale benchmark...
Simon Batzner@simonbatzner
[quoted tweet: "Yet another NequIP in the top 10 with EquFlash…" - full text below]
Simon Batzner reposted
Tim Duignan@TimothyDuignan·
Ha, just went and checked and this is honestly what I wrote in my notes the day after I first saw Simon talking about it on twitter. (Sorry it's full of swearing lol.)
[attached image: screenshot of his notes]
Simon Batzner@simonbatzner
[quoted tweet: "Yet another NequIP in the top 10 with EquFlash…" - full text below]
Simon Batzner reposted
Xiang Fu@xiangfu_ml·
Second @simonbatzner. FF architecture hasn't been a bottleneck for AI materials discovery since NequIP. Lots of opportunities still in data, eval, sim2real, efficiency, etc.
Simon Batzner@simonbatzner
[quoted tweet: "Yet another NequIP in the top 10 with EquFlash…" - full text below]
Simon Batzner@simonbatzner·
And obviously papers like this that accelerate the methods meaningfully are incredibly useful and a step in the right direction.
Simon Batzner@simonbatzner·
Yet another NequIP in the top 10 with EquFlash, this time with some very clever accelerations! Bringing the total to ….? I'll leave it to you how to count :)

One question this raises is what a lot of folks have told me recently, both on here and in private: they find it "disheartening" (to quote @SamMBlau) that we've had the same SOTA architecture since January 2021 now.

My answer is always the same: we're not building these models for the sake of building models. We're building them because there are fundamental challenges that require the discovery of novel materials. These algorithms accelerate that. If the FF architecture isn't the bottleneck, you should stop optimizing it and focus on more interesting problems (data, data, data, evals, scalability, and above all, actually finding and making materials). I can think of at least one other field that flourished when they stopped playing the architecture game.

Take my words with a grain of salt though. I was told at APS 2019 by a very "senior" person in the field that the fitting problem of MLIPs was "solved". That turned out to be horribly wrong. I'm rooting for every grad student to make a meaningful dent in this problem. And who knows, maybe there is more juice to be squeezed beyond a 1 meV/atom MAE difference.

(also if you're building molecular FFs, different story, this is a materials benchmark)
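[Editor's note: the "1 meV/atom MAE" above is a per-atom mean absolute error on predicted total energies. A minimal sketch of how such a number is computed, illustrative only and not any specific benchmark's exact protocol:]

```python
import numpy as np

def energy_mae_mev_per_atom(e_pred, e_ref, n_atoms):
    """MAE of predicted total energies, in meV/atom.

    e_pred, e_ref: total energies in eV, one per structure.
    n_atoms: atom count per structure.
    Illustrative only; benchmark conventions (e.g. weighting) may differ.
    """
    e_pred, e_ref, n_atoms = map(np.asarray, (e_pred, e_ref, n_atoms))
    per_atom_err = np.abs(e_pred - e_ref) / n_atoms  # eV/atom
    return 1000.0 * per_atom_err.mean()              # eV -> meV

# two hypothetical structures with 8 and 20 atoms
print(energy_mae_mev_per_atom([-10.002, -25.010], [-10.000, -25.000], [8, 20]))
# -> 0.375 (meV/atom)
```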