
Simon Batzner
@simonbatzner
Research Scientist at DeepMind. Prev: Google Brain, Harvard, MIT


Today we share a technical report demonstrating how our drug design engine achieves a step-change in accuracy for predicting biomolecular structures, more than doubling the performance of AlphaFold 3 on key benchmarks and unlocking rational drug design even for examples it has never seen before. Head to the comments to read our blog.



Very excited about our progress on materials! Super cool work. Come join the AI for Science team.



Today we introduce humans&, a human-centric frontier AI lab. We believe AI can be reimagined, centering around people and their relationships with each other. At its best, AI should serve as a deeper connective tissue that strengthens organizations and communities.




1/ Today with my colleagues @PolymathicAI, I'm excited to release our latest project, Walrus, a cross-domain foundation model for physical dynamics, into the world.

Blog: polymathic-ai.org/blog/walrus/
Paper: arxiv.org/abs/2511.15684
Git: github.com/PolymathicAI/w…
HF: huggingface.co/polymathic-ai/…

Yet another NequIP in the top 10 with EquFlash, this time with some very clever accelerations! Bringing the total to ….? I'll leave it to you how to count :)

One question this raises is something a lot of folks have told me recently, both on here and in private: they find it "disheartening" (to quote @SamMBlau) that we've had the same sota architecture since January 2021.

My answer is always the same: we're not building these models for the sake of building models. We're building them because there are fundamental challenges that require the discovery of novel materials, and these algorithms accelerate that. If the FF architecture isn't the bottleneck, you should stop optimizing it and focus on more interesting problems (data, data, data, evals, scalability, and above all, actually finding and making materials). I can think of at least one other field that flourished when it stopped playing the architecture game.

Take my words with a grain of salt, though. I was told at APS 2019 by a very "senior" person in the field that the fitting problem of MLIPs was "solved". That turned out to be horribly wrong. I'm rooting for every grad student to make a meaningful dent in this problem. And who knows, maybe there is more juice to be squeezed beyond a 1 meV/atom MAE difference.

(Also: if you're building molecular FFs, different story; this is a materials benchmark.)
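A minimal sketch of the per-atom energy MAE metric the post refers to (illustrative only: the function name, units convention, and example numbers are assumptions, not taken from the post or any specific benchmark):

    import numpy as np

    def energy_mae_mev_per_atom(e_pred, e_true, n_atoms):
        # Total predicted and reference energies in eV for each structure,
        # normalized by each structure's atom count, reported in meV/atom.
        e_pred = np.asarray(e_pred, dtype=float)
        e_true = np.asarray(e_true, dtype=float)
        n_atoms = np.asarray(n_atoms, dtype=float)
        return 1e3 * np.mean(np.abs(e_pred - e_true) / n_atoms)

    # Example: two structures with 8 and 16 atoms and total-energy errors
    # of 0.01 eV and 0.05 eV -> (1.25 + 3.125) / 2 ≈ 2.19 meV/atom
    print(energy_mae_mev_per_atom([-42.01, -80.05], [-42.00, -80.00], [8, 16]))

Force errors are typically reported analogously, per force component, in meV/Å.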










