Rhys Goodall
@RhysGoodall
301 posts
stuff @RadicalAI Born 362 ppm. https://t.co/eXRBdGWGwx


1/ 🇺🇸 Today, @CAForever submitted detailed plans for the next great American city, an hour north of Silicon Valley, including: Solano Foundry, America’s largest manufacturing park, Solano Shipyard, our largest shipyard, and walkable neighborhoods for 400,000 Californians.

Yet another NequIP in the top 10 with EquFlash, this time with some very clever accelerations! Bringing the total to ….? I’ll leave it to you how to count :)

One question this raises is something a lot of folks have told me recently, both on here and in private: they find it “disheartening” (to quote @SamMBlau) that we’ve had the same SOTA architecture since January 2021. My answer is always the same: we’re not building these models for the sake of building models. We’re building them because there are fundamental challenges that require the discovery of novel materials, and these algorithms accelerate that. If the FF architecture isn’t the bottleneck, you should stop optimizing it and focus on more interesting problems (data, data, data, evals, scalability, and above all, actually finding and making materials). I can think of at least one other field that flourished when it stopped playing the architecture game.

Take my words with a grain of salt, though. I was told at APS 2019 by a very “senior” person in the field that the fitting problem of MLIPs was “solved”. That turned out to be horribly wrong. I’m rooting for every grad student to make a meaningful dent in this problem. And who knows, maybe there is more juice to be squeezed beyond a 1 meV/atom MAE difference.

(Also: if you’re building molecular FFs, different story; this is a materials benchmark.)



The Computer Science section of @arxiv is now requiring prior peer review for Literature Surveys and Position Papers. Details in a new blog post

WhatsApp is banning general purpose chatbots from using its Business API techcrunch.com/2025/10/18/wha…



🔥 Today we're excited to announce a major milestone for the machine-learned interatomic potential (MLIP) ecosystem: TorchSim is moving to community ownership and governance through a partnership with Radical AI and the open-source community!

MLIPs have become critical computational tools for materials discovery. These models predict atomic forces orders of magnitude faster than traditional methods, with high accuracy, bridging the gap between DFT and MD. But the MLIP ecosystem has been fragmented: each new MLIP requires custom integration code, and the existing simulation engines aren't built for GPU-native workflows. As a result, research teams currently spend too much time on infrastructure instead of discovery.

TorchSim changes this. It's an atomistic simulation engine built for the AI era, offering faster batched inference, full GPU utilization, and, perhaps most importantly, a unified interface across model architectures that enables rapid prototyping and model swapping.

Our team at @UChicago and @argonne is proud to help facilitate TorchSim’s development and growth as an open-source community. Special thanks to @radicalai, who invested in and built the software. The original development team, including Abhijeet Gangan, Orion Archer Cohen, @jrib_, Rhys Goodall, Adeesh Kolluru, Stefano Falletta, and Curtis Chong, built something special, and we want to ensure their work not only continues to serve the community but grows. A special shoutout to Radical AI founders Joseph F. Krause (@josephfkrause) and Jorge Colindres (@colindresj_) for making this transition possible and continuing to build with the community. 🙌

But for this to succeed, we now need your help!
🔷 MD practitioners: Build examples, tutorials, and benchmark your workflows
🔶 ML engineers: Integrate new MLIP architectures and optimize GPU utilization
🔷 Computational scientists: Implement integrators, optimizers, and simulation methods
🔶 Everyone: Help us document and build this ecosystem along with the Hugging Face AI for Science community (@cgeorgiaw)

Thanks to the many community contributors already pushing this forward, including Thomas Loux, Ryan Liu, J Kian Pu, Filippo Bigi, Stefan Bringuier, Myles Stapelberg, Yutack Park, John Gardner, Guillaume Fraux, Chuin Wei Tan, and Timo Reents.

This is just the beginning. With your help, we see a future where these models are as easy to use in your research as LLMs are today, and help drive materials discovery across the world.
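To make the "unified interface" point above concrete, here is a minimal, self-contained sketch of the idea in plain Python. All names here (ForceModel, HarmonicModel, ZeroModel, relax) are hypothetical illustrations, not TorchSim's actual API: any model that satisfies one small protocol can be dropped into the same simulation loop, so swapping architectures requires no changes to the driver code.

```python
# Sketch of a unified MLIP-style interface: the driver loop only depends
# on a structural Protocol, so models are interchangeable.
# NOTE: ForceModel, HarmonicModel, ZeroModel, and relax are illustrative
# names for this sketch, not TorchSim's real API.
from typing import Protocol

Vec = tuple[float, float, float]


class ForceModel(Protocol):
    def forces(self, positions: list[Vec]) -> list[Vec]: ...


class HarmonicModel:
    """Toy stand-in for an MLIP: springs pulling each atom to the origin."""

    def __init__(self, k: float = 1.0):
        self.k = k

    def forces(self, positions: list[Vec]) -> list[Vec]:
        return [tuple(-self.k * x for x in p) for p in positions]


class ZeroModel:
    """Another interchangeable 'architecture': free atoms, no forces."""

    def forces(self, positions: list[Vec]) -> list[Vec]:
        return [(0.0, 0.0, 0.0) for _ in positions]


def relax(model: ForceModel, positions: list[Vec],
          steps: int = 100, lr: float = 0.1) -> list[Vec]:
    """Steepest-descent relaxation that works with ANY ForceModel."""
    for _ in range(steps):
        f = model.forces(positions)
        positions = [tuple(x + lr * fx for x, fx in zip(p, fp))
                     for p, fp in zip(positions, f)]
    return positions


# Model swapping in one line each; the relax() driver never changes:
relaxed = relax(HarmonicModel(), [(1.0, 0.0, 0.0)])
unchanged = relax(ZeroModel(), [(1.0, 2.0, 3.0)], steps=10)
```

The design choice this illustrates is structural typing: the simulation engine depends only on the `forces` method's shape, which is what lets a single engine serve many model architectures.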
