
We need collective efforts from those who have real data to actually share it, so we can build one strong benchmark instead of countless fragmented ones. - Heng Ji @hengjinlp at #IJCAI2025 Panel on #AGI #Montreal #AI
Diversity in AI @ IJCAI 2025 (@DiverseInAI)


Today, we’re sharing a behind-the-scenes look at how we used Marble to create the worlds you see in our launch video.




🚀 NeuralOperator v2.0.0 is here! 🎊

Train at 64×64, fine-tune at 128×128, run inference at 512×512. No retraining. That's the power of learning in function spaces 🔥

🎪 At the @PyTorch conference today? Find us at the poster session, we'd love to discuss operator learning, and don't miss @AnimaAnandkumar's keynote on foundations for AI+Science! ✨

What's new:
• New models: CodaNO, Mollified GNO, Fourier Continuation
• Tensor-GaLore for memory-efficient training
• The Well dataset integration
• Physics-informed training: H-div loss, PINO reweighting, Fourier differentiation
• Improved docs with tutorials, theory, user & dev guides
• Major refactoring & 100+ bug fixes

📖 Docs: neuraloperator.github.io
⚡ Install: pip install neuraloperator
📦 Release: github.com/neuraloperator…

Big thanks to our 16 contributors! 🙏 #MachineLearning #ScientificML #PyTorch #NeuralOperators
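The train-low, infer-high claim rests on spectral parameterization: the learned weights attach to a fixed number of Fourier modes of the input function, not to grid points, so the same weights apply at any sampling resolution. A minimal NumPy sketch of that idea (this is an illustration of the principle, not the neuraloperator implementation; `spectral_conv_1d` and the mode count are made-up names for this example):

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes=8):
    """Apply learned weights to the lowest n_modes Fourier modes of u.

    Because the weights act on Fourier coefficients rather than on grid
    points, the same weights work at any sampling resolution of u.
    """
    u_hat = np.fft.rfft(u, norm="forward")         # grid values -> Fourier coefficients
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]  # truncate spectrum, multiply by weights
    return np.fft.irfft(out_hat, n=len(u), norm="forward")

rng = np.random.default_rng(0)
weights = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# The same function sampled at two resolutions, same weights for both:
x64 = np.linspace(0, 2 * np.pi, 64, endpoint=False)
x512 = np.linspace(0, 2 * np.pi, 512, endpoint=False)
y64 = spectral_conv_1d(np.sin(x64), weights)
y512 = spectral_conv_1d(np.sin(x512), weights)

# On the grid points the two resolutions share, the outputs agree:
# the operator lives in function space, independent of discretization.
assert np.allclose(y64, y512[::8])
```

Stacking layers of this kind (with pointwise nonlinearities in between) is roughly what lets a neural operator trained at 64×64 be evaluated at 512×512 without retraining.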


Afternoon keynotes continue with Animashree (Anima) Anandkumar, Bren Professor of Computing and Mathematical Sciences at Caltech, and her session "Foundations for AI+Science." Clip below 🎥 📸 Happening now at #PyTorchCon. #PyTorchLive #PyTorch #OpenSourceAI


From molecular design to advanced materials, machine learning is rapidly reshaping #chemistry. 🌐⚗️ A new @PNASNews Special Feature—organized by #NASmembers Pablo Debenedetti, Juan de Pablo, and George Schatz—highlights discoveries and future directions: pnas.org/topic/578



