Mathias Niepert

1.4K posts

@Mniepert

Professor @ University of Stuttgart, Scientific Advisor @ NEC Labs, GraphML, geometric deep learning, ML for Science. Formerly @IUBloomington and @uwcse

Heidelberg, Germany · Joined September 2013
501 Following · 2.9K Followers
Mathias Niepert retweeted
Maya Bechler-Speicher@mayabechlerspei·
Remember our ICML25 "Graph Learning Will Lose Relevance Due To Poor Benchmarks"? Fear no more! GraphBench is here! 🤩 We give you the next generation of graph benchmarking, including:
- New, shiny, high-quality datasets spanning seven diverse domains, including chip design, algorithmic reasoning, and weather forecasting
- Standardized hyperparameter tuning procedures, enabling fair and principled model comparison
- Strong, transparent baselines that accurately reflect algorithmic progress
- Comprehensive coverage of graph learning tasks, datasets, and modern GNN architectures
- Reproducibility-focused design, minimizing variance and evaluation artifacts
- A forward-looking benchmark designed for next-generation graph learning research
A huge collab with: @chrsmrrs, @mmbronstein, @michael_galkin, @HolgerHoo, Timo Stoll, @ChendiQian, @benfinkelshtein, Ali Parvis, Darius Weber, @ffabffrasca, @HadarShavit, @antoinesrdin, Arman Mielke, Marie Anastacio, Erik Müller,
Mathias Niepert retweeted
Christopher Morris@chrsmrrs·
GraphBench: Next-generation graph learning benchmarking is now available! 🔍📊 This work introduces GraphBench: a comprehensive benchmarking framework for graph learning that provides principled baselines and reference performance across modern models. graphbench.io
Mathias Niepert@Mniepert·
@_sophia_tang_ Thank you for writing this up! Having work presented in a more accessible and comprehensive way is incredibly important for the community and new students.
Sophia Tang@_sophia_tang_·
My "Complete Guide to Spherical Equivariant Graph Transformers" now has a permanent home on arXiv! 📖 📃 arXiv: arxiv.org/abs/2512.13927 Over a year ago, I published this article on my blog, Alchemy Bio 🔮 (alchemybio.substack.com). Even now, I’m always surprised by how often people tell me it has meaningfully supported their learning and research, which inspired me to publish this guide with a permanent DOI 🌟 Revisiting this article truly reminded me how much I’ve grown over this past year (and how much my research interests have evolved!). With the rapid advancements in ML-based interatomic potentials, materials and biomolecular design, and atomic simulation, the topics covered in the guide remain extremely relevant for understanding the theoretical foundations of equivariant architectures. 🧬 The arXiv version (99 pages, 46 figures) contains updated figures and refined content highlighting: 🌟 The mathematical foundations of how features transform under spherical equivariance, including group theory, spherical tensors, spherical harmonics, tensor products, and Clebsch-Gordan coefficients. 🌟 A step-by-step derivation of the SO(3)-equivariant kernel, which forms the core of rotation-equivariant message-passing layers. 🌟 Construction of Tensor Field Networks and their extension to attention mechanisms through the SE(3)-Transformer (+ annotated code excerpts). I am extremely grateful for all the positive feedback I’ve received on this article, and I hope it continues to support researchers and learners interested in geometric deep learning! 🧩💫
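The core property the guide builds up to, equivariance under rotation, can be checked numerically. Below is a minimal sketch (my own toy example, not taken from the guide): a map that scales a vector by its rotation-invariant norm satisfies f(Rx) = R f(x).

```python
import numpy as np

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix;
    # fix signs and determinant so it is a proper rotation in SO(3).
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q *= np.sign(np.diag(R))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

def equivariant_map(x):
    # Scaling a vector by a rotation-invariant quantity (its norm)
    # keeps the map SO(3)-equivariant: f(Rx) = R f(x).
    return x * np.linalg.norm(x)

rng = np.random.default_rng(0)
R = random_rotation(rng)
x = rng.standard_normal(3)

lhs = equivariant_map(R @ x)   # rotate first, then apply the map
rhs = R @ equivariant_map(x)   # apply the map, then rotate
assert np.allclose(lhs, rhs)
```

The same commutation test generalizes to higher-order features (spherical tensors), where the rotation matrix is replaced by the corresponding Wigner-D matrix, as derived in the guide.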
Mathias Niepert retweeted
Bela Wiertz@blwiertz·
Sovereignty should not be Europe’s growth strategy. If your sales pitch is “buy us because we’re European,” you’re already losing. Europe needs companies like BlackForestLabs: global ambition from day one, built in Europe. Not: “We build AI for European SMBs so they know we keep their data safe.” The path forward is competing with US companies in their home market from day one. Lovable, BlackForestLabs, n8n and others have shown how it’s done!
Mathias Niepert retweeted
Latent Labs@Latent_Labs·
5 months from Latent-X1 to Latent-X2. AI-generated antibodies with drug-like developability and low immunogenicity in human panels — zero-shot. Available now for selected partners.
Mathias Niepert retweeted
Christopher Morris@chrsmrrs·
I respect that @iclr_conf had to respond to the OR leak, but I disagree with resetting scores. Many students worked hard on rebuttals and improved their papers in good faith. I hope the organizers reconsider and revert the reset. If you agree, feel free to retweet.
Mathias Niepert retweeted
Duy H. M. Nguyen@DuyHMNguyen1·
✨ Thrilled to share that our paper ExGra-Med [1] has been accepted to NeurIPS 2025! (alongside three others accepted this year 🚀) ✨

Over the past year, my collaborators and I have been exploring a fundamental limitation of multi-modal large language models (MLLMs) on data-hungry tasks and how to overcome it. So how does it help with your work?

🔍 I. Key Finding: Auto-regressive MLLMs Are Extremely Data-Hungry Under Domain Shift

We found that reducing pre-training data during domain adaptation triggers irrecoverable failures in auto-regressive MLLMs - failures that fine-tuning cannot fix. This reveals a critical weakness: models trained with purely auto-regressive objectives are not robust to domain shift.

Joint work with great collaborators @cmuptx @james_y_zou, and my supervisor @Mniepert. Details below
Mathias Niepert retweeted
Andrei Manolache@andreimano2·
@Mniepert @lfochamon Inspired by JEPA, C-FREE (w/ B. Ariguib & @mniepert) uses a predictive objective over egonets spanning 2D and 3D graphs, avoiding negatives, reconstructions, and augmentations. This simple multimodal design delivers strong results from low- to large-scale regimes. 🔬 🧵6/9
Mathias Niepert retweeted
Andrei Manolache@andreimano2·
"Learning (Approximately) Equivariant Networks via Constrained Optimization" will be an Oral 🗣️ at NeurIPS! (w/ @Mniepert & @lfochamon) ACE goes beyond fixed equivariance, augmentations, and regularizers by learning from data when to enforce symmetry and when to break it. 🧵2/9
Mathias Niepert retweeted
Andrei Manolache@andreimano2·
Looking forward to #NeurIPS next week! I’m presenting three works across equivariant learning, molecular graph pretraining, and real-world graph-based time series: • ACE (NeurIPS Oral) • C-FREE (NPGML Poster) • ChronoGraph (BERT2S Oral) Details in the thread ⬇️🧵1/9
Mathias Niepert retweeted
Erik Bekkers@erikjbekkers·
As promised after our great discussion, @chaitanyakjoshi! Your inspiring post led to our formal rejoinder: the Platonic Transformer. What if the "Equivariance vs. Scale" debate is a false premise? Our paper shows you can have both. 📄 Preprint: arxiv.org/abs/2510.03511 1/9
Chaitanya K. Joshi@chaitjo

After a long hiatus, I've started blogging again! My first post was a difficult one to write, because I don't want to keep repeating what's already in papers. I tried to give some nuanced and (hopefully) fresh takes on equivariance and geometry in molecular modelling.

Mathias Niepert retweeted
Rishabh Anand@rishabh16_·
"Equivariance matters even more at larger scales" ~ arxiv.org/abs/2510.09768 All the more reason we need scalable architectures with symmetry awareness. I know this is an obvious ask but I'm still confident that scaling and inductive bias need not be at odds. This paper (alongside arxiv.org/abs/2410.23179) is convincing evidence that believing "equivariance is dead/not necessary" and "scaling is all you need" might be myopic (ofc, no one has made this *strong* claim but it still seems to be an existing "community myth" of sorts) Stay tuned to this space – we're dropping something cool on this topic veryyy soon ;)
Mathias Niepert retweeted
Samir dar@Samir_Darouich·
I just read the Equilibrium Matching (EqM) paper; it’s excellent and insightful work! Interestingly, we recently published a related method called Adaptive Equilibrium Flow Matching (AEFM). Leaving out “adaptive” reveals strong conceptual parallels between the two approaches.
Yilun Du@du_yilun

Excited to share Equilibrium Matching (EqM)! EqM simplifies and outperforms flow matching, enabling strong generative performance of FID 1.96 on ImageNet 256x256. EqM learns a single static EBM landscape for generation, enabling a simple gradient-based generation procedure.

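The generation procedure Yilun describes (a single static energy landscape sampled by gradient descent) can be sketched with a toy quadratic energy. This is purely illustrative; the energy function, step size, and step count below are my own stand-ins, not details from the EqM or AEFM papers.

```python
import numpy as np

def energy(x, mu):
    # Toy static energy landscape with a single minimum at mu
    # (a stand-in for a learned EBM, not the paper's model).
    return 0.5 * np.sum((x - mu) ** 2)

def grad_energy(x, mu):
    # Analytic gradient of the toy quadratic energy.
    return x - mu

def generate(mu, steps=200, step_size=0.1, seed=0):
    # Gradient-based generation: start from noise and descend the
    # static energy landscape until the sample settles at a minimum.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(mu.shape)
    for _ in range(steps):
        x = x - step_size * grad_energy(x, mu)
    return x

mu = np.array([1.0, -2.0])
sample = generate(mu)
assert np.allclose(sample, mu, atol=1e-6)
```

In the quadratic case each step contracts the distance to the minimum by a constant factor, so the sampler converges geometrically; a learned landscape would instead place its minima at the data modes.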
Mathias Niepert retweeted
Bruno Trentini@brtrentini·
Call for Papers – Machine Learning for Simulations in Biology & Chemistry (Simbiochem) Workshop @ EurIPS 2025
📍 Copenhagen, Denmark (Dec 6th or 7th)
📅 Submission Deadline: 10th October 2025
👉 Submit your paper: simbiochem.com

Machine learning has transformed biology and chemistry, but many models lack physical grounding. At Simbiochem, we aim to bridge the gap between scalable ML and rigorous physical simulations, making methods faster, more accurate, and scientifically robust. We invite submissions on all aspects of machine learning applied to simulations in biology, chemistry, biophysics, and materials science.

🧪 Featuring keynote speakers and panelists from academia and industry, including University of Oxford, Microsoft Research, NVIDIA, Novo Nordisk, and more.

---
Please, x.com algorithm, make this reach as many of my fellow researchers as possible 🙏
---

#MachineLearning #AI4Science #Biology #Chemistry #Simbiochem
Mathias Niepert retweeted
Yuyang Wang@YuyangW95·
New preprint & open-source! 🚨 “SimpleFold: Folding Proteins is Simpler than You Think” (arxiv.org/abs/2509.18480). We ask: do protein folding models really need expensive, domain-specific modules like pair representations? We build SimpleFold, a scalable 3B-parameter folding model built solely on general-purpose transformers + flow matching, trained on 9M structures. SimpleFold supports easy deployment and efficient inference on consumer-level hardware with PyTorch/MLX (try it on your MacBook!) (1/n)
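The flow-matching recipe SimpleFold builds on can be sketched in a few lines of numpy. This is a toy illustration of the standard conditional flow-matching target under a linear interpolation path; velocity_net below is a hypothetical stand-in for the actual transformer, not the released model.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 3))   # noise sample
x1 = rng.standard_normal((8, 3))   # data sample (e.g. atom coordinates)
t = rng.uniform()                  # random time in [0, 1]

# Linear interpolation path between noise and data,
# whose constant velocity is the regression target.
x_t = (1.0 - t) * x0 + t * x1
target_velocity = x1 - x0

def velocity_net(x, t):
    # Hypothetical stand-in for a transformer predicting velocities;
    # an untrained placeholder that always outputs zeros.
    return np.zeros_like(x)

# Flow-matching training loss: mean squared error between the
# predicted and target velocities at the interpolated point.
loss = np.mean((velocity_net(x_t, t) - target_velocity) ** 2)
```

At sampling time, one would integrate the learned velocity field from noise at t=0 to a structure at t=1, e.g. with a simple Euler scheme.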
Mathias Niepert retweeted
ICLR 2026@iclr_conf·
We’ve received A LOT OF submissions this year 🤯🤯 and are excited to see so much interest! To ensure high-quality review, we are looking for more dedicated reviewers. If you'd like to help, please sign up here docs.google.com/forms/d/e/1FAI…
Mathias Niepert retweeted
Petar Veličković@PetarV_93·
The @EEMLcommunity is coming to Podgorica 🇲🇪 on 8 November! Mark your calendars 🚀 Beyond excited to share that we're organising the Montenegrin ML Workshop (MMLW'25), part of EEML Workshop Series, together with @aisocietyme ❤️ (Free) registration required -- please see below!