Eike Eberhard
@ESEberhard

39 posts

PhD student focused on ML methods for quantum chemistry

DAML Lab @ TUMunich · Joined October 2024
92 Following · 48 Followers
Pinned Tweet
Eike Eberhard @ESEberhard
While Nicholas is switching gears, I am switching fields from Physics to Machine Learning. This started out as a curious, open-ended project, but it turns out that GNNs might be able to advance DFT, too. I can't wait to find out where this line of research might lead us.
Nicholas Gao @n_gao96

Switching gears from QMC to DFT for this one. I'm excited to share our newest work, where we learn the non-local exchange-correlation functional in KS-DFT with equivariant graph neural networks! Joint work w/@ESEberhard, @guennemann 📝 arxiv.org/abs/2410.07972

0 replies · 1 repost · 10 likes · 858 views
Eike Eberhard reposted
Amine Ketata @amine_ketata
Excited to share that my first PhD paper, which introduces a new diffusion model for relational databases, has been accepted to #NeurIPS2025! We will be presenting it this week in San Diego. ☀️🌴 Joint work with @ludke_david, @SchwinnLeo, and @guennemann. 🧵 1/
1 reply · 8 reposts · 12 likes · 1.5K views
Eike Eberhard reposted
Filippo Guerranti @ NeurIPS25
Heading to San Diego for #NeurIPS2025! 🌴☀️ I’ll be presenting 3 recent papers covering generative models for hierarchies, spatiotemporal tissue dynamics, and long-range graph learning. If you're around, drop by and say hi! 👋 Here’s the schedule 🧵👇
1 reply · 13 reposts · 22 likes · 1.3K views
Eike Eberhard reposted
Tim Tomov @timtomov
Can we actually tell when LLMs know or don’t know? For questions with a single answer, that works. But once ambiguity enters and several answers are correct, current methods collapse, confusing model uncertainty with data uncertainty. w/ @dfuchsgruber @TomWollschlager @guennemann [1/4]🧵
2 replies · 11 reposts · 19 likes · 1.5K views
Eike Eberhard reposted
Leo Schwinn @SchwinnLeo
We can sample highly transferable jailbreaks directly from non-autoregressive models (e.g., Diffusion LLMs) without any optimization!
David Lüdke @ludke_david

Open-source Diffusion LLMs easily break GPT-5! In “Diffusion LLMs are Natural Adversaries for any LLM,” we show that Inpainting on Diffusion LLMs yields efficient, transferable jailbreaks without any model access. @TomWollschlager, Paul Ungermann, @guennemann, @leoschwinn 🧵

0 replies · 5 reposts · 10 likes · 793 views
Eike Eberhard reposted
David Lüdke @ludke_david
Open-source Diffusion LLMs easily break GPT-5! In “Diffusion LLMs are Natural Adversaries for any LLM,” we show that Inpainting on Diffusion LLMs yields efficient, transferable jailbreaks without any model access. @TomWollschlager, Paul Ungermann, @guennemann, @leoschwinn 🧵
1 reply · 15 reposts · 18 likes · 2.1K views
Eike Eberhard reposted
NiklasKemper @kemper_ni
The hunt for increased WL expressivity has led to many new GNNs but limited real-world success. So what are we missing? Can we find a better objective? We answer these questions in our new paper: arxiv.org/abs/2509.01254 Joint work w/ @TomWollschlager @guennemann 🧵 (1/6)
1 reply · 9 reposts · 18 likes · 934 views
Eike Eberhard reposted
Simon Geisler @geisler_si
Super excited to be at #ICML2025 in Vancouver this week! 🇨🇦 Thrilled to be presenting new work and soaking in all the amazing research. This also marks my first conference trip since joining @GoogleResearch a couple of months ago! #LLMs #AIResearch
1 reply · 7 reposts · 15 likes · 471 views
Eike Eberhard reposted
Jan Schuchardt @SchuchardtJan
How private is DP-SGD for self-supervised training on sequences? Our #ICML2025 spotlight shows that it can be very private—if you parameterize it right! 📜arxiv.org/abs/2502.02410 #icml Joint work w/ M. Dalirrooyfard, J. Guzelkabaagac, A. Schneider, Y. Nevmyvaka, @guennemann 1/6
2 replies · 13 reposts · 18 likes · 1.1K views
Eike Eberhard reposted
Rianne van den Berg @vdbergrianne
🚀 After two+ years of intense research, we’re thrilled to introduce Skala — a scalable deep learning density functional that hits chemical accuracy on atomization energies and matches hybrid-level accuracy on main group chemistry — all at the cost of semi-local DFT. ⚛️🔥🧪🧬
5 replies · 61 reposts · 291 likes · 33K views
Eike Eberhard reposted
Tom Wollschläger @TomWollschlager
How do LLMs navigate refusal? Our new @ICMLConf paper introduces a gradient-based approach & Representational Independence to map this complex internal geometry. 🚨 New Research Thread! 🚨 The Geometry of Refusal in Large Language Models By @guennemann's lab & @GoogleAI. 🧵👇
1 reply · 12 reposts · 29 likes · 2.1K views
Eike Eberhard reposted
Tom Wollschläger @TomWollschlager
3️⃣ Key Question ❓ Is refusal behavior governed by a single vector, or do multiple independent mechanisms exist? We introduce a novel gradient-based method to extract refusal-mediating directions more effectively! 🎯
1 reply · 1 repost · 7 likes · 111 views
Eike Eberhard reposted
Ricky T. Q. Chen @RickyTQChen
This ICLR is the best conference ever. Attendees are extremely friendly and cuddly. ..What do you mean this is the wrong hall?
11 replies · 27 reposts · 405 likes · 27.7K views
Eike Eberhard @ESEberhard
If you are attending #ICLR2025 and are interested in electronic structure modelling / quantum chemistry, come by our poster on learnable non-local XC-functionals to discuss with @n_gao96 and me. 🗓️ Today | 3:00 pm – 5:30 pm 📍 Hall 3 | Poster #3
0 replies · 7 reposts · 16 likes · 1.1K views
Eike Eberhard reposted
Lukas Gosch @lukgosch
Excited to announce our #ICLR2025 spotlight work deriving the first exact certificates for neural networks against label poisoning 🎉. Joint work w/ @maha81193, @guennemann & Debarghya. For more details check out the thread below👇 or check out our paper arxiv.org/abs/2412.00537.
Mahalakshmi Sabanayagam @maha81193

🎉Excited to announce our #ICLR2025 Spotlight! 🚀@lukgosch and I will be presenting our paper on the first exact certificate against label poisoning for neural nets and graph neural nets. Joint work with @guennemann and Debarghya 👇[1/6]

0 replies · 14 reposts · 22 likes · 1.1K views