Katharina Limbeck

12 posts

@LimbeckKat

PhD Student at Helmholtz Munich

Munich · Joined July 2022
72 Following · 76 Followers
Katharina Limbeck retweeted
Bastian Grossenbacher-Rieck @Pseudomanifold
And it just works! Our pooling methods perform well across tasks and… 🏆 …reach top classification and regression performance. 🔥 …retain this robust performance across pooling ratios. ✨ …preserve graph structure and spectral properties 🧵4/n
Katharina Limbeck retweeted
Bastian Grossenbacher-Rieck @Pseudomanifold
But…how? 🔍 We contract the most redundant edges that are least relevant for the graph’s structural diversity as measured by the magnitude or spread of a graph. 🧵3/n
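The contraction criterion described in this tweet can be illustrated with a toy sketch. The code below is a hypothetical illustration of the general idea, not the authors' MagEdgePool implementation: it computes a graph's magnitude from shortest-path distances and greedily contracts the edge whose contraction changes magnitude, and hence structural diversity, the least. All function names here are mine.

```python
# Toy sketch of magnitude-guided edge contraction (hypothetical; NOT the
# authors' MagEdgePool code). Contract the edge whose removal changes the
# graph's magnitude, i.e. its structural diversity, the least.
import numpy as np

def shortest_paths(n, edges):
    """All-pairs shortest paths via Floyd-Warshall (unit edge weights)."""
    d = np.full((n, n), np.inf)
    np.fill_diagonal(d, 0.0)
    for u, v in edges:
        d[u, v] = d[v, u] = 1.0
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

def graph_magnitude(n, edges, t=1.0):
    """Magnitude at scale t: sum of the entries of the inverse similarity matrix."""
    Z = np.exp(-t * shortest_paths(n, edges))
    return float(np.linalg.inv(Z).sum())

def contract(n, edges, edge):
    """Merge the higher-numbered endpoint into the lower one; relabel to 0..n-2."""
    u, v = sorted(edge)
    def relabel(x):
        x = u if x == v else x          # merge v into u
        return x - 1 if x > v else x    # close the gap left by v
    new_edges = {tuple(sorted((relabel(a), relabel(b))))
                 for a, b in edges if relabel(a) != relabel(b)}
    return n - 1, new_edges

def pool_step(n, edges):
    """Contract the single most 'redundant' edge under the magnitude criterion."""
    base = graph_magnitude(n, edges)
    best = min(edges,
               key=lambda e: abs(base - graph_magnitude(*contract(n, edges, e))))
    return contract(n, edges, best)

# Pool a 6-cycle once: contracting any edge of a cycle yields a 5-cycle.
cycle6 = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)}
n_pooled, edges_pooled = pool_step(6, cycle6)
```

Repeating `pool_step` until a target pooling ratio is reached gives a coarsened graph; the interpretability claim in the thread comes from each contraction being an explicit, inspectable choice of edge.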
Katharina Limbeck retweeted
Bastian Grossenbacher-Rieck @Pseudomanifold
Why do we need structure-aware pooling, anyway? 🔮Our methods, MagEdgePool and SpreadEdgePool, faithfully preserve the original graphs’ geometry. Alternative pooling layers destroy graph structure to varying extents. 🧵2/n
Katharina Limbeck retweeted
Bastian Grossenbacher-Rieck @Pseudomanifold
Big graph, little compute? In our #NeurIPS2025 paper we propose geometry-aware edge-contraction-based pooling methods for GNNs. Our methods preserve graph structure, make interpretable pooling choices, and ensure robust performance on downstream tasks. 🧵1/n
Katharina Limbeck retweeted
Emily Simons @simons_emilym
Hi!👋Emily Simons here, @FulbrightPrgrm Student Researcher with @HelmholtzMunich's AIDOS Lab. Happy to be sharing my first contribution to AIDOS in this 🧵. Say hiya to SCOTT, the perfect holiday (software) package🎁for the #curvature and #graph enthusiast in your life. 🧵1/n
Katharina Limbeck retweeted
Bastian Grossenbacher-Rieck @Pseudomanifold
Coming up today at #NeurIPS2024! Poster #3303 in the East Exhibit Hall from 16:30 to 19:30 EST! My awesome student @LimbeckKat will present her great work while I'll try to ignore the ML FOMO at home 😅
Bastian Grossenbacher-Rieck @Pseudomanifold

Coming up soon at #NeurIPS2024, combining #geometry and #topology: “Metric Space Magnitude for Evaluating the Diversity of Latent Representations”

❓ Ever wondered how to best assess the diversity of model outputs, or how to choose between models?

👉 We address these questions by measuring the diversity of generative models, assessing LLMs, and analysing latent spaces. Embeddings let us intuitively understand similarities between, e.g., generated graphs, images, or sentences, and we leverage this knowledge to automatically measure diversity.

🤔 But how to best measure diversity? From a mathematical perspective, diversity measures should fulfil theoretical guarantees. However, we find that many baseline diversity measures currently used for assessing latent spaces and generative models do not fulfil these necessary requirements.

🔍 We leverage the magnitude of metric spaces to define a family of novel, multi-scale diversity measures. Magnitude is a powerful geometric descriptor that measures diversity as the effective number of distinct points or clusters across varying scales of distance. By summarising magnitude across multiple resolutions, we define theoretically well-founded measures of intrinsic diversity and of differences in diversity.

🚀 Experiments validate our proposed diversity measures. When evaluating LLMs, our magnitude-based measures characterise text embedding models by their diversity and best capture the ground-truth diversity of generated sentences. Further, when evaluating image and graph generative models, magnitude reliably detects mode collapse and mode dropping, outperforming alternative evaluation metrics.

🔮 This work improves the diversity evaluation of latent spaces by leveraging magnitude as a theoretically motivated measure of diversity and geometry. It paves the way for better benchmarking practices in generative model evaluation and for further exploring the expressivity of magnitude in ML applications.

Check out our paper or code to learn more!
📜 Paper: arxiv.org/abs/2311.16054
💻 Code: github.com/aidos-lab/magn…
📹 Video: youtube.com/watch?v=sZBP52…
👏 Joint work w/ @LimbeckKat, @randreeva1 and @RikSarkarNet.
📢 Poster: #NeurIPS2024, Vancouver, Friday 13th December, 16:30

Katharina Limbeck @LimbeckKat
@artuursberzins @Pseudomanifold Thank you for your interest in magnitude and our work! I agree that maximising diversity is a fascinating extension, and I’d be excited to discuss this further!
Arturs Berzins @artuursberzins
@Pseudomanifold Great work! I just went through the book by Leinster and learned about magnitude; I didn't expect to see it at NeurIPS already! In our work, we explicitly maximize the diversity of a generative model. It would be exciting to exchange some insights. 🤔 @LimbeckKat
Bastian Grossenbacher-Rieck @Pseudomanifold
Coming up soon at #NeurIPS2024, combining #geometry and #topology: “Metric Space Magnitude for Evaluating the Diversity of Latent Representations” […]
Katharina Limbeck @LimbeckKat
🎉 Excited to share the first accepted paper of my PhD! 🔍 Our work explores latent spaces and demonstrates the utility of metric space magnitude for evaluating diversity. 💫 I’m grateful to everyone who made this possible and look forward to connecting at #NeurIPS2024!
Bastian Grossenbacher-Rieck @Pseudomanifold

Coming up soon at #NeurIPS2024, combining #geometry and #topology: “Metric Space Magnitude for Evaluating the Diversity of Latent Representations” […]

Katharina Limbeck @LimbeckKat
In my first Tweet, I’m excited to share my first 👥collaboration with @randreeva1, @Pseudomanifold and @RikSarkarNet. Check out our 📝preprint to 🔮find out how magnitude relates to generalisation in neural networks! @helmholtz_ai @MunichDS @InfAtEd 📜 arxiv.org/abs/2305.05611
Rayna Andreeva @randreeva1

🚨So excited to share our first paper in collaboration with @Pseudomanifold, @LimbeckKat and @RikSarkarNet, in which we connect the "exotic" 🦄invariant magnitude with the generalisation error in neural networks! @BioMedAI_CDT @HelmholtzMunich 📜➡️arxiv.org/abs/2305.05611
