Davide Buffelli

90 posts

@DBuffelli

AI Researcher at MediaTek Research | PhD from @UniPadova, prev. intern at Meta AI, Samsung AI | Graph Representation Learning | Graph Neural Networks.

Joined September 2019
323 Following · 180 Followers
Davide Buffelli retweeted
Dobrik Georgiev @DobrikG
I will be presenting our paper on deep equilibrium algorithmic reasoning (poster session 3). Feel free to swing by, say hi, have a coffee break with me between sessions, or suggest good locations for a photo! #NAR #NeurIPS2024 #NeurIPS
Davide Buffelli @DBuffelli
I'll be at NeurIPS next week presenting two papers (on the generalization of 2nd-order methods, and on a new model for Neural Algorithmic Reasoning). Feel free to reach out if you'd like to chat about optimization, graphs, foundation models for time series, or AI for chip design.
Davide Buffelli retweeted
Petar Veličković @PetarV_93
In < 1h, we will be taking the LoG stage for the new and improved version of our NAR tutorial 🔢! Hope you can join us -- it is open for all, publicly streamed on YouTube, and will feature a fun discussion of great reasoning research over both graphs 🕸️ and language 💬!
Learning on Graphs Conference 2025 @LogConference

🌟 Day 4 of LoG 2024 – The Grand Finale!
🎙️ Keynote: @AldenHung (@IsomorphicLabs)
💻 Tutorial: Neural Algorithmic Reasoning II: From Graphs to Language
🏆 Oral Presentations: TMLR Track
Join us for an exciting final day of insights and innovation! 🚀

Davide Buffelli retweeted
Bastian Grossenbacher-Rieck @Pseudomanifold
📢 Exciting News! Our paper, “Bayesian Computation Meets Topology,” has just been published in TMLR! 🎉
👉 openreview.net/forum?id=0h1Dt… 👈
Here’s a deep dive into how #topology and #bayes(ian) computation come together to enhance parameter inference:
📌 Why Topology? Topology provides powerful descriptors, like persistent homology, that capture the “shape” of data across scales. Our approach leverages these topological features in Bayesian inference, which is crucial for scenarios where data is scarce or structurally complex.
📎 The Gap We Address: Topological methods are robust for capturing the overall shape of data but don’t integrate into Bayesian frameworks in a straightforward fashion. We’ve developed a method that fills this gap, incorporating topology-based loss functions for Bayesian parameter estimation.
🛠️ Core Methodology: Our framework uses topological loss functions based on persistent homology for inference, enabling us to construct a comparison-based posterior. This allows for uncertainty quantification without needing explicit likelihoods, which is essential for chaotic systems where the likelihood is analytically intractable.
🚀 Applications & Experiments: We validate our approach on models with inherent complexity, such as the Vicsek swarm model and Lattice Boltzmann simulations. In these cases, topology-driven Bayesian inference produced significantly more accurate parameter estimates than standard geometry-based ones.
🧩 Implications: This work brings Bayesian computation and topology closer together, paving the way for robust, simulation-based inference. Applications extend to fields like biology and physics, where data often exhibit complex, multi-scale structures that benefit from this topologically informed approach.
👀 Check It Out!
📜 openreview.net/pdf?id=0h1DtRK…
💻 github.com/aidos-lab/TABAC
Joint work with @JRohrscheidt and @SeBayesian!
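[Editor's sketch] To make the "comparison-based posterior" above concrete, here is a minimal, illustrative sketch in an approximate-Bayesian-computation style. It assumes the gudhi library; `simulate` and `theta_prior` are hypothetical stand-ins for a user-supplied simulator and prior, and this is a reading of the idea rather than the paper's exact method.

```python
# Minimal ABC-style sketch of topology-driven, likelihood-free inference.
import numpy as np
import gudhi

def persistence_diagram(points, max_edge=2.0, dim=1):
    """Dimension-`dim` persistence intervals of a Vietoris-Rips filtration."""
    rips = gudhi.RipsComplex(points=points, max_edge_length=max_edge)
    st = rips.create_simplex_tree(max_dimension=dim + 1)
    st.compute_persistence()
    return st.persistence_intervals_in_dimension(dim)

def topological_loss(diag_a, diag_b):
    """Compare two datasets via the bottleneck distance of their diagrams."""
    return gudhi.bottleneck_distance(diag_a, diag_b)

def abc_posterior(observed, simulate, theta_prior, n_draws=1000, eps=0.1):
    """Keep parameter draws whose simulations are topologically close to
    the observation; no explicit likelihood is ever evaluated."""
    obs_diag = persistence_diagram(observed)
    accepted = []
    for _ in range(n_draws):
        theta = theta_prior()                        # draw from the prior
        diag = persistence_diagram(simulate(theta))  # forward-simulate
        if topological_loss(diag, obs_diag) < eps:   # topological loss
            accepted.append(theta)
    return np.array(accepted)  # samples from the comparison-based posterior
```

Because acceptance depends only on a topological distance between simulated and observed data, the likelihood never needs to be evaluated, which is what makes this style of inference viable for chaotic systems.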
Davide Buffelli @DBuffelli
If you're curious, you can check out the preprint to read how we take advantage of persistent homology to provide higher-order information to GNNs: arxiv.org/abs/2409.08217
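[Editor's sketch] For a rough sense of the idea, here is a generic sketch (not necessarily the paper's exact pipeline) of extracting persistent-homology features from a graph's max-clique graph and appending them to a GNN readout. It assumes networkx and gudhi, and the degree-based filtration is an illustrative choice.

```python
# Persistent-homology features on a graph's max-clique graph (illustrative).
import networkx as nx
import numpy as np
import gudhi

def clique_graph_ph_features(G, max_dim=1):
    """Summarize a degree-based filtration of the max-clique graph of G."""
    C = nx.make_max_clique_graph(G)        # nodes = maximal cliques of G
    deg = dict(C.degree())
    maxf = max((float(d) for d in deg.values()), default=0.0)
    st = gudhi.SimplexTree()
    for v in C.nodes():
        st.insert([v], filtration=float(deg[v]))
    for u, v in C.edges():
        st.insert([u, v], filtration=float(max(deg[u], deg[v])))
    st.compute_persistence()
    feats = []
    for d in range(max_dim + 1):
        bars = st.persistence_intervals_in_dimension(d)
        if len(bars) == 0:
            feats.append(0.0)
            continue
        deaths = np.minimum(bars[:, 1], maxf)  # cap infinite bars
        feats.append(float(np.clip(deaths - bars[:, 0], 0.0, None).sum()))
    return np.array(feats)                     # one summary per dimension

# These features can then be concatenated with a learned graph embedding,
# e.g. h = concat(gnn_readout, clique_graph_ph_features(G)).
```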
Davide Buffelli @DBuffelli
Very happy that our paper "CliquePH: Higher-Order Information for Graph Neural Networks through Persistent Homology on Clique Graphs" has been accepted at @LogConference! A huge thanks to my co-authors @csfarzin @Pseudomanifold
Davide Buffelli @DBuffelli
I can't wait to share the papers soon! [6/6]
Davide Buffelli @DBuffelli
In this paper we introduce DEAR, a new equilibrium-based way to decide termination, during both training and inference, in the setting of neural algorithmic reasoning. We provide a theoretical motivation for DEAR and show that it leads to strong empirical performance. [5/6]
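[Editor's sketch] As a rough illustration of equilibrium-based termination (the general deep-equilibrium idea, not DEAR's exact formulation), the sketch below iterates a processor network until its latent state reaches a fixed point instead of unrolling a fixed number of steps; `processor` is a hypothetical stand-in for any learned step function.

```python
# Equilibrium-based termination: stop when the latent state stops changing.
import torch

def run_to_equilibrium(processor, x, h0, tol=1e-4, max_steps=100):
    """Iterate h <- processor(x, h) until the update falls below `tol`."""
    h = h0
    for step in range(1, max_steps + 1):
        h_new = processor(x, h)
        # relative fixed-point criterion: the algorithm "terminates" here
        if torch.norm(h_new - h) <= tol * torch.norm(h).clamp_min(1e-12):
            return h_new, step
        h = h_new
    return h, max_steps   # fallback if the step budget is exhausted
```

At training time, deep-equilibrium-style models typically differentiate through the fixed point implicitly rather than backpropagating through the unrolled loop.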
Davide Buffelli @DBuffelli
I'm very excited to announce two papers accepted at NeurIPS 2024 🎉🎊 Super thankful to all my collaborators! More information in the thread. [1/6]
Davide Buffelli retweeted
Dobrik Georgiev @DobrikG
#neurips2024 will be very DEAR to me this year! Deep Equilibrium Algorithmic Reasoning has also been accepted at NeurIPS after its selection for an oral presentation at MAR@CVPR 2024. Special thanks to my collaborators @DBuffelli @pl219_Cambridge and, my lucky charm, @sonjj74!
Davide Buffelli @DBuffelli
I’ll also be on the job market for research scientist/postdoc positions from early 2023, and I’m happy to explore any opportunity.
Davide Buffelli @DBuffelli
Next week I’ll be at #NeurIPS presenting our paper on GNNs and size generalization ✌️ If you want to have a chat about GNNs, research, or TV series, feel free to DM me!
Davide Buffelli @DBuffelli
This work was the outcome of my internship at Samsung AI Research last year, working with Efthymia Tsamoura.
Davide Buffelli @DBuffelli
Have you ever wondered how we can use prior knowledge expressed in symbolic form to guide the training of deep learning models? Then check out our preprint here: arxiv.org/abs/2209.02749
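[Editor's sketch] As one common way to make symbolic knowledge differentiable (an illustrative sketch, not necessarily the paper's formulation), a logical rule such as "A implies B" can be relaxed with a product t-norm and added to the task loss as a penalty:

```python
# Symbolic rule "A -> B" as a differentiable regularizer (illustrative).
import torch
import torch.nn.functional as F

def implication_penalty(p_a, p_b):
    """Soft violation of A -> B: large when A is probable but B is not."""
    return (p_a * (1.0 - p_b)).mean()

def total_loss(logits_a, logits_b, targets_a, targets_b, lam=0.1):
    task = F.binary_cross_entropy_with_logits(logits_a, targets_a) \
         + F.binary_cross_entropy_with_logits(logits_b, targets_b)
    reg = implication_penalty(torch.sigmoid(logits_a),
                              torch.sigmoid(logits_b))
    return task + lam * reg  # `lam` trades off data fit vs. rule satisfaction
```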
Davide Buffelli @DBuffelli
Super excited that our paper "Scalable Theory-Driven Regularization of Scene Graph Generation Models" has been accepted at AAAI 2023!!🎉🎊
Davide Buffelli @DBuffelli
Thanks @brunofmr! That’s definitely an interesting direction to think about. Do you think it could make sense to define a causal model with variables representing spectral properties of the graphs (and their coarsened versions), given that many coarsening methods act on these?
Bruno Ribeiro @brunofmr
Congrats! Great work! Very interesting to see the gains. An interesting extension (workshop, journal) is to formally describe the implicit causal model assumption of coarsening. In the end, there is always an (ad hoc) causal mechanism, as the learning task is fundamentally causal.
Davide Buffelli @DBuffelli

I am very proud to announce that our paper "SizeShiftReg: a Regularization Method for Improving Size-Generalization in Graph Neural Networks" has been accepted at NeurIPS 2022! A huge thanks to my co-authors Pietro Liò and @vandinfa!
