Samy Badreddine

12 posts

@sbadredd

Research Scientist at SonyAI - PhD student at FBK. Interested in Neurosymbolic AI and Probabilistic ML. https://t.co/pQK2WIBhEd

Joined November 2023
37 Following · 22 Followers
Samy Badreddine retweeted
Emile van Krieken @EmilevanKrieken ·
And finally #3 🔨 Rank bottlenecks in KGEs: At Friday's "Salon des Refuses" I will present @sbadredd's new work on how rank bottlenecks limit knowledge graph embeddings arxiv.org/abs/2506.22271
Samy Badreddine retweeted
NeSy 2026 @nesyconf ·
@luislamb We're glad to announce the NeSy 2025 Test of Time award for "Probabilistic Inference Modulo Theories"! 🏆Rodrigo de Salvo Braz was here to accept the award. This is groundwork for recent NeSy approaches like DeepSeaProbLog and the probabilistic algebraic layer.
Samy Badreddine @sbadredd ·
🎉 Check out our IJCAI'25 paper on #Explainable link prediction in #KG 🔗 We use a path-based #RL model to generate explanations for predictions, rewarding the paths that explain best ✨ And we've got user studies 🧑‍🔬 Great work led by Susana Nunes & @CPesquita 👏 👉 arxiv.org/pdf/2509.02276
Samy Badreddine @sbadredd ·
@orionweller In our work, we address this using a Mixture-of-Softmaxes output layer to score and rank concepts, breaking the bottleneck. We wonder if this could be a promising direction for the retrieval tasks you highlight as well. Great to see this parallel exploration! @EmilevanKrieken
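The Mixture-of-Softmaxes output layer mentioned in this reply can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the sizes (d, V, K), the per-component projections P, and the mixing-weight parameters pi_w are all hypothetical, chosen only to show how K softmaxes over shared concept embeddings are combined into a single distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: embedding dim d, number of concepts V, K mixture components.
d, V, K = 8, 50, 4
W = rng.normal(size=(V, d)) / np.sqrt(d)     # shared concept (output) embeddings
P = rng.normal(size=(K, d, d)) / np.sqrt(d)  # per-component context projections
pi_w = rng.normal(size=(K, d))               # mixing-weight parameters

def mos_probs(h):
    """Score all V concepts for context h with a mixture of K softmaxes."""
    pi = softmax(pi_w @ h)          # (K,) mixture weights, conditioned on h
    logits = (P @ h) @ W.T          # (K, V): one logit row per component
    return pi @ softmax(logits)     # (V,) convex mixture of K softmax distributions

probs = mos_probs(rng.normal(size=d))
assert probs.shape == (V,) and np.isclose(probs.sum(), 1.0)
```

Because the log of a convex mixture of softmaxes is no longer a low-rank function of the logits, the resulting score matrix is not capped at the embedding dimension the way a single dot-product softmax is.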
Orion Weller @orionweller ·
Instructions/reasoning are now everywhere in retrieval - we want embeddings to do it all! 🚀 But... is it even possible? 🤔 Turns out, it's not possible for single-vector models 😱 theoretically and empirically! To make it obvious we OSS a simple eval SoTA models flop on! 🧵
Samy Badreddine @sbadredd ·
@deedydas In our work, we address this by adapting a Mixture-of-Softmaxes output layer to score concepts and break the bottleneck. We wonder if this could be a promising direction for the retrieval tasks you highlight as well. Great to see this parallel exploration! @EmilevanKrieken
Samy Badreddine @sbadredd ·
@deedydas Insightful! We tackled the same problem in Knowledge Graph Completion. Dot-product scoring on low-dim embeddings severely limits what a model can predict. We call this a “rank bottleneck” to align with the existing LM literature. Our paper for context: arxiv.org/abs/2506.22271
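The "rank bottleneck" claim above has a short numerical illustration: any dot-product scorer factors through the d-dimensional embeddings, so the matrix of scores over all queries and candidate entities can never have rank above d, no matter how the model is trained. When d is much smaller than the number of entities, many score patterns (and hence many rankings) are simply unreachable. A minimal NumPy sketch, with hypothetical sizes N and d:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N candidate entities, embedding dim d << N.
N, d = 100, 8
E = rng.normal(size=(N, d))    # entity embeddings
Q = rng.normal(size=(20, d))   # 20 query embeddings (e.g. head + relation)

# The score matrix factors as Q @ E.T, so rank(S) <= d regardless of training.
S = Q @ E.T                    # (20, N) dot-product scores
assert np.linalg.matrix_rank(S) <= d
```

In the extreme case d = 1, every query scores entity i as a scalar multiple of e_i, so only the entity order induced by E (or its reverse) is ever producible as a ranking.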
Deedy @deedydas ·
This new DeepMind research shows just how broken vector search is. Turns out some docs in your index are theoretically incapable of being retrieved by vector search, given a certain dimension count of the embedding. Plain old BM25 from 1994 outperforms it on recall. 1/4