Aleksandar Bojchevski
@abojchevski
1.1K posts

Trustworthy Machine Learning. Graphs. Professor at the University of Cologne. He/Him. 🏳️‍🌈 Open PhD/PostDoc positions: https://t.co/QSCqXRzlEu

Germany · Joined August 2009
1.5K Following · 1.1K Followers
Aleksandar Bojchevski retweeted
Alireza Javanmardi @AlirezaJVNMRDI
1/5 Ever wondered how to apply conformal prediction when there's epistemic uncertainty? Our new paper addresses this question! CP can benefit from models like Bayesian, evidential, and credal predictors to produce better prediction sets, for instance in terms of conditional coverage.
Statistics (Machine Learning) Papers @StatsPapers

Optimal Conformal Prediction under Epistemic Uncertainty. arxiv.org/abs/2505.19033

Aleksandar Bojchevski retweeted
Soroush H. Zargarbashi @zargar_soroush
🚨 Robust conformal prediction is expensive: we need around 10,000 forward passes per input. Or is it? Check out our ICLR 2025 paper: openreview.net/forum?id=ltrxR… We extend conformal sets to worst-case noise under any smoothing, with far fewer samples. Joint work with @abojchevski
Aleksandar Bojchevski retweeted
Ruiqi Gao @RuiqiGao
A common question nowadays: Which is better, diffusion or flow matching? 🤔 Our answer: They’re two sides of the same coin. We wrote a blog post to show how diffusion models and Gaussian flow matching are equivalent. That’s great: It means you can use them interchangeably.
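The equivalence claimed above can be seen already at the level of the forward (noising) process. A minimal numpy sketch (my illustration, not taken from the blog post): the Gaussian flow matching interpolant x_t = (1−t)·x₀ + t·ε has exactly the marginal of a diffusion forward process N(a_t·x₀, σ_t²·I) under the schedule a_t = 1−t, σ_t = t — the two constructions differ only in how the schedule is parametrized.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=100_000)    # stand-in "data" samples
eps = rng.normal(size=100_000)   # standard Gaussian noise
t = 0.3

# Gaussian flow matching: linear interpolation between data and noise.
x_fm = (1 - t) * x0 + t * eps

# Diffusion forward process with schedule a_t = 1 - t, sigma_t = t:
# q(x_t | x0) = N(a_t * x0, sigma_t^2 * I) — the same marginal.
x_diff = (1 - t) * x0 + t * rng.normal(size=100_000)

# Both samples share the same mean (≈ 0) and the same standard
# deviation (≈ sqrt((1-t)^2 + t^2)), up to Monte Carlo error.
print(x_fm.std(), x_diff.std())
```

Sampling under either view then amounts to integrating the corresponding reverse process; the schedules translate into one another, which is the sense in which the two can be used interchangeably.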
Martin Mundt @mundt_martin
🎉 Exciting major personal news 🎉 I'll be joining the University of Bremen as a full (tenured) professor for Lifelong Machine Learning in January '25! The focus will be on researching adaptive, sustainable & inclusive AI. Applications for 2 new PhD students will open next week!
Aleksandar Bojchevski retweeted
Mehrdad Farajtabar @MFarajtabar
1/ Can Large Language Models (LLMs) truly reason? Or are they just sophisticated pattern matchers? In our latest preprint, we explore this key question through a large-scale study of both open-source models like Llama, Phi, Gemma, and Mistral, and leading closed models, including the recent OpenAI GPT-4o and o1 series. arxiv.org/pdf/2410.05229 Work done with @i_mirzadeh, @KeivanAlizadeh2, Hooman Shahrokhi, Samy Bengio, @OncelTuzel. #LLM #Reasoning #Mathematics #AGI #Research #Apple
Aleksandar Bojchevski retweeted
Stanislav Fort @stanislavfort
✨🎨🏰 Super excited to share our new paper, Ensemble everything everywhere: Multi-scale aggregation for adversarial robustness. Inspired by biology, we 1) get adversarial robustness + interpretability for free, 2) turn classifiers into generators, & 3) design attacks on vLLMs. 1/12
Aleksandar Bojchevski retweeted
Soroush H. Zargarbashi @zargar_soroush
#ICML2024 With conformal prediction (CP) we return sets that include the true class with guaranteed high probability. With robust CP we maintain this probability even for worst-case (adversarial) noisy input. With CAS (our method) we get robust CP with efficient (small) sets.
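The split conformal recipe behind prediction sets like these is short enough to sketch. A minimal version (standard split CP with a finite-sample-corrected quantile — not the CAS method from the paper, and `conformal_set` is a hypothetical name): calibrate a score threshold on held-out data, then keep every candidate label whose nonconformity score clears it.

```python
import numpy as np

def conformal_set(cal_scores, test_scores, alpha=0.1):
    """Split conformal prediction: return the indices of all labels whose
    nonconformity score is below the calibrated threshold. The resulting
    set contains the true class with probability >= 1 - alpha (marginally).

    cal_scores:  (n,) scores of the true labels on a held-out
                 calibration set (e.g. 1 - softmax probability).
    test_scores: (k,) score of each candidate label for one test input.
    """
    n = len(cal_scores)
    # Finite-sample correction: use the ceil((n+1)(1-alpha))/n quantile.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(cal_scores, min(q_level, 1.0), method="higher")
    # Keep every label whose score does not exceed the threshold.
    return np.where(test_scores <= q_hat)[0]
```

Robust CP variants like the one in the tweet replace the plain score with a worst-case score over the allowed input perturbations, which is where the extra forward passes (or smarter bounds) come in.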
Aleksandar Bojchevski retweeted
Vijay Lingam @vijaylingam08
🤔 Ever wondered how vulnerable Graph Neural Networks (GNNs) really are? 🚀 Check out our slightly late, but crucial, #ICLR2024 paper: "Rethinking Label Poisoning for GNNs: Pitfalls and Attacks"! 🔥
Aleksandar Bojchevski retweeted
Mateo Díaz @mateodd25
This paper made me smile a lot while working on it, so I want to share a bit about it arxiv.org/abs/2405.09676. We draw a parallel story to the Eckart-Young Theorem (from numerical analysis) in stochastic optimization/learning problems. (with Josh Cutler and Dima Drusvyatskiy)
Aleksandar Bojchevski retweeted
Brendan Dolan-Gavitt
Another entry in a long-running series where Nicholas Carlini breaks ML defenses published at top security conferences with as little effort as possible (in this case a one line bugfix in the eval)
Aleksandar Bojchevski retweeted
Simon Geisler @geisler_si
We introduce Spatio-Spectral GNNs (S²GNNs) – an effective modeling paradigm via the synergy of spatially and spectrally parametrized graph convolutions. S²GNNs generalize the spatial + FFT convolutions of State Space Models like H3/Hyena. Joint work w/ @ArthurK48147 @dan1elherbst @guennemann
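To make the spatial-vs-spectral distinction concrete, here is a toy layer that sums one normalized message-passing step with one spectral filter applied through the Laplacian eigendecomposition. This is an illustrative sketch only, not the paper's S²GNN architecture; `s2_layer` and the filter `g` are hypothetical names.

```python
import numpy as np

def s2_layer(A, X, g):
    """Toy spatio-spectral layer: spatial one-hop aggregation plus a
    spectral filter g(lambda) on the normalized Laplacian's spectrum.

    A: (n, n) symmetric adjacency matrix.
    X: (n, d) node features.
    g: callable mapping Laplacian eigenvalues to filter responses.
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    A_norm = D_inv_sqrt @ A @ D_inv_sqrt
    # Spatial part: normalized one-hop message passing.
    spatial = A_norm @ X
    # Spectral part: filter applied to the symmetric normalized Laplacian.
    L = np.eye(n) - A_norm
    lam, U = np.linalg.eigh(L)
    spectral = U @ np.diag(g(lam)) @ U.T @ X
    return spatial + spectral
```

The spatial term only mixes neighbors, while the spectral term is a global operation shaped by g — e.g. a low-pass choice like g(λ) = exp(−λ) smooths signals across the whole graph in one layer.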
Aleksandar Bojchevski retweeted
Arvind Narayanan @random_walker
On tasks like coding we can keep increasing accuracy by indefinitely increasing inference compute, so leaderboards are meaningless. The HumanEval accuracy-cost Pareto curve is entirely zero-shot models + our dead simple baseline agents. New research w @sayashk @benediktstroebl 🧵