Samuel Horváth

39 posts

@sam_hrvth

Assistant professor of Machine Learning @mbzuai, Ph.D. @KAUST_News, former intern @metaAI, @samsungresearch, and @amazon. Co-organizer of @flow_seminar.

Abu Dhabi, UAE · Joined April 2022
336 Following · 313 Followers
Samuel Horváth retweeted
Steve Laskaridis @stevelaskaridis
📢 Announcing the AdaptFM Workshop @icmlconf As foundation models grow in scale and ubiquity, the ability to adapt inference dynamically to the task and available resources becomes critical. Submission deadline: May 1, 2026 (AoE) 📍Seoul, South Korea 🌐adaptfm.gitlab.io
Samuel Horváth retweeted
Peter Richtarik @peter_richtarik
I am pleased to announce that, together with three of my (then present, now former) KAUST PhD students:
- Samuel Horváth @sam_hrvth (now Assistant Professor at MBZUAI, Abu Dhabi),
- Dmitry Kovalev @dakovalev1 (now Research Scientist at Yandex, Moscow),
- Konstantin Mishchenko @konstmish (now Research Scientist at Meta, Paris),
and Sebastian U. Stich @SebastianUStich (now Faculty at CISPA, Saarbrücken, who was visiting us at KAUST at the time), we have won the 2023 Charles Broyden Prize for the paper "Stochastic distributed learning with gradient quantization and double-variance reduction".

The paper: tandfonline.com/doi/full/10.10…
Previous prizes: tandfonline.com/journals/goms2…

The Charles Broyden Prize is an annual international award honoring the best paper published in the journal Optimization Methods and Software (OMS) during the preceding year. Established in 2009, the prize commemorates the life and work of British mathematician Charles George Broyden (1933–2011), a pioneer in numerical optimization known for his namesake methods and his role in the development of the BFGS algorithm.

I consider this prize to be shared with the authors of the original DIANA paper (Konstantin Mishchenko @konstmish, Eduard Gorbunov @ed_gorbunov, Martin Takáč @TakacMartin, and P.R., "Distributed learning with compressed gradient differences", arxiv.org/abs/1901.09269), which was the inspiration for this follow-up work that polished, generalized, and improved it; with the authors of the earlier SEGA paper (Filip Hanzely, Konstantin Mishchenko @konstmish, and P.R., "SEGA: Variance reduction via gradient sketching", Advances in Neural Information Processing Systems 31:2082–2093, 2018), which can be seen as a single-node variant of DIANA; and with the authors of the JacSketch paper that inspired SEGA (Robert M. Gower @gowerrobert, P.R., and Francis Bach @BachFrancis, "Stochastic quasi-gradient methods: Variance reduction via Jacobian sketching", Mathematical Programming 188:135–192, 2021, arxiv.org/abs/1805.02632).

One could keep going like this, since every new discovery builds on prior work, but I'll stop here. So, once again, congrats to the authors of the award-winning paper, as well as to the authors of all these prior works!
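The award-winning line of work builds on DIANA's compressed-gradient-differences idea: each worker sends only a compressed difference between its gradient and a locally maintained shift, so communication stays cheap while the compression error shrinks over time. The sketch below is a rough illustration, not the paper's exact algorithm; the Rand-K compressor, step sizes, and toy quadratic are illustrative assumptions.

```python
import numpy as np

def rand_k(v, k, rng):
    """Rand-K sparsifier: keep k random coordinates, rescale by d/k so the
    compressor is unbiased (E[Q(v)] = v)."""
    d = v.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)
    return out

def diana_step(x, grads, shifts, lr, alpha, k, rng):
    """One DIANA-style round: each worker transmits only the compressed
    difference between its gradient and its shift; the shifts learn the
    stationary gradients, so the variance of the update is reduced."""
    msgs = [rand_k(g - h, k, rng) for g, h in zip(grads, shifts)]
    g_hat = np.mean(shifts, axis=0) + np.mean(msgs, axis=0)  # unbiased estimate
    new_shifts = [h + alpha * m for h, m in zip(shifts, msgs)]
    return x - lr * g_hat, new_shifts

# Toy problem: worker i holds f_i(x) = 0.5 * ||x - a_i||^2, so the
# minimizer of the average objective is the mean of the a_i.
rng = np.random.default_rng(0)
a = [np.array([1.0, -2.0]), np.array([3.0, 4.0])]
x_star = np.mean(a, axis=0)
x = np.zeros(2)
shifts = [np.zeros(2) for _ in a]
for _ in range(500):
    grads = [x - ai for ai in a]
    x, shifts = diana_step(x, grads, shifts, lr=0.1, alpha=0.5, k=1, rng=rng)
```

Despite each worker sending a single coordinate per round, the iterate converges to the exact minimizer, which is the variance-reduction effect the prize-winning paper extends with quantization and double variance reduction.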
Samuel Horváth retweeted
Peter Richtarik @peter_richtarik
I am organizing "Workshop on Distributed Training in the Era of Large Models" at KAUST during November 24-26, 2025. If you have done some cool work in the area, you might want to attend. The talks are invite-only. I'll soon start sending invites. More info later!
Samuel Horváth retweeted
Riccardo Zaccone @RickZack96
Do you feel FL research is stuck with methods that do not work well in realistic scenarios? 🤔 🫵We got you! Introducing 🚀Generalized Heavy-Ball Momentum (GHBM)🚀, accepted at TMLR: the FL algorithm with both SOTA theoretical guarantees and much better empirical results. 🧵1/9
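The tweet announces GHBM but does not give its update rule; for context, classical Polyak heavy-ball momentum, which GHBM generalizes to the federated setting, can be sketched as follows (the step size, momentum value, and toy objective below are illustrative assumptions, not the paper's):

```python
def heavy_ball(grad, x0, lr=0.05, beta=0.9, steps=300):
    """Polyak heavy-ball: x_{t+1} = x_t - lr * grad(x_t) + beta * (x_t - x_{t-1}).
    The momentum term beta * (x_t - x_{t-1}) keeps the iterate moving along
    persistent descent directions, accelerating convergence."""
    x_prev, x = x0, x0
    for _ in range(steps):
        x, x_prev = x - lr * grad(x) + beta * (x - x_prev), x
    return x

# Minimize f(x) = (x - 3)^2; its gradient is 2 * (x - 3).
x_min = heavy_ball(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

In federated learning the difficulty is that each client only sees its own data distribution, which is why a naive application of this momentum can fail under heterogeneity; the generalization announced in the tweet targets exactly that regime.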
Samuel Horváth retweeted
Eduard Gorbunov @ed_gorbunov
I’m pleased to share that, starting August 1, I will be joining MBZUAI (@mbzuai) as an Assistant Professor in the Department of Statistics and Data Science. My research focuses on optimization for machine learning, with an emphasis on stochastic methods, federated learning, min-max problems, and the efficiency of optimization algorithms. I will have several openings for interns and postdocs interested in optimization-related problems in machine learning. If this aligns with your interests, feel free to email me your CV along with a brief description of your motivation.
Samuel Horváth retweeted
On-Device Learning for Foundation Models @ICML 25
We welcome submissions on efficient foundation models, on-device learning, and distributed learning to our @icmlconf workshop. Deadline: May 23rd @stevelaskaridis, @sam_hrvth, @BerivanISIK, @KairouzPeter, @_cgiannoula_, @bilgeacun, @angeloskath, @TakacMartin, @niclane7
On-Device Learning for Foundation Models @tinytitans_icml

Happy to announce our @icmlconf 2025 workshop: "The Next Wave of On-Device Learning for Foundation Models" Details👇 1/3

Samuel Horváth retweeted
Flower @flwrlabs
📣 Flower Labs visits the Mohamed bin Zayed University of Artificial Intelligence (@MBZUAI)! We are excited to announce that Research Scientist Mohammad Naseri (@mnaseri2770) will be visiting MBZUAI this week and giving a talk on Thursday, May 1. If you are in the area and want to chat, reach out to him. Mohammad's talk, "Trustworthy Decentralized AI," will focus on how we can advance the development of secure, reliable, and decentralized AI systems. We would like to thank Samuel Horváth (@sam_hrvth) for his support in organizing this event. 👏🏻
Samuel Horváth retweeted
Eduard Gorbunov @ed_gorbunov
This Thursday (tomorrow), I will present our work on high-probability convergence for composite and distributed problems with heavy-tailed noise at #ICML2024. Come to the talk (Hall A1, Oral 5B Optimization 2) at 11:00 and to the poster at 11:30 - 13:00 (Hall C 4-9 #1014).
Samuel Horváth retweeted
Federated Learning One World Seminar (FLOW)
📢: The 112th FLOW talk is on Wednesday (7th February) at **1 pm UTC**. Eduard Gorbunov (@mbzuai) will discuss "Variance Reduction is an Antidote to Byzantines." Please register via our mailing list: bit.ly/3WVplLU.
Samuel Horváth retweeted
Steve Laskaridis @stevelaskaridis
Honoured to be invited to the 2nd MBZUAI Workshop on Collaborative Learning (mbzuai-cl.github.io/2023/). I'll be covering our latest work on trainable decompositions for efficiency, called Maestro (accepted at WANT-AI@NeurIPS'23). Tune in for more. Pre-print: arxiv.org/abs/2308.14929
Samuel Horváth retweeted
Eduard Gorbunov @ed_gorbunov
I am on the job market for a Tenure Track Assistant Professor position! I work on optimization (stochastic optimization, distributed optimization, min-max problems/variational inequalities). If you are aware of some great options, please DM me. My website: eduardgorbunov.github.io