Robert Bamler

@robamler

34 posts

Professor of Data Science and Machine Learning at @uni_tue, member of @ml4science and Tübingen AI Center.

Tübingen, Germany · Joined October 2011
25 Following · 230 Followers
Robert Bamler reposted
Tim Xiao @TimZXiao:
✨ New paper: Flipping Against All Odds. We found that large language models (LLMs) can describe probabilities, but fail to sample from them faithfully. Yes, even flipping a fair coin is hard. 🪙 🧵 Here’s what we learned, and how we fixed it. 🔗arxiv.org/abs/2506.09998 1/
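The failure mode this thread describes, stating a probability correctly while sampling from it poorly, can be illustrated with a toy sketch. The biased sampler below is a hypothetical stand-in for an LLM, not code from the paper:

```python
import random

def llm_coin_flip(actual_bias=0.7):
    """Hypothetical stand-in for an LLM asked to flip a fair coin.

    The model can *state* that P(heads) = 0.5, yet the samples it
    actually generates may follow a different, skewed distribution.
    """
    return "heads" if random.random() < actual_bias else "tails"

def empirical_heads_rate(sampler, n=10_000):
    # Estimate P(heads) from repeated samples.
    flips = [sampler() for _ in range(n)]
    return flips.count("heads") / n

random.seed(0)
rate = empirical_heads_rate(llm_coin_flip)
print(f"claimed P(heads) = 0.5, observed ~ {rate:.3f}")  # observed rate is far from 0.5
```

Comparing a model's verbalized probability against its empirical sampling frequency, as above, is one simple way to quantify the mismatch the thread points at.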
Robert Bamler @robamler:
How could online learning apps adapt to learners and improve over time? Even if you're not a machine learning expert, @hanqizh's blog post on our latest ICLR paper explains new approaches in simple terms (joint work with @alvorithm and @TheCharleyWu, supported by @TheresaAuthaler).
Robert Bamler reposted
Tim Xiao @TimZXiao:
🤔 What about using an LLM as a function approximator for f(x; θ), where the parameters θ are natural language? 🤔 Can we learn θ just as in machine learning (ML), where θ are numerical values? ✨ Check out Verbalized ML, where data and models both operate in natural language! 🤩
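As a rough illustration of the idea (not the paper's actual method or API), here is a toy "verbalized" training loop: θ is a plain-language instruction, and both the model and the optimizer are deterministic stubs standing in for LLM calls:

```python
def model(x, theta):
    # Stand-in for an LLM evaluating f(x; theta); theta is parsed
    # naively so the sketch stays self-contained and runnable.
    if "double" in theta:
        return 2 * x
    return x

def learner(theta, errors):
    # Stand-in for an LLM "optimizer" that edits theta in plain language
    # based on the observed prediction errors.
    if any(e != 0 for e in errors):
        return theta + " Then double the result."
    return theta

data = [(1, 2), (3, 6), (5, 10)]  # target function: y = 2x
theta = "Return the input."
for _ in range(3):  # "training" iterations
    errors = [y - model(x, theta) for x, y in data]
    if all(e == 0 for e in errors):
        break
    theta = learner(theta, errors)

print(theta)  # the learned natural-language "parameters"
```

The loop mirrors ordinary ML training (predict, measure error, update θ), except the update edits a sentence instead of nudging numbers.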
Robert Bamler @robamler:
Did you know that the training/test set split of the SVHN data set is biased, making SVHN unsuitable for evaluating generative models? Learn more from my students @TimZXiao and @johanneszenn at the DistShift workshop at #NeurIPS2023 tomorrow (10.30 am, room R06-R09).
Quoting Tim Xiao @TimZXiao: 🚨The training and test sets of the Street View House Numbers (SVHN) dataset are NOT from the same distribution!🚨 Join us at the #NeurIPS2023 workshop on DistShift this Friday (10:30 am, room R06-R09) to find out more! arxiv.org/abs/2312.02168 w/ @johanneszenn @robamler
Robert Bamler @robamler:
Training variational autoencoders on samples from a diffusion model essentially eliminates their known tendency to overfit the encoder without sacrificing model performance. Congrats to my PhD students @TimZXiao and @johanneszenn on their latest preprint! arxiv.org/abs/2310.19653
Robert Bamler @robamler:
If you're at ICLR, join my student @johanneszenn at the Tiny Paper poster session today from 1.15 to 3.15 pm in room MH4. You'll be surprised how many insights can fit in a 2-page paper! arxiv.org/abs/2304.14390
Robert Bamler @robamler:
If you're in Kigali for ICLR this week, let's meet and chat over some drinks tomorrow at @TimZXiao's poster on rate/distortion theory of hierarchical VAEs. It's poster #106 in the MH rooms from 4:30 to 6:30 pm. iclr.cc/virtual/2023/p…
Robert Bamler reposted
Johannes Zenn @johanneszenn:
There is no need to compute gradients arising from resampling in differentiable Sequential Monte Carlo samplers! Check out our recent work (arxiv.org/abs/2304.14390) with @robamler and meet us at the poster on Friday!
Quoting Robert Bamler @robamler (tweet reproduced below).
Robert Bamler @robamler:
My student @johanneszenn found a useful fact about differentiable sequential Monte Carlo samplers: you can ignore any gradients due to resampling because they vanish in expectation. Check out his accepted ICLR DEI paper and meet us at the poster on Friday. arxiv.org/abs/2304.14390
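The "vanish in expectation" claim has a simple core: for resampling weights w(θ) that sum to one, E_{k∼w}[∂_θ log w_k] = Σ_k w_k ∂_θ log w_k = Σ_k ∂_θ w_k = ∂_θ 1 = 0. A small numerical check of this identity, my own sketch with softmax weights (not code from the paper):

```python
import math

def softmax(theta):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    s = sum(exps)
    return [e / s for e in exps]

def expected_score(theta, j, eps=1e-6):
    """E_{k~w}[d/dtheta_j log w_k] for w = softmax(theta),
    with the per-component score estimated by central finite differences."""
    w = softmax(theta)
    tp = list(theta); tp[j] += eps
    tm = list(theta); tm[j] -= eps
    wp, wm = softmax(tp), softmax(tm)
    grads = [(math.log(wp[k]) - math.log(wm[k])) / (2 * eps)
             for k in range(len(theta))]
    return sum(w[k] * g for k, g in enumerate(grads))

theta = [0.3, -1.2, 2.0]
val = expected_score(theta, 0)
print(f"E[score] = {val:.2e}")  # numerically ~ 0
```

This is why score-function gradient terms introduced by categorical resampling can be dropped without biasing the gradient estimator in expectation.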
Robert Bamler @robamler:
We just got the green light to hold another workshop on machine-learning-based data compression, this time at ICML. I'm very excited! Stay tuned for details and for the call for papers.
Quoting Stephan Mandt @StephanMandt: 🎉Exciting news! Our "Neural Compression" workshop proposal has been accepted at #ICML 2023! Join us to explore the latest research developments, including perceptual losses and more compute-efficient models! @BerivanISIK, @YiboYang, @_dsevero, @karen_ullrich, @robamler
Robert Bamler reposted
Tim Xiao @TimZXiao:
How do LLMs relate to modern computers, both in their zero-shot problem-solving abilities and in their histories? Our latest blog post provides a fresh perspective on understanding LLMs and the prompting paradigm. Check it out! timx.me/blog/2023/comp… @Besteuler @robamler #ChatGPT
Robert Bamler @robamler:
Looking for the one VAE to rule them all? The bad news: it doesn't exist. The good news: our recently accepted ICLR 2023 paper shows how to optimally allocate information to each latent layer depending on your application: arxiv.org/abs/2302.04855 @TimZXiao @ml4science #ICLR2023
Robert Bamler @robamler:
Very cool that RHET AI is offering a free movie screening tomorrow! "Ex Machina" will be shown at the Arsenal Kino Tübingen tomorrow (Friday) at 8:00 pm. And anyone who likes can stay afterwards for a follow-up discussion. I'm excited! eventbrite.de/e/wie-viel-sci…
Quoting RHET AI Center @ai_rhet: 2⃣ How much science is in the fiction? Film screening of "Ex Machina" and discussion, with Lukas Kohmann and Anne Burkhardt of RHET AI, and @robamler of @ml4science. Free admission! Registration (and the full program): uni-tuebingen.de/#1565282panel-…
RHET AI Center @ai_rhet:
We're taking part in the Science & Innovation Days @uni_tue with 2⃣ events this week! 1⃣ Artificial intelligences of the future: reading & conversation with author Emma Braslavsky (@suhrkamp) and @nnludwig of @ml4science, moderated by @emgehoh. June 29, 8:30 pm, Café Willi
Robert Bamler @robamler:
There are plenty of information resources (including from @ml4science and @Cyber_Valley). But people also have to use them, stay open to facts and scientific findings, and above all not fall for populists with nationalist/xenophobic rhetoric. 2/2
Robert Bamler @robamler:
A superbly researched article! I'm delighted that newspapers are recognizing the societal relevance of IT topics. It's just a shame that so many people still fail to meet their civic duty to stay informed about relevant issues. 1/2 sueddeutsche.de/wirtschaft/soc…