Frederik Träuble
@f_traeuble
Co-Founder at calliora
64 posts
Joined October 2020
223 Following · 244 Followers
Frederik Träuble retweeted
Cian Eastwood @CianEastwood
Introducing our #ICLR2023 paper: DCI-ES: An Extended Disentanglement Framework with Connections to Identifiability🚀 We propose a new notion of disentanglement based on the functional capacity required to use a representation. arxiv.org/abs/2210.00364 github.com/andreinicolici… 1/12
Frederik Träuble retweeted
Patrick Schwab @schwabpa
Creating a map of gene interactions is a fundamental step in drug discovery that generates ideas on what mechanisms may be targeted by future medicines. Today, we announce the CausalBench challenge at gsk.ai/causalbench-ch… and invite you to contribute to this important problem!
Frederik Träuble retweeted
Anirudh Goyal @anirudhg9119
Discrete Key-Value Bottleneck (Updated): compresses the information of a pre-trained model into a learnable "key-value" codebook, so that knowledge can be quickly adapted in a continual-learning fashion. arxiv.org/abs/2207.11240
Frederik Träuble retweeted
Armin Kekić @armin_kekic
Many countries employed an age-ranked vaccine allocation strategy to combat COVID-19. How effective was this strategy at preventing infections and severe cases? We study this and other questions using simulation-assisted causal modelling. 🧵 1/ preprint: bit.ly/3Wc37Ww
Frederik Träuble @f_traeuble
By freezing all model parameters except for the value codes, we can keep learning under various distribution shifts. This is enabled via localized, input-dependent model updates, which don't affect the predictions from (key, value) pairs retrieved for dissimilar training samples. 5/6
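The mechanism described in these tweets can be illustrated with a minimal NumPy sketch. All names, sizes, and the nearest-key lookup below are illustrative assumptions for exposition, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for the sketch.
num_pairs, key_dim, value_dim, input_dim = 16, 8, 4, 32

# Frozen components: a pre-trained encoder (here just a random
# linear map) and the key codebook fixed before deployment.
W_enc = rng.normal(size=(input_dim, key_dim))    # frozen encoder
keys = rng.normal(size=(num_pairs, key_dim))     # frozen keys

# Trainable component: only the value codes are updated.
values = np.zeros((num_pairs, value_dim))

def bottleneck(x):
    """Encode x, snap to the nearest key, return its index and value."""
    z = x @ W_enc                                       # frozen encoding
    idx = int(np.argmin(np.linalg.norm(keys - z, axis=1)))
    return idx, values[idx]

def adapt(x, target, lr=0.1):
    """Localized update: only the retrieved value code changes."""
    idx, v = bottleneck(x)
    values[idx] += lr * (target - v)                    # MSE gradient step

# Updating on one input leaves the values of all other keys untouched,
# which is what makes the adaptation robust to distribution shift.
x = rng.normal(size=input_dim)
idx, _ = bottleneck(x)
adapt(x, target=np.ones(value_dim))
```

The point of the sketch is the locality property from the tweet: an update triggered by one input modifies exactly one value code, so predictions routed through other (key, value) pairs are unaffected.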
Frederik Träuble @f_traeuble
“Discrete Key-Value Bottlenecks”: amortizing information via a discrete bottleneck such that knowledge is localized, enabling flexible adaptation to distribution shifts such as non-stationary or imbalanced data streams. arxiv.org/abs/2207.11240 1/6
Frederik Träuble retweeted
Cian Eastwood @CianEastwood
Look forward to presenting our work! 🚀 We connect the DCI disentanglement scores to identifiability, and propose a new complementary notion of disentanglement based on the *functional capacity required to use a representation.* 🔗openreview.net/pdf?id=KiMUlK8… 🧵Short thread below
Quoted: Causal Representation Learning Workshop @ UAI'22 (@crl_uai)

Decisions and meta-reviews are now available; thanks to all reviewers! See you in ~1 month in Eindhoven for some hopefully stimulating discussions around causal representation learning. Please remember to register for @UncertaintyInAI if you plan to attend the workshop.

Frederik Träuble retweeted
Francesco Locatello @FrancescoLocat8
“Visual Representation Learning Does Not Generalize Strongly Within the Same Domain”: regardless of architecture and training signal, deep nets struggle to generalize strongly to existing factors of variation in the training data. arxiv.org/abs/2107.08221
Frederik Träuble @f_traeuble
Happy to share that our work on "The Role of Pretrained Representations for the OOD Generalization of RL Agents" was accepted to #iclr2022! 🎉
Quoted: Andrea Dittadi (@andrea_dittadi)

Happy to announce our large-scale study on representation learning and generalization in reinforcement learning! arxiv.org/abs/2107.05686 How do the properties of pre-trained representation backbones affect the robustness of downstream RL policies in simulation and in the real world? 1/5
