rodolphe_jenatton
@RJenatton
185 posts
Joined February 2019
261 Following · 333 Followers
rodolphe_jenatton@RJenatton·
Do you want to work with some of the best minds to transform biology? Then we want to hear from you! To find out how you can be a part of Bioptimus, visit Bioptimus.com/careers (3/3)
rodolphe_jenatton@RJenatton·
With a successful $35M seed funding round, we’ve assembled a world-class team of scientists to revolutionize #biology with AI. (2/3)
rodolphe_jenatton retweeted
Joan Puigcerver@joapuipe·
Introducing Soft MoE! Sparse MoEs are a popular method for increasing the model size without increasing its cost, but they come with several issues. Soft MoEs avoid them and significantly outperform ViT and different Sparse MoEs on image classification. arxiv.org/abs/2308.00951
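The Soft MoE idea from the tweet above can be sketched in a few lines: instead of hard-routing each token to one expert (which drops tokens and is non-differentiable), every expert slot receives a softmax-weighted mix of all tokens, and every token's output is a softmax-weighted mix of all slot outputs. This is a minimal NumPy illustration of that routing, not the paper's implementation; the function names and shapes are assumptions.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_moe(tokens, phi, expert_fns, slots_per_expert=1):
    """Soft MoE routing sketch.
    tokens: (n, d) input tokens.
    phi: (d, m) learnable slot parameters, m = len(expert_fns) * slots_per_expert.
    Every slot is a convex mix of ALL tokens, so no token is dropped
    and routing stays fully differentiable."""
    logits = tokens @ phi                  # (n, m): token-slot affinities
    d_weights = softmax(logits, axis=0)    # dispatch: softmax over tokens
    c_weights = softmax(logits, axis=1)    # combine: softmax over slots
    slots = d_weights.T @ tokens           # (m, d): slot inputs
    outs = []
    for e, fn in enumerate(expert_fns):
        chunk = slots[e * slots_per_expert:(e + 1) * slots_per_expert]
        outs.append(fn(chunk))             # each expert processes only its slots
    slot_outputs = np.concatenate(outs, axis=0)  # (m, d)
    return c_weights @ slot_outputs        # (n, d): per-token output
```

Because all tokens contribute to all slots, the expert compute is fixed by the number of slots, not by a hard assignment, which is how cost stays flat as experts are added.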
rodolphe_jenatton@RJenatton·
If you are interested in how to best exploit pretrained models within the context of contrastive learning, go and check out our recent work led by @janundnik during a great @GoogleAI internship! (Full list of collaborators in the thread 👇)
Jannik Kossen@janundnik

👀 Looking for the best use of pre-trained classifiers in contrastive learning? 🏝Check out my @GoogleAI internship project at the ES-FoMo workshop @icmlconf in Hawaii next week! 🔥 With Three Towers, the image tower benefits from both contrastive learning and pre-training!

rodolphe_jenatton@RJenatton·
Side information, even if only available at training time, can help deal with label noise. We study this phenomenon and give practical methods to exploit that information. Check out our ICML paper led by @gortizji during his great @GoogleAI internship!
Guillermo Ortiz-Jiménez@gortizji

Label noise is a ubiquitous problem in machine learning! 💥 Our ICML work 🌴: “When does privileged information explain away label noise?” answers how meta-data can help us solve this issue 🤔 Come to our poster on Wed and check it out! 🏄 📄: rb.gy/bti7q 🧵1/5

rodolphe_jenatton@RJenatton·
How best to take advantage of pretrained models for contrastive learning? Our approach is simple, flexible, and robust. Joint work with fantastic colleagues at @GoogleAI. Special shout-out to @janundnik, who led the project during his student researcher program 👏
Jannik Kossen@janundnik

New Preprint: 🔥Three Towers: Flexible Contrastive Learning with Pretrained Image Models🔥 We improve the contrastive learning of vision-language models by incorporating knowledge from pretrained image classifiers. 📄arxiv.org/abs/2305.16999 🧵[1/3]

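The Three Towers idea in the quoted tweet — a frozen pretrained image tower alongside the two trainable CLIP-style towers — can be sketched as a loss. This is a minimal NumPy illustration under stated assumptions: the auxiliary pairing terms and the `alpha` weighting are my simplifications, not the paper's exact objective.

```python
import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def clip_loss(a, b, temp=0.07):
    """Symmetric InfoNCE between two batches of embeddings (n, d),
    where matched pairs sit on the diagonal of the similarity matrix."""
    logits = l2_normalize(a) @ l2_normalize(b).T / temp
    idx = np.arange(len(a))
    def ce(l):
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()  # cross-entropy against the diagonal
    return 0.5 * (ce(logits) + ce(logits.T))

def three_towers_loss(img_emb, txt_emb, frozen_emb, alpha=0.5):
    """Sketch of the 3T idea: the usual image-text contrastive loss plus
    terms aligning both trainable towers with a frozen pretrained image
    tower. alpha and the exact pairing are assumptions for illustration."""
    main = clip_loss(img_emb, txt_emb)
    aux = clip_loss(img_emb, frozen_emb) + clip_loss(txt_emb, frozen_emb)
    return main + alpha * aux
```

The frozen tower contributes gradients only through the trainable embeddings, so the image tower gets signal from both the contrastive objective and the pretrained classifier's representation.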
rodolphe_jenatton retweeted
Leo Dirac@leopd·
@RJenatton Thanks Rodolphe! I miss working with you. I've seen some cool papers you've published recently. Would love to catch up some time.