
🎉 Excited to share that we've just hit a $76M funding milestone for @bioptimus_ai, with a fresh $41M round to build the first multiscale foundation model of biology: bioptimus.com/news/bioptimus…

A happy author discovering the first hard copies

@bioptimus_ai releases H-optimus-0, the largest #opensource AI foundation model for histopathology! - code: github.com/bioptimus/rele… - press release: businesswire.com/news/home/2024… Enjoy! Congrats Charlie Saillard @RJenatton @FelipeLlinares @ZeldaMariet @DavidCahane @ericdurand

Wondering if AI can learn the language of life? Come join the @bioptimus_ai crew to joyfully change the world and shape the future of biology and medicine with AI foundation models! Check out bioptimus.com/careers for roles in engineering, data science, product, operations...

Mistral, LightOn, Shift Technology, Alan, Bioptimus, Google: more and more of them are choosing France as the place to innovate in artificial intelligence. A point of pride. By investing, we are making France a country at the cutting edge of AI. An AI says so too!

ELOHIM PRANDI!!! 🔥🔥🔥 The last-chance shot that ends up in the back of the Swedish net! Les Bleus equalize at the last second! What madness! #FRASUE #BleuetFier @FRAHandball Watch live on TF1+ ➡️tf1.fr/tmc/direct

Our founding team covers many AI fields: vision with Patrick Pérez and Hervé Jégou (@hjegou), LLMs with Edouard Grave (@EXGRV), audio with Neil Zeghidour (@neilzegh) and Alexandre Défossez (@honualx), and infra with Laurent Mazaré (@lmazare).

👀 Looking for the best use of pre-trained classifiers in contrastive learning? 🏝Check out my @GoogleAI internship project at the ES-FoMo workshop @icmlconf in Hawaii next week! 🔥 With Three Towers, the image tower benefits from both contrastive learning and pre-training!

Label noise is a ubiquitous problem in machine learning! 💥 Our ICML work 🌴, "When does privileged information explain away label noise?", shows how metadata can help address this issue 🤔 Come to our poster on Wed and check it out! 🏄 📄: rb.gy/bti7q 🧵1/5

New Preprint: 🔥Three Towers: Flexible Contrastive Learning with Pretrained Image Models🔥 We improve the contrastive learning of vision-language models by incorporating knowledge from pretrained image classifiers. 📄arxiv.org/abs/2305.16999 🧵[1/3]

Three Towers: Flexible Contrastive Learning with Pretrained Image Models

We introduce Three Towers (3T), a flexible method to improve the contrastive learning of vision-language models by incorporating pretrained image classifiers. While contrastive models are usually trained from scratch, LiT (Zhai et al., 2022) has recently shown performance gains from using pretrained classifier embeddings. However, LiT directly replaces the image tower with the frozen embeddings, excluding any potential benefit of contrastively training the image tower. With 3T, we propose a more flexible strategy that allows the image tower to benefit from both pretrained embeddings and contrastive training. To achieve this, we introduce a third tower that contains the frozen pretrained embeddings, and we encourage alignment between this third tower and the main image-text towers. Empirically, 3T consistently improves over LiT and the CLIP-style from-scratch baseline for retrieval tasks. For classification, 3T reliably improves over the from-scratch baseline, and while it underperforms relative to LiT for JFT-pretrained models, it outperforms LiT for ImageNet-21k and Places365 pretraining.

Paper page: huggingface.co/papers/2305.16…
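The 3T setup is easy to sketch in code. Below is a minimal, hypothetical PyTorch rendering of the idea described above: a standard CLIP-style contrastive loss between trainable image and text towers, plus alignment terms that pull both towers toward a frozen, pretrained third tower. The module and function names (ThreeTowers, clip_loss, frozen_head) and the choice of alignment objective are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the Three Towers (3T) idea, assuming a CLIP-style setup.
# All names and design details are illustrative, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def clip_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

class ThreeTowers(nn.Module):
    def __init__(self, image_tower, text_tower, frozen_tower, dim=512):
        super().__init__()
        self.image_tower = image_tower    # trained contrastively
        self.text_tower = text_tower      # trained contrastively
        self.frozen_tower = frozen_tower  # pretrained classifier backbone
        for p in self.frozen_tower.parameters():
            p.requires_grad = False       # the third tower stays frozen
        # small trainable head on top of the frozen features (assumption)
        self.frozen_head = nn.Linear(dim, dim)

    def forward(self, images, texts):
        img = self.image_tower(images)
        txt = self.text_tower(texts)
        with torch.no_grad():
            third = self.frozen_tower(images)
        third = self.frozen_head(third)
        # main image-text contrastive term, plus alignment of both
        # main towers to the frozen third tower
        loss = clip_loss(img, txt)
        loss = loss + clip_loss(img, third) + clip_loss(txt, third)
        return loss
```

In this reading, LiT is the special case where the image tower is replaced by the frozen embeddings outright, while 3T keeps the image tower trainable and only encourages it to stay close to the pretrained features.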
