

Belongie Lab
@BelongieLab
Computer Vision & Machine Learning • PI @SergeBelongie • SØ(3) @diku_institut 2021-present • SE(3) @cornell_tech 2013-2021 • SO(3) @ucsd_cse 2001-2012





We welcome applications for a PhD position in CV/ML (fine-grained analysis of multimodal data, 2D/3D generative models, misinformation detection, self-supervised learning) @DIKU_Institut @AiCentreDK Apply through the ELLIS portal 💻 Deadline 15-Nov-2024 🗓️




On Wednesday, 30 October, Professor @trevordarrell from @Berkeley_EECS will give a talk in Copenhagen. 📍🇩🇰 Don't miss the opportunity to gain valuable insights into Professor Darrell's work. 👉 Sign up and find more information on our website: aicentre.dk/events/talk-on…

“Instead of being surprised by the discovery of each ‘unexpected’ animal ability, maybe we should be surprised that humans have such low expectations.” Rivka Galchen writes about the art of listening to animals. nyer.cm/cLQuZIQ




Hello world! fundamentalailab.github.io



My publicity chair retirement didn't last long 😉 I just joined the #ECCV2026 @eccvconf publicity team 😅 New mask, same task. We are looking to grow our team and coverage, including adding a social media account in China. Stay tuned.

Next stop #ECCV2026 Malmö, Sweden 🇸🇪 Conference page: eccv.ecva.net/Conferences/20…

The #ELLISPhD application portal is now open! Apply to top #AI labs & supervisors in Europe with a single application, and choose from different areas & tracks. The call for applications: ellis.eu/news/ellis-phd… Deadline: 15 November 2024 #PhD #PhDProgram #MachineLearning #ML



Test of time award

Unfortunately, I couldn't attend @eccvconf 2024. Thanks to @SergeBelongie for representing the team for the award! I'm grateful to have started my research career at ECCV. This photo is from my COCO paper presentation at ECCV 2014 in Zürich. Fond memories!




Ever wanted to train your own 13B Llama2 model from scratch on a 24GB GPU? Or fine-tune one without compromising performance compared to full training? 🦙 You now can, with LoQT: Low Rank Adapters for Quantized Training! arxiv.org/abs/2405.16528 1/4
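The core idea behind LoRA-style quantized training, as described in the post, is to keep the base weights frozen in low-precision form and train only small full-precision low-rank factors. A minimal NumPy sketch of that pattern follows; it is a simplified illustration, not the LoQT authors' implementation — the per-tensor int8 scheme, the dimensions, the rank `r`, and all variable names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical layer width

# Frozen base weight, stored quantized (int8) with a per-tensor scale.
W = rng.normal(size=(d, d)).astype(np.float32)
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).astype(np.int8)  # quantized, never updated

# Trainable low-rank adapters (rank r << d), kept in full precision.
r = 4
A = rng.normal(scale=0.01, size=(r, d)).astype(np.float32)
B = np.zeros((d, r), dtype=np.float32)  # zero-init: adapter starts as a no-op

def forward(x):
    # Dequantize on the fly; gradients would flow only into A and B.
    W_deq = W_q.astype(np.float32) * scale
    return x @ (W_deq + B @ A).T

x = rng.normal(size=(2, d)).astype(np.float32)
y = forward(x)
```

Because `B` is zero-initialized, the adapted layer initially matches the dequantized base layer exactly, so training starts from the pretrained model's behavior; memory savings come from storing the large matrix `W_q` in 8 bits while only the small `A` and `B` need optimizer state.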

