Video & Image Sense Lab (VIS Lab)
@VISLab_UvA

54 posts

Computer Vision research group at @UvA_Amsterdam directed by Cees Snoek (@cgmsnoek)

Amsterdam · Joined May 2024
27 Following · 57 Followers
Video & Image Sense Lab (VIS Lab) retweeted
Cees Snoek @cgmsnoek
📢📢 Beyond Model Adaptation at Test Time: A Survey by @zehao_xiao. TL;DR: we provide a comprehensive and systematic review of test-time adaptation, covering more than 400 recent papers 💯💯💯💯 🤩 #CVPR2025 #ICLR2025 arxiv.org/abs/2411.03687
1 reply · 17 reposts · 58 likes · 8.1K views
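The survey spans hundreds of methods; as a flavour of one of the simplest recipes it covers, here is a minimal sketch of entropy-minimization test-time adaptation in the spirit of TENT. All names and hyperparameters below are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def adapt_on_batch(model, x, lr=1e-3):
    """One test-time adaptation step: minimize prediction entropy on a
    test batch, updating only normalization-layer affine parameters
    (a common choice in this family of methods)."""
    params = [p for m in model.modules()
              if isinstance(m, torch.nn.BatchNorm2d)
              for p in m.parameters() if p.requires_grad]
    opt = torch.optim.SGD(params, lr=lr)
    log_probs = F.log_softmax(model(x), dim=1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    opt.zero_grad()
    entropy.backward()
    opt.step()
    return model
```

Restricting updates to normalization statistics and affine parameters keeps adaptation cheap and reduces the risk of catastrophically drifting away from the source model.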
Video & Image Sense Lab (VIS Lab) retweeted
Pascal Mettes @PascalMettes
All vision-language models should have hyperbolic embeddings. Vision and language are incredibly hierarchical in nature! See our latest work below on hyperbolic vision-language models that exploit visual compositions through entailment:
Avik Pal @theAvikPal

(1/6)🥳 Excited to share my latest research done as part of my MSc AI thesis! We introduced Hyperbolic Compositional CLIP (HyCoCLIP)—a novel framework that leverages the hierarchical nature of hyperbolic space for learning vision-language representations using scene compositions.

2 replies · 11 reposts · 155 likes · 21.9K views
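For readers new to hyperbolic embeddings, a minimal sketch of the Poincaré-ball distance conveys the core idea. Note that HyCoCLIP itself works with entailment cones in the Lorentz model; this is only the general geometry, not the paper's implementation:

```python
import torch

def poincare_distance(u, v, eps=1e-6):
    # Geodesic distance in the Poincaré ball (all points have norm < 1).
    sq_dist = ((u - v) ** 2).sum(-1)
    denom = (1 - (u ** 2).sum(-1)) * (1 - (v ** 2).sum(-1))
    return torch.acosh(1 + 2 * sq_dist / denom.clamp_min(eps))
```

Distances grow exponentially toward the boundary, so broad concepts can sit near the origin while more specific ones are pushed outward, which is what makes the geometry a natural fit for hierarchical vision-language data.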
Video & Image Sense Lab (VIS Lab) retweeted
Aritra Bhowmik @AritraBhowmik6
🚀 Excited to share LynX! 🦁 🔑 A new method in visual grounding using a Dual Mixture of Experts—LynX enables pretrained VLMs to continuously learn grounding while retaining their image-language capabilities. 📄 Check out the full paper: arxiv.org/pdf/2410.10491
1 reply · 7 reposts · 15 likes · 2.7K views
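The tweet does not spell out the architecture, but a dual mixture of experts in this spirit can be sketched as two parallel feed-forward experts with a learned router, keeping the pretrained path frozen so existing image-language skills survive. Everything below is a hypothetical illustration, not LynX's actual code:

```python
import torch
import torch.nn as nn

class DualMoEBlock(nn.Module):
    """Hypothetical dual-expert layer: a frozen pretrained FFN preserves
    image-language skills while a trainable expert learns grounding;
    a per-token router mixes their outputs."""
    def __init__(self, dim):
        super().__init__()
        self.frozen_expert = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.ground_expert = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.router = nn.Linear(dim, 2)
        for p in self.frozen_expert.parameters():
            p.requires_grad = False  # retain pretrained capabilities

    def forward(self, x):
        w = self.router(x).softmax(dim=-1)  # (..., 2) mixing weights
        return (w[..., :1] * self.frozen_expert(x)
                + w[..., 1:] * self.ground_expert(x))
```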
Video & Image Sense Lab (VIS Lab) retweeted
Michael Dorkenwald @mdorkenw
📢 Announcing TVBench: Temporal Video-Language Benchmark 📺 We reveal that widely used video-language benchmarks, such as MVBench, fall short in testing temporal understanding and propose TVBench as an alternative: huggingface.co/datasets/FunAI…
Yuki @y_m_asano

Today, we're introducing TVBench! 📹💬 Video-language evaluation is crucial, but are we doing it right? We find that current benchmarks fall short in testing temporal understanding. 🧵👇

0 replies · 5 reposts · 31 likes · 2.3K views
Video & Image Sense Lab (VIS Lab) retweeted
Yuki @y_m_asano
Today, we're introducing TVBench! 📹💬 Video-language evaluation is crucial, but are we doing it right? We find that current benchmarks fall short in testing temporal understanding. 🧵👇
2 replies · 12 reposts · 69 likes · 9.4K views
Video & Image Sense Lab (VIS Lab) retweeted
Yuki @y_m_asano
Excited to announce that today I'm starting my new position at @utn_nuremberg as a full Professor 🎉. I thank everyone who has helped me to get to this point, you're all the best! Our lab is called FunAI Lab, where we strive to put the fun into fundamental research. 😎 Let's go!
FunAI @FunAILab

Hello world! fundamentalailab.github.io

43 replies · 14 reposts · 273 likes · 15.9K views
Video & Image Sense Lab (VIS Lab) retweeted
David M. Knigge @davidmknigge
🇨🇦 Deeeelighted to share that this work got into #neurips2024. Many thanks to my dear friend and co-author @Dafidofff, as well as the rest of the team. Solving PDEs in continuous space-time with Neural Fields on cool geometries while respecting their inherent symmetries! 💫💫
David M. Knigge @davidmknigge

🌀 Equivariant Neural Fields (ENFs) for continuous PDE solving! We use ENFs as representation for solving PDEs in continuous space/time on different geometries while respecting their symmetries! (such as this internally heated ball of fluid) More details 👇

3 replies · 30 reposts · 196 likes · 22.9K views
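A neural field in its plainest form is just a coordinate MLP mapping space-time points to field values; ENFs add equivariance constraints on top of this idea, which the generic sketch below deliberately omits (an illustration, not the paper's construction):

```python
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    """Minimal coordinate network: maps space-time points (x, y, t) to a
    field value u(x, y, t). Fitting to PDE snapshots means minimizing
    MSE between self(coords) and observed values at those coords."""
    def __init__(self, in_dim=3, hidden=128, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):
        return self.net(coords)
```

Because the field is continuous in its inputs, the same trained network can be queried at arbitrary resolutions and time points, which is what makes this representation attractive for continuous PDE solving.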
Video & Image Sense Lab (VIS Lab) retweeted
Cees Snoek @cgmsnoek
Congratulations Dr. @Jocy48305373 🥳
0 replies · 3 reposts · 24 likes · 1.2K views
Video & Image Sense Lab (VIS Lab) retweeted
Michael Dorkenwald @mdorkenw
The Self-Supervised Learning: What is Next? workshop at @eccvconf had a great turnout with excellent talks. Slides of most talks are available at sslwin.org (soon all 🤞). Thanks to all attendees, speakers, and co-organizers for making it a fantastic event!
Michael Dorkenwald @mdorkenw

Interested in learning about the future of self-supervised learning? Don’t miss our workshop this Sunday at @eccvconf with an incredible lineup of speakers! 🔥 @imisra_ @oriane_simeoni @endernewton @olivierhenaff @y_m_asano @YutongBAI1002 More details at sslwin.org

2 replies · 13 reposts · 63 likes · 9.1K views
Video & Image Sense Lab (VIS Lab) retweeted
Sarah Rastegar @rastegar_sarah
Stop by today and discuss our @eccvconf paper (SelEx) with me, @doughty_hazel, and @cgmsnoek! 🎉 We present self-expertise—an alternative to self-supervision for learning from unlabelled data with fine-grained distinctions and unknown categories. 📍 Poster #89 🕥 10:30 AM
0 replies · 4 reposts · 21 likes · 2.4K views
Video & Image Sense Lab (VIS Lab) retweeted
Mohammadreza Salehi @MrzSalehi
🚀 Excited to present SIGMA at @eccvconf ! 🎉 We upgrade VideoMAE with Sinkhorn-Knopp on patch-level embeddings, pushing reconstruction to more semantic features. With @mdorkenw. Let’s connect at today's poster session at 4:30 PM, poster number 256, or send us a DM.
Michael Dorkenwald @mdorkenw

📢SIGMA: Sinkhorn-Guided Masked Video Modeling got accepted to @eccvconf #ECCV2024 TL;DR: Instead of using pixel targets in Video Masked Modeling, we reconstruct jointly trained features using Sinkhorn guidance, achieving SOTA. 📝Project page: quva-lab.github.io/SIGMA/ 🌐Paper: arxiv.org/abs/2407.15447 Joint work with @MrzSalehi @fmthoker @egavves @cgmsnoek @y_m_asano

0 replies · 5 reposts · 11 likes · 3.6K views
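The Sinkhorn-Knopp step mentioned here balances the soft assignment of patch embeddings to prototypes. A common implementation of this balancing, in the style of SwAV-type objectives (shapes, epsilon, and iteration count below are illustrative, not SIGMA's exact settings), looks like:

```python
import torch

@torch.no_grad()
def sinkhorn_knopp(scores, n_iters=3, eps=0.05):
    """Turn patch-to-prototype similarity scores (N patches x K prototypes)
    into a balanced soft assignment via alternating row/column
    normalization. Returns a matrix whose rows sum to 1."""
    q = torch.exp(scores / eps)
    q /= q.sum()
    n, k = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=0, keepdim=True); q /= k  # balance prototype usage
        q /= q.sum(dim=1, keepdim=True); q /= n  # balance per-patch mass
    return q * n
```

Enforcing balanced prototype usage prevents the collapse where all patches map to a single prototype, which is what makes these assignments usable as reconstruction targets.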
Video & Image Sense Lab (VIS Lab) retweeted
Sarah Rastegar @rastegar_sarah
🚀 Excited to present our work on Self-Expertise at #ECCV2024 in Milan! Join us at poster #89 on Friday, Oct 4 at 10:30 AM to see how self-expertise outperforms self-supervision in tackling unknown data in open-world settings! 🌍 #SelfSupervision #GeneralizedCategoryDiscovery
0 replies · 9 reposts · 32 likes · 2.3K views
Video & Image Sense Lab (VIS Lab) retweeted
Sarah Rastegar @rastegar_sarah
🚀 Excited to announce that our paper "SelEx: Self-Expertise in Fine-Grained Generalized Category Discovery" has been accepted to ECCV 2024! 🎉 Special thanks to my incredible coauthor @MrzSalehi and my amazing supervisors @y_m_asano, @doughty_hazel, and @cgmsnoek🙏.
5 replies · 11 reposts · 39 likes · 3.4K views
Video & Image Sense Lab (VIS Lab) retweeted
UvA AMLab @AmlabUva
We are hiring a postdoc! Come work with us in the booming AI ecosystem of beautiful Amsterdam on generative AI and/or uncertainty quantification 🤗 🎇vacatures.uva.nl/UvA/job/Postdo…
Christian A. Naesseth @canaesseth

I’m hiring a postdoc to work with me on exciting projects in generative modelling (AI) and/or uncertainty quantification. You'll be part of a great team, embedded in @AmlabUva and the UvA-Bosch Delta Lab. Apply here: vacatures.uva.nl/UvA/job/Postdo… RT appreciated! #ML #GenAI

0 replies · 7 reposts · 31 likes · 2.8K views
Video & Image Sense Lab (VIS Lab) retweeted
Riccardo Valperga @RValperga
I will be in Montréal until December for my internship with ServiceNow. I will be working on causal discovery from time-series. Get in touch if you are around and want to chat about some **apprentissage profond** (deep learning)
0 replies · 1 repost · 20 likes · 1.3K views
Video & Image Sense Lab (VIS Lab) retweeted
Efstratios Gavves @egavves
📢📢📢 PhD vacancy alert 📢📢📢 We open several PhD positions supervised by myself and Georgios on #Robot Learning and #Dynamics! If you have strong #ML and/or #Robotics experience and want to dive into the next big thing in #AI, apply! Please share! linkedin.com/jobs/view/4021…
0 replies · 19 reposts · 69 likes · 8.1K views
Video & Image Sense Lab (VIS Lab) retweeted
Yuki @y_m_asano
First time organising a tutorial with an amazing team, and I'm very excited 🎉! The topic is learning from videos, which I think will be the new 'big' paradigm for new vision foundation models. Come learn, chat, and discuss @eccvconf!
Shashank @shawshank_v

Incredibly excited to announce the 1st edition of our tutorial at @eccvconf w/ the amazing @y_m_asano and @MrzSalehi! "Time is precious: Self-Supervised Learning Beyond Images" on 30th Sept. from 09:00 to 13:00 at Amber 7+ 8 Catch the details here⬇️ shashankvkt.github.io/eccv2024-SSLBI…

2 replies · 7 reposts · 51 likes · 4.8K views
Video & Image Sense Lab (VIS Lab) retweeted
Yuki @y_m_asano
Happy to present this paper accepted @eccvconf: upgrades VideoMAE to use Sinkhorn-Knopp on patch-level embeddings. This moves reconstruction one level up towards more semantic features. Training is simple & stable.
Michael Dorkenwald @mdorkenw

📢SIGMA: Sinkhorn-Guided Masked Video Modeling got accepted to @eccvconf #ECCV2024 TL;DR: Instead of using pixel targets in Video Masked Modeling, we reconstruct jointly trained features using Sinkhorn guidance, achieving SOTA. 📝Project page: quva-lab.github.io/SIGMA/ 🌐Paper: arxiv.org/abs/2407.15447 Joint work with @MrzSalehi @fmthoker @egavves @cgmsnoek @y_m_asano

0 replies · 4 reposts · 43 likes · 2.5K views