Visual Inference Lab

156 posts

@visinf

Visual Inference Lab of @stefanroth at @TUDarmstadt. Research in Computer Vision and Machine Learning.

Darmstadt, Germany · Joined April 2012
387 Following · 721 Followers
Visual Inference Lab reposted
Gabriele Trivigno @gabTrivv
What if a model could learn dense semantic matches from just a handful of annotated landmarks, while still generalizing to unseen keypoints and categories — and running 10× faster than diffusion-based approaches? MARCO has been selected as an Oral at #CVPR2026! A unified model for generalizable semantic correspondence, built on DINOv2. ⭐️👉 Try our model: github.com/visinf/MARCO
3 replies · 25 reposts · 131 likes · 12.7K views
Visual Inference Lab reposted
Claudia Cuttano @ClaudiaCuttano
✨ As a first-year PhD student, I used to wonder what it must feel like to have a paper selected as an Oral at #CVPR. Today, I’m experiencing that feeling twice! I’m beyond happy to share that both of my first-author papers have been selected as #Oral at #CVPR2026 🎉
28 replies · 16 reposts · 611 likes · 30.4K views
Visual Inference Lab reposted
Gabriele Trivigno @gabTrivv
🔥 Can in-context segmentation emerge directly from frozen DINOv3 features? At #CVPR2026, we present INSID3: Training-Free In-Context Segmentation with DINOv3 — a collaboration between PoliTo, TU Darmstadt, and TU Munich. A training-free approach that generalizes from object-level to part-level and personalized segmentation, across natural, medical, underwater, and aerial domains. Check it out: github.com/visinf/INSID3
3 replies · 33 reposts · 203 likes · 29.9K views
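The tweet above describes segmenting a query image by reusing frozen backbone features from an annotated reference image, with no training. As a rough, hedged illustration of that general recipe — a toy similarity-weighted label transfer, not the actual INSID3 method; the function name, shapes, and temperature are all hypothetical — one could score query patches by soft voting over annotated reference patches:

```python
import numpy as np

def in_context_segment(ref_feats, ref_mask, query_feats, temperature=0.1):
    """Label each query patch by similarity-weighted voting over reference patches.

    ref_feats:   (N, D) patch features from a frozen backbone, reference image
    ref_mask:    (N,) binary foreground labels for the reference patches
    query_feats: (M, D) patch features of the query image
    Returns (M,) soft foreground scores in [0, 1].
    """
    # L2-normalize so that dot products become cosine similarities.
    ref = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    qry = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sim = qry @ ref.T                              # (M, N) cosine similarities
    weights = np.exp(sim / temperature)            # sharpen with a temperature
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over reference patches
    return weights @ ref_mask.astype(float)        # transfer the reference labels
```

With two well-separated toy feature clusters, query patches near the "foreground" cluster score close to 1 and the rest close to 0; real methods add spatial reasoning and multi-scale features on top of this matching core.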
Visual Inference Lab reposted
Apratim Bhattacharyya @apratimbh
🚨🚨🚨 Introducing the AI Coach Challenge at the 2nd VAR Workshop @CVPR 2026 👉 Answers are passive; guidance is active. Don't just build a model that watches; build one that intervenes. Details: varworkshop.github.io/challenges/
1 reply · 4 reposts · 9 likes · 4.6K views
Visual Inference Lab reposted
Apratim Bhattacharyya @apratimbh
🚨"Can Multi-Modal LLMs Provide Live Step-by-Step Task Guidance?" #NeurIPS2025 Check out the Qualcomm Interactive Cooking Dataset for proactive mistake-aware task guidance. 📅Wed, Dec 3 11:00 AM – 2:00 PM PST 📌Exhibit Hall C,D,E #5403 Project page: apratimbh.github.io/livecook
1 reply · 3 reposts · 7 likes · 429 views
Visual Inference Lab reposted
Nikita Araslanov @neekans
📢 NeurIPS 2025 Spotlight 📢 Can we embed motion into image representations? Trained on videos, FlowFeat embeds optical flow into pixel-level representations (up to a linear transform), which results in sharp feature grids, especially for dynamic objects. We demonstrate benefits for ⭐️video object segmentation; ⭐️semantic segmentation; ⭐️and monocular depth. Paper: arxiv.org/abs/2511.07696 Project website: tum-vision.github.io/flowfeat Code and models: github.com/tum-vision/flo… Joint work with Anna Sonnweber and Daniel Cremers @tumcvg and @MunichCenterML. Come by our poster @NeurIPSConf on Thursday (Exhibit Hall C,D,E #4816)!
0 replies · 4 reposts · 15 likes · 856 views
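The FlowFeat tweet claims the features embed optical flow "up to a linear transform." That claim can be illustrated with a toy linear probe on synthetic data — this is not the FlowFeat training objective or its real features, just the setting the phrase describes: if per-pixel flow is a linear function of the features, a least-squares probe recovers it essentially exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
num_pixels, feat_dim = 500, 8

# Synthetic per-pixel features, and 2-D flow vectors that are, by
# construction, a linear function of the features (the
# "flow embedded up to a linear transform" setting).
W_true = rng.normal(size=(feat_dim, 2))
feats = rng.normal(size=(num_pixels, feat_dim))
flow = feats @ W_true

# Linear probe: least-squares fit from features to flow vectors.
W_hat, *_ = np.linalg.lstsq(feats, flow, rcond=None)
max_err = np.abs(feats @ W_hat - flow).max()
print(f"max flow reconstruction error: {max_err:.2e}")
```

On real features the relationship is only approximate, so the probe's residual error becomes a measure of how much motion information the representation actually carries.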
Visual Inference Lab @visinf
📢 Join @TUDarmstadt as a PhD/Postdoc in the new project HAICC - Human–AI Collaboration for Cybersecurity! Explore how LLM–based AI agents and humans can jointly analyse security data and rethink cybersecurity architectures - supervised by @IGurevych, @stefanroth, and many more!
1 reply · 1 repost · 3 likes · 171 views
Visual Inference Lab reposted
Claudia Cuttano @ClaudiaCuttano
✨ We found that #SegmentAnything hides a rich semantic structure, and we show how to unlock it! Our paper SANSA: Unleashing the Hidden Semantics in SAM2 for Few-Shot Segmentation is a #NeurIPS2025 Spotlight. 📍 Come check it out! Poster Friday, 11 a.m. 📄github.com/ClaudiaCuttano…
0 replies · 3 reposts · 9 likes · 1K views
Visual Inference Lab reposted
Gabriele Trivigno @gabTrivv
🔥 Our paper SANSA is a #NeurIPS2025 Spotlight! We turn #SAM2 into a semantic few-shot segmenter for objects and parts, fully promptable (mask · point · box · scribble); only 10M trainable parameters and 5× faster than competitors. Code, models & demo github.com/ClaudiaCuttano… 👇
1 reply · 10 reposts · 22 likes · 2K views
Visual Inference Lab reposted
Justus Thies @JustusThies
📢 We are looking for PhD students who want to work on various aspects of Reasonable AI (RAI). Here, you can apply for a position with a focus on observational AI (and also work with me): career.tu-darmstadt.de/HPv3.Jobs/TU-D…
4 replies · 8 reposts · 12 likes · 2.2K views
Visual Inference Lab reposted
Christoph Reich @ChristophR1996
Interested in 3D DINO features from a single image or unsupervised scene understanding?🦖 Come by our SceneDINO poster at NeuSLAM today, 14:15 (Kamehameha II room) or Tue, 15:15 (Exhibit Hall I #627)!🖼️ W/ A. Jevtić, @felixwimbauer @olvr_hhn, C. Rupprecht, @stefanroth, D. Cremers
0 replies · 24 reposts · 182 likes · 8.6K views
Visual Inference Lab @visinf
[1/8] We are presenting four main conference papers, two workshop papers, and a workshop at @ICCVConference 2025 in Hawaii! 🎉🏝
2 replies · 7 reposts · 32 likes · 4.9K views