Nikita Araslanov

70 posts


@neekans

Researcher at University of Oxford / TU Munich

Munich · Joined February 2009
317 Following · 304 Followers
Nikita Araslanov retweeted
Dima Damen@dimadamen·
📢 Applications are now open for PhD visiting students @bristolcs @BristolUni in 2026 - DL 29 Jan. Would you like to work with any of the Faculty working in Machine Learning and Computer Vision #mavi as part of our summer of research @ Bristol program? uob-mavi.github.io/Summer@MaVi.html
Kosta Derpanis@CSProfKGD·
Cool idea @neekans!❤️ Reminded me of some of @Michael_J_Black’s work on steerable flow fields, e.g. MJ Black, Y Yacoob, AD Jepson, DJ Fleet, Learning parameterized models of image motion, CVPR, 1997 💪
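For context on that reference, here is a minimal, hypothetical sketch of the core idea of parameterized motion models: a flow field is represented by a few coefficients over a small set of basis flow fields. The PCA construction and the toy affine flows below are stand-ins of my own, not the 1997 paper's formulation.

```python
import numpy as np

# Toy setup: affine flow fields on a 32x32 grid.
rng = np.random.default_rng(0)
H, W = 32, 32
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")

def affine_flow(a):
    # u(x, y) and v(x, y) are affine in the pixel coordinates.
    u = a[0] + a[1] * xs + a[2] * ys
    v = a[3] + a[4] * xs + a[5] * ys
    return np.stack([u, v]).ravel()

# "Training" flows: 200 random affine fields (stand-ins for real data).
train = np.stack([affine_flow(rng.normal(size=6) * 0.1) for _ in range(200)])
mean = train.mean(axis=0)

# Learn a small basis of flow fields via PCA (SVD of the centered flows).
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:6]                      # 6 basis fields, shape (6, 2*H*W)

# A new flow is then summarized by 6 coefficients instead of 2*H*W numbers.
new = affine_flow(rng.normal(size=6) * 0.1)
coeffs = basis @ (new - mean)
recon = mean + coeffs @ basis
print("max reconstruction error:", np.abs(recon - new).max())
```

Because affine flows span a six-dimensional space, the six-component basis reconstructs new flows almost exactly; that compactness is the point of parameterized motion models.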
Kosta Derpanis@CSProfKGD·
Afternoon e-reading
Kosta Derpanis@CSProfKGD·
Repeat after me: This is NOT a paper contribution, it’s an expected component!
Nikita Araslanov@neekans·
📢 NeurIPS 2025 Spotlight 📢 Can we embed motion into image representations? Trained on videos, FlowFeat embeds optical flow into pixel-level representations (up to a linear transform), which results in sharp feature grids, especially for dynamic objects. We demonstrate benefits for
⭐️ video object segmentation;
⭐️ semantic segmentation;
⭐️ and monocular depth.
Paper: arxiv.org/abs/2511.07696
Project website: tum-vision.github.io/flowfeat
Code and models: github.com/tum-vision/flo…
Joint work with Anna Sonnweber and Daniel Cremers @tumcvg and @MunichCenterML. Come by our poster @NeurIPSConf on Thursday (Exhibit Hall C, D, E #4816)!
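To unpack "up to a linear transform": a hedged sketch of a linear probe that reads optical flow back out of per-pixel features. Everything below (shapes, noise level, the synthetic data) is illustrative, not the FlowFeat training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 5000, 64                     # pixels and feature dimension (assumed)
F = rng.normal(size=(N, D))         # stand-in for per-pixel features
W_true = rng.normal(size=(D, 2))
Y = F @ W_true + 0.01 * rng.normal(size=(N, 2))  # synthetic 2-D flow targets

# Least-squares linear probe: if flow is embedded "up to a linear
# transform", this cheap readout suffices to recover it from the features.
W, *_ = np.linalg.lstsq(F, Y, rcond=None)
epe = np.linalg.norm(F @ W - Y, axis=1).mean()
print("mean endpoint error of the linear probe:", epe)
```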
Nikita Araslanov retweeted
AI Bites | YouTube Channel
FlowFeat distills optical flow networks into versatile, task-agnostic pixel-level representations. Using motion-driven embedding statistics, it achieves high spatial precision and temporal consistency. Paper title: FlowFeat: Pixel-Dense Embedding of Motion Profiles. Project: tum-vision.github.io/flowfeat Link: arxiv.org/abs/2511.07696 #Video #Motion #MotionGraphics #AI #AIイラスト
Nikita Araslanov@neekans·
#ICCV2025 Spotlight talk (SP4V Workshop) Training on videos should yield more 3D-aware models than images — but it doesn’t! Presenting The Diashow Paradox in our talk, 16:15 @ 323A. With Tien Duc Nguyen (@hoaquin10), Anna Sonnweber, Mark Weber, Daniel Cremers (@tumcvg)
Nikita Araslanov retweeted
TUM Computer Vision Group
@tumcvg goes #ICCV2025 in Hawaii! 🛫🌋 We are very proud of our students who will present five papers (+ 1 workshop) during the conference! In particular, check out Back-on-track, which is an award candidate. (Congrats @wrchen530!)
Nikita Araslanov@neekans·
The implication is quite exciting: unsupervised classifiers. We can assign semantic labels to visual data without any paired data, at least for some semantic concepts. #CVPR2025
Dominik Schnaus@dominik_schnaus

Can we match vision and language representations without any supervision or paired data? Surprisingly, yes!  Our #CVPR2025 paper with @neekans and Daniel Cremers shows that the pairwise distances in both modalities are often enough to find correspondences. ⬇️1/4

Nikita Araslanov retweeted
Dominik Schnaus@dominik_schnaus·
Can we match vision and language representations without any supervision or paired data? Surprisingly, yes!  Our #CVPR2025 paper with @neekans and Daniel Cremers shows that the pairwise distances in both modalities are often enough to find correspondences. ⬇️1/4
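A toy illustration of that premise (not the paper's solver): if the two modalities share pairwise-distance structure, the correspondence can be recovered by aligning the distance matrices alone, with no paired data. The brute-force search below only works for tiny n; at scale this is a quadratic assignment problem.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n, d = 6, 16
Z = rng.normal(size=(n, d))                     # shared "concept" geometry
vision = Z + 0.05 * rng.normal(size=(n, d))     # noisy vision embeddings
perm = rng.permutation(n)
language = Z[perm] + 0.05 * rng.normal(size=(n, d))  # shuffled language side

def pdist(X):
    # Full matrix of pairwise Euclidean distances.
    return np.linalg.norm(X[:, None] - X[None, :], axis=-1)

Dv, Dl = pdist(vision), pdist(language)
# Search for the permutation that best aligns the two distance matrices.
best = min(permutations(range(n)),
           key=lambda p: np.sum((Dv - Dl[np.ix_(p, p)]) ** 2))
print("recovered matching:", best)
print("ground truth:      ", tuple(np.argsort(perm)))
```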
Zhifan Zhu@zhifan_zhu·
@ducha_aiki @felixwimbauer @neekans Do they really decouple the static points from the dynamic ones cleanly? In Eq. (1) they have `m`, a floating-point number in [0, 1]; I'm curious what the m = 0.5 points look like.
Nikita Araslanov@neekans·
@SattlerTorsten @ducha_aiki @felixwimbauer There is still more work to be done: masking out dynamic points works slightly better for VO. However, it’s surprising (at least to me) that a point tracker can disentangle the dynamic and static components from the observed motion (which can be really complex).
Nikita Araslanov@neekans·
@SattlerTorsten @ducha_aiki @felixwimbauer You’re quite right about previous work! However, we don’t filter out dynamic points. Instead, we remove the dynamic motion component of those points, keeping the static one (due to camera motion) and pass all points to BA.
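A rough sketch of the scheme under discussion, with my own assumed notation (the m-weighted blend below is a guess at the Eq. (1) form from the thread, not the paper's actual model): rather than discarding a dynamic point, subtract its dynamic motion component and pass the camera-only residual to bundle adjustment.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10                                           # frames in one track
static = np.cumsum(rng.normal(0, 0.5, (T, 2)), axis=0)   # camera-induced motion
dynamic = np.cumsum(rng.normal(0, 1.0, (T, 2)), axis=0)  # object-induced motion
m = 0.7                                          # dynamicness weight in [0, 1]
observed = static + m * dynamic                  # assumed Eq. (1)-style blend

# Remove the (estimated) dynamic component instead of dropping the point,
# then hand the compensated track to bundle adjustment.
dynamic_est = dynamic + 0.05 * rng.normal(size=(T, 2))   # tracker's estimate
compensated = observed - m * dynamic_est
print("max residual vs. true static track:",
      np.abs(compensated - static).max())
```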
Nikita Araslanov retweeted
Visual Inference Lab@visinf·
📢 #CVPR2025: Scene-Centric Unsupervised Panoptic Segmentation 🔥 We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery. Using self-supervised features, depth & motion, we achieve SotA results! 🌎 visinf.github.io/cups
Nikita Araslanov retweeted
TUM Computer Vision Group
📣 #CVPR2025 (Highlight): Scene-Centric Unsupervised Panoptic Segmentation. Check out our recent CVG paper on unsupervised panoptic segmentation! 🚀
Visual Inference Lab@visinf

📢 #CVPR2025: Scene-Centric Unsupervised Panoptic Segmentation 🔥 We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery. Using self-supervised features, depth & motion, we achieve SotA results! 🌎 visinf.github.io/cups

Nikita Araslanov retweeted
TUM Computer Vision Group
We are thrilled that our group has twelve papers accepted at #CVPR2025! 🚀 Congratulations to all of our students for this great achievement! 🎉 For more details, check out: cvg.cit.tum.de