Matthew Walmer
@MatthewWalmer
23 posts
Computer Vision PhD student at University of Maryland College Park. Website: https://t.co/7rfVPC9ZUS
Joined June 2022
13 Following · 138 Followers
Matthew Walmer reposted
Soumik Mukhopadhyay @soumikkanad
Diffusion models be like: “this image is 97% noise… better process all 256×256 pixels anyway” If very noisy diffusion states contain no more useful information than a tiny downsampled image, Then why run expensive full-res computation on them? 🧵
Soumik Mukhopadhyay tweet media
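The "97% noise" framing above maps onto the standard DDPM forward process, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1−ᾱ_t)·ε. A rough sketch of why such states carry so little per-pixel signal (illustrative only; the ᾱ values and the `snr` helper are not from the thread):

```python
import numpy as np

# Illustration of the tweet's premise, not code from the paper.
# DDPM forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
# When a state is "97% noise" (1 - abar_t = 0.97), the signal-to-noise
# ratio is tiny, which motivates not spending full-resolution compute on it.

def snr(abar_t):
    """Signal-to-noise ratio of a diffusion state with cumulative alpha abar_t."""
    return abar_t / (1.0 - abar_t)

for abar in [0.99, 0.50, 0.03]:
    print(f"abar={abar:.2f}  noise fraction={1 - abar:.0%}  SNR={snr(abar):.3f}")
```

At ᾱ_t = 0.03 the SNR is about 0.03, so a tiny downsampled state preserves essentially all the recoverable information, which is the question the thread poses.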
Matthew Walmer @MatthewWalmer
@Minseok96_kr @_sakshams_ @AnirudAgg @abhi2610 Hi Minseok, UPLiFT operates on the VAE's latent features similarly to the DINO features. We do sample the features first, essentially using the features as they would be fed to the VAE decoder later.
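The reply above describes sampling the VAE latent first and then upsampling it as it would later be fed to the decoder. A heavily hedged sketch of that pipeline shape (the 4×32×32 latent, the reparameterized sampling, and the nearest-neighbor upsampler are illustrative stand-ins, not UPLiFT's actual architecture):

```python
import numpy as np

# Hypothetical sketch: sample a latent from the VAE encoder's posterior,
# then upsample the latent feature grid before it would reach the decoder.

rng = np.random.default_rng(0)

# Pretend encoder outputs for a 32x32 latent grid with 4 channels (SD-style).
mu = rng.normal(size=(4, 32, 32))
logvar = rng.normal(scale=0.1, size=(4, 32, 32))

# Reparameterized sample: z = mu + sigma * eps.
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def upsample2x(feat):
    """Nearest-neighbor 2x upsample of a (C, H, W) feature map
    (placeholder for a learned upsampler)."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

z_up = upsample2x(z)
print(z.shape, "->", z_up.shape)  # (4, 32, 32) -> (4, 64, 64)
```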
Minseok @Minseok96_kr
@MatthewWalmer @_sakshams_ @AnirudAgg @abhi2610 Super cool paper! I was wondering how the upsampling works in the VAE space. Since the encoded features from the VAE encoder are vector representations, does your method operate directly on these features? I’ve only skimmed the table and figure haha...
Matthew Walmer @MatthewWalmer
We’re excited to announce UPLiFT, our lightweight, pixel-dense feature upsampler. UPLiFT boosts feature density, preserves semantics, and has better efficiency scaling than recent SOTA methods. See all links in the thread below. Coauthors: @_sakshams_ @AnirudAgg @abhi2610 🧵[1/6]
Matthew Walmer tweet media
Matthew Walmer @MatthewWalmer
@_sakshams_ @AnirudAgg @abhi2610 In addition, UPLiFT + SD1.5 VAE achieves comparable visual quality to the state-of-the-art method FM-Boost (CFM), while using less training data, fewer parameters, and fewer inference-time iterations. 🧵[6/6]
Matthew Walmer tweet media
Matthew Walmer @MatthewWalmer
@_sakshams_ @AnirudAgg @abhi2610 We demonstrate the versatility and effectiveness of UPLiFT for both predictive and generative tasks, including semantic segmentation, depth estimation, image super-resolution, and efficient T2I generation. 🧵[5/6]
Matthew Walmer tweet media
Matthew Walmer @MatthewWalmer
@_sakshams_ @AnirudAgg @abhi2610 Through this approach, our method scales linearly with the number of visual tokens, whereas cross-attention-based upsamplers scale quadratically. This allows UPLiFT to scale up and produce denser features for larger images. 🧵[4/6]
Matthew Walmer tweet media
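The linear-vs-quadratic claim above is easy to see with back-of-the-envelope cost counts. The constants below are made up; only the growth rates reflect the tweet:

```python
# With N visual tokens, an upsampler that cross-attends every position to
# every token costs O(N^2), while a local scheme with a fixed k-neighborhood
# costs O(N * k). Illustrative operation counts, not measured FLOPs.

def cross_attention_cost(n_tokens):
    return n_tokens ** 2

def local_cost(n_tokens, k=9):  # e.g. a fixed 3x3 neighborhood
    return n_tokens * k

for side in [16, 32, 64]:  # token grid side length; N = side * side
    n = side * side
    print(f"N={n:5d}  cross-attn={cross_attention_cost(n):>12,}  local={local_cost(n):>8,}")
```

Quadrupling the token count (doubling image side length) quadruples the local cost but multiplies the cross-attention cost by sixteen, which is why the gap widens at larger resolutions.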
Matthew Walmer @MatthewWalmer
@_sakshams_ @AnirudAgg @abhi2610 UPLiFT uses iterative feature growing, which avoids the high computational costs of recent cross-attention-based methods. We also present a new Local Attender feature-pooling module, which reformulates local attention using operations based on relative directional offsets. 🧵[3/6]
Matthew Walmer tweet media
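One way to read "local attention via relative directional offsets" is to replace per-query window gathers with one shifted copy of the feature map per offset, so the whole operation becomes a handful of dense tensor ops. This is a hypothetical sketch of that idea, not the paper's actual Local Attender module:

```python
import numpy as np

# Hypothetical sketch: local attention expressed through per-offset shifts.
# Keys and values are the same map here for brevity.

def shift(feat, dy, dx):
    """Shift a (C, H, W) map by (dy, dx), zero-padding at the borders."""
    out = np.zeros_like(feat)
    H, W = feat.shape[1:]
    out[:, max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)] = \
        feat[:, max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
    return out

def local_attend(q, v, offsets=((-1, 0), (1, 0), (0, -1), (0, 1), (0, 0))):
    """Pool values over a local neighborhood: attention logits come from the
    dot product of each query with the value map shifted by each offset."""
    logits = np.stack([(q * shift(v, dy, dx)).sum(axis=0) for dy, dx in offsets])
    weights = np.exp(logits - logits.max(axis=0))
    weights /= weights.sum(axis=0)  # softmax over the offset set, per position
    return sum(w[None] * shift(v, dy, dx)
               for w, (dy, dx) in zip(weights, offsets))

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 16, 16))
v = rng.normal(size=(8, 16, 16))
out = local_attend(q, v)
print(out.shape)  # (8, 16, 16)
```

Each offset costs one dense elementwise pass over the map, so the total cost is linear in the number of positions times the (fixed) number of offsets.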
Matthew Walmer reposted
Pulkit @pulkitkumar95
🎉 Excited to share our paper "Trokens: Semantic-Aware Relational Trajectory Tokens for Few-Shot Action Recognition" has been accepted to #ICCV2025! Equally co-led with @ShuaiyiH — we advance few-shot action recognition via smart point tracking. 🔗 trokens-iccv25.github.io 🧵👇
Pulkit tweet media
Matthew Walmer reposted
Saksham Suri @_sakshams_
We are happy to release our LiFT code and pretrained models! 📢 Code: github.com/saksham-s/lift Project Page: cs.umd.edu/~sakshams/LiFT Here are some super spooky super resolved feature visualizations to make the season scarier 🎃 Coauthors: @MatthewWalmer @kamalgupta09 @abhi2610
Saksham Suri tweet media
Quoting Saksham Suri @_sakshams_:
We introduce LiFT, an easy to train, lightweight, and efficient feature upsampler to get dense ViT features without the need to retrain the ViT. Visit our poster @eccvconf #eccv2024 in Milan on Oct 1st (Tuesday), 16:30 (local), Poster: 79. Project Page: cs.umd.edu/~sakshams/LiFT
Matthew Walmer reposted
Saksham Suri @_sakshams_
We introduce LiFT, an easy to train, lightweight, and efficient feature upsampler to get dense ViT features without the need to retrain the ViT. Visit our poster @eccvconf #eccv2024 in Milan on Oct 1st (Tuesday), 16:30 (local), Poster: 79. Project Page: cs.umd.edu/~sakshams/LiFT
Saksham Suri tweet media
Matthew Walmer @MatthewWalmer
@_sakshams_ @kamalgupta09 @abhi2610 The best layer for a downstream task varies depending on both the task and the pretraining. For example, on keypoint correspondence, most of the ViTs have their best performance with layers 7 or 8 (of 12). We present comparisons for both locally and globally focused tasks. [5/5]
Matthew Walmer tweet media
Matthew Walmer @MatthewWalmer
We’re looking forward to presenting our work “Teaching Matters: Investigating the Role of Supervision in Vision Transformers” next week at #CVPR2023! We’ll be in the Tues-PM poster session at board 321. Links and some key results below. @_sakshams_ @kamalgupta09 @abhi2610 [1/5]
GIF
Matthew Walmer @MatthewWalmer
@_sakshams_ @kamalgupta09 @abhi2610 Even though MAE has no CLS objective, we find evidence that it learns to embed semantic information in the CLS token even before fine-tuning. Through CKA analysis, we find some similarity between MAE, DINO, and MoCo CLS token representations. [4/5]
Matthew Walmer tweet media
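The CKA analysis mentioned above has a compact standard form (linear CKA, from Kornblith et al.): after column-centering representations X and Y, CKA = ‖YᵀX‖²_F / (‖XᵀX‖_F · ‖YᵀY‖_F). A sketch of the measure itself, on random data rather than the paper's CLS tokens:

```python
import numpy as np

# Linear CKA, the representation-similarity measure referenced above.
# The data here is random, just to exercise the math.

def linear_cka(X, Y):
    """Linear CKA between representations X (n, d1) and Y (n, d2)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X) ** 2           # squared Frobenius norm
    return cross / (np.linalg.norm(X.T @ X) * np.linalg.norm(Y.T @ Y))

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 64))   # e.g. CLS tokens from one model
B = rng.normal(size=(100, 32))   # CLS tokens from another model

print(f"CKA(A, A)        = {linear_cka(A, A):.3f}")   # identical reps -> 1.0
print(f"CKA(A, random B) = {linear_cka(A, B):.3f}")   # unrelated reps -> low
```

CKA is invariant to isotropic scaling and orthogonal rotation of either representation, which is what makes it usable for comparing CLS tokens across differently trained models (MAE, DINO, MoCo) with no shared basis.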
Matthew Walmer @MatthewWalmer
@_sakshams_ @kamalgupta09 @abhi2610 Did you know that ViTs learn to use offset local attention heads? These heads attend locally, but to a position offset by one patch in a single direction. The existence of these heads may actually demonstrate a strength of CNNs over ViTs. [3/5]
Matthew Walmer tweet media
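The "offset head" observation above can be quantified by comparing each query patch's position with the attention-weighted mean position of the keys it attends to: a head whose mean attended position sits one patch away in a fixed direction is an offset head. A hypothetical diagnostic, with a synthetic attention map standing in for a real ViT head:

```python
import numpy as np

# Hypothetical diagnostic for offset local attention heads: measure the
# average displacement between each query patch and the attention-weighted
# mean position of its keys. A synthetic map stands in for a real head.

def mean_attention_offset(attn, grid):
    """attn: (grid*grid, grid*grid) row-stochastic attention over patches.
    Returns the (dy, dx) displacement averaged over all query positions."""
    ys, xs = np.divmod(np.arange(grid * grid), grid)
    attended_y = attn @ ys   # attention-weighted mean key row, per query
    attended_x = attn @ xs   # attention-weighted mean key column, per query
    return (attended_y - ys).mean(), (attended_x - xs).mean()

# Build a synthetic "offset head": every patch attends entirely to the patch
# one column to its right (clamped at the border).
grid = 8
attn = np.zeros((grid * grid, grid * grid))
for i in range(grid * grid):
    y, x = divmod(i, grid)
    attn[i, y * grid + min(x + 1, grid - 1)] = 1.0

dy, dx = mean_attention_offset(attn, grid)
print(f"mean offset: dy={dy:.2f}, dx={dx:.2f}")  # dx near +1, dy near 0
```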
AerIn @aerinykim
Before I forget, I'd like to summarize some interesting papers that I found at #CVPR2022. Dual-key multimodal backdoors for visual question answering arxiv.org/abs/2112.07668 1. This paper proposes an interesting Trojan attack method. To start, what exactly is a Trojan attack?
AerIn tweet media