vislang.ai

93 posts


@vislang

Twitter account of the Vision, Language and Learning Lab, Computer Science @ Rice University.

Rice University, Houston, TX · Joined June 2020
1.2K Following · 3.1K Followers
vislang.ai retweeted
Moayed Haji Ali @moayedhajiali
Not all pixels are equally hard, but DiTs still allocate compute uniformly across pixels, wasting effort on easy regions. ELIT adds two lightweight cross-attention layers to focus compute where it matters, cutting FID by 53%. ELIT: snap-research.github.io/elit
vislang.ai retweeted
Guilherme Favaron @guifav
Diffusion transformers waste compute by treating every pixel equally, regardless of content complexity. ELIT (Elastic Latent Interface Transformer) fixes this with a simple idea: insert a variable-length set of latent tokens that learn where to spend computation. Two lightweight cross-attention layers (Read/Write) route information between spatial tokens and latents.

The model learns an importance ordering during training by randomly dropping tail latents, so earlier tokens capture global structure while later ones handle fine details.

Results on ImageNet-1K at 512px: 35.3% better FID, 39.6% better FDD scores, ~33% cheaper classifier-free guidance. Works across DiT, UViT, HDiT, and MMDiT architectures with no changes to the training objective.

By Moayed Haji Ali, @vislang (Rice University), @SergeyTulyakov, Aliaksandr Siarohin, Willi Menapace, Ivan Skorokhodov and team at @Snap. Accepted at CVPR 2026.
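The Read/Write routing with tail-latent dropping that the thread describes can be sketched in a few lines. This is a toy single-head numpy sketch, not the paper's implementation: `elit_block`, the dimensions, and the `keep` parameter are all illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    # Single-head, dimension-matched cross-attention: `queries` attend
    # to `keys_values` and read back a weighted mix of them.
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores) @ keys_values

def elit_block(spatial, latents, keep):
    # Drop tail latents: training with random tail drops is what makes
    # earlier latents carry global structure, later ones fine detail.
    latents = latents[:keep]
    # Read: latents gather information from the spatial tokens.
    latents = cross_attention(latents, spatial)
    # ... the main (expensive) transformer compute would run on `latents` here ...
    # Write: spatial tokens read back from the processed latents.
    return cross_attention(spatial, latents)

rng = np.random.default_rng(0)
spatial = rng.normal(size=(64, 16))   # 64 spatial tokens, dim 16
latents = rng.normal(size=(32, 16))   # up to 32 latent tokens
out = elit_block(spatial, latents, keep=8)
```

Shrinking `keep` shrinks the quadratic attention cost inside the block while the spatial resolution stays untouched, which is the elasticity the tweet refers to.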
vislang.ai retweeted
Zilin Xiao @ZilinXiao2
🚀 Two papers accepted to #ICLR2026 on test-time scaling for vision-language systems (retrieval + reasoning)!

1) MetaEmbed (Oral Presentation): Meta Tokens + Matryoshka multi-vector training → flexible late interaction; choose the number of vectors at test time to trade accuracy for efficiency. Paper: arxiv.org/abs/2509.18095 Work done at @AIatMeta with amazing collaborators: Qi Ma, @Mengting_Gu, Jason Chen, Xintao Chen, @vislang and @MohanVijaimohan!

2) ProxyThinker: training-free test-time guidance from small "slow-thinking" visual reasoners → self-verification / self-correction via distribution-level guidance. Paper: arxiv.org/abs/2505.24872 Work done with @JaywonK17250, @Siru_Ouyang, @jefehern, @yumeng0818 and @vislang!

While I won't be able to travel to Brazil 🇧🇷, please say hi to the team :-) #MultimodalRetrieval #VisualReasoning #VisionLanguage #TestTimeCompute #Embeddings
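The "choose the number of vectors at test time" idea can be sketched with ColBERT-style MaxSim scoring truncated to the first k Matryoshka-ordered vectors. This is a hypothetical sketch: the function name, shapes, and data are illustrative, not MetaEmbed's actual API.

```python
import numpy as np

def late_interaction_score(query_vecs, doc_vecs, k):
    # MaxSim late interaction over the first k Matryoshka-ordered
    # vectors: k is the test-time accuracy-vs-efficiency knob.
    q = query_vecs[:k]             # (k, d) query-side embeddings
    d = doc_vecs[:k]               # (k, d) document-side embeddings
    sims = q @ d.T                 # (k, k) pairwise similarities
    return sims.max(axis=1).sum()  # each query vector keeps its best match

rng = np.random.default_rng(1)
query = rng.normal(size=(16, 32))
near_dup = query + 0.01 * rng.normal(size=(16, 32))   # near-duplicate "document"
unrelated = rng.normal(size=(16, 32))                 # random "document"

score_hi = late_interaction_score(query, near_dup, k=8)
score_lo = late_interaction_score(query, unrelated, k=8)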
vislang.ai retweeted
AK @_akhaliq
MetaEmbed: Scaling Multimodal Retrieval at Test-Time with Flexible Late Interaction
vislang.ai retweeted
Aran Komatsuzaki @arankomatsuzaki
Meta Superintelligence Labs presents MetaEmbed: Scalable multimodal retrieval
• Flexible late interaction via Meta Tokens
• Test-time scaling: trade off retrieval accuracy vs efficiency
• SOTA on MMEB + ViDoRe, robust up to 32B models
• Matryoshka training → coarse-to-fine multi-vector embeddings
vislang.ai retweeted
AK @_akhaliq
CLIP-Lite: Information Efficient Visual Representation Learning from Textual Annotations. abs: arxiv.org/abs/2112.07133. CLIP-Lite obtains a +15.4% absolute mAP gain on Pascal VOC classification and a +22.1% top-1 accuracy gain on ImageNet.
vislang.ai retweeted
Zilin Xiao @ZilinXiao2
Looking for a new (image) re-ranking paradigm? Check this out! LoCoRe (Long-Context Reranker) is trained with a long-context sequence model and token-level supervision to achieve **one-pass** re-ranking for all image candidates. Catch us at #CVPR Poster Session 2 #401 on Friday, 4pm-6pm!
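The one-pass contrast the tweet draws can be illustrated with a toy ranker: every candidate gets a score from a single pass over the whole list, instead of one model call per (query, candidate) pair. The dot-product scorer below is a hypothetical stand-in; a real LoCoRe model is a long-context sequence model trained with token-level supervision.

```python
import numpy as np

def one_pass_rerank(query_emb, candidate_embs):
    # One pass over the full candidate list yields one score per
    # candidate (standing in for LoCoRe's token-level score heads),
    # rather than n separate (query, candidate) forward passes.
    scores = candidate_embs @ query_emb    # (n_candidates,)
    ranking = np.argsort(-scores)          # best-first candidate order
    return ranking, scores

rng = np.random.default_rng(2)
query = rng.normal(size=32)
candidates = rng.normal(size=(5, 32))
candidates[3] = query                      # plant a perfectly matching candidate
ranking, scores = one_pass_rerank(query, candidates)
```

The cost of the toy version is a single matrix-vector product; the point it illustrates is the interface (list in, ranking out in one pass), not the model.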
vislang.ai @vislang
Check out our new work on cross-modal audio-video generation. Our work produces audio with the best alignment to actions happening on video that we have seen. Particularly useful in this era of astounding progress in generative video models.
Moayed Haji Ali @moayedhajiali

Can pretrained diffusion models connect for cross-modal generation? 📢 Introducing AV-Link ♾ Bridging unimodal diffusion models in one framework to enable: 📽️ ➡️ 🔊 Video-to-Audio 🔊 ➡️ 📽️ Audio-to-Video 🌐: snap-research.github.io/AVLink/ 📄: hf.co/papers/2412.15… ⤵️ Results

vislang.ai retweeted
Moayed Haji Ali @moayedhajiali
Can pretrained diffusion models connect for cross-modal generation? 📢 Introducing AV-Link ♾ Bridging unimodal diffusion models in one framework to enable: 📽️ ➡️ 🔊 Video-to-Audio 🔊 ➡️ 📽️ Audio-to-Video 🌐: snap-research.github.io/AVLink/ 📄: hf.co/papers/2412.15… ⤵️ Results
vislang.ai retweeted
Reginald DesRoches @RDesRoches
Rice is shaping the future of AI! Our researchers are working on groundbreaking methods to eliminate the "weird" or distorted images that AI sometimes generates. This innovation could lead to more accurate and realistic visuals created by artificial intelligence. The future of AI-generated imagery is looking clearer and brighter thanks to our researchers! 🔍💡Read more about this fascinating research and its potential impact: bit.ly/4ezY37m #RiceU #FutureOfAI #Innovation
vislang.ai retweeted
Zilin Xiao @ZilinXiao2
I am excited to share that two of our research works will be presented at ECCV 2024. #ECCV2024 They focus on augmenting language models with fine-grained visual recognition ability.

AutoVER made successful attempts at generative visual recognition. It was accepted to the ECCV 2024 main conference and was invited to the ILR Workshop as an oral presentation. Collaboration w/ @pcascanteb @vislang #Microsoft

Extractive Reranker was accepted to the ILR Workshop as a poster. We explored how the long-context sequence modeling ability of language models can benefit image retrieval, a fundamental computer vision problem.
vislang.ai retweeted
Rice Computer Science @RiceCompSci
Rice CS welcomes Zhengzhong Tu, Texas A&M assistant professor, next Tuesday, 9/24 at 4pm in Duncan Hall 3076. Dr. Tu will discuss Democratizing Diffusion Models for Controllable & Efficient Computational Imaging. PLEASE RSVP: bit.ly/4eraBh1 @_vztu @vislang
vislang.ai retweeted
Rice Computer Science @RiceCompSci
GenAI has struggled to create consistent images, but research from Rice CS' @vislang lab could make weird AI images a thing of the past. Moayed Haji Ali and Vicente Ordóñez-Román have developed a way to improve the performance of AI diffusion models. bit.ly/3BcIlQQ
vislang.ai retweeted
Rice Computer Science @RiceCompSci
Rice CS' @cathyrzhe presented her paper, Improved Visual Grounding through Self-Consistent Explanations, at @CVPR 2024. SelfEQ helps computers ‘see’ more accurately and consistently. She is advised by faculty member Vicente Ordóñez-Román. bit.ly/4dfe9CS @vislang
vislang.ai retweeted
Ruozhen Catherine He @cathyrzhe
(1/4) Excited to share our latest work at #CVPR2024 @CVPR!🔥 Join us tomorrow, Thursday, June 20, from 10:30am to noon at Poster Session 3, # 334, to learn about "Improved Visual Grounding through Self-Consistent Explanations" with @pcascanteb, Ziyan, @alexandercberg, @vislang.
vislang.ai retweeted
harpreet @DataScienceHarp
Chatted with @cathyrzhe and @pcascanteb from @vislang at @RiceUniversity about the paper they had accepted to @CVPR. Their paper introduces Self-Consistency Equivalence Tuning (SelfEQ) to improve visual grounding in vision-and-language models using paraphrases.

The Problem: Models struggle with precise object localization when textual descriptions vary. Current methods need detailed annotations and show inconsistency with varied texts.

The Solution: SelfEQ uses paraphrases generated by a large language model and finetunes the model with GradCAM for consistent visual explanations.

How It Works
1) Start with an existing method: uses the ALBEF model without object location annotations.
2) Improvements by SelfEQ: generates paraphrases and ensures consistent visual attention maps.

Why It's Better
• Expanded Vocabulary: handles more textual descriptions.
• Improved Localization: precise and consistent without bounding box annotations.
• Efficiency: reduces the need for detailed annotations.

Key Contributions
• Introduces SelfEQ for better visual grounding.
• Uses large language models for paraphrases.
• Improves performance on benchmarks (Flickr30k, ReferIt, RefCOCO+).

#CVPR2024 #CVPR
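The consistency idea behind SelfEQ can be sketched as a penalty on disagreement between the attention maps a model produces for a phrase and for its paraphrase. This is a toy numpy sketch under stated assumptions: the function name and the cosine form are illustrative; the paper works with GradCAM heatmaps inside a full training objective.

```python
import numpy as np

def selfeq_consistency_loss(heatmap_a, heatmap_b, eps=1e-8):
    # Penalize disagreement between the visual explanation for a phrase
    # and the one for its LLM-generated paraphrase: loss is ~0 when the
    # two attention maps highlight the same region, ~1 when disjoint.
    a = heatmap_a.ravel()
    b = heatmap_b.ravel()
    cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return 1.0 - cos

phrase_map = np.zeros((7, 7)); phrase_map[2:4, 2:4] = 1.0    # attends to one region
paraphrase_same = phrase_map.copy()                          # consistent explanation
paraphrase_off = np.zeros((7, 7)); paraphrase_off[5:7, 5:7] = 1.0  # attends elsewhere
```

Minimizing this loss over paraphrase pairs pushes the model toward giving the same grounding for different wordings of the same object, without any bounding-box supervision.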
vislang.ai retweeted
Moayed Haji Ali @MoayedHaji
Great news from #CVPR2024 🎉🎉🎉 Happy to share that our paper ElasticDiffusion: Training-free Arbitrary Size Image Generation was accepted at @CVPR. Big thanks to my collaborators @bluevincent and Guha Balakrishnan. Check out more details here: elasticdiffusion.github.io
vislang.ai retweeted
Jaspreet Ranjit @jaspreetranjit_
How do biases change before and after finetuning large scale visual recognition models? Our @afciworkshop paper incorporates sets of canonical images to highlight changes in biases for an array of off-the-shelf pretrained models. #NeurIPS2023 Link: arxiv.org/abs/2303.07615
vislang.ai retweeted
Rice Computer Science @RiceCompSci
Rice CS PhD student @pcascanteb introduces a 1M-scale synthetic dataset at #iccv2023. It allows users to add synthetically generated objects like furniture & humans to an image & is the result of her collaboration with her @vislang advisor Vicente Ordóñez. bit.ly/3QIqA1p