LeapLab@THU
@LeapLabTHU
15 posts
Tsinghua University, China · Joined December 2023
44 Following · 109 Followers
LeapLab@THU retweeted
AK @_akhaliq ·
Microsoft releases ART: Anonymous Region Transformer for Variable Multi-Layer Transparent Image Generation
3 replies · 30 reposts · 146 likes · 18.7K views
LeapLab@THU @LeapLabTHU ·
🚀 Excited to share our work on #ECCV2024: "AdaNAT: Exploring Adaptive Policy for Token-Based Image Generation". 🖼️ We introduce AdaNAT, a novel approach for efficient and high-quality image generation using adaptive policies in Non-autoregressive Transformers.
[image]
2 replies · 1 repost · 5 likes · 458 views
LeapLab@THU @LeapLabTHU ·
🔑 Key features:
- Learnable policy network for adaptive modulation of token generation
- Adversarial reward model for improved quality and diversity
- Significantly reduced inference time compared to diffusion models
📊 Impressive results on ImageNet, MSCOCO, and CC3M datasets!
[4 images]
0 replies · 1 repost · 2 likes · 205 views
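For readers unfamiliar with token-based (non-autoregressive) image generation, here is a toy sketch of the iterative parallel decoding that an adaptive policy like AdaNAT's could modulate. The function names, the cosine unmasking schedule, and the choice of `(num_steps, temperature)` as the policy's outputs are illustrative assumptions, not the paper's actual implementation:

```python
import math
import random

def cosine_unmask_schedule(num_steps):
    """Fraction of tokens still masked after each step (a cosine schedule,
    as commonly used in masked non-autoregressive generators)."""
    return [math.cos(math.pi / 2 * (t + 1) / num_steps) for t in range(num_steps)]

def generate(num_tokens, num_steps, temperature, rng):
    """Toy iterative parallel decoding: each step reveals the most
    'confident' masked positions until none remain."""
    tokens = [None] * num_tokens  # None = still masked
    for frac in cosine_unmask_schedule(num_steps):
        keep_masked = int(frac * num_tokens)
        masked = [i for i, t in enumerate(tokens) if t is None]
        # stand-in for model confidences; temperature flattens or sharpens them
        conf = {i: rng.random() ** (1.0 / temperature) for i in masked}
        reveal = sorted(masked, key=conf.get, reverse=True)[: len(masked) - keep_masked]
        for i in reveal:
            tokens[i] = rng.randrange(1024)  # sample a codebook token id
    return tokens

# A learned policy would output (num_steps, temperature) per sample
# instead of fixing them globally:
rng = random.Random(0)
out = generate(num_tokens=256, num_steps=8, temperature=1.5, rng=rng)
assert all(t is not None for t in out)  # all positions decoded in 8 steps
```

The point of making these knobs per-sample is that easy images can finish in fewer steps, which is where the inference-time savings over diffusion models come from.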
LeapLab@THU retweeted
AK @_akhaliq ·
ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models. High-resolution Large Multimodal Models (LMMs) encounter the challenges of excessive visual tokens and quadratic visual complexity. Current high-resolution LMMs address the quadratic…
[image]
1 reply · 24 reposts · 115 likes · 16.9K views
LeapLab@THU retweeted
Chunjiang Ge @GeChunjiang ·
📢 Excited to share our recent work on Large Multimodal Models: ConvLLaVA. Without encoding multiple image patches or using multiple encoders, we use a hierarchical backbone, ConvNeXt, to realize high-resolution understanding. arxiv.org/pdf/2405.15738
[image]
1 reply · 1 repost · 3 likes · 267 views
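Why a hierarchical backbone helps at high resolution comes down to token-count arithmetic: a plain ViT emits one token per patch, while a 4-stage hierarchical encoder like ConvNeXt downsamples 32× overall. A minimal sketch of that arithmetic (resolutions and patch sizes here are illustrative, not the paper's exact configurations):

```python
def vit_tokens(resolution, patch=14):
    # A ViT encoder emits one token per patch: (H/p) * (W/p)
    return (resolution // patch) ** 2

def hierarchical_tokens(resolution, total_stride=32):
    # A 4-stage hierarchical backbone (e.g. ConvNeXt) downsamples 32x overall,
    # so the final feature map is (H/32) x (W/32)
    return (resolution // total_stride) ** 2

# At 336px a patch-14 ViT already produces 576 visual tokens...
assert vit_tokens(336) == 576
# ...while a /32 hierarchical encoder only reaches 576 tokens at 768px:
assert hierarchical_tokens(768) == 576
# At 1536px the gap grows quadratically with resolution:
print(vit_tokens(1536), hierarchical_tokens(1536))  # 11881 vs 2304
```

Since LMM attention cost scales quadratically in the number of visual tokens, cutting the token count this way is what keeps high-resolution inputs tractable.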
LeapLab@THU @LeapLabTHU ·
EfficientTrain++ is accepted by TPAMI 2024 🤩 🔥 An off-the-shelf, easy-to-implement algorithm for training foundation visual backbones efficiently! 🔥 1.5−3.0× lossless training/pre-training speedup on ImageNet-1K/22K! Paper & Code: arxiv.org/abs/2405.08768 github.com/LeapLabTHU/Eff…
[4 images]
0 replies · 4 reposts · 7 likes · 918 views
LeapLab@THU @LeapLabTHU ·
Excited to share our #NeurIPS2023 spotlight paper! 🌟 It proposes a novel offline-to-online RL algorithm, efficiently utilizing collected samples by training a family of policies offline and selecting suitable ones online. Check out our paper for details! arxiv.org/abs/2310.17966
[image]
0 replies · 4 reposts · 7 likes · 555 views
LeapLab@THU retweeted
Rui Lu @RayLu_THU ·
Check us out at our #NeurIPS2023 poster! We investigate the Q-value divergence phenomenon in offline RL and find self-excitation to be the main cause. Using LayerNorm in RL models can fundamentally prevent this from happening. arxiv.org/pdf/2310.04411…
[image]
1 reply · 4 reposts · 8 likes · 527 views
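The intuition behind the LayerNorm claim can be shown in a few lines: LayerNorm output is (nearly) invariant to the scale of its input, so even if pre-norm activations self-amplify, the features a Q-head sees stay bounded. This is a minimal illustration of that property, not the paper's implementation (gamma=1, beta=0 assumed):

```python
import math

def layer_norm(x, eps=1e-5):
    """Plain LayerNorm over a feature vector (gamma=1, beta=0)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

x = [0.5, -1.2, 3.3, 0.7]
# Scaling the input 1000x (mimicking a self-excited blow-up) barely changes
# the normalized output: LayerNorm is scale-invariant up to eps.
a = layer_norm(x)
b = layer_norm([1000 * v for v in x])
assert all(abs(u - w) < 1e-4 for u, w in zip(a, b))
# So a Q-head fed LayerNorm-ed features sees bounded inputs even when
# earlier activations diverge, breaking the self-excitation loop.
```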