Xingjian Bai

9 posts

@SimulatedAnneal

Ph.D. student at @MITEECS. Previously an RA at @Oxford_VGG.

Joined August 2022
507 Following · 488 Followers
Xingjian Bai @SimulatedAnneal ·
In our formulation, image tokenization and latent generation become two sides of the same coin. One model, one stage, from scratch—no pretrained encoders needed. Especially excited about applying UNITE to modalities like molecules and crystals, where a pretrained DINO simply doesn't exist. An unforgettable collaboration with @ShivamDuggal4 and our amazing team at Adobe Research!
Shivam Duggal@ShivamDuggal4

Tokenization & Generation power Large Models. But are they really separate? Tokenization = Generation under strong observability. UNITE: an end-to-end training framework where one shared Generative Encoder (GE) performs both tokenization & latent denoising. Paper: arxiv.org/abs/2603.22283
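The quoted tweet's core claim is that one shared network can serve as both the tokenizer (clean input → latent token) and the latent denoiser. A toy NumPy sketch of that weight-sharing idea — every name, shape, and operation here is a hypothetical illustration, not UNITE's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared weight matrix plays BOTH roles: tokenizing clean inputs
# and denoising noisy latents. Dimensions are arbitrary toy values.
D_IN, D_LAT = 16, 4
W = rng.normal(scale=0.1, size=(D_IN, D_LAT))  # shared parameters

def tokenize(x):
    """Encoder role: map a clean input to a latent token."""
    return np.tanh(x @ W)

def denoise(z_noisy):
    """Denoiser role: reuse the SAME weights, lifting the noisy latent
    back toward input space (via the transpose) and re-encoding it."""
    x_hat = z_noisy @ W.T      # latent -> input space
    return np.tanh(x_hat @ W)  # re-encode into a cleaner latent

x = rng.normal(size=(1, D_IN))
z = tokenize(x)                               # tokenization pass
z_noisy = z + 0.1 * rng.normal(size=z.shape)  # corrupt the latent
z_clean = denoise(z_noisy)                    # denoising pass, same W
print(z.shape, z_clean.shape)                 # both live in the same latent space
```

The point of the sketch is only that both passes run through a single set of parameters, so improving one role directly shapes the other.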

Xingjian Bai @SimulatedAnneal ·
Trained from scratch, SCD beats previous models on all metrics at 4x lower latency. We also fine-tuned from WAN 2.1, matching the VBench performance of the best frame-wise autoregressive models while achieving 35% lower latency than Self Forcing and a >10x speedup over the original WAN 2.1.
Xingjian Bai @SimulatedAnneal ·
Do causal video diffusers really need dense causal attention at every layer, every denoising step? We looked inside and found: no. Causality is separable from denoising. Here are two surprising observations that hold across architectures, training objectives, and scales.
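"Dense causal attention" here means every token in frame t may attend to all tokens in frames ≤ t, in every layer, at every denoising step. A small illustrative mask (frame counts and token counts are made-up toy values):

```python
import numpy as np

# Frame-causal attention mask for a toy video: tokens in frame t may
# attend only to tokens in frames <= t. Sizes are illustrative only.
N_FRAMES, TOK_PER_FRAME = 3, 2
frame_id = np.repeat(np.arange(N_FRAMES), TOK_PER_FRAME)  # [0, 0, 1, 1, 2, 2]

# mask[i, j] is True where token i is allowed to attend to token j.
mask = frame_id[:, None] >= frame_id[None, :]

print(mask.astype(int))
```

Applying this full block-lower-triangular mask in every layer and step is the "dense" setting the tweet questions; the observation is that causality can be enforced more sparsely without entangling it with denoising.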