
A new take on Fréchet Distance flips the script for generative models.
This paper shows you can train directly on an FD loss in representation space, decoupling the large 50k-sample FD estimate from the batch size needed for gradients. The payoff? Surprising boosts in visual quality, with a one-step generator hitting 0.72 FID on ImageNet 256x256.
Even more: the FD loss turns multi-step generators into strong one-step ones, with no distillation or adversarial tricks needed. Plus, they show FID can misrank sample quality and propose a new FDr^k metric computed across modern representations.
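
For intuition, here's a minimal, differentiable sketch of the Fréchet Distance between Gaussians fit to two feature batches. PyTorch is assumed, and the function names are illustrative; the feature encoder and the paper's mechanism for decoupling the 50k-sample estimate from the gradient batch are not shown here (see the full analysis below).

```python
# Minimal sketch of an FD loss in representation space (assumes PyTorch).
import torch

def _sqrtm_psd(mat: torch.Tensor) -> torch.Tensor:
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = torch.linalg.eigh(mat)
    return vecs @ torch.diag(vals.clamp(min=0).sqrt()) @ vecs.T

def frechet_distance(feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
    """FD between Gaussians fit to two (N, D) feature batches:
    ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2}).
    """
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a, cov_b = torch.cov(feats_a.T), torch.cov(feats_b.T)
    # Tr((S_a S_b)^{1/2}) via the symmetric form (S_a^{1/2} S_b S_a^{1/2})^{1/2},
    # which stays PSD, so eigvalsh applies and gradients can flow through it.
    sqrt_a = _sqrtm_psd(cov_a)
    tr_sqrt = torch.linalg.eigvalsh(sqrt_a @ cov_b @ sqrt_a).clamp(min=0).sqrt().sum()
    return (mu_a - mu_b).pow(2).sum() + cov_a.trace() + cov_b.trace() - 2 * tr_sqrt
```

In training, feats_a would come from a frozen encoder applied to generated samples (so the loss is differentiable w.r.t. the generator) and feats_b from real data; how the paper stabilizes this at small batch sizes is its contribution, not captured in this sketch.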
Get the full analysis here: yesnoerror.com/abs/2604.28190
// alpha identified
// $YNE