Jonathan Heek
@JonathanHeek
17 posts
Joined August 2019
5 Following · 500 Followers
Jonathan Heek @JonathanHeek ·
5/5 Results on ImageNet-512: competitive FID of 1.4 with high reconstruction quality (PSNR: 25.7). On Kinetics-600 video generation: we set a new state-of-the-art FVD of 1.3. Even our small model hits 1.7 FVD. Finally, we scale to text-to-image with strong perceptual quality.
Jonathan Heek @JonathanHeek ·
4/6 This gives you a simple knob to control the reconstruction vs. modeling trade-off. Higher bitrate = better reconstruction but harder to model. Lower bitrate = easier to model but you lose fine details.
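The bitrate knob can be illustrated with a toy uniform quantizer (a hypothetical stand-in; the actual tokenizer in the paper is learned): fewer bits per latent means larger reconstruction error, more bits means a more faithful but higher-entropy code.

```python
import random

def quantize(x, bits):
    """Uniformly quantize x in [0, 1) to 2**bits levels and reconstruct."""
    levels = 2 ** bits
    return round(x * (levels - 1)) / (levels - 1)

random.seed(0)
signal = [random.random() for _ in range(10_000)]

for bits in (2, 4, 8):
    # Reconstruction error shrinks as the bitrate grows.
    mse = sum((x - quantize(x, bits)) ** 2 for x in signal) / len(signal)
    print(f"{bits} bits per latent -> reconstruction MSE {mse:.2e}")
```

The same monotone trade-off holds for a learned codec, except that the modeling difficulty of the resulting code also grows with bitrate.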
Jonathan Heek retweeted
Emiel Hoogeboom @emiel_hoogeboom ·
If diffusion models are so great, why do they require modifications to work well? Like latent diffusion and superres diffusion? Introducing "simple diffusion": a single straightforward diffusion model for high res images (arxiv.org/abs/2301.11093) . w/ @JonathanHeek @TimSalimans
Jonathan Heek retweeted
Pierre Foret @Foret_p ·
🥳 It is now super easy to fine-tune EfficientNet in FLAX! We open-sourced a FLAX version of all official EfficientNet checkpoints as a by-product of our last paper: github.com/google-researc…
Jonathan Heek retweeted
James Bradbury @jekbradbury ·
JAX on Cloud TPUs is getting a big upgrade! Come to our NeurIPS demo Tue. Dec. 8 at 11AM PT/19 GMT to see it in action, plus catch a sneak peek of a new Flax-based library for language research on TPU pods. Link: neurips.cc/ExpoConference… (neurips.cc/Register2 is still open!)
Jonathan Heek @JonathanHeek ·
@LazyOp @NalKalchbrenner Thanks for spotting that. You are correct, those terms are missing from the pseudo-code. I will make sure that this gets fixed in the revision.
LO @LazyOp ·
@NalKalchbrenner In algorithm 1 lines 10 and 11, shouldn't there be a +theta and +xi?
Jonathan Heek retweeted
Nal @nalkalc ·
Announcing exciting progress in Bayesian deep learning: the new ATMC sampler achieves first-of-its-kind Bayesian inference results on ImageNet. Check out the results and the paper 👇 Heek et al: arxiv.org/abs/1908.03491
Jonathan Heek @JonathanHeek ·
@duane_rocks @avitaloliver @DeepSpiker Actually it's both. There's uncertainty in the model outputs and uncertainty about the model parameters. Sampling is used to marginalize over the uncertainty in the model parameters to obtain predictive uncertainty.
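The marginalization step can be sketched in a few lines (hypothetical one-weight logistic model; in practice the posterior samples would come from a sampler such as ATMC): the predictive probability is the average of the per-sample predictions, which is typically less saturated than the plug-in prediction from the posterior-mean parameters.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical posterior samples of a single weight w; each sample
# defines one plausible model.
random.seed(0)
posterior_w = [random.gauss(1.0, 0.8) for _ in range(1000)]

x = 2.0  # a test input
# Marginalize over parameter uncertainty: average the per-sample predictions.
per_sample = [sigmoid(w * x) for w in posterior_w]
predictive = sum(per_sample) / len(per_sample)
# Plug-in baseline: a single model at the posterior-mean weight.
point_estimate = sigmoid(sum(posterior_w) / len(posterior_w) * x)

print(f"plug-in prediction:      {point_estimate:.3f}")
print(f"marginalized prediction: {predictive:.3f}")
```

By Jensen's inequality (sigmoid is concave on the positive logits most samples produce here), the marginalized prediction is pulled toward 0.5, which is exactly the extra predictive uncertainty contributed by parameter uncertainty.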
Avital Oliver @avitaloliver ·
Super proud of work from my teammate @JonathanHeek at Google Brain Amsterdam: it scales up Bayesian inference with a sampler that outperforms all ImageNet models that don't use batch norm. An important added benefit is accurate uncertainty estimates, verified via rigorous calibration testing.
Quoting Nal @nalkalc's ATMC announcement (see above)
Jonathan Heek @JonathanHeek ·
@goodfellow_ian @NalKalchbrenner There's definitely reason to believe that a "Bayesian discriminator" will result in a better-behaved estimate of D*. The predictions will be less saturated, potentially resulting in a better signal for the generator. An ensemble of discriminators could improve robustness further.
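A minimal sketch of the ensembling idea (hypothetical logits, not from any trained model): averaging the probabilities of several discriminators is a crude approximation to marginalizing over discriminator parameters, and the averaged estimate of D* is less saturated than the most confident individual member.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical logits from K independently trained discriminators on the
# same generated sample; individually most are very confident it is fake.
logits = [-6.0, -2.0, -4.0, -1.0, -5.0]

individual = [sigmoid(z) for z in logits]
# Average probabilities across the ensemble (a rough model average).
ensemble = sum(individual) / len(individual)

print("individual D(x):", [f"{p:.3f}" for p in individual])
print(f"ensemble D(x): {ensemble:.3f}")
```

Because the gradient of the standard GAN generator loss vanishes as D(x) approaches 0, the less-saturated ensemble estimate gives the generator a stronger training signal.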
Ian Goodfellow @goodfellow_ian ·
@NalKalchbrenner Can it make a better GAN discriminator due to better estimate of D* = p_data / (p_data + p_model)?