Justin Domke

42 posts

@JustinDomke

Fan of high nat/flop ratios.

Joined May 2020
62 Following · 325 Followers
Justin Domke@JustinDomke·
@FelixHill84 I tend to think it's better not to do this before the rebuttal, as after the reviewers have converged it's even harder for the author response to change anyone's mind.
Felix Hill@FelixHill84·
I didn't AC for Neurips this time, but when I did before we were encouraged to get reviewers to consult with the goal of reducing variance before rebuttal. I found this to be a useful process that overall improved reviews (tho with some risks). Did anyone try this year?
Justin Domke@JustinDomke·
@MaximeVono This is a very interesting paper, thanks. Have you thought about how you might apply such ideas with (non-instrumental) hierarchical models?
Maxime Vono@MaximeVono·
specifically designed for parallel/distributed Bayesian inference. We show non-asymptotic mixing time bounds in both Wasserstein and TV distances with explicit dependencies on the dimension and regularity constants.
Maxime Vono@MaximeVono·
Our JMLR paper "Efficient MCMC Sampling with Dimension-Free Convergence Rate using ADMM-type Splitting" with D. Paulin and @ArnaudDoucet1 is now publicly available online: jmlr.org/papers/v23/20-… ! Based on an ADMM-type splitting, we build an approx. posterior (1/2)
Justin Domke@JustinDomke·
@MassRMV Thanks, FYI the reason I can't do it online is because of your requirement to use that 2-factor authentication app, which I can't run on my phone. (Calling would be fine but as I'm sure you know the wait times are crazy these days.)
Massachusetts RMV@MassRMV·
@JustinDomke The quickest and easiest way to change is online. If you are not able to call us to make the change, please mail the form to RMV, PO Box 55889, Boston, MA 02205.
Justin Domke@JustinDomke·
@MassRMV Hi, I sent a change of address form by mail a month ago. I've triple-checked that I used the right address (RMV - Address Change, PO Box 199106, Boston MA 02119-9106). A month later this was returned to me as "No mail receptacle, unable to forward". What's the deal?
Justin Domke@JustinDomke·
@MassRMV OK, thanks! But can you please answer: Do you no longer accept change of address forms by mail?
Justin Domke@JustinDomke·
@MassRMV Do you no longer accept change of address forms by mail?
Massachusetts RMV@MassRMV·
@JustinDomke That is a form we no longer use and it is not on our website. It appears some municipalities may still have it on their websites. For RMV questions and assistance, please be sure to visit our website at Mass.Gov/RMV.
Justin Domke
Justin Domke@JustinDomke·
@MassRMV Can you clarify why my letter was returned? I followed the instructions on your form and sent it to the correct address, shouldn't this work?
English
1
0
0
0
Justin Domke retweeted
Vincent Fortuin @vincefort.bsky.social
Hark, Bayesians! Everyone's favorite approximate inference symposium is back! The submission deadline is November 19th, so send us all your recent work on approximate Bayesian inference. Please share, retweet, and tell all your friends: approximateinference.org/call/
Justin Domke@JustinDomke·
@sethaxen This paper from 1995 shows that you can understand EM for Gaussian mixtures as a kind of gradient update with a positive definite conditioning matrix. It suggests this improves convergence rates. I'm sure regular gradient descent is usually fine, though! dspace.mit.edu/bitstream/hand…
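For context, here is a minimal EM loop for a two-component Gaussian mixture (a toy setup with equal weights and known unit variances; purely illustrative, and plain EM rather than the preconditioned-gradient formulation from the 1995 paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data from a two-component 1-D mixture (equal weights, unit variances).
data = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

def em_step(mu, x):
    # E-step: responsibilities under equal-weight, unit-variance components.
    log_r = -0.5 * (x[:, None] - mu[None, :]) ** 2
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted means.
    return (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

mu = np.array([-0.5, 0.5])
for _ in range(50):
    mu = em_step(mu, data)
print(mu)  # close to the true means (-2, 2)
```

The M-step here is the closed-form update; the paper's observation is that the difference between this update and the current parameters can be written as a positive definite matrix times the gradient of the log-likelihood.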
Justin Domke@JustinDomke·
@sam_power_825 @Branchini97 Thanks! In defense of the bounding perspective, in cases where you just want to maximize a likelihood and don't care about the posterior, it is nice that you can just use Jensen's inequality, and all the augmentations / couplings / etc. mostly just buy you "insight".
Sam Power@sp_monte_carlo·
@Branchini97 @JustinDomke It would be foolish of me to advocate against keeping an open mind 🙂 but I would say that for me, there have been a limited number of cases in which I preferred the bounds. Justin's work is indeed very nice (esp. with a background in Monte Carlo, extended state spaces, etc.).
Nicola Branchini@Branchini_Nic·
It seems widely reported that "the VAE lower bound (LB) is a special case of the IWAE's with K=1". But surely I can use K>1 samples with VAE - just get ∑log instead of log∑ (IWAE). One is a (biased & consistent) estimator of log p(x) while the other estimates a *LB* to log p(x).
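The distinction is easy to check numerically. A minimal sketch (toy Gaussian latent-variable model; the proposal q is deliberately mismatched and all names are illustrative): on the *same* K importance weights, the IWAE objective log-of-mean always dominates the K-sample "∑log" objective mean-of-logs, by Jensen's inequality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: z ~ N(0, 1), x | z ~ N(z, 1), so marginally p(x) = N(0, 2).
# The proposal q(z | x) = N(0.3 x, 1) is deliberately mismatched (illustrative).
x, K = 1.5, 64

def log_joint(z, x):
    return -0.5 * z**2 - 0.5 * (x - z)**2 - np.log(2 * np.pi)

def log_q(z, x):
    return -0.5 * (z - 0.3 * x)**2 - 0.5 * np.log(2 * np.pi)

z = rng.normal(0.3 * x, 1.0, size=K)        # K samples from q
log_w = log_joint(z, x) - log_q(z, x)       # log importance weights

elbo_K = log_w.mean()                            # "K-sample VAE": mean of logs
iwae_K = np.logaddexp.reduce(log_w) - np.log(K)  # IWAE: log of mean
log_px = -0.25 * x**2 - 0.5 * np.log(4 * np.pi)  # exact log N(x; 0, 2)

print(elbo_K, iwae_K, log_px)
```

The mean-of-logs estimate is an unbiased estimate of a lower bound on log p(x), while the log-of-mean is a biased-but-consistent estimate of log p(x) itself.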
Justin Domke@JustinDomke·
@mdreid Looks great! (but I hope you have a crash pad)
Mark Reid@mdreid·
Finished! Now the route setting begins…
Justin Domke@JustinDomke·
@sam_power_825 Thanks! The main limitations to watch out for are (1) the cost of doing inference repeatedly and (2) VI might make KL(q||p) low but still leave SKL(q||p) high. (I mention this in section 6.2)
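The failure mode in (2) can be seen in closed form for 1-D Gaussians (illustrative numbers): an over-confident q can have a moderate KL(q||p) while KL(p||q), and hence the symmetric divergence SKL, is much larger.

```python
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    """KL( N(m1, s1^2) || N(m2, s2^2) ) in closed form."""
    return np.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

# q is an over-confident (too narrow) approximation to p.
kl_qp = kl_gauss(0.0, 0.5, 0.0, 2.0)  # the direction VI typically makes small
kl_pq = kl_gauss(0.0, 2.0, 0.0, 0.5)  # the neglected direction
skl = kl_qp + kl_pq
print(kl_qp, kl_pq, skl)
```

Here the reverse direction is several times larger than the forward one, so a small KL(q||p) alone says little about SKL(q||p).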
Sam Power@sp_monte_carlo·
i vaguely remember reading this earlier in the year, but somehow elected not to tweet about it - rectifying that now: arxiv.org/abs/2103.01030 'An Easy to Interpret Diagnostic for Approximate Inference: Symmetric Divergence Over Simulations' - Justin Domke
Justin Domke@JustinDomke·
@CiccioneLorenzo @StanDehaene Yeah, the point about Deming regression is very nice. Your paper will be a good resource for anyone who wants to train themselves to do this kind of thing better. I may refer to it in a class I'm going to teach in a few months, so thanks again!
Lorenzo Ciccione@CiccioneLorenzo·
@JustinDomke @StanDehaene Thanks! We did not directly compare people's accuracy to existing algorithms (hint for a future study?). We just showed that people: 1) don't perform simple OLS but Deming; 2) extrapolate from non-linear trends; 3) are bad with accelerating functions (new study is coming about this!)
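A quick illustration of point (1), OLS vs. Deming regression, on synthetic data with noise of equal variance in both coordinates (the error-variance ratio delta = 1 is assumed known): OLS attenuates the slope toward zero, while the Deming (orthogonal) estimate recovers it.

```python
import numpy as np

rng = np.random.default_rng(2)

# True relation y = x, with noise of equal variance in BOTH coordinates.
t = rng.normal(0.0, 3.0, size=2000)
x = t + rng.normal(0.0, 1.0, size=2000)
y = t + rng.normal(0.0, 1.0, size=2000)

cov = np.cov(x, y)
sxx, syy, sxy = cov[0, 0], cov[1, 1], cov[0, 1]

ols_slope = sxy / sxx  # attenuated toward zero by the noise in x

# Deming (orthogonal) slope for error-variance ratio delta = 1.
deming_slope = (syy - sxx + np.sqrt((syy - sxx)**2 + 4 * sxy**2)) / (2 * sxy)

print(ols_slope, deming_slope)
```

With these variances OLS gives roughly 0.9 (the classic attenuation bias), while Deming is close to the true slope of 1.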
Justin Domke@JustinDomke·
@CiccioneLorenzo @StanDehaene Thanks for the pointer. I particularly liked your experiment 4 with the tricky extrapolation problem. I can't figure out how people compare to algorithms in terms of accuracy though. (Maybe it's in the supplementary materials?)