Ethan Epperly

517 posts

@ethanepperly

PhD candidate in applied math @Caltech and @doecsgf fellow interested in computational linear algebra he/him

Pasadena, CA · Joined July 2020
456 Following · 1.5K Followers
Pinned Tweet
Ethan Epperly@ethanepperly·
Randomized block Krylov is great at low-rank approximation. But existing analysis gives good results for only large and small block sizes, even though the best block size in practice tends to be in the middle. We resolve this theory-practice gap in arxiv.org/abs/2508.06486
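For readers who want to experiment, here is a minimal numpy sketch of randomized block Krylov low-rank approximation. This is an illustrative rendering of the general technique, not the paper's exact algorithm; the block size `b` and depth `q` are the knobs the tweet refers to, and production code would re-orthonormalize between blocks for stability.

```python
import numpy as np

def block_krylov_lowrank(A, k, b, q, seed=0):
    """Rank-k approximation of A from a randomized block Krylov subspace.

    b: block size, q: Krylov depth. The subspace is spanned by
    [A Omega, (A A^T) A Omega, ..., (A A^T)^(q-1) A Omega].
    """
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], b))  # random starting block
    block = A @ Omega
    blocks = [block]
    for _ in range(q - 1):
        block = A @ (A.T @ block)  # next Krylov block (no reorthogonalization here)
        blocks.append(block)
    Q, _ = np.linalg.qr(np.hstack(blocks))  # orthonormal basis for the subspace
    # project A onto the subspace, then truncate to rank k
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]
```

The total subspace dimension is `b * q`, so small blocks with more depth and large blocks with less depth spend the same number of matrix-vector products; the paper's question is which trade-off wins.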
Ethan Epperly@ethanepperly·
@angeris @damekdavis Hm, not sure you’re on base here. I assume you’re talking about the Bubeck §3.5 material? We study Ax=b when A is not positive definite, so matrix-vector products are not gradients of a convex quadratic. A similar result is known for Krylov algorithms, but we bound against arbitrary randomized algorithms
guille@angeris·
@damekdavis @ethanepperly wait isn’t the second result just formalizing the “propagate heat forward” hard problem that nesterov (iirc??) describes for quadratic minimization? (i haven’t read the paper but the latter result i think was known)
Ethan Epperly@ethanepperly·
New paper out with Chris Camaño, Raphael Meyer, and Joel Tropp re-examining sketching algorithms! Included: subspace injections as an alternative to subspace embeddings, the theory and practice of sparse sketching, tensor sketching, and much more! arxiv.org/abs/2508.21189
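As a taste of the sparse sketching theme, here is a small numpy sketch of a sparse sign embedding. This is my own illustrative construction, not code from the paper: each column of `S` gets `zeta` nonzero entries equal to ±1/√zeta in random rows, so applying `S` costs O(zeta·n) instead of O(m·n) for a dense sketch.

```python
import numpy as np

def sparse_sign_sketch(m, n, zeta=8, seed=0):
    """Sparse sign sketching matrix S (m x n): each column has zeta
    nonzeros, equal to +/- 1/sqrt(zeta), placed in random distinct rows."""
    rng = np.random.default_rng(seed)
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=zeta, replace=False)
        S[rows, j] = rng.choice([-1.0, 1.0], size=zeta) / np.sqrt(zeta)
    return S

# Every column has unit norm by construction, and for any fixed vector x
# the sketch preserves norms in expectation: E ||S x||^2 = ||x||^2.
```

In real implementations one would store `S` in a sparse format; the dense array here just keeps the demo short.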
Ethan Epperly@ethanepperly·
@magoghm Seems correct but also still uses a lot of explicit integrals, which I’m hoping to avoid…
Gerardo Horvilleur@magoghm·
@ethanepperly This is well beyond my math level, but here is what ChatGPT 5 Pro said about it. Is this correct?
Ethan Epperly@ethanepperly·
Anyone know an easy way of proving the identity 𝔼[g₁²/(g₁²+a²g₂²)] = 1/(1+a) where g₁, g₂ are iid standard Gaussians and a > 0? Ideally, I want an approach that avoids explicitly integrating over the Gaussian/chi-square/F pdf
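(The identity does check out numerically; here is a quick Monte Carlo sanity check, which is of course not the proof being asked for.)

```python
import numpy as np

# Monte Carlo check of E[g1^2 / (g1^2 + a^2 g2^2)] = 1/(1+a)
# for iid standard Gaussians g1, g2 and a > 0.
rng = np.random.default_rng(0)
a = 2.5
g1, g2 = rng.standard_normal((2, 10**6))
estimate = np.mean(g1**2 / (g1**2 + a**2 * g2**2))
print(estimate, 1 / (1 + a))  # the two agree to about three decimal places
```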
Ethan Epperly@ethanepperly·
@PengLiangzu This is a nice solution. Somehow, it still isn’t quite what I’m after. I’m really hoping for a simple, direct argument that uses properties of the Gaussian distribution rather than known relations to other distributions
Liangzu Peng@PengLiangzu·
chatgpt prompt: proving the identity 𝔼[g₁²/(g₁²+a²g₂²)] = 1/(1+a) without integration. (chatgpt answer in short: this quantity follows a beta distribution and then you can analyze its mean. en.wikipedia.org/wiki/Beta_dist…#Derived_from_other_distributions)
Ethan Epperly@ethanepperly

Anyone know an easy way of proving the identity 𝔼[g₁²/(g₁²+a²g₂²)] = 1/(1+a) where g₁, g₂ are iid standard Gaussians and a > 0? Ideally, I want an approach that avoids explicitly integrating over the Gaussian/chi-square/F pdf

Ethan Epperly@ethanepperly·
@mathlfs The full version of Gautschi’s bound is attained when all the locations lie on a ray in the complex plane, which shows the exponential scaling is tight. I don’t know of any lower bounds for arbitrary locations
Ethan Epperly@ethanepperly·
New blog post out! Vandermonde matrices are famously ill-conditioned, but just how bad are they? In this post, I discuss Gautschi’s 1962 bound showing that Vandermonde matrices are merely exponentially ill-conditioned ethanepperly.com/index.php/2025…
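The exponential growth is easy to observe numerically. Here is a small check using equispaced nodes on [-1, 1] (an illustrative choice of nodes, not necessarily the one from the post):

```python
import numpy as np

# Condition number of the Vandermonde matrix with n equispaced nodes on [-1, 1].
# Gautschi-type bounds say the growth is exponential in n, but no worse.
for n in (5, 10, 15, 20):
    V = np.vander(np.linspace(-1.0, 1.0, n))
    print(n, np.linalg.cond(V))
```

Each additional batch of five nodes multiplies the condition number by several orders of magnitude, consistent with exponential (but not super-exponential) growth.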
Ethan Epperly@ethanepperly·
Joint work with my awesome collaborators Tyler Chen, Akash Rao, Raphael Meyer, and Chris Musco!
Ethan Epperly@ethanepperly·
@miniapeur It’s the easy direction of Stein’s lemma. I really regard Stein’s lemma as the converse to this result
Mathieu@miniapeur·
@ethanepperly I could be wrong, but it looks a lot like Stein’s lemma?
Ethan Epperly@ethanepperly·
New blog post up about the amazingly useful Gaussian integration by parts formula! As an application, we use it to analyze power iteration from a random start ethanepperly.com/index.php/2025…
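The formula in question is Gaussian integration by parts (Stein's identity): 𝔼[g f(g)] = 𝔼[f′(g)] for a standard Gaussian g and sufficiently nice f. A quick Monte Carlo illustration with f = sin, my example rather than the post's:

```python
import numpy as np

# Gaussian integration by parts: E[g f(g)] = E[f'(g)] for g ~ N(0, 1).
# Check with f(x) = sin(x), so f'(x) = cos(x); both sides equal exp(-1/2).
rng = np.random.default_rng(0)
g = rng.standard_normal(10**6)
print(np.mean(g * np.sin(g)), np.mean(np.cos(g)))  # both ≈ 0.607
```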
Ethan Epperly@ethanepperly·
Very excited to share that I’ve been awarded a SIAM student paper prize! I look forward to seeing any of you who will be at #SIAMAN25 in Montréal. Thanks to the committee at @TheSIAMNews for this honor siam.org/publications/s…
Mark Schmidt@MarkSchmidtUBC·
@ethanepperly Nice post, especially Proposition 2. Uniform indeed often outperforms squared-norm sampling. We discuss some similar ideas in our paper (Section 4.2) where we argued that in many applications *greedy* rules may be even better: arxiv.org/abs/1612.07838
Ethan Epperly@ethanepperly·
New blog post up about the randomized Kaczmarz algorithm. The classic RK algorithm samples rows according to their squared norms, but what happens if you sample them uniformly? The answer surprised me: uniform sampling is often just as good or even better ethanepperly.com/index.php/2025…
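A minimal numpy sketch of randomized Kaczmarz supporting both sampling rules (an illustrative implementation, not the post's code):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters, sampling="squared_norm", seed=0):
    """Solve a consistent linear system Ax = b by randomized Kaczmarz.

    sampling="squared_norm": pick row i with probability ||a_i||^2 / ||A||_F^2
    (the classic rule). sampling="uniform": pick rows uniformly at random.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    p = row_norms_sq / row_norms_sq.sum() if sampling == "squared_norm" else None
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)  # p=None means uniform sampling
        # project x onto the solution hyperplane of row i: {y : a_i @ y = b_i}
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x
```

On random test problems both rules converge at comparable rates, consistent with the post's observation that uniform sampling is often just as good.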
Gautam Goel@gautamcgoel·
@ethanepperly Just wanna say these posts have been fantastic, I've learned a lot! Keep 'em coming.