Ian Covert
@ianccovert

39 posts

Postdoc @Stanford, previously @uwcse @GoogleAI and @Columbia. Interested in deep learning and explainable AI

Palo Alto, CA · Joined February 2017
154 Following · 368 Followers
Pinned Tweet
Ian Covert @ianccovert ·
Making this class with Su-In, Hugh and Chris was one of the most fun things I did in grad school. We covered a ton of material; definitely check out all the slides we made: courses.cs.washington.edu/courses/csep59… I'm excited to see how the course evolves in the next couple of years!
Ian Covert retweeted
Sahil Verma @Sahil1V ·
📣 📣 📣 Our new paper investigates the question of how many images 🖼️ of a concept are required by a diffusion model 🤖 to imitate it. This question is critical for understanding and mitigating the copyright and privacy infringements of these models! arxiv.org/abs/2410.15002
[image]
Ian Covert retweeted
James Zou @james_y_zou ·
Very excited to introduce locality alignment, an efficient post-training algorithm to improve your ViTs + VLMs, essentially for free 🚀 Locality alignment = a new self-supervised objective ensuring that the encoder captures fine-grained spatial info. No new data needed. Here's the idea 1/3
[image]
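The thread doesn't include the details here, but as a rough sketch of what a patch-level self-supervised objective might look like: fine-tune the encoder so that its patch embeddings, combined with a random patch mask, can predict a frozen copy's output on the masked view. Everything below (the `student`/`teacher`/`decoder` interfaces, the masking scheme, the MSE loss) is an illustrative assumption, not the paper's actual procedure.

```python
import torch
import torch.nn.functional as F

def alignment_step(student, teacher, decoder, images, optimizer, mask_prob=0.5):
    """Hypothetical fine-tuning step: a frozen teacher encodes a randomly
    masked view of the image; the student encodes the full image, and a small
    decoder must reconstruct the teacher's per-patch output from the student's
    embeddings plus the mask. Both encoders are assumed to return embeddings
    of shape (B, num_patches, dim); `num_patches` is an assumed attribute."""
    B, n = images.shape[0], student.num_patches
    mask = (torch.rand(B, n, device=images.device) < mask_prob).float()

    with torch.no_grad():
        target = teacher(images, patch_mask=mask)   # frozen teacher, masked view

    pred = decoder(student(images), mask)           # student sees the full image
    loss = F.mse_loss(pred, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The appeal of an objective shaped like this is that the supervision comes from the model itself, which would match the "no new data needed" claim.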
Ian Covert retweeted
Soham Gadgil @soham_gadgil ·
How can you perform dynamic feature selection without making assumptions about the data distribution or fitting generative models? We develop a learning approach that estimates the conditional mutual information in a discriminative fashion to select features. arxiv.org/pdf/2306.03301…
[image]
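As a rough illustration of the idea (not the paper's implementation), a learned CMI estimator can drive greedy selection at prediction time: score every unrevealed feature, reveal the best one, repeat. The `cmi_net` and `predictor` interfaces and the fixed budget are hypothetical stand-ins.

```python
import torch

def select_and_predict(x, cmi_net, predictor, budget=10):
    """Greedy dynamic feature selection sketch. At each step, a learned
    network scores every feature by its estimated conditional mutual
    information with the label, given the features revealed so far; the
    top-scoring feature is revealed next. `cmi_net(x_obs, mask)` returning
    per-feature scores is an assumed interface."""
    mask = torch.zeros(x.shape[-1])                # 1 = feature revealed
    for _ in range(budget):
        scores = cmi_net(x * mask, mask)           # estimated CMI per feature
        scores = scores.masked_fill(mask.bool(), -float("inf"))  # skip revealed
        mask[scores.argmax()] = 1.0
    return predictor(x * mask, mask)               # predict from revealed features
```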
Ian Covert @ianccovert ·
Large models are tough because you may not be able to query the model thousands of times to get attributions (e.g., with KernelSHAP). This is something we've tackled in a couple of other papers: FastSHAP (ICLR'22): arxiv.org/abs/2107.07436 and ViT Shapley (ICLR'23): arxiv.org/abs/2206.05282
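To see where that cost comes from, here is a simplified KernelSHAP-style estimator (unconstrained weighted regression with an intercept rather than the exact constrained solve): every sampled coalition costs one model query, which is exactly what the amortized methods above avoid. `value_fn(mask)` is an assumed masking interface, not a real library API.

```python
import numpy as np
from math import comb

def kernel_shap_sketch(value_fn, n, num_samples=2000, seed=0):
    """Simplified KernelSHAP-style estimate. Each sampled coalition requires
    one call to `value_fn` (the model restricted to the unmasked features),
    so good estimates can take thousands of queries per explanation."""
    rng = np.random.default_rng(seed)
    Z, y, w = [], [], []
    for _ in range(num_samples):
        s = int(rng.integers(1, n))                       # coalition size, 0 < s < n
        mask = np.zeros(n)
        mask[rng.choice(n, s, replace=False)] = 1.0
        Z.append(mask)
        y.append(value_fn(mask))                          # one model query per sample
        w.append((n - 1) / (comb(n, s) * s * (n - s)))    # Shapley kernel weight
    Z = np.column_stack([np.ones(num_samples), np.array(Z)])  # intercept + coalitions
    sw = np.sqrt(np.array(w))[:, None]                    # fold weights into lstsq
    coef, *_ = np.linalg.lstsq(sw * Z, sw[:, 0] * np.array(y), rcond=None)
    return coef[1:]                                       # per-feature attributions
```

Amortized methods like FastSHAP and ViT Shapley instead train an explainer network once, so a new explanation costs a single forward pass.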
Ian Covert @ianccovert ·
In our recent Nature MI paper, we looked at the surprising number of algorithms that estimate Shapley values (whose computation scales exponentially with the number of players). There are a lot; we counted at least 24 papers on this topic! Paper: rdcu.be/dcICX
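For reference, the workaround most of those algorithms build on is sampling, e.g., averaging marginal contributions over random player orderings instead of enumerating all 2^n coalitions. A minimal sketch, assuming a `value_fn` that scores any coalition:

```python
import numpy as np

def shapley_mc(value_fn, n, num_perms=500, seed=0):
    """Monte Carlo Shapley estimate: draw random orderings, add players
    one at a time, and credit each player with the resulting change in
    value. Uses num_perms * n coalition evaluations instead of 2^n."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(n)
    for _ in range(num_perms):
        mask = np.zeros(n)
        prev = value_fn(mask)              # value of the empty coalition
        for i in rng.permutation(n):
            mask[i] = 1.0
            cur = value_fn(mask)
            phi[i] += cur - prev           # marginal contribution of player i
            prev = cur
    return phi / num_perms
```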
Ian Covert @ianccovert ·
Our experiments used three image datasets and models as big as ViT-Large (arXiv needs to be updated), but there's still plenty of room to scale this up. My guess is that 1) it gets better with more data, and 2) it can help learn better representations than the original task (8/n)
Ian Covert @ianccovert ·
The question we’re trying to answer is *which patches influence the prediction.* And Shapley values are a surprisingly simple approach: they're like leave-one-out, but the effect of removing a patch is averaged across all sets of preceding patches (3/n)
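That description translates directly into the exact (exponential-time) computation. A minimal sketch, assuming a `value_fn` over coalitions of patch indices:

```python
from itertools import combinations
from math import factorial

def exact_shapley(value_fn, n):
    """Exact Shapley values, following the description above: patch i's
    attribution is the leave-one-in effect v(S + {i}) - v(S), averaged
    over all sets S of preceding patches via the usual combinatorial
    weights. Requires exponentially many calls to `value_fn`, hence the
    need for a more efficient approach."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                coalition = set(S)
                phi[i] += weight * (value_fn(coalition | {i}) - value_fn(coalition))
    return phi
```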
Ian Covert @ianccovert ·
TL;DR: we fine-tune a ViT to directly predict its Shapley values, without using a dataset of ground truth examples. And it works quite well (2/n)
[image]
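The tweet doesn't spell out the objective, but one way to train such an explainer without ground-truth labels (loosely in the spirit of amortized approaches like FastSHAP) is a regression loss on randomly sampled patch coalitions. Everything below, including the `value_fn` masking interface and the omitted efficiency normalization, is a simplified assumption rather than the paper's exact procedure.

```python
import torch

def amortized_shapley_step(explainer, value_fn, images, optimizer, n_patches):
    """Hypothetical training step for an amortized explainer. The explainer
    outputs one value per patch; on coalitions S sampled from the
    Shapley-kernel distribution, the selected values should sum to
    v(S) - v(empty). `value_fn(images, mask)` is an assumed interface
    giving the model's prediction with only the unmasked patches."""
    B = images.shape[0]
    phi = explainer(images)                            # (B, n_patches)

    # Sample coalition sizes with p(s) proportional to 1/(s(n-s)),
    # then a uniform random subset of that size per image.
    sizes = torch.arange(1, n_patches)
    probs = 1.0 / (sizes * (n_patches - sizes))
    s = sizes[torch.multinomial(probs / probs.sum(), B, replacement=True)]
    mask = torch.zeros(B, n_patches)
    for b in range(B):
        mask[b, torch.randperm(n_patches)[: s[b]]] = 1.0

    with torch.no_grad():
        target = value_fn(images, mask) - value_fn(images, torch.zeros_like(mask))
    loss = ((target - (mask * phi).sum(dim=1)) ** 2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key property of a setup like this is that the regression targets come from the model's own masked predictions, so no ground-truth Shapley values are ever computed.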
Ian Covert @ianccovert ·
If you want to know what your ViT pays attention to...you might not want to use attention values! Shapley values can do this better, and now they can even do it efficiently. Check out our new paper (ICLR spotlight) arxiv.org/abs/2206.05282 🧵⬇️
[image]