Daniel Shao
@Daniel__Shao
17 posts
Joined August 2020
130 Following · 51 Followers
Daniel Shao retweeted
Andrew H. Song @GreatAndrew90 ·
Check out our latest work deriving crucial insights into the transferability of multiple instance learning (MIL) models in #computationalpathology, to be presented at @icmlconf (Spotlight)! Kudos to @Daniel__Shao who led the study 😎
Faisal Mahmood @AI4Pathology

📣 Excited to share our new ICML 2025 Spotlight article, “Do Multiple Instance Learning Models Transfer?” – addressing a foundational question for building robust and generalizable MIL models. Read the article: arxiv.org/pdf/2506.09022
👉 Enhanced Performance & Robustness: Pretrained MIL models consistently lead to improved performance, even when the pre-training data comes from a different organ, site, or disease model than the target task.
👉 Aggregation Transfers: Transfer gains come from the MIL aggregation module, not just patch encoders. Resetting attention layers drops performance by 5–8%, showing they encode generalizable pooling logic.
👉 Pancancer Generalization: Models pretrained on more diverse and challenging data (e.g. a 108-class pancancer classification task) achieve the strongest overall transfer performance.
👉 Robust Benefits Across Patch Encoders: Benefits from MIL transfer are consistent across a wide range of patch encoders, from out-of-domain encoders such as ResNet50 pretrained on natural images to in-domain encoders including Gigapath and UNIv2.
This research highlights supervised pretraining as a highly accessible path to generalizable MIL models, offering a data- and compute-efficient route for developing slide-level encoders with a flexible combination of MIL method and patch encoder. Congratulations to @Daniel__Shao @GreatAndrew90 @richardjchen and everyone else who contributed. Stay tuned for an array of pre-trained MIL models ready to transfer to any task! Visit us at @icmlconf.
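The attention-reset ablation mentioned above can be illustrated with a minimal attention-based MIL pooling sketch. This is a hypothetical toy in NumPy, not the paper's actual architecture or weights: the pooling function, dimensions, and the "pretrained" parameters are all stand-ins chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(patch_feats, W_attn, w_score):
    """Attention-based MIL pooling (toy): score each patch embedding,
    softmax the scores, and return the weighted average as the
    slide-level embedding."""
    h = np.tanh(patch_feats @ W_attn)      # (n_patches, d_attn)
    scores = h @ w_score                   # (n_patches,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax attention weights
    return weights @ patch_feats           # (d_feat,) slide embedding

n_patches, d_feat, d_attn = 50, 16, 8
patch_feats = rng.normal(size=(n_patches, d_feat))  # frozen patch features

# "Pretrained" attention parameters (stand-ins for transferred weights)
W_pre = rng.normal(size=(d_feat, d_attn))
w_pre = rng.normal(size=d_attn)
slide_pretrained = attention_pool(patch_feats, W_pre, w_pre)

# Ablation: reset the attention layer to a fresh random init,
# keeping the patch features unchanged
W_reset = rng.normal(size=(d_feat, d_attn))
w_reset = rng.normal(size=d_attn)
slide_reset = attention_pool(patch_feats, W_reset, w_reset)

# The two slide embeddings differ even on identical patch features,
# showing that the aggregator itself shapes the slide representation
print(np.linalg.norm(slide_pretrained - slide_reset))
```

The point of the sketch: with patch features held fixed, swapping the attention parameters changes the slide-level embedding, which is the quantity the ablation in the paper perturbs when it resets attention layers.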

Daniel Shao @Daniel__Shao ·
Thrilled to share my latest PhD work exploring fundamental questions about the transferability of multiple instance learning models in #computationalpathology, accepted as a spotlight paper at @icmlconf! Many thanks to @richardjchen @GreatAndrew90 @AI4Pathology and all coauthors.
Faisal Mahmood @AI4Pathology

[Quoted tweet – same announcement text as above]

Daniel Shao retweeted
Faisal Mahmood @AI4Pathology ·
[Same announcement text as quoted above]
[media attachment]
Daniel Shao retweeted
Anant Madabhushi @anantm ·
Tips on writing an NIH R01 on #AI. In the last 6 weeks our group landed 4 @NIH / @theNCI R & U grants on #AI in #imaging & #medicine. I got a number of enquiries about grant-writing tips. I share some thoughts below. Also a free offer: happy to review & provide feedback on your Specific Aims page.
[3 media attachments]
Daniel Shao retweeted
Cambridge Fire Dept. @CambridgeMAFire ·
II) 2 alarms, Box 2-38: Arriving to heavy smoke from the building, firefighters rescued 1 trapped resident (with a non-life-threatening injury) from floor 3 over a 35' ladder in a rear alley in the smoke. All other residents & 1 cat were evacuated safely. All 8 residents were displaced.
[3 media attachments]
Daniel Shao @Daniel__Shao ·
@koyuturkm @Mustafa33467434 This reminds me of what you told us in class: to try simple methods first rather than use complex methods for complexity's sake
Mehmet Koyutürk @koyuturkm ·
Say we would like to use Graph Convolutional Networks for some link prediction task that involves biological/biomedical networks. Is the degree-normalized adjacency matrix (as commonly utilized in the literature) the best convolution matrix we can use for this purpose?
[media attachment]
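The degree-normalized adjacency matrix the question refers to can be sketched in a few lines of NumPy. This is a minimal illustration of the symmetric normalization D⁻¹ᐟ²(A + I)D⁻¹ᐟ² commonly used as the GCN convolution matrix; the 4-node toy graph is hypothetical.

```python
import numpy as np

# Toy undirected graph: 4 nodes, edges (0-1), (1-2), (2-3)
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Add self-loops, as in the standard GCN formulation
A_hat = A + np.eye(A.shape[0])

# Symmetric degree normalization: D^{-1/2} (A + I) D^{-1/2}
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# One graph-convolution step mixes each node's features with its
# neighbors' (before applying learned weights and a nonlinearity)
X = np.random.rand(4, 8)   # node feature matrix
H = A_norm @ X             # propagated features
```

The question above is whether this particular choice of convolution matrix is optimal for link prediction on biological networks, so this sketch only shows the baseline being questioned, not an answer.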
Daniel Shao retweeted
Anant Madabhushi @anantm ·
I usually post about new grant & patent awards to our group @CCIPD_Case. I think it only appropriate to share my rejections too. An R01 I put in was #NotDiscussed. Junior investigators, it's not just you; it happens to senior investigators too. You have to #persevere & have a thick skin.
[media attachment]
Daniel Shao retweeted
Case Western Reserve
Three #CWRU students—Zahin Islam, Daniel Shao and Evan Vesper—will each receive $7,500 toward tuition, room and board, books and other expenses per academic year as Barry Goldwater Scholarship recipients. ow.ly/pEQd50BeQog #Congratulations