MLiNS Lab

41 posts

@mlins_lab

The MLiNS Lab studies how AI/ML can be used to improve the diagnosis and treatment of patients with neurosurgical diseases.

Joined January 2023
74 Following · 97 Followers
MLiNS Lab retweeted
Michigan Neurosurgery @umichneuro
A well-deserved congratulations to @ToddCHollon for being honored as the inaugural holder of the Joseph R. Novello, MD and Alfredo Quiñones-Hinojosa, MD Research Professorship!
MLiNS Lab retweeted
Todd Hollon @ToddCHollon
🚀 Proud to introduce #FastGlioma: the first foundation model enabling rapid, accurate detection of brain tumor infiltration during surgery, in under 10 seconds. With FastGlioma, we’re minimizing the risk of residual tumor and enhancing outcomes for glioma patients. This work sets a new standard in real-time, microscopic-level detection, powered by AI in healthcare. Kudos to the incredible team @mlins, @HerveyJumper, @DanOrringerMD, @InvenioImaging, @CameloPiragua! Read the full paper in @Nature: nature.com/articles/s4158…
MLiNS Lab retweeted
Max Lu @MYLu97
4/7 📊 Hou & Jiang et al. present SPT, a framework for learning self-supervised slide representations that consistently learns strong slide-level features across a variety of encoders, including UNI. arXiv: arxiv.org/abs/2402.06188 ⚡️ This is the first work to investigate slide pretraining across a diverse variety of ROI encoders. The analyses in Hou & Jiang et al. suggest that slide pretraining provides the biggest performance gains for less powerful ROI encoders, with the least benefit for HiDisc and UNI. They also show the importance of further finetuning, which can yield as big an improvement as slide pretraining. 💭 We believe more development needs to happen in self-supervised slide encoders than in ROI encoders: there are few works in this area, and it is where most of the technical advances need to be made 🔥
MLiNS Lab retweeted
Todd Hollon @ToddCHollon
Great paper from @AI4Pathology on predicting cancer survival with AI.
Faisal Mahmood @AI4Pathology

⚡️📣 Delighted to announce MMP, a prototype-based multimodal framework combining histology and transcriptomics for cancer outcome prediction, to appear in #ICML 2024 @icmlconf. Congratulations to our superstar postdoc @GreatAndrew90 and the rest of the team who helped with the study. Paper: arxiv.org/pdf/2407.00224 Code: github.com/mahmoodlab/MMP This represents the latest iteration of the multimodal fusion frameworks our lab has investigated since Pathomic Fusion by @richardjchen in 2019. A few interesting facts about MMP:
- Multimodal extension of PANTHER (CVPR 2024), combining morphological prototypes and transcriptomic prototypes (pathways)
- Outperforms other multimodal baselines with ~10x less computation
- Intuitive prototype-oriented cross-modal interpretability analyses
#ComputationalPathology #DigitalPathology #ICML2024 #MultimodalFusion

MLiNS Lab retweeted
MichiganAI @michigan_AI
🎉 Thanks to all who joined us for the #CVPR2024 Michigan AI meetup! It was fantastic connecting with everyone. Looking forward to more gatherings like these!
MLiNS Lab retweeted
Todd Hollon @ToddCHollon
Great work from our @SamirHarake and @mlins_lab! Using AI to make spine surgeons' lives just a bit easier. Development and validation of an artificial intelligence model to accurately predict spinopelvic parameters doi.org/10.3171/2024.1…
MLiNS Lab retweeted
Faisal Mahmood @AI4Pathology
⚡️🔬📣 Excited to share our two new @NatureMedicine articles, in which we develop computational pathology foundation models:
1. UNI, a self-supervised computational pathology model trained on 100 million pathology images from 100k+ slides.
2. CONCH, a vision-language model for computational pathology trained on 1.17 million pathology image-text pairs.

Access the articles @NatureMedicine:
UNI: nature.com/articles/s4159…
CONCH: nature.com/articles/s4159…

Access the code and models:
UNI: github.com/mahmoodlab/UNI
CONCH: github.com/mahmoodlab/CON…

Interesting aspects:
- Both models are evaluated on a host of clinically relevant tasks: WSI classification, ROI classification, segmentation, image retrieval, image-to-text retrieval, and text-to-image retrieval, in zero-shot, few-shot, and supervised settings. These evaluations encompass large public datasets as well as independent test cohorts.
- Both models exclude commonly used public computational pathology benchmarks from pre-training, allowing for a much more holistic evaluation.

Some limitations: Both UNI and CONCH represent early developments in foundation models for pathology. More data and additional evaluation are needed to realize the full potential of these models. Nevertheless, we show the models' capabilities on a variety of benchmarks, with several demonstrating state-of-the-art performance.

Future work and insights: While these developments are exciting, they represent work we did about a year ago, when the pre-prints were made available. Since then we have been busy collecting significantly larger datasets and hope to make larger models available in the future. We have also used UNI and CONCH as the backbone for our pathology-specific chatbot, PathChat (arxiv.org/abs/2312.07814), which is further trained on hundreds of thousands of pathology-specific Q&A instructions.

We are also excited to see foundation models for several other areas of biomedicine, including single-cell data (nature.com/articles/s4159…) and radiology (nature.com/articles/s4225…), and the general trajectory towards general-purpose AI for biomedicine. Congratulations to our superstar leaders @richardjchen @MYLu97 @DFKW_MD @TongDing99, Bowen Chen, and everyone else who contributed to these studies @GuillaumeJaume @GreatAndrew90 @sharifa_sahai @Aparwani_dpath and others.
MLiNS Lab @mlins_lab
Our @__chengjia__ has just received his NIH F31 from @NIH_NINDS on using optical imaging and deep learning for label-free single-cell phenotyping. Looking forward to some impressive work over the next two years!
MLiNS Lab retweeted
Todd Hollon @ToddCHollon
Great night out with the @mlins_lab! So proud of our team and everything we have done in 2023.