Bettina Baeßler

4.2K posts


@Baessler_Rad

UKW #radiologist; #radiomics; #MachineLearning; #deeplearning; #AI; #CMR; education; #diversity; #inclusion; @DRG.de; @EurRadiology

Würzburg, Germany · Joined November 2017
499 Following · 999 Followers
Bettina Baeßler retweeted
ESR Journals @ESRJournals
This study demonstrates the validity and reliability of automated ASPECTS evaluation for supporting neurologists in the clinical care process for acute ischemic stroke patients. (Shu Wan et al.) #EuropeanRadiology 🔗 buff.ly/3LGfTJQ
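For context: ASPECTS is a 10-point CT score that starts at 10 and subtracts one point for each of ten MCA-territory regions showing early ischemic change. A toy sketch of the scoring arithmetic (the region flags below are hypothetical detector output, not the study's algorithm):

```python
# The ten ASPECTS regions in the MCA territory
REGIONS = ["caudate", "lentiform nucleus", "internal capsule", "insula",
           "M1", "M2", "M3", "M4", "M5", "M6"]

def aspects_score(affected: set[str]) -> int:
    """ASPECTS = 10 minus one point per region with early ischemic change."""
    return 10 - sum(region in affected for region in REGIONS)

# Hypothetical flags from an automated detector on a single CT study
print(aspects_score({"insula", "M2", "M5"}))  # -> 7
```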
Bettina Baeßler retweeted
ESR Journals @ESRJournals
Application of #AI-derived contours yields results comparable to manual segmentations. (Jan Gröschel et al.) #EuropeanRadiology Read more here 👉 buff.ly/3PyLiiv
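Agreement between AI-derived and manual contours is commonly quantified with the Dice similarity coefficient. A minimal NumPy sketch with made-up binary masks, illustrating the metric rather than the paper's pipeline:

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * np.logical_and(a, b).sum() / total

# Hypothetical 2D slice: AI contour vs. manual reference, offset by a few pixels
ai_mask = np.zeros((128, 128), dtype=bool)
manual_mask = np.zeros((128, 128), dtype=bool)
ai_mask[40:80, 40:80] = True        # AI-derived segmentation
manual_mask[44:84, 44:84] = True    # manual reference segmentation

print(f"Dice: {dice(ai_mask, manual_mask):.3f}")
```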
Bettina Baeßler retweeted
Mike Klontzas @Klonmich
The reproducibility of the RQS (Radiomics Quality Score) is extremely low. This has now been highlighted in @EurRadiology and should be taken into account in future studies assessing the quality of radiomics analyses. @Tugba_Akinci_MD @renatocuocolo @EuSoMII
Renato Cuocolo @renatocuocolo

The RQS has undoubtedly had an important role in raising awareness on #radiomics and #MachineLearning research. It's also showing limitations, as seen in the latest @EuSoMII Radiomics Auditing Group paper. Now available on @EurRadiology (#openaccess). doi.org/10.1007/s00330…
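The reproducibility question boils down to inter-rater agreement on RQS totals, often reported as an intraclass correlation coefficient. A self-contained sketch of ICC(2,1) with made-up rater scores (illustrative only, not the paper's data or code):

```python
import numpy as np

def icc2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, single-rater ICC(2,1) for an (n_targets, k_raters) matrix."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-paper means
    col_means = scores.mean(axis=0)   # per-rater means

    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-target mean square
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-rater mean square
    sse = ((scores - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                        # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical RQS totals: 6 papers scored by 3 raters (scale roughly -8 to 36)
rqs = np.array([
    [14,  9, 17],
    [20, 12, 22],
    [ 5,  2, 10],
    [16, 11, 19],
    [ 8,  4, 12],
    [22, 15, 25],
])
print(f"ICC(2,1) = {icc2_1(rqs):.2f}")  # low values indicate poor reproducibility
```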

Bettina Baeßler retweeted
Akshay Chaudhari @Dr_ASChaudhari
Thanks @_akhaliq for featuring this work! A full thread on the methods/implications is x.com/Dr_ASChaudhari…
AK @_akhaliq

Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts
paper page: huggingface.co/papers/2309.07…

Sifting through vast textual data and summarizing key information imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy across diverse clinical summarization tasks has not yet been rigorously examined. In this work, we employ domain adaptation methods on eight LLMs, spanning six datasets and four distinct summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. Our thorough quantitative assessment reveals trade-offs between models and adaptation methods, in addition to instances where recent advances in LLMs may not lead to improved results. Further, in a clinical reader study with six physicians, we show that summaries from the best adapted LLM are preferable to human summaries in terms of completeness and correctness. Our ensuing qualitative analysis delineates challenges shared by both LLMs and human experts. Lastly, we correlate traditional quantitative NLP metrics with reader study scores to enhance our understanding of how these metrics align with physician preferences. Our research marks the first evidence of LLMs outperforming human experts in clinical text summarization across multiple tasks. This implies that integrating LLMs into clinical workflows could alleviate documentation burden, empowering clinicians to focus more on personalized patient care and other irreplaceable human aspects of medicine.
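The abstract's last methodological step, correlating traditional NLP metrics with reader study scores, is typically a per-summary rank correlation. A minimal SciPy sketch with made-up ROUGE-L values and physician ratings (assumed numbers, not the paper's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-summary scores: an automatic metric (e.g. ROUGE-L)
# and a physician reader rating (e.g. 1-5 correctness), one pair per summary.
rouge_l = np.array([0.31, 0.45, 0.52, 0.28, 0.61, 0.40, 0.55, 0.37])
reader = np.array([2, 3, 4, 2, 5, 3, 4, 3])

rho, p = spearmanr(rouge_l, reader)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```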
