Raym Geis MD FSIIM

6.1K posts

@quantrad

Radiology data | Ethics | Paddle/ski/hike @NJHealth @AcrDsi @CURadiology

Fort Collins, CO · Joined August 2013
1.6K Following · 2.1K Followers

Pinned Tweet
Raym Geis MD FSIIM (@quantrad):
Radiology AI

Raym Geis MD FSIIM (@quantrad):
@woojinrad @AIHealthUncut Basic errors, like testing on only 4-6 cases, should be obvious. But much beyond that level of stats, physicians, including most physicians who review, aren't statisticians and shouldn't be asked by journals to do that part of a review. Journals should hire statisticians. (Fat chance.)
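
Why a handful of test cases proves nothing is one line of binomial arithmetic. A minimal sketch (mine, not from the thread) using the exact Clopper-Pearson interval:

```python
# Even a perfect score on 5 cases leaves a 95% CI stretching down to ~48%
# accuracy; only hundreds of cases narrow the interval meaningfully.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial confidence interval for k successes in n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

for k, n in [(5, 5), (48, 50), (480, 500)]:
    lo, hi = clopper_pearson(k, n)
    print(f"{k}/{n} correct -> 95% CI for accuracy: ({lo:.2f}, {hi:.2f})")
```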

Woojin Kim (@woojinrad):
It's Time for Doctors Who Publish to Learn Basic Math - @AIHealthUncut Amid the reproducibility crisis, peer reviewers are the last frontier of quality medical research. And they're failing—badly. buff.ly/klgmejw

Raym Geis MD FSIIM (@quantrad):
Hey rads, informatics folks, CVML folks. What will make the transition from Twitter to that colored-sky site (Bluesky) happen? How might we facilitate that?

Raym Geis MD FSIIM retweeted
Alex Reibman 🖇️ (@AlexReibman):
Looking for better ways to code with Cursor and came across this banger. Anyone else have workflows that work for them?

Raym Geis MD FSIIM retweeted
m_ric (@AymericRoucher):
Introducing open-Deep-Research by @huggingface! 💥

Deep Research from @OpenAI is really good... but it's closed, as usual. So, with a team of cracked colleagues, we set ourselves a 24-hour deadline to replicate and open-source Deep Research!

➡️ We built open-Deep-Research, an entirely open agent that can: navigate the web autonomously, scroll and search through pages, download and manipulate files, run calculations on data...

We aimed for the best performance: are the agent's answers really rigorous? On the GAIA benchmark, Deep Research scored 67% accuracy on the validation set. ➡️ open-Deep-Research is at 55% (powered by o1), but it is:
- the best pass@1 solution submitted
- the best open solution

And it's only getting started! Please jump in, drop PRs, and let's bring it to the top 🚀
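
open-Deep-Research is built on Hugging Face's smolagents library, and the core pattern is compact. A sketch following the smolagents quickstart (class names may have shifted since; treat the API details as assumptions and check the repo):

```python
# Minimal smolagents-style agent: an LLM that writes and runs code steps,
# with a web-search tool, in the spirit of open-Deep-Research's larger tool set.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # the real project adds browsing and file tools
    model=HfApiModel(),              # default Hugging Face Inference API model
)

answer = agent.run("Summarize what the GAIA benchmark measures.")
print(answer)
```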

Raym Geis MD FSIIM retweeted
Danny G (@the_danny_g):
---
You are the world’s best software engineer, comedic roaster, and mentor. For the code I provide:

1. **Roast** it mercilessly with humor and sarcasm.
2. **Educate** on precisely what’s wrong: discuss the architecture, design patterns, naming, structure, testing pitfalls, etc.
3. **Refactor** the code to perfection:
   - Use best practices and current frameworks/libraries
   - Maintain a consistent coding style and naming conventions
   - Add in TSDoc or relevant docstrings for clarity
4. **Deliver** a final, fully working code sample:
   - **No placeholders or pseudo-code**
   - Complete file(s) with all relevant code
5. **Explain** how these changes benefit future expansions, especially for AI-based code refactoring or generation.
---

Raym Geis MD FSIIM retweeted
ManaMoassefi (@ManaMSF94):
I’m excited to share our recent paper published in @TheAJNR on glioblastoma and tumefactive demyelinating lesions of the brain, with multiple validation steps. A special thanks to my co-authors and @NIH for their support on this project! ajnr.org/content/early/…

Raym Geis MD FSIIM retweeted
Sara Hooker (@sarahookr):
Feel the need to point out again — even slightly more sophisticated sampling can overcome mode collapse in synthetic data. twitter.com/sarahookr/stat…

Quoting Luiza Jarovsky, PhD (@LuizaJarovsky):
🚨 "AI models collapse when trained on recursively generated data" was among the most influential AI papers of 2024 - don't miss it! Bookmark & download it below.

Interesting quotes:

"The development of LLMs is very involved and requires large quantities of training data. Yet, although current LLMs [2,4-6], including GPT-3, were trained on predominantly human-generated text, this may change. If the training data of most future models are also scraped from the web, then they will inevitably train on data produced by their predecessors. In this paper, we investigate what happens when text produced by, for example, a version of GPT forms most of the training dataset of following models. What happens to GPT generations GPT-{n} as n increases? We discover that indiscriminately learning from data produced by other models causes ‘model collapse’—a degenerative process whereby, over time, models forget the true underlying data distribution, even in the absence of a shift in the distribution over time."

"Our evaluation suggests a ‘first mover advantage’ when it comes to training models such as LLMs. In our work, we demonstrate that training on samples from another generative model can induce a distribution shift, which—over time—causes model collapse. This in turn causes the model to misperceive the underlying learning task. To sustain learning over a long period of time, we need to make sure that access to the original data source is preserved and that further data not generated by LLMs remain available over time. The need to distinguish data generated by LLMs from other data raises questions about the provenance of content that is crawled from the Internet: it is unclear how content generated by LLMs can be tracked at scale. (...)"

➡ Authors: Ilia Shumailov, Zakhar Shumaylov, Yiren (Aaron) Zhao, Nicolas Papernot, Ross Anderson & Yarin Gal
➡ Link to the paper below.

🔥 To stay up to date with the latest developments in AI policy, compliance & regulation, including excellent research, join 44,400+ people who subscribe to my AI newsletter (link below).

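The collapse mechanism is easy to see in miniature. A toy simulation (mine, not from the paper): refit a Gaussian to its own samples each "generation" and watch the variance decay, which is the distributional forgetting the authors describe. Hooker's point is that smarter sampling, or mixing real data back in, counteracts exactly this drift.

```python
# Toy analogue of model collapse: each generation is "trained" (fit by
# maximum likelihood) on samples from the previous generation only.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0                        # generation-0 "true" distribution
for gen in range(1, 201):
    data = rng.normal(mu, sigma, size=10)   # small, all-synthetic corpus
    mu, sigma = data.mean(), data.std()     # next model fit on its own output
    if gen % 50 == 0:
        print(f"generation {gen:3d}: sigma = {sigma:.4f}")
# sigma drifts toward zero: tails vanish first, then diversity overall.
```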

Raym Geis MD FSIIM (@quantrad):
This. I'm finally starting to understand it, I think. I would summarize it as: LLMs don't "reason" like we do, or like we expect, so when we ask questions from our framework, LLMs don't necessarily answer as expected. Is this a foundational flaw? arxiv.org/abs/2410.05229
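
The linked paper (GSM-Symbolic) makes the point concrete: turn grade-school math problems into symbolic templates, vary names and numbers that don't change the required reasoning, and accuracy drops anyway. A sketch of the templating idea (the template and values are illustrative, not the paper's):

```python
# Hypothetical GSM-Symbolic-style generator: one template, many surface
# variants. A model that truly reasons should score the same on all of them.
import random

TEMPLATE = (
    "{name} picks {n1} apples on Monday and {n2} apples on Tuesday. "
    "How many apples does {name} have in total?"
)

def make_variant(rng: random.Random) -> tuple[str, int]:
    name = rng.choice(["Ava", "Liam", "Noor", "Kenji"])
    n1, n2 = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, n1=n1, n2=n2), n1 + n2

rng = random.Random(7)
for _ in range(3):
    question, answer = make_variant(rng)
    print(question, "->", answer)
```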

Raym Geis MD FSIIM retweeted
Dr Ellie Murray, ScD (@EpiEllie):
Contrarians love to ask: “But what did people do before [insert public health measure]??” And the answer is always just: “They died, or they buried everyone they loved.” theconversation.com/infectious-dis…

Raym Geis MD FSIIM retweeted
Dimitris Papailiopoulos (@DimitrisPapail):
I've been thinking about in-context learning for nearly 3 years. While there is still plenty I don't fully understand, five papers have, to a very large extent, shaped my perspective on it, and I believe everyone should read them.

1. "What Can Transformers Learn In-Context? A Case Study of Simple Function Classes", by my now-MSR colleague Shivam Garg (@shivamg_13) and Dimitris Tsipras (@tsiprasd) et al.
2. "What learning algorithm is in-context learning? Investigations with linear models" by Ekin Akyürek (@akyurekekin) et al.
3. "Transformers learn in-context by gradient descent" by my friend Johannes von Oswald (@oswaldjoh) et al.
4. "MetaICL: Learning to Learn In Context" by Sewon Min (@sewon__min) et al.
5. "Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?" again by Sewon Min et al.
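
The setup in the first paper is concrete enough to sketch: sample a hidden linear function, show the model (x, f(x)) pairs in context, and ask for f on a fresh query. The code below is my paraphrase of that task, not the paper's:

```python
# In-context learning task from Garg et al. (sketch): each prompt is
# x1, f(x1), ..., xk, f(xk), x_query; the model must predict f(x_query).
import numpy as np

rng = np.random.default_rng(0)

def sample_prompt(dim: int = 5, k: int = 10):
    w = rng.normal(size=dim)            # hidden linear function f(x) = w . x
    xs = rng.normal(size=(k + 1, dim))  # k in-context examples + 1 query
    ys = xs @ w
    return xs[:-1], ys[:-1], xs[-1], ys[-1]

xs, ys, x_query, target = sample_prompt()
# Least squares on the context is the "ideal learner" baseline the paper
# compares transformers against:
w_hat, *_ = np.linalg.lstsq(xs, ys, rcond=None)
print("in-context prediction:", w_hat @ x_query, "target:", target)
```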

Raym Geis MD FSIIM retweeted
Elias Bareinboim (@eliasbareinboim):
The true generative model is Nature -- a collection of causal mechanisms. Under what conditions can a trained model with partial observability exhibit patterns similar to those found in Nature? We explored this question with Bengio, Xia, and Lee in a NeurIPS-21 paper: causalai.net/r80.pdf.

Specifically, we developed the concept of causal inductive biases and examined what makes a neural or any other learned model 'generative.' The key insight to answering this question comes from the constraints imposed on the underlying distributions and graphical models studied within the Pearl Causal Hierarchy framework, as introduced in causalai.net/r60.pdf.

(The implications of such discussion resolved some long-standing confusion in the literature, which conflates the concepts of generative and causal -- where the latter implies the former but not vice versa.)

The newly developed machinery can help us tackle many modern ML tasks, including counterfactual inferences (causalai.net/r87.pdf), causal abstractions (causalai.net/r101.pdf), counterfactual image editing (causalai.net/r103.pdf), and fair ML (causalai.net/r90.pdf).

@kchonyc @tdietterich @yudapearl

Quoting Danilo J. Rezende (@DaniloJRezende):
💯 "synthetic data" only makes sense if the data generating model is a better model of reality than the model being trained. This only happens in very special cases (eg when first-principles simulators are available).

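Bareinboim's parenthetical (causal implies generative, not vice versa) has a compact demonstration. The example below is mine, not from the thread: two models that generate the same observational joint distribution but disagree under intervention.

```python
# X->Y and Y->X models producing the SAME observational joint, yet giving
# different answers to do(X=1): generative fit alone can't identify the cause.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Model A: X ~ Bern(0.5); Y copies X with prob 0.8
x_a = rng.random(n) < 0.5
y_a = np.where(rng.random(n) < 0.8, x_a, ~x_a)

# Model B: Y ~ Bern(0.5); X copies Y with prob 0.8 (reversed mechanism)
y_b = rng.random(n) < 0.5
x_b = np.where(rng.random(n) < 0.8, y_b, ~y_b)

print("P(X=1, Y=1):", (x_a & y_a).mean(), "vs", (x_b & y_b).mean())  # both ~0.40

# Under do(X=1): Model A still copies X into Y, so P(Y=1 | do(X=1)) = 0.8.
# In Model B the intervention cuts the arrow into X, so Y keeps P(Y=1) = 0.5.
```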

Raym Geis MD FSIIM retweeted
Evidently AI (@EvidentlyAI):
✅ An enterprise guide to implementing secure, controlled access to Generative AI models. Expedia shares how they developed their GenAI toolkit, the GenerativeAI Proxy & EG-Guardrail Service: service architecture and guardrails. medium.com/expedia-group-…
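
The pattern the Expedia post describes, one proxy that every GenAI call passes through with checks before and after the model, fits in a few lines. Everything below is a hypothetical sketch of that pattern; the real services and their guardrails are in the linked article:

```python
# Hypothetical proxy-with-guardrails sketch (illustrative names and checks).
from typing import Callable

def check_prompt(prompt: str) -> tuple[bool, str]:
    # Pre-call guardrail, e.g. block likely credentials/PII in prompts.
    if "password" in prompt.lower():
        return False, "possible credential in prompt"
    return True, ""

def check_response(text: str) -> tuple[bool, str]:
    # Post-call guardrail, e.g. screen model output before returning it.
    return True, ""

def proxy_call(prompt: str, model: Callable[[str], str]) -> str:
    ok, reason = check_prompt(prompt)
    if not ok:
        return f"[blocked: {reason}]"
    response = model(prompt)  # centralized, auditable access to any model
    ok, _ = check_response(response)
    return response if ok else "[response withheld by guardrail]"

# Any model client can sit behind the proxy:
print(proxy_call("Summarize our refund policy.", model=lambda p: "stub answer"))
```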

Raym Geis MD FSIIM (@quantrad):
@AnthonyAGatti @heacockmd RECIST measurements come to mind first. They usually include identification and (mostly) unidimensional measurement of masses and lymph nodes, and comparison over time. Essential but time-consuming, and rads rarely do them.
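
The RECIST 1.1 arithmetic itself is trivial; the cost is finding and re-measuring the lesions on every study. A sketch of the response logic using the published RECIST 1.1 thresholds (illustrative code, not a clinical tool):

```python
# RECIST 1.1 response categories from sums of longest diameters (mm) of
# target lesions. Illustrative only.
def recist_response(baseline_sum: float, nadir_sum: float, current_sum: float) -> str:
    if current_sum == 0:
        return "CR"  # complete response: all target lesions resolved
    # Progressive disease: >=20% increase over the smallest prior sum,
    # with an absolute increase of at least 5 mm.
    if current_sum >= nadir_sum * 1.2 and current_sum - nadir_sum >= 5:
        return "PD"
    if current_sum <= baseline_sum * 0.7:
        return "PR"  # partial response: >=30% decrease from baseline
    return "SD"      # stable disease: neither PR nor PD

print(recist_response(baseline_sum=100, nadir_sum=80, current_sum=85))  # SD
print(recist_response(baseline_sum=100, nadir_sum=80, current_sum=65))  # PR
```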

Anthony A Gatti (@AnthonyAGatti):
@heacockmd @quantrad Ah! I read the OP as getting quantitative measures from “everything”, so I was mostly thinking of things that aren’t currently measured (and therefore not reimbursed). I assume there are lots of things that would be worth measuring but aren’t, because it just takes too long?

Raym Geis MD FSIIM (@quantrad):
What would you be willing to delegate to a first-month rad res? AI to do those tasks would really help. Rads will jump at tools that make measurements automatically, compare with priors, and put measurements into the report.

Quoting Andrej Karpathy (@karpathy):
The most bullish AI capability I'm looking for is not whether it's able to solve PhD grade problems. It's whether you'd hire it as a junior intern. Not "solve this theorem" but "get your slack set up, read these onboarding docs, do this task and let's check in next week".

Raym Geis MD FSIIM (@quantrad):
@AnthonyAGatti Cost is a barrier (improving efficiency will never be reimbursed in the traditional sense). If the cost of the product is less than the decrease in operating cost, it should theoretically work. The question is who will pay - institutions balk at paying for something where the benefit accrues to rads.

Anthony A Gatti (@AnthonyAGatti):
@quantrad This was possible 5 years ago. You just need a modest dataset of labelled/segmented data (2-500 examples) and you can train a good seg/detection model, automatically compute measurements, and spit them out in a formatted report. Is reimbursement the main barrier?
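
The "compute measurements and spit them out" step Gatti describes really is short once a segmentation model produces a lesion mask. A sketch using scikit-image's regionprops (the pipeline and pixel spacing are illustrative assumptions):

```python
# From a binary lesion mask to report-ready measurement lines.
import numpy as np
from skimage.measure import label, regionprops

def measurement_report(mask: np.ndarray, mm_per_px: float = 0.7) -> list[str]:
    lines = []
    for i, region in enumerate(regionprops(label(mask)), start=1):
        # Ellipse-fit major axis as a proxy for longest in-plane diameter.
        diameter_mm = region.major_axis_length * mm_per_px
        r, c = region.centroid
        lines.append(
            f"Lesion {i}: longest diameter {diameter_mm:.1f} mm "
            f"(centroid row {r:.0f}, col {c:.0f})"
        )
    return lines

# Toy mask with one square "lesion":
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
print("\n".join(measurement_report(mask)))
```

Comparison with priors is then bookkeeping keyed on lesion identity, which is where the harder matching problems live.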