Thomas Cassini, M.D.

416 posts

@TACGeneMD

Med-Peds Biochemical Geneticist @VUMChealth w/ interest in rare/undiagnosed disease, husband, proud Mexican-American, football fan, he/him, views my own

Nashville, TN · Joined September 2019
280 Following · 426 Followers
Thomas Cassini, M.D. retweeted
Dr. Dominic Ng @DrDominicNg
Microsoft claims their new AI framework diagnoses 4x better than doctors. I'm a medical doctor and I actually read the paper. Here's my perspective on why this is both impressive AND misleading ... 🧵
276 replies · 1.2K reposts · 8.7K likes · 1.6M views
Thomas Cassini, M.D. @TACGeneMD
After this it revised its answer. Notably, I didn't have to explicitly instruct it to consider the previous penetrance estimates; it did so after the previous prompt.
0 replies · 0 reposts · 1 like · 52 views
Thomas Cassini, M.D. @TACGeneMD
I asked it about the penetrance of BRCA1 pathogenic variants after this.
1 reply · 0 reposts · 0 likes · 59 views
Thomas Cassini, M.D. @TACGeneMD
ChatGPT tends to struggle with penetrance unless directed to consider it. Here is an example: despite using a "reasoning" model, it performs a simple mathematical calculation and does not use all the information provided.
1 reply · 0 reposts · 0 likes · 104 views
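The pitfall this thread describes can be made concrete with a toy calculation. This is a minimal sketch with made-up numbers, not real penetrance estimates for any gene: a naive risk computation stops at the Mendelian transmission probability, while a penetrance-aware one also multiplies by the probability that a carrier ever develops disease.

```python
# Toy illustration of the penetrance pitfall described above.
# All numbers are illustrative; they are not real estimates for any gene.

def naive_risk(transmission_prob):
    """The "simple mathematical calculation": ignores penetrance entirely."""
    return transmission_prob

def penetrance_aware_risk(transmission_prob, penetrance):
    """P(develops disease) = P(inherits variant) * P(disease | variant)."""
    return transmission_prob * penetrance

# Autosomal-dominant variant: 50% transmission, 60% illustrative penetrance
print(naive_risk(0.5))
print(penetrance_aware_risk(0.5, 0.6))
```

The gap between the two numbers is exactly the information a model loses when it stops at the Mendelian fraction.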
Thomas Cassini, M.D. retweeted
Arvind Narayanan @random_walker
I find the story of AI and radiology fascinating. Of course, Hinton's prediction was wrong* and tech advances don't automatically and straightforwardly cause job replacement — that's not the interesting part. Radiology has embraced AI enthusiastically, and the labor force is growing nevertheless. The augmentation-not-automation effect of AI is despite the fact that AFAICT there is no identified "task" at which human radiologists beat AI.

So maybe the "jobs are bundles of tasks" model in labor economics is incomplete. Paraphrasing something @MelMitchell1 pointed out to me, if you define jobs in terms of tasks maybe you're actually defining away the most nuanced and hardest-to-automate aspects of jobs, which are at the boundaries between tasks.

Can you break up your own job into a set of well-defined tasks such that if each of them is automated, your job as a whole can be automated? I suspect most people will say no. But when we think about *other people's jobs* that we don't understand as well as our own, the task model seems plausible because we don't appreciate all the nuances.

If this is correct, it is irrelevant how good AI gets at task-based capability benchmarks. If you need to specify things precisely enough to be amenable to benchmarking, you will necessarily miss the fact that the lack of precise specification is often what makes jobs messy and complex in the first place. So benchmarks can tell us very little about automation vs augmentation.

* Hinton insists that he was directionally correct but merely wrong in terms of timing. This is a classic motte-and-bailey retreat of forecasters who get it wrong. It has the benefit of being unfalsifiable! It's always possible to claim that we simply haven't waited long enough for the claimed prediction to come true.
132 replies · 333 reposts · 1.9K likes · 533.2K views
Thomas Cassini, M.D. @TACGeneMD
This is interesting because if you substitute “medicine” for “software”/“tech” and take “you” to be the plural including yourself, the patient, and the healthcare team, the second paragraph describes practicing medicine quite well.
0 replies · 0 reposts · 1 like · 101 views
Thomas Cassini, M.D. @TACGeneMD
I'm curating genes to add to an inborn errors of immunity panel. o3 did a pretty thoughtful job of explaining why to add some from the 2024 IUIS list (not sure yet about FOXI3). I still need to look at the rest of the list to see if any were missed that may have sufficient evidence.
0 replies · 0 reposts · 1 like · 79 views
Thomas Cassini, M.D. @TACGeneMD
@drkeithsiau We also give hydroxocobalamin emergently to neonates with methylmalonic acidemia while awaiting further diagnostic testing, in case it is due to a disorder of intracellular cobalamin metabolism (i.e., Cobalamin V disease)
0 replies · 0 reposts · 1 like · 137 views
Keith Siau @drkeithsiau
There are only 2 indications for an emergency Vitamin B12 injection - what are they?
101 replies · 175 reposts · 1.8K likes · 229.5K views
Thomas Cassini, M.D. @TACGeneMD
Interesting article. The "three performance regimes" subjectively match my practical experience: 1. low complexity: standard models outperform LRMs; 2. medium complexity: LRMs show an advantage; 3. high complexity: both collapse. machinelearning.apple.com/research/illus…
0 replies · 0 reposts · 0 likes · 61 views
Thomas Cassini, M.D. @TACGeneMD
@kimmonismus They certainly felt like a breakthrough. Whether they “think” seems more semantic to me, and an attempt at an attention-grabbing title. I do agree with them, though, that, at least subjectively, my user experience has followed the “three performance regimes” they outline
0 replies · 0 reposts · 0 likes · 109 views
Chubby♨️ @kimmonismus
Apple doesn't see reasoning models as a major breakthrough over standard LLMs, per a new study. Here is why:
216 replies · 272 reposts · 5.2K likes · 880.9K views
Thomas Cassini, M.D. @TACGeneMD
There were obviously multiple steps involved in the reasoning. It found the transcript and inserted the duplicated base. Then the smart part: it wrote a translation program that simply translated these codons much the way it works in the biological system.
0 replies · 0 reposts · 1 like · 37 views
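The approach described in this tweet can be sketched in a few lines. This is a toy re-implementation under stated assumptions: the coding sequence and duplication position below are invented for illustration (this is not the PARD6B transcript).

```python
# Build the standard codon table from the canonical TCAG ordering.
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG")
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def duplicate_base(cds, pos):
    """Apply a c.<pos>dup: duplicate the base at 1-based coding position pos."""
    return cds[:pos] + cds[pos - 1] + cds[pos:]

def translate(cds):
    """Translate codon by codon until a stop, as the ribosome would."""
    protein = []
    for i in range(0, len(cds) - 2, 3):
        aa = CODON_TABLE[cds[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

wt = "ATGAAATGCTAA"           # toy CDS: Met-Lys-Cys-Stop (NOT PARD6B)
mut = duplicate_base(wt, 5)   # duplicated base shifts the reading frame
print(translate(wt))          # MKC
print(translate(mut))         # MKML (shifted frame; no in-frame stop here)
```

In a real transcript, translation of the shifted frame continues to the first downstream stop, giving the `fsTer` length in the protein-level variant name.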
Thomas Cassini, M.D. @TACGeneMD
Saw something interesting from ChatGPT today. I asked it to predict the protein sequence for this variant: PARD6B NM_032521.3:c.761dup (p.Asn254LysfsTer17). It got it right (top reading frame on the UCSC screenshot), and the way it did it was pretty cool!
1 reply · 0 reposts · 2 likes · 88 views
Thomas Cassini, M.D. retweeted
Akshay 🚀 @akshay_pachaar
Self-attention in LLMs, clearly explained:
8 replies · 75 reposts · 603 likes · 210.6K views
Thomas Cassini, M.D. @TACGeneMD
These are some issues with classifying CDC42 NM_001791.4:c.556C>T: 1. Enough de novo occurrences to apply PS2 at Very High. 2. Both overuse PS3 when there are not enough control variants. 3. Gemini still uses PP5. 4. PM1 seems like an overcall here (admittedly this one can be subjective).
0 replies · 0 reposts · 0 likes · 45 views
Thomas Cassini, M.D. @TACGeneMD
ChatGPT o3 and Gemini 2.5 Pro are great for gathering the information needed to classify a genetic variant, but they seem to struggle with applying classification criteria. There is some subjectivity here, but even when guided to the ACMG criteria they don't apply them accurately
1 reply · 0 reposts · 0 likes · 252 views
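For readers unfamiliar with the combining step the models struggle with: the ACMG/AMP 2015 framework (Richards et al.) combines evidence codes by strength tier. The sketch below encodes only the "Pathogenic" combinations from that paper, from memory, with no handling of Likely Pathogenic, Benign, strength modifiers, or conflicting evidence, so treat it as an illustration rather than a usable classifier.

```python
def is_pathogenic(criteria):
    """Sketch of the ACMG/AMP 2015 "Pathogenic" combining rules.

    criteria: list of applied evidence codes, e.g. ["PS2", "PS3", "PM1"].
    Simplified: no Likely Pathogenic/Benign rules, no strength modifiers.
    """
    pvs = sum(c.startswith("PVS") for c in criteria)  # very strong
    ps = sum(c.startswith("PS") for c in criteria)    # strong
    pm = sum(c.startswith("PM") for c in criteria)    # moderate
    pp = sum(c.startswith("PP") for c in criteria)    # supporting

    if pvs >= 1 and (ps >= 1 or pm >= 2 or (pm >= 1 and pp >= 1) or pp >= 2):
        return True  # rule (i): PVS1 plus corroborating evidence
    if ps >= 2:
        return True  # rule (ii): two or more strong criteria
    if ps >= 1 and (pm >= 3 or (pm >= 2 and pp >= 2) or (pm >= 1 and pp >= 4)):
        return True  # rule (iii): one strong plus moderate/supporting
    return False

print(is_pathogenic(["PS2", "PS3", "PM1"]))  # True: two strong criteria
print(is_pathogenic(["PS2", "PM1", "PP3"]))  # False: evidence insufficient
```

Even this toy version shows why the per-criterion calls (PS2, PS3, PM1 in the CDC42 example above) matter: a single over-called criterion can flip the final classification.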
Sarahh @Sarahhuniverse
Norway Flag 🇳🇴 😲 © interestingfacts
83 replies · 732 reposts · 9.4K likes · 4.5M views
Thomas Cassini, M.D. @TACGeneMD
So intuitively it makes sense that the probability of the next token would be more strongly influenced by words than by this series of symbols. If anyone who understands LLMs and GPTs better has any insight, though, I would be interested to hear your take.
0 replies · 0 reposts · 0 likes · 30 views
Thomas Cassini, M.D. @TACGeneMD
I am guessing this is because HPO terms are in a common format (words), while the symbols used in transcript variants are much more commonly seen in other contexts than together in this format (e.g., ">" usually means greater than, not ref to alt).
1 reply · 0 reposts · 0 likes · 31 views
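The hunch in these tweets can be illustrated, with a caveat: real LLM tokenizers use learned byte-pair encodings, not the crude regex below, but the fragmentation pattern is similar; common English words survive as whole tokens while HGVS variant notation shatters into many single-symbol pieces. The example strings are chosen for illustration.

```python
import re

def toy_tokenize(text):
    """Crude stand-in for a subword tokenizer: alphabetic runs and digit
    runs stay together, and every other symbol becomes its own token.
    Real BPE tokenizers differ, but fragment punctuation similarly."""
    return re.findall(r"[A-Za-z]+|\d+|[^\sA-Za-z\d]", text)

hpo_term = "Global developmental delay"   # HPO-style phenotype term
variant = "NM_032521.3:c.761dup"          # HGVS transcript variant

print(toy_tokenize(hpo_term))   # 3 tokens, all ordinary words
print(toy_tokenize(variant))    # 10 tokens, mostly isolated symbols
```

A model sees the phenotype as three familiar word tokens but the variant as a long run of rarely co-occurring symbol tokens, which is consistent with words steering next-token probabilities more strongly.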
Thomas Cassini, M.D. @TACGeneMD
I used to just put genetic variants into GPTs and see if they could provide diagnostic insight. I found something interesting when I put in variants and HPO terms, as in this prompt.
1 reply · 0 reposts · 1 like · 55 views