Vincent Rajkumar

53.7K posts


@VincentRK

Editor-in-Chief, Blood Cancer Journal; Chairman @IMFmyeloma Board; Cancer & Myeloma Research; Opinions solely personal views https://t.co/HOGYJSpsoG

Rochester, MN, USA · Joined March 2009
1.6K Following · 81.2K Followers
Pinned Tweet
Vincent Rajkumar@VincentRK·
I would like to share the story of how a patient with cancer came up with the idea for a randomized trial, & how listening to him saved a lot of lives. 1/ In 2002, I had just completed a randomized trial with the notorious drug thalidomide for the cancer, multiple myeloma.
Vincent Rajkumar@VincentRK·
For clinical practice, since the output is medical advice, you have to be sure. If you use it without verifying, it will probably be right 90+% of the time, but the times it is wrong will be hard to catch, because the AI is going to sound confident and you have no idea when it's saying the wrong thing.

My workflow is to use UpToDate AI, which relies only on vetted UpToDate content and provides linked references, so it's easy to verify; that's why I prefer it over every other medical AI. It won't hallucinate. But I'm conflicted: I write for and get royalties from UpToDate. And UpToDate AI is not free.

If you are going to use AI to write a paper without having to verify manually (which defeats the purpose of the AI), you need to be a subject matter expert. If you are a subject matter expert, you may be able to use the AI to write a long first draft, then edit for style, correct errors, and verify the references. If it's a myeloma article, for example, I think I can verify 90% by sight reading without having to check PubMed. But if the article is not on myeloma, then verifying the AI output defeats the purpose for me, because it will be manual and strenuous. That's how you get the examples online, including in the recent lawsuit, of completely fabricated material that no one picked up because they didn't verify.

Personally, I haven't used AI to write a first draft of anything. It is far easier for me to type it out or dictate. That works because I write only on myeloma and can write a lot of the content without referring to the literature.
Girish Kumar, PhD@girishkaitholil·
@VincentRK Agreed on don't trust. Verify how though? In clinical practice 'verify' often collapses to 'redo the case without the AI,' which defeats the point. The harder move is verifying the error mode: where the AI systematically fails, not case-by-case outputs. What's your workflow?
Vincent Rajkumar@VincentRK·
@thekaransinghal I almost signed up. But it wants biometrics. So I backed out. Why is it not possible to determine if I’m a licensed US physician without biometrics?
Karan Singhal@thekaransinghal·
Today we’re introducing two big steps for health at OpenAI: - ChatGPT for Clinicians, a free version of ChatGPT designed for clinical work - HealthBench Professional, a new benchmark to evaluate real clinician chat tasks We’re excited about what this can unlock for care. ❤️
Karan Singhal tweet media
Vincent Rajkumar@VincentRK·
Leadership should not be mediocrity dressed up in titles. Leadership is vision, excellence, and the ability to inspire. Titles sometimes go to those who are compliant with the people who appoint them rather than to those who are excellent and will challenge authority when needed. But we must always seek excellence in leadership.
Vincent Rajkumar@VincentRK·
If it failed on a topic where experts disagree, that's totally OK. My worry is when it is totally wrong but hallucinates confidently to patients or doctors on factual questions: is there an FDA-approved treatment for high-risk SMM? Was the Aquila trial Dara vs observation, or DRd vs Rd? Those have factual answers.
Hossein Sadrzadeh@OncSadr·
AI isn’t “not intelligent”; it’s not perfectly reliable at expert-level nuance “yet”. That’s different. Where they struggle is at the highest subspecialty level, which is exactly where experts also diverge. The issue isn’t that AI fails, it’s that we need to understand where it works well and where it doesn’t, just like any tool in medicine.
Vincent Rajkumar@VincentRK·
Open Evidence giving the wrong answer. Again.
Vincent Rajkumar tweet media
Vincent Rajkumar@VincentRK·
I asked a question that I knew the answer to, as an example. This is not unique to Open Evidence; I see this limitation with all AI programs. Medicine is complicated, and if we want patients and doctors to use these tools, we need to know the limitations.

So much AI is being promoted (not necessarily by the company behind it) as the future of medicine, better than doctors at MCQ tests and diagnostic puzzles, and more reliable. If I cannot get a right answer as an expert, how can we be sure patients can decide which output is correct and which is not? Realistically, how are patients supposed to know which AI chatbot to use for which question, and which prompt works and which doesn't?

AI is not intelligent. It is tripped up by questions that an average subject matter expert will know. AI at present retrieves and presents what experts have written. The answers sound great: authoritative, fast, and mostly right. But they are interspersed with errors. AI is so good that its output in cardiology will impress a doctor who's not a cardiologist, but a cardiologist will not be fully impressed. Digging deeper, AI answers to a question on, say, EP will impress all the doctors, including most cardiologists, but not the EP expert, who can spot the times when the AI answer is confidently wrong.
Vincent Rajkumar@VincentRK·
A piece of AI good news. I asked this tricky question to Open Evidence and Claude, and both got it right: I am a doctor. My patient has nephrotic syndrome, and kidney biopsy shows amyloidosis. The patient has a monoclonal IgG lambda protein and lambda light chain elevation. Bone marrow shows 5% clonal lambda cells. No evidence of myeloma on bone imaging. Other organs are normal. Is this light chain amyloidosis? Do I need any other tests, or is this enough to diagnose and discuss treatment options?
Vincent Rajkumar@VincentRK·
In my opinion, early intervention rather than close surveillance is the standard of care for high-risk SMM. This is one of the most common reasons patients consult me for an opinion. Second, lenalidomide is NOT approved, although it can be an alternative. Third, Dara is the only FDA- and EU-approved treatment for high-risk SMM.
Medical Sphere@MedicalSphereAI·
We tested all frontier AI models on this question to see how they would respond; this is what they said: All models agree that true high-risk smoldering multiple myeloma (SMM) is usually managed with close surveillance and/or clinical trial enrollment, and that lenalidomide ± dexamethasone may be used for early intervention in selected patients, especially as a guideline-supported option. The main difference is FDA status: one model says lenalidomide was FDA-approved for high-risk SMM in 2024, while the others say there is no specific FDA-approved therapy for SMM and describe lenalidomide as off-label; all also agree that patients meeting SLiM criteria are no longer SMM and should be treated as active multiple myeloma.
Forest Plotter@forest_plotter·
@VincentRK Is there any reason to think that Open Evidence gives better answers than Claude?
Vincent Rajkumar@VincentRK·
I am sure I can easily make it give a wrong answer, because PubMed is full of contradictory answers where in reality one answer is right and one is not. AI will give right answers most of the time, and sometimes confident wrong answers. It will perform better than average most of the time, and sometimes quite badly compared to a subject matter expert.
Jeremy@jeremyhynh·
@VincentRK Use Claude with the PubMed connector:
Jeremy tweet media
Vincent Rajkumar@VincentRK·
It is. Open Evidence also gives this answer to some people who ask the same question. You just don't know when this answer will come versus the wrong answer I got, because the data it's looking at to answer the question contains both answers, and it's probably not giving added weight to the most recent information.
Joshua@reverendofdoubt·
@VincentRK No clue if this is the right answer, as it's outside my wheelhouse, but I asked DoxGPT and got this answer
Joshua tweet media
Vincent Rajkumar@VincentRK·
A few months ago when I asked, it gave contradictory answers: first correctly saying that Dara is approved, then in the next paragraph saying there is no approved treatment yet. Basically, in response to the query, it goes to its sources and pulls out plausible answers without giving precedence to date. So it gets mixed up with older information, especially weighting it more if it comes from a more authoritative source than the more recent correct information. That's the problem when the database it searches is not constantly updated, not only to add new information but also to remove all traces of outdated information. If both correct and outdated incorrect information coexist, the model should prioritize the newer information.
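The fix described here, preferring the newer source when old and new information contradict, can be sketched as a recency-decayed retrieval score. This is a hypothetical illustration only, not how Open Evidence or any real product actually ranks sources; the half-life and the scores are made-up assumptions:

```python
from datetime import date

def recency_weighted_score(relevance, published, today, half_life_days=365.0):
    """Decay a raw retrieval relevance score by document age.

    With a one-year half-life, a source published two years ago keeps
    only ~25% of its raw score, so a newer contradicting source wins.
    """
    age_days = (today - published).days
    return relevance * 0.5 ** (age_days / half_life_days)

# Two equally "relevant" sources that contradict each other:
# an outdated review vs. a recent approval announcement.
today = date(2025, 6, 1)
stale_score = recency_weighted_score(1.0, date(2021, 6, 1), today)
fresh_score = recency_weighted_score(1.0, date(2024, 12, 1), today)
assert fresh_score > stale_score  # the newer source now outranks the stale one
```

Without some weighting of this kind, two contradictory passages with similar raw relevance are effectively a coin flip, which matches the behavior described in the tweet.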
Vincent Rajkumar@VincentRK·
It's the probability game. It generates several high-probability answers and then picks one of them at random? At least LLMs do that (word by word, conditioned on all the words generated before); I'm not sure how this one does it. As I said in my second tweet, months ago it gave the answer it gave you, but added a second, contradictory paragraph saying there was no approved treatment.
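The decoding process alluded to here, sampling word by word from a distribution conditioned on everything generated so far, can be shown with a toy next-token sampler. The vocabulary and probabilities below are invented for illustration and do not come from any real model:

```python
import random

def sample_next_token(probs, temperature=1.0, rng=None):
    """Sample one token from a next-token probability distribution.

    Lower temperature sharpens toward the most likely token;
    temperature 1.0 samples in proportion to the raw probabilities,
    which is how a low-probability wrong continuation can slip out.
    """
    rng = rng or random.Random()
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Toy distribution over the next word after "Dara is ... for high-risk SMM".
next_word = {"approved": 0.7, "not": 0.25, "investigational": 0.05}

# Sampled repeatedly, "approved" dominates, but "not" still appears
# roughly a quarter of the time: mostly right, sometimes confidently wrong.
rng = random.Random(42)
draws = [sample_next_token(next_word, rng=rng) for _ in range(1000)]
```

This is why the same question can yield the correct answer one day and a contradictory one the next: both continuations sit in the model's distribution, and sampling occasionally surfaces the wrong one.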