Nisarg Patel, MD

5.2K posts

@nxpatel

@UCSF surgeon. Researching AI for clinical reasoning. Health policy for @FT, WaPo. Cofounder @memorahealth (acq). Alum @ycombinator w18, @harvardmed.

San Francisco, CA · Joined December 2012
918 Following · 2.7K Followers
Nisarg Patel, MD @nxpatel
Healthcare is one of the few sectors where policy has arguably *driven* tech innovation and diffusion rather than vice versa. With that in mind, there are plenty of policy levers in both public health and medicine that are positive-sum! Reference pricing to promote service competition and better correlate price with quality, all-payer rate setting (e.g., Maryland) to reduce admin complexity, lead removal from paint/water to reduce childhood lead toxicity and resulting healthcare costs, etc. Even tech-specific policies like Cures and TEFCA that improve interop will likely reduce admin overhead as CMS rolls out implementation details over the next few years. Until we see more policy movement and the implementation science (via academics or industry) to adopt and expand new system-level incentives and interventions, most newcos will, very reasonably, try to maximize adoption by operating within the tradeoffs of cost/quality/access rather than maximizing clinical value per dollar.
yoni rechtman@yrechtman

Most healthcare “innovation” is zero-sum at best, negative-sum at worst. I want to see more positive-sum HC ideas that grow the surplus rather than shifting it around

3 replies · 0 reposts · 13 likes · 1.9K views
Nikhil Krishnan @nikillinit
“I went to a small accelerator in SF”
15 replies · 5 reposts · 256 likes · 25.9K views
Nisarg Patel, MD @nxpatel
I think AI surfacing guidelines/most up-to-date treatment options is okay! I’m optimistic that most physicians will either agree with those recs (and honestly, some may be seeing new guidelines for the first time and be better for it) or apply their own judgment as to why a specific patient's case/goals differ from those guidelines in a way that merits patient-specific treatment. The way I describe it to trainees is that these tools, at the very least, elevate the floor for medical decision making. There’s still a missing layer in most models (even the medical ones) about guiding clinical decisions with imperfect information, particularly as many of these society guidelines start to unravel as patients become more medically complex, but that’s a model architecture/system prompt conversation.
0 replies · 0 reposts · 1 like · 42 views
Dhruv Vasishtha @dvasishtha
@nxpatel That would be the system-level pitch, but isn't the other side of the coin saying to docs we kinda don't trust your discretion and are nudging you to make more standardized decisions, no?
3 replies · 0 reposts · 0 likes · 137 views
Dhruv Vasishtha @dvasishtha
I'm really curious whether clinicians feel point of care clinical decision support AI allows them to reduce "care variation". Founders of tech enabled services businesses are definitely pitching their AI platform as a mechanism to productize proprietary "care models".
4 replies · 2 reposts · 6 likes · 1.2K views
Nikhil Krishnan @nikillinit
I went to the dentist recently, and it feels like an excellent place to be building a voice-native operating system. So much of the visit is the dentist saying things out loud for the dental assistant to either note down (e.g. perio charting) or asking to pull up and change certain things on the screen so we can walk through it (while stabbing me in the face and asking why I was bleeding). In a setting where virtually everything needs to be hands-free AND dental tends to be way more small practices/owner-operated, I would have expected to see more usage of scribes or voice-first tools
8 replies · 1 repost · 22 likes · 5.3K views
Nisarg Patel, MD @nxpatel
A specialty-specific instance, of course, but a few years ago Epic added the ability to take photos from their mobile app and automatically upload them into a patient chart, which turned out to be invaluable for documenting pre-/postop facial photos and (manually) following oral/facial lesions, scars, bruising, etc. Particularly useful for referencing in cases where multiple residents were following the same patient over time.
0 replies · 0 reposts · 2 likes · 104 views
Nikhil Krishnan @nikillinit
I think pictures are underutilized in current healthcare journeys. When I went for my last dental checkup, they used an iTero machine to essentially take a picture of each tooth. They then walked me through each tooth, what's happening, and how it compared to the last time they took a picture. This obviously took a long time, but I thought it was pretty interesting.

Also made me realize how (in my experience) we use relatively few pictures and ask patients to describe the changes they've seen in areas over time (e.g. stool, changes in skin conditions). I think in specialist settings more pictures are used (e.g. dermatology), but even that feels unevenly used or underutilized.

My guess is the main reasons this hasn't been done historically have been:
1) time taken to read every single image is enormous
2) treatment course probably wouldn't change a ton even with pictures
3) not enough time in the visit to go through the changes in the pictures

But today AI tools can probably do the guidance + triaging of these pictures, and only escalate to the doc if it's something particularly interesting and relevant. In fact, analyzing pictures of health issues feels like one of the specific areas I go to LLMs for help with. It would be cool if pictures just became part of patient intake forms and attached to the record, with changes tracked over time.
5 replies · 0 reposts · 16 likes · 3.3K views
Nisarg Patel, MD @nxpatel
@nikillinit Yeah, of course limited to those who can afford to pay, but it started in facial plastics and dermatology with elective cosmetic procedures and is now moving to orthopedics/sports medicine for certain rehab regimens
0 replies · 0 reposts · 0 likes · 42 views
Nikhil Krishnan @nikillinit
not a coherent thought, but wondering if we’re going to see actual alternatives to billing that don’t use CPT codes or traditional claims
- More direct contracts from employers
- More cash pay transactions
- ACCESS model and several of the CMMI models are explicitly trying to move away from the CPT code payment system
- Modular contracts (e.g. CARA from CMMI, Clear Contracts from Turquoise, cost plus wellness, etc.)
9 replies · 2 reposts · 23 likes · 4.1K views
Nisarg Patel, MD @nxpatel
I've been working on a lecture for medical residents about exactly this tension, i.e., which experiences from traditional physician training, now living in our heads, differentiate true clinical reasoning from plausible slop? Luckily, medicine has spent 50+ years articulating how doctors actually 'know' things, e.g., 1) hypothesis-driven inquiry (what do we EXPECT if we're right?), 2) causal models/diagnostic schema (not just that symptoms cluster, but *why* they do), and 3) Bayesian updating/calibrated uncertainty (knowing *when* we don't know/when we need more information to *confidently* make a decision). LLMs don't seem to have any of these mental models structurally, although they appear to simulate them statistically. One practical exercise that's helped me get more leverage out of both medicine-specific LLMs like OE and general models is imposing those same reasoning structures externally (via either system prompt or user prompt). Rather than "Here is patient X's HPI, physical exam, and labs...what's the diagnosis?", I'll include "...what's your leading hypothesis and why? What findings would *contradict* that diagnosis? What are can't-miss diagnoses in this case? What additional test(s) or exam finding would discriminate between suspected diagnoses/change your mind?" This, at the very least, gives the models an epistemological scaffold tailored to what I need and how I think.
0 replies · 0 reposts · 3 likes · 95 views
sarah guo @saranormous
9/9 This is not a moral stance, but a question of how to get leverage from this strange "knowledge" instead of a lot of self-confident, plausible noise. Epistemology feels like a very practical question. Who's studying it in the age of AI?
8 replies · 0 reposts · 25 likes · 5.8K views
sarah guo @saranormous
1/ One productivity question stuck in my head: when AI can draft, explain, and propose solutions on demand (across a huge terrain!), what still has to live in your head for that to turn into real work, rather than a pile of plausible slop?
25 replies · 11 reposts · 213 likes · 29.2K views
Nisarg Patel, MD reposted
8VC @8vc
If you’re starting an autonomy company today, public roads are the hardest place to begin. We sat down with @bsofman (CEO @BedrockRobotics) & @malharhar (Special Projects Head @AppliedInt) to discuss picking the right problem for physical AI.
[image]
4 replies · 16 reposts · 176 likes · 70.3K views
Nisarg Patel, MD reposted
Jaya Gupta @JayaGup10
AI has given venture capital a new way to repeat an old mistake: kingmaking. The pattern from 2021 is back: a category becomes "obvious," a top-tier firm anoints its winner, and everyone else acts like the decision is final. Sierra for support. Harvey for legal. Applied Compute for RL-as-a-service.
68 replies · 81 reposts · 1.1K likes · 419.5K views
Nisarg Patel, MD @nxpatel
My latest in @Health_Affairs on the AI training data challenge health systems shouldn't ignore: As AI labs move toward training models on chat history with multi-year retention windows, ~2.6% of Claude and 5.7% of ChatGPT conversations contain clinical content, often as PHI within consumer tiers without BAAs. healthaffairs.org/content/forefr…
0 replies · 0 reposts · 3 likes · 1.4K views
Nisarg Patel, MD @nxpatel
Love that AI labs are prioritizing biomedical research. Between Skills and MCP, the near-term future of computational biology might not necessarily be *better prediction models* but rather *better integration layers* that let general intelligence operate across the full research stack.
Mike Krieger@mikeyk

We’re launching our Claude for Life Sciences initiative today, including new bioinformatics Skills, and new MCPs from @benchling, @BioRender, PubMed, @WileyGlobal, @Sagebio, @10xGenomics and more: anthropic.com/news/claude-fo…

0 replies · 0 reposts · 4 likes · 1K views
Willem @vanlancker
If you'd like to read it, reply below and I’ll send you a link.
[4 images]
260 replies · 0 reposts · 128 likes · 7.4K views
Willem @vanlancker
In addition to @untitleddotnew, I wrote a comprehensive “how-to” guide that covers the art and act of naming. With this process, you can name anything in an afternoon. It is incredibly thorough but can be moved through efficiently. How to Name Anything in an Afternoon
304 replies · 46 reposts · 1.5K likes · 133.6K views
Nisarg Patel, MD @nxpatel
@saranormous Just speaking from using it regularly over the last ~year, OE has been an incredible resource to identify (and apply!) the latest medical literature/guidelines on-demand in the hospital and teach junior residents. Excited to see it become more integrated into daily practice.
0 replies · 0 reposts · 2 likes · 148 views
sarah guo @saranormous
The era of superintelligence is here. Didn't predict the medical field would be first. Amazing work by team @EvidenceOpen in scoring a perfect 100% on the US Medical Licensing Exam
[image]
50 replies · 121 reposts · 1K likes · 319.3K views
Nisarg Patel, MD @nxpatel
@MichelleM_Mello wrote a nice piece in NEJM last year outlining a risk framework for health AI liability. If doctors assume liability for AI evaluations (e.g. any current CDS tool, vision models for radiology), health systems carrying insurance products tailored for AI-guided decisionmaking could make the relative legal uncertainty (and scaled level of risk/damage associated with AI products treating 10-1000x more patients) more palatable. The level of autonomy for AI to potentially prescribe medications/labs/etc also seems to be a function of the risk appetite among patients. If doctors are completely out of the loop of a medical decisionmaking process, patients would have to consent to being comfortable with *only* AI being used for diagnostic/treatment decisions (and model manufacturers would have to be comfortable taking on liability; Digital Diagnostics/IDx-DR for example takes on liability in negotiated contracts *and* carries insurance for medical error claims). nejm.org/doi/abs/10.105…
[image]
1 reply · 0 reposts · 1 like · 133 views
Nikhil Krishnan @nikillinit
new post - a few thoughts reading these AI papers:
1) patients aren't like the vignettes in NEJM, but we should aim to make the average patient encounter look MORE like a vignette. There are ways to do that.
2) we should focus more resources on AI to handle simple cases autonomously vs. more complex cases with physicians in the loop
3) Study design for evaluating healthcare AI is hard and flawed naturally in many ways
4) we hold healthcare AI to a higher bar than humans because we don't have a liability framework
full post is in the next tweet
[image]
2 replies · 1 repost · 14 likes · 2.7K views