
Charles Jaffe
@cjaffemd
Charles Jaffe, MD, PhD is the CEO of HL7. He has spent over 30 years in healthcare IT, spanning academia and leadership roles at Intel Corp and AstraZeneca.

We worry about AI errors in healthcare, but forget that we already accept a certain level of human error. This gives us a clear starting point for deciding if AI is good enough.

→ AI scribes: we worry that AI will hallucinate things that the patient said…
… but forget that humans can mishear or forget what was said.

→ AI patient summaries: we worry that physicians will just sign off on discharge summaries without going through the charts themselves for accuracy…
… but forget that physicians sign off on medical trainee summaries without personally going through the charts. And as much as AI can miss something… how long do you expect busy residents or medical students to spend poring over every in-patient note and data point? They're bound to miss something.

So of course there is a risk in using AI… just as there is a risk in human-delivered care. Which is why I find it strange when the media highlights car accidents involving autonomous vehicles instead of comparing actual accident rates between AI and human drivers.

We should not demand perfection from AI before we use it. Rather, we should expect at minimum that AI (or any technology, for that matter) is at least as safe and effective as the average human clinician before using it at scale. We don't hold humans to a 0% error rate, so it doesn't make sense to hold AI to one either.

Now, where it gets challenging is how to measure risk when the actual risks differ between the AI and human approaches. E.g., an AI scribe may be more likely to make hallucination errors (occasionally documenting a made-up detail), whereas a human is more likely to make omission errors (forgetting to document a detail).

So it's important to have clear frameworks for how we think about effectiveness and safety for these different use cases. And to have a clear baseline expectation that's set not by perfection, but by the currently acceptable human clinician standard. And, of course, to aim to get better and better going forward.

The future of #HL7 #FHIR is now! @GrahameGrieve @FirelyTeam #FHIRDevDays

Sync for Social Needs blog.hl7.org/sync-for-socia…

NEW PAPER! Excited to be part of this important work published in @Nature showing the power of #SARSCoV2 wastewater sequencing across @UCSanDiego and @SanDiegoCounty to detect important variants well before they manifest clinically! 👉nature.com/articles/s4158…

On this InteropNow! Podcast episode, @HL7 talks about expanding its programs to bring healthcare stakeholders into the feedback process to better inform the evolution of healthcare APIs. ⭐ @cjaffemd, CEO ⭐ @diegokaminker, Deputy Chief & more! Listen here! lnkd.in/e8_-_fF4

Chemo delayed. Surgery postponed. Treatment deferred. Death from #COVID19 is not the only bad outcome.
