Mark Zobeck

1.2K posts

@MarkZobeck

MD, MPH. Peds Hem/Onc. Data nerd using technology to improve healthcare for kids around the world. Love languages: graphs, references, Bayesian statistics

Joined November 2012
1.4K Following · 979 Followers
Mark Zobeck retweeted
Show Me The Data
Researchers from MIT, Harvard, UCSF, and Boston University CONFIRM that EPIC SUCKS and is a MONOPOLY, not because of superiority but because of our stupid legislators, and is causing:
- Stagnation of Innovation
- Inequities in Data Access
- Compromised Patient Care
- Increased Cost of Care
Epic Systems has established an extraordinarily dominant position in U.S. healthcare IT, controlling over half of acute care hospital beds and dominating academic medical centers. It functions as the de facto gatekeeper for health data belonging to hundreds of millions of Americans. Its market power stems not from superior technology but from structural lock-in mechanisms that make switching (defection) extremely expensive and difficult for hospitals and health systems. The government has enabled the concentration of the “digital backbone” of American healthcare in a single private vendor, and we're paying for it dearly. Link to the study in the next post.
44 replies · 63 reposts · 342 likes · 16.9K views
Mark Zobeck retweeted
Machine Learning Street Talk
Machine Learning Street Talk@MLStreetTalk·
New high-effort article "Why Creativity Cannot Be Interpolated", co-written with Dr. Jeremy Michael Budd. Yes, the name is a pun on the famous book by @kenneth0stanley! The counterintuitive thesis (a corollary of Kenneth's research):
- Intelligence and agency are orthogonal to creativity - and sometimes actively hostile to it.
- Genuine creativity is impossible without deep understanding, and creativity without understanding is "slop".
The strangest property of LLMs: within a single frame they seem to comprehend so deeply, yet they possess no perspective of their own. Like the blind men and the elephant parable, each report is accurate, yet none integrates. We call this "frame-dependent" understanding, and it will change how you think about AI creativity. We started writing this 2 years ago, and this is our distilled understanding of AI creativity in 2026.
4 replies · 18 reposts · 100 likes · 6.8K views
Pavlos Msaouel
Pavlos Msaouel@PavlosMsaouel·
@kaydaustin @ERPlimackMD @crisbergerot @NazliDizman @BradMcG04 @g_procopio_ @maughanonc @salvolarosa @LabGenovese @JAMouabbi @IamLinghua @OAlhalabiMD @perelli_luigi @rahulshethmd @ChadTangMD @TiansterZhang @QingZhangLab @BCottaMD @Daniel_J_George @DrDanielHeng @priyaraomd @UroDocAsh @drmehrarohit @ED_PhD_ @DanielFrigo @rovingatuscap @katy_beckermann @shilpaonc @AmandaNizamMD @ZiadBakouny
5/5 Clinical evidence: Sacituzumab govitecan in 4 heavily pretreated RMC pts → 1 PR (~5.3 mo) + 2 SD; median PFS 2.9 mo. Early, small-n, but proof that biology-guided targeting can bend the curve for RMC. Next: more RMC-specific trials based on these biological insights.
2 replies · 6 reposts · 20 likes · 801 views
Mark Zobeck retweeted
Africa CDC
Africa CDC@AfricaCDC·
🗞️ Breaking: @AfricaCDC and the Republic of Ghana, in collaboration with @TexasChildrens, have launched “A New Day for Children with Sickle Cell Disease” at #UNGA80. This initiative will integrate SCD into primary health care, strengthen early detection, and expand access to affordable medicines—helping more children survive and thrive. Read more: ow.ly/cqwQ50X1kpJ #AfricaCDC #NewPublicHealthOrder
0 replies · 13 reposts · 24 likes · 1.3K views
Mark Zobeck retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
"AI isn't replacing radiologists" good article Expectation: rapid progress in image recognition AI will delete radiology jobs (e.g. as famously predicted by Geoff Hinton now almost a decade ago). Reality: radiology is doing great and is growing. There are a lot of imo naive predictions out there on the imminent impact of AI on the job market. E.g. a ~year ago, I was asked by someone who should know better if I think there will be any software engineers still today. (Spoiler: I think we're going to make it). This is happening too broadly. The post goes into detail on why it's not that simple, using the example of radiology: - the benchmarks are nowhere near broad enough to reflect actual, real scenarios. - the job is a lot more multifaceted than just image recognition. - deployment realities: regulatory, insurance and liability, diffusion and institutional inertia. - Jevons paradox: if radiologists are sped up via AI as a tool, a lot more demand shows up. I will say that radiology was imo not among the best examples to pick on in 2016 - it's too multi-faceted, too high risk, too regulated. When looking for jobs that will change a lot due to AI on shorter time scales, I'd look in other places - jobs that look like repetition of one rote task, each task being relatively independent, closed (not requiring too much context), short (in time), forgiving (the cost of mistake is low), and of course automatable giving current (and digital) capability. Even then, I'd expect to see AI adopted as a tool at first, where jobs change and refactor (e.g. more monitoring or supervising than manual doing, etc). Maybe coming up, we'll find better and broader set of examples of how this is all playing out across the industry. About 6 months ago, I was also asked to vote if we will have less or more software engineers in 5 years. Exercise left for the reader. Full post (the whole The Works in Progress Newsletter is quite good): worksinprogress.news/p/why-ai-isnt-…
Deena Mousa@deenamousa

In 2016 Geoffrey Hinton said “we should stop training radiologists now” since AI would soon be better at their jobs. He was right: models have outperformed radiologists on benchmarks for ~a decade. Yet radiology jobs are at record highs, with an average salary of $520k. Why?

416 replies · 1.3K reposts · 8.7K likes · 2.3M views
Mark Zobeck
Mark Zobeck@MarkZobeck·
This is helpful, but we're still far away from causal reasoning in the sense of do(x) → y. Grounding reasoning in the real world is a must. RLHF will only ever be a cartoon sketch of causation.
Stephanie Chan@scychan_brains

I still hear people say that LLMs are "just statistical pattern matchers" without grounded understanding. This probably reflects the influential arguments of Bender and Koller, which have a lot of validity. But there are two major reasons we should update these views: 🧵 👇

0 replies · 0 reposts · 0 likes · 86 views
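The do(x) → y distinction in the tweet above can be made concrete with a toy simulation (my illustration, not from the thread; all parameters are made up): when a hidden confounder drives both treatment and outcome, the observational conditional P(Y=1|X=1) and the interventional P(Y=1|do(X=1)) disagree.

```python
import random

random.seed(0)

def sample(intervene_x=None):
    # Hidden confounder Z influences both treatment X and outcome Y.
    z = random.random() < 0.5
    if intervene_x is None:
        # Observational regime: X depends on Z (confounded).
        x = random.random() < (0.8 if z else 0.2)
    else:
        # Interventional regime do(X=x): the Z -> X arrow is cut.
        x = intervene_x
    y = random.random() < (0.3 + 0.2 * x + 0.4 * z)
    return x, y

n = 100_000
obs = [sample() for _ in range(n)]
p_y_given_x1 = sum(y for x, y in obs if x) / sum(x for x, _ in obs)
p_y_do_x1 = sum(y for _, y in (sample(True) for _ in range(n))) / n

# Conditioning picks up the confounder's effect; intervening does not.
print(f"P(Y=1 | X=1)     = {p_y_given_x1:.2f}")
print(f"P(Y=1 | do(X=1)) = {p_y_do_x1:.2f}")
```

Analytically, P(Y=1|X=1) = 0.5 + 0.4·P(Z=1|X=1) = 0.82, while P(Y=1|do(X=1)) = 0.5 + 0.4·0.5 = 0.70 - no amount of pattern matching on the observational data alone recovers the second number.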
Dr Kareem Carr
Dr Kareem Carr@kareem_carr·
submitted the final draft of my dissertation last night. feeling extremely proud but also extremely exhausted.
54 replies · 29 reposts · 865 likes · 42.4K views
Mark Zobeck retweeted
Ben Van Calster
Ben Van Calster@BenVanCalster·
In our latest work, we demonstrate that risk estimates for patients are HUGELY uncertain due to model uncertainty, data uncertainty, and population uncertainty. Even when based on large sample sizes. @laure_wynants @ESteyerberg arxiv.org/abs/2506.17141
3 replies · 25 reposts · 77 likes · 13.3K views
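A rough feel for the data-uncertainty part of the claim above can be had with a bootstrap toy (my sketch, not the preprint's method; cohort size, pattern rate, and risks are all hypothetical): even with 10,000 patients overall, a risk estimate for a patient whose covariate pattern is rare swings widely across bootstrap resamples.

```python
import random

random.seed(1)

N, PATTERN_RATE = 10_000, 0.01   # large cohort, but a rare covariate pattern
RISK_IN, RISK_OUT = 0.20, 0.05   # true event risk inside/outside the pattern

def make_cohort():
    cohort = []
    for _ in range(N):
        in_pattern = random.random() < PATTERN_RATE
        event = random.random() < (RISK_IN if in_pattern else RISK_OUT)
        cohort.append((in_pattern, event))
    return cohort

cohort = make_cohort()

def pattern_risk(sample):
    # "Model" reduced to its simplest form: the event rate in the subgroup.
    hits = [event for in_pattern, event in sample if in_pattern]
    return sum(hits) / len(hits) if hits else float("nan")

# Refit on 500 bootstrap resamples of the same large dataset.
boot = sorted(
    pattern_risk([random.choice(cohort) for _ in range(N)])
    for _ in range(500)
)
lo, hi = boot[12], boot[487]     # ~95% percentile interval
print(f"point estimate: {pattern_risk(cohort):.3f}")
print(f"bootstrap 95% interval: [{lo:.3f}, {hi:.3f}]")
```

The point estimate rests on only ~100 effective patients, so the interval spans a clinically meaningful range - the "large sample size" of the full cohort does little for this individual's estimate, which is the flavor of the paper's argument.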
Mark Zobeck
Mark Zobeck@MarkZobeck·
@ErwanLamy1 @eliasbareinboim I'll add that the two are complementary: the FEP (free energy principle) depends on an agent's generative model of the data. An agent should use causal inference methods to build a generative model that causally reflects the generative process, which it can then use to further minimize free energy.
0 replies · 0 reposts · 1 like · 29 views
Elias Bareinboim
Elias Bareinboim@eliasbareinboim·
In a recent work (causalai.net/r136.pdf), we examined whether LLMs are potential sources of probabilistic knowledge (rung 1 of Pearl's hierarchy), which led to the benchmark at llm-observatory.org. The answer was no, which was surprising and poses fundamental challenges for various downstream tasks and key capabilities (including explanation, decision-making, generalization, safety, and learning), given that inferences about interventions (rung 2) and counterfactuals (rung 3) build on rung-1 knowledge.

Many have asked me about the potential of LLMs over the past year or so. In short: they could become extraordinary repositories of knowledge, à la Internet or Wikipedia. But when it comes to the broader ambitions of AI, there’s still a good way to go; the interplay between language and causality remains largely unmapped and only rudimentarily understood. I hope more young researchers will join the effort to tackle these generational challenges. Despite all the hype, we still need foundational principles for a better science of intelligence, one that integrates language, causality, and other essential components.
Judea Pearl@yudapearl

There is some confusion among readers of #Bookofwhy regarding the impressive "causal understanding" of LLMs, which seems to defy the theoretical prediction of the Ladder of Causation. The Ladder predicts that, regardless of data size, no learning machine could correctly answer queries about interventions and counterfactuals unless supplemented with causal knowledge external to the data. LLM programs circumvent this prediction by smuggling causal knowledge into the training data; instead of training themselves on observations obtained directly from the environment, they are trained on linguistic texts written by authors who already have causal models of the world. The programs can simply cite information from the text without attending to any of the underlying data. The result is a sequence of linguistic extrapolations which, in some remote and obscure sense, reflect the causal understanding of those authors. @GaryMarcus @eliasbareinboim @soboleffspaces @geoffreyhinton @DavidDeutschOxf

6 replies · 34 reposts · 203 likes · 36.9K views
Mark Zobeck retweetledi
David Cramer
David Cramer@zeeg·
I can't get over the fact that so many engineers still don't grok the fundamentals of what an LLM is. Repeat after me: it's just pattern matching, it doesn't "know" anything
699 replies · 528 reposts · 7.6K likes · 912K views
Mark Zobeck retweeted
John B. Holbein
John B. Holbein@JohnHolbein1·
Timeless advice
27 replies · 461 reposts · 4K likes · 262.8K views
Pavlos Msaouel
Pavlos Msaouel@PavlosMsaouel·
@Soum_Roy_RadOnc @_MiguelHernan @MDAndersonNews @OAlhalabiMD @OncHahn @f2harrell @JadChahoud @MarkZobeck @PGrivasMDPhD @maxinesun @DrJeffreyGraham @ebludmir @UroDocAsh
Invaluable on a daily basis & used regularly by our faculty & statisticians. All clinicians should have at least some familiarity with causal inference & most data scientists should be ready to use its tools as needed. Concerns of misuse are similar to concerns for Bayesian or frequentist methods.
1 reply · 0 reposts · 6 likes · 191 views
Pavlos Msaouel
Pavlos Msaouel@PavlosMsaouel·
We are excited to host @_MiguelHernan in Houston @MDAndersonNews next week to give this year’s Melvin L. Samuels lecture (hybrid; zoom info in photo). Looking forward to the synergies that will emerge from his visit to advance rigorous causal inference methods across oncology.
1 reply · 8 reposts · 31 likes · 1.9K views