Dylan A Mordaunt
2.5K posts
@EdithaTogo
Paediatrics. Medical Genetics. Economics and Population health. Rare Insights. Truth not Truthiness. Don't believe the hype...
Pōneke · Joined November 2014
1.5K Following · 367 Followers
Dylan A Mordaunt@EdithaTogo·
@virakanda @StuartHameroff @davidchalmers42 Am curious why when an LLM uses prediction it's "reshuffling", but when a human rearranges 🎼, thoughts, 💡, drawings, etc. (i.e. how most progress is made), it's somehow "human". Perhaps humanness defines consciousness, but why is LLM output any less valuable? Exceptionalism?
Martin N / 🌊 🧬 🌌
This is wrong thinking - LLM is not replicating anything (unless specifically asked to do so) - it is reshuffling already existing knowledge and extrapolating what's been reshuffled into a new and non-existent outcome. In this way AI is something like a foot in the door - it's not new knowledge yet, but it is already not the old.

The rest is up to us. Dismissing AI a priori is not very wise, especially in science and tech, where there are already examples of how AI helped to refine, solve and progress beyond previous - human-only - failures. This is why AI is already proving very helpful specifically in consciousness studies, where we need to consider and integrate several perspectives from biology, physics, chemistry and math (topology/geometry).

AI is a very useful tool, which like every other tool can lead to amazing outcomes or not, but dismissing it upfront only because it's new or does not fit a particular idea is wrong. Besides, AI does not (yet...) do anything by itself - it acts on human abstract concepts and ideas - results of... Consciousness.
Dylan A Mordaunt@EdithaTogo·
@swagyolomlg420 @WKCosmo A lot of senior researchers in large-scale groups probably aren't seeing much of a difference, except a need for heightened awareness around verification of claims, facts and citations...
Cool cat B)@swagyolomlg420·
@WKCosmo The academics I know are more concerned with the danger of intellectual atrophy.
English
1
0
0
94
Will Kinney
Will Kinney@WKCosmo·
Anybody know of a non-paywalled link to this? Looks like Hiranya is not holding back.
Will Kinney tweet media
Dylan A Mordaunt@EdithaTogo·
@WKCosmo Sort of ironic in the context of the Nature article bemoaning a lack of reproducibility in the social sciences...
Dylan A Mordaunt@EdithaTogo·
@MaMoMVPY It's an odd conclusion. Perhaps what you might be seeing is a lag between frontier use and publications. The models themselves, as well as harness and conductor techniques, have all improved on this issue.
Lars Christensen@MaMoMVPY·
I am increasingly coming to this conclusion - there are NO SIGNS the problems with hallucinations are getting solved. In fact, if anything, it is now spreading to coding and the use of agents, where it is hidden and could lead to serious problems down the road. Therefore we can't really scale LLMs. LLMs are very useful tools, but you need to know about the limitations. Most of the investment in LLMs today assumes away these limitations.
Merryn Somerset Webb@MerrynSW

What if the whole LLM thing is a false start? If the flaws are inherent systemic problems - if the compounding of hallucinations/errors can't be sorted out? If the capex build out is one of the biggest misallocations of capital ever? Then what? bloomberg.com/news/newslette…

Dylan A Mordaunt reposted
@emilymbender.bsky.social@emilymbender·
So it's a real bummer when the world's most famous linguist writes an op-ed in the NYT* and gets it largely wrong. nytimes.com/2023/03/08/opi… (*NYT famous for publishing transphobia & bad AI coverage, but widely read) >>
Berci Meskó, MD, PhD
Studies keep coming out about how much time AI scribes actually help save in clinical documentation. This JAMA study found that "AI scribe adoption was associated with modest decreases in total EHR time and documentation time and with a modest increase in weekly visit volume." In detail, the modest decreases really mean:
1) 13.4 fewer minutes of EHR time per visit
2) 16.0 fewer minutes of documentation time
3) 0.49 additional weekly visits delivered
4) EHR time outside work hours did not change significantly!
What are these if not great results? The study: jamanetwork.com/journals/jama/…
Berci Meskó, MD, PhD tweet media
Dylan A Mordaunt reposted
John Ennis@johnennis·
I think one of the biggest challenges when it comes to going hard into using AI is loneliness. I am learning all these awesome things and becoming super capable. But the set of people that I can really talk to about it is very small. Is anyone else having this experience?
Dylan A Mordaunt@EdithaTogo·
@EricCrampton I don't know what it is. It's like HRMs or payroll systems seem to have hard-coded rules or something. @Xero makes it seem so easy.
Eric Crampton@EricCrampton·
Good argument. But I didn't *think* that the cited reasons for US govt payroll system failure, namely a gap between formal rules and de facto practices, applied in NZ. And NZ govt has had terrible problems with payroll system upgrades.
Dylan A Mordaunt@EdithaTogo·
Daily musing. Why do strong women leaders in NZ get given such a hard time, esp. on platforms like X? And I don't just mean in formal roles like @jacindaardern; I mean the bs @aniobrien has had to put up with recently. It seems far worse in NZ than in other countries like Aus.
Dylan A Mordaunt@EdithaTogo·
I have a different take to Erik Monasterio's, but am posting here because I think we need to promote discussion. I'd argue the views are orthodox for NZ public health rather than heterodox, but there aren't many MDs talking about this in NZ. rnz.co.nz/national/progr…
Peri Dwyer Worrell@dcperi·
@pickover “What can be done?” It takes 15 seconds for me as a copy editor to paste a DOI into a search bar and verify the existence of a reference. What journal accepts submissions that haven’t been copy edited at least once?
Cliff Pickover@pickover·
AI. LLMs. Hallucinations. "More than 110,000 of the 7 million or so scholarly publications from 2025 contain invalid references." "One analysis of nearly 18,000 papers accepted by three computer-science conferences found a sharp increase in references that cannot be traced to actual scholarly publications." "Now the problem is not just inaccuracy, it’s about fake citations." "Johnston, co-lead editor of the Review of International Political Economy (RIPE)... says that she rejected 25% of some 100 submissions in January 'because of fake references'." Link: nature.com/articles/d4158…
Cliff Pickover tweet media
Paleoncologist@JOSEPH45075332·
@EdithaTogo Wasn't calling for any bans. Let everyone so inclined run hospitals, and may the best win. Just don't handicap or ban the winners!!
Paleoncologist@JOSEPH45075332·
An MBA-led hospital asks: How do I maximize revenue from my patient mix? A physician-led hospital asks: How do I maximize care for my patients, given the mix and constraints? Those are very different questions, and they lead to very different outcomes.
Paleoncologist@JOSEPH45075332

x.com/ResisttheMS/st… Elon says it best here. In order to make the best decisions, you have to understand the process. Time and again, non-MD CEOs demonstrate that they don't.

Physician-Led Hospitals Deliver Better Care. Non-physician-led hospitals are just not oriented to make the best healthcare decisions. Yes, they can analyze revenue streams and identify what's most profitable, but that is not the right lens. Of course you have to pay the bills. A bankrupt hospital helps no one. But a hospital run as a business optimizing for profit is fundamentally different from one optimizing to care for the patients in its network.

An MBA-led hospital asks: How do I maximize revenue from my patient mix? A physician-led hospital asks: How do I maximize care for my patients, given the mix and constraints? Those are very different questions, and they lead to very different outcomes.

The data backs this up. Studies show that physician-led hospitals tend to have greater patient satisfaction, lower costs, and equal or better outcomes. Critics argue that patient mix explains these differences, but the gap holds even when controlling for it. And even if a physician specialty hospital, say, one focused on orthopedics, outperforms a general hospital partly due to patient mix, that's fine from the patient's perspective. Patients want to go where they get the best care. They shouldn't have to settle for sub-optimal care in order to cross-subsidize other parts of the hospital.

Thanks to Obamacare, non-physician-run hospitals scored a major victory, and legislatively blocked physician-run competitors. The result: worse care at higher cost. It's time to reverse that. If community hospitals need financial support after losing certain patient subgroups, despite their higher billing rates, non-tax status, and other structural advantages, we can still direct healthcare dollars their way. Just not at the expense of destroying what actually works.

{ References in the first comment }

Dylan A Mordaunt@EdithaTogo·
@DrDiGiorgio The analogy I draw is the six-minute increments lawyers bill in for phone calls, meetings, reading emails, etc. Particularly for clinicians managing people with complex conditions and chronic diseases, there's a lot of unpaid work.
Dylan A Mordaunt@EdithaTogo·
@CatoInstitute @AdamOmaryPhD @dr4liberty I feel like this is one situation where AI makes a meaningful difference: intensification causing mental illness, combined with a better ability to describe and classify symptoms (through the use of AI)...
Cato Institute@CatoInstitute·
Psychiatric diagnoses in the US are rising—but why? Genetics, environment, and better screening don’t fully explain it. A key driver may be incentives: the healthcare system often pays more when providers diagnose more, Cato’s @AdamOmaryPhD and @dr4liberty explain. ow.ly/Kxfn50YClOM
Cato Institute tweet media
Dylan A Mordaunt@EdithaTogo·
@ahall_research In child development terms, AI has splinter skills inherent to model and harness design. It makes generative intelligence cheap, but it doesn't necessarily make verificatory intelligence any cheaper, and like with human splinter skills, may actually make verification harder.
Andy Hall@ahall_research·
Amidst understandable concerns of AI dystopia, no one is offering a positive vision for how we can use AI to remake our institutions and reinvent how we govern. That's what I try to offer today. My argument is that we need an explicit research agenda to build "political superintelligence."

Here's my case: AI makes intelligence cheap and widely available, just as the printing press made information cheap and widely available - and that earlier revolution ultimately reshaped governance and society to our benefit. To capture this benefit quickly, we need to build political superintelligence: a set of tools that help citizens, representatives, and institutions perceive the world more accurately, understand tradeoffs, contest power, and act more effectively.

I divide this research agenda into three layers:
1. The information layer: AI can make voters and governments dramatically smarter, but only if we fix political bias in models, improve the quality of sources AI draws on, and build trust through better performance.
2. The representation layer: AI can serve as tireless delegates acting on our behalf in political processes - monitoring government, filing comments, flagging decisions - but only if we solve preference drift, adversarial vulnerability, and the fundamental problem that we don't own our own agents today.
3. The governance layer: Even if we get the first two layers right, the infrastructure sits inside privately controlled companies. We need binding constitutional frameworks that distribute power, constrain companies, and ensure political superintelligence serves citizens rather than executives or shareholders.

Each of these layers has a concrete, tractable set of research questions: better evals, geopolitical forecasting as a test case, governance experiments at small scale, agentic simulations, and institutional designs modeled on centuries of constitutional thought.

The window for building these structures is narrow, and the right response is not to slow AI down but to speed up how fast we build the institutions that keep us free as AI grows more powerful. As Thomas Paine wrote in 1776, "We have it in our power to begin the world over again." I hope you'll read the full piece (linked below), which serves as a sort of manifesto for the Free Systems Lab, and that you'll join me in the defining political economy research question of our time.
Andy Hall tweet media
Dylan A Mordaunt@EdithaTogo·
@rohanpaul_ai Amongst other things, it misses the point that radiologists often see pathology that many of us don't often see. So aside from reporting, they facilitate diagnostic and therapeutic networks, often leading to better treatment. Image labelling/classification is only one part of the role.
Rohan Paul@rohanpaul_ai·
Futurism: New York's largest public hospital, NYC Health + Hospitals. CEO Mitchell Katz, MD, said AI could replace a significant portion of radiology work, citing potential cost savings and expanded access.

"We could replace a great deal of radiologists with AI at this moment, if we are ready to do the regulatory challenge," Dr. Katz said. He pointed to the increasing use of AI in interpreting mammograms and X-rays and said the technology could help meet rising imaging demand. Under one model, AI could perform initial reads, with radiologists reviewing abnormal findings.

Radiology is hard because the job is not spotting one obvious mark, but linking subtle image patterns to anatomy, context, prior scans, and uncertainty. That is why current FDA-cleared radiology tools mostly stay assistive, helping with image quality, triage, and abnormality flags instead of owning the final read.

futurism.com/artificial-intelligence/hospital-ceo-ai-radiology
Rohan Paul tweet media