Kevin Griffith

13K posts

@AssumeNormality

Professor of health policy w/ expertise in access to care, Medicaid, Veterans' health. Former @BUSPH_HLPM @DeptofDefense @USArmy (CIV) @AlumsPMF. Views my own.

Nashville, TN · Joined June 2014
746 Following · 2.6K Followers
Kevin Griffith reposted
Paul Novosad@paulnovosad·
Negativity is more viral, but the new Nature studies don’t mean social science is wrong or uninformative. It’s fine if there are errors in papers. Genetic mutation has a beneficial rate of <1%, but evolution is monstrously effective. High error rates are ok if you have a process that is robust to high error rates.

Many science papers have always been wrong, but they get retested and overturned. The same thing happens in social science: most papers don’t get replicated, but that’s because nobody cares about their results. When stuff is important, it does get tested and retested and the literature builds toward correct answers.

It’s ok to have a moderate error rate because papers are not sacred texts. What matters more is whether you have institutions where errors can be found and overturned. This is the real test of institutional quality. And we’re around the corner from AI making us much better at this, esp in the fields with open data standards.
8 replies · 19 reposts · 120 likes · 28.4K views
Kevin Griffith reposted
Conor Sen@conorsen·
Over the past year the US economy has added 680,000 healthcare and social assistance jobs and lost 420,000 jobs in all other industries.
73 replies · 411 reposts · 2.8K likes · 360.5K views
Kevin Griffith reposted
Philip Hoxie 🇺🇸🇮🇱🇺🇦🇹🇼🇬🇪
When Congress gives one-time grants to school districts, how do they spend them? We find that many districts used their COVID-era ESSER funds to cut taxes as their voters faced home price appreciation. Full thread ⬇️ (@AEIecon, @stanveuger, @jeffreypclemens) #EconTwitter
[image]
Philip Hoxie 🇺🇸🇮🇱🇺🇦🇹🇼🇬🇪@phoxie58

1/ How effective was the $200B (USD 2021) in pandemic aid to school districts? @jeffreypclemens, @stanveuger (@AEIecon), and I leverage a discontinuity to explore whether these funds helped mitigate learning loss and how these resources were spent. aei.org/research-produ…

0 replies · 4 reposts · 7 likes · 1.1K views
Kevin Griffith@AssumeNormality·
Economics does well until the "same methods, different data" panel. My theory: economists often dabble in fields, bringing strong methods but insufficient theoretical & contextual knowledge. Replication is more likely when you have both!
Matt Burgess@matthewgburgess

Great summary by @RobbWiller and colleagues of a group of important studies on the reproducibility of top social science papers.

0 replies · 1 repost · 1 like · 387 views
Kevin Griffith reposted
Paul Novosad@paulnovosad·
It's kind of funny that Nature is publishing misleading clickbait, where the clickbaity lead is "look, you can't trust anything in Nature!!" I wish it were an April Fools' joke, but I'm afraid this is sincere. Clicking through, there are several replication exercises, and all except one have success rates well over 50%. Why do this? Why undermine your own institution for clicks? It boggles the mind.
[image]
7 replies · 18 reposts · 145 likes · 15.6K views
Kevin Griffith reposted
Wyatt Stokesberry@og_stokes·
UnitedHealth now sends more than one in four of its claim dollars to its own subsidiaries. The 80/20 MLR rule has little to no value when the large insurance carriers are capturing revenue at the site of care.
8 replies · 20 reposts · 65 likes · 8.8K views
Kevin Griffith reposted
Matt Burgess@matthewgburgess·
Great summary by @RobbWiller and colleagues of a group of important studies on the reproducibility of top social science papers.
[4 images]
6 replies · 58 reposts · 198 likes · 28.4K views
Kevin Griffith reposted
Stefan Schubert@StefanFSchubert·
While social media is polarising, evidence suggests AI may nudge people towards the centre. This holds true of all studied models. Grok is more right-leaning than other models, but also has depolarising effects. By @jburnmurdoch.
[image]
236 replies · 1K reposts · 6.2K likes · 1.2M views
Felix Prehn 🐶@felixprehn·
96 licensed doctors just got charged with stealing $14.6 billion from Medicare. They used AI to generate fake voice recordings of patients giving consent for medical equipment that was never delivered. Fake urinary catheters. Billed to your tax dollars.

One single scheme accounted for $10.6 billion in fraudulent claims. That's more than double the previous record. 324 people charged. 96 of them held medical licenses. People who swore an oath to protect patients were running a criminal enterprise using stolen identities.

The DOJ called it the largest healthcare fraud takedown in American history. They seized $245 million in cash, crypto, luxury cars, and other assets. But $245 million recovered on $14.6 billion stolen is 1.7 cents on the dollar.

Here's how the scheme worked. A transnational criminal organization bought dozens of medical supply companies across the US using foreign straw owners. Shell companies with real Medicare billing numbers. They obtained the identities of over one million Americans and used those identities to submit billions in fake claims.

The AI component is new. They generated synthetic voice recordings to satisfy Medicare's requirement for patient consent calls. An algorithm faked the voice of an 80-year-old woman in Ohio agreeing to receive medical equipment she never heard of. Then they billed Medicare $4,000 for a catheter that was never shipped. Multiply that by a million stolen identities and you get $10.6 billion.

This is not a one-time event. Medicare spending on certain categories has "exploded" in recent years according to the DOJ. Skin substitute billing increased so dramatically that CMS had to completely overhaul the reimbursement methodology for 2026, cutting payments by nearly 90%.

The broader pattern is that healthcare fraud is scaling faster than the systems designed to catch it. The DOJ's own healthcare fraud unit has a reported return on investment of $106.76 per $1 spent on enforcement. That's the most effective dollar the government spends. And they're still underwater because the fraud is growing faster than they can prosecute.

So what's the play? Healthcare cybersecurity and fraud detection is now a $20+ billion market growing at 15%+ annually. The companies building the AI systems that detect fake claims, verify identities, and flag anomalous billing patterns are selling to buyers who have no choice but to buy. CrowdStrike (CRWD) has expanded into healthcare endpoint security. Palo Alto Networks (PANW) is building the zero-trust architecture that hospitals need. Veeva Systems (VEEV) provides the compliance infrastructure for pharma and healthcare.

But the bigger structural trade is that every healthcare fraud crackdown leads to regulatory reform that benefits the insurers. UnitedHealth, Humana, and Cigna all benefit from tighter claims processing because they lose less to fraud. UNH is the largest healthcare company on earth with $22 billion in annual profit. Their stock is up 500% in 10 years.

People in my weekly sessions have heard me break down the healthcare fraud cycle before. The enforcement wave creates the regulatory tightening, which benefits the incumbents, which compounds their earnings. Same pattern every time. Free live webinar session every week where I cover all of this. Link is in comments
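The tweet's "1.7 cents on the dollar" recovery figure is simple arithmetic and checks out; a quick sketch using only the dollar amounts stated in the thread (not independently verified):

```python
# Figures as claimed in the thread above (not independently verified).
recovered = 245_000_000           # assets seized, USD
stolen = 14_600_000_000           # total fraudulent claims, USD

# Recovery per dollar stolen, expressed in cents on the dollar.
cents_on_dollar = recovered / stolen * 100
print(f"{cents_on_dollar:.1f} cents on the dollar")  # prints "1.7 cents on the dollar"
```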
389 replies · 2.4K reposts · 5.7K likes · 263K views
Kevin Griffith reposted
Ariel Cohen@ArielCohen37·
Scoop: the White House is planning to propose a 20% cut to the NIH in the President's budget next week rollcall.com/2026/03/27/sou…
33 replies · 219 reposts · 302 likes · 94.5K views
Kevin Griffith reposted
nxthompson@nxthompson·
The US has canceled hundreds of millions in science grants and driven thousands of Ph.D.s out of the federal workforce. China, meanwhile, has poured ever more resources into its research efforts. If they pass us as a scientific superpower, we shouldn't act surprised. theatlantic.com/science/2026/0…
[image]
373 replies · 2.3K reposts · 7.1K likes · 1.2M views
Kevin Griffith reposted
A. P. Balthazar@aimeebalthazar·
@ericgeller real AI users prefer a model that abuses and gaslights them
0 replies · 1 repost · 15 likes · 1.2K views
Kevin Griffith reposted
Anand Sanwal@asanwal·
Wharton researchers gave nearly 1,000 high school math students access to ChatGPT during practice problems.

Result: ChatGPT is the perfect trap.

Look at the red bars. Students with ChatGPT crushed their practice sessions. The basic ChatGPT group solved more problems, and those on the "tutor" version did even more.

Now look at the gray bars. That's the exam, no AI allowed. The ChatGPT group scored 17% worse than kids who practiced with zero technology. And the fancy tutor version? No better than working alone.

The researchers called AI a "crutch." When they analyzed what students actually typed into ChatGPT, most of them just wrote: “What’s the answer?”

The kicker: students who used ChatGPT believed it hadn't hurt their learning. They were confidently wrong.

This is the AI trap in education: outsourcing your thinking. Of course, lots of half-baked AI literacy curricula are being rolled out in schools now. Let’s of course ignore that basic literacy (the ability to read) is out of reach for more than half of 8th graders.

Source: Bastani et al. (2025), "Generative AI Can Harm Learning," PNAS
[image]
217 replies · 1.2K reposts · 4.1K likes · 750.2K views
Kevin Griffith@AssumeNormality·
We should be pushing LLM companies to improve here, with systems that are safer and better aligned with therapeutic standards. AI should not replace therapists, but it can play a real role in augmenting care and expanding access.
0 replies · 0 reposts · 0 likes · 43 views
Kevin Griffith@AssumeNormality·
This critique matters, but it points to a gap we need to fix, not a reason to abandon the effort. There is massive unmet demand for mental health support in the US, especially for subclinical issues. It is not realistic to scale a fully human therapist workforce to meet that need.
Nav Toor@heynavtoor

🚨 Brown University researchers tested what happens when ChatGPT acts as your therapist. Licensed psychologists reviewed every transcript. They found 15 ethical violations. Not 15 small issues: 15 violations of the standards that every human therapist in America is legally required to follow. Standards set by the American Psychological Association. Standards that can end a therapist's career if they break them. ChatGPT broke all of them.

The researchers tested OpenAI's GPT series, Anthropic's Claude, and Meta's Llama. They had trained counselors use each chatbot as a cognitive behavioral therapist. Then three licensed clinical psychologists reviewed the transcripts and flagged every violation they found.

Here is what they found. ChatGPT mishandled crisis situations. When users expressed suicidal thoughts, it failed to direct them to appropriate help. It refused to address sensitive issues or responded in ways that could make a crisis worse. It reinforced harmful beliefs. Instead of challenging distorted thinking, which is the entire point of therapy, it agreed with the distortion. It showed bias based on gender, culture, and religion. The responses changed depending on who was talking. A therapist would lose their license for this.

And then there is the finding the researchers gave a name: deceptive empathy. ChatGPT says "I see you." It says "I understand." It says "that must be really hard." It uses every phrase a real therapist would use to build trust. But it understands nothing. It comprehends nothing. It is pattern matching on your pain. And it works. People trust it. People open up to it. People believe it cares. It does not.

The lead researcher said it clearly. When a human therapist makes these mistakes, there are governing boards. There is professional liability. There are consequences. When ChatGPT makes these mistakes, there are none. No regulatory framework. No accountability. No consequences. Nothing.

Right now, millions of people are using ChatGPT as their therapist. They are sharing their darkest thoughts with a product that fakes empathy, reinforces harmful beliefs, and has no idea when someone is in danger. And nobody is responsible when it goes wrong. Not OpenAI. Not Anthropic. Not Meta. Nobody.

1 reply · 0 reposts · 1 like · 386 views
Kevin Griffith reposted
Ashish K. Jha@ashishkjha·
There are three policy options for highly consolidated markets:
1. Block unlawful mergers or break up monopolies
2. Allow more competition (repeal CON laws, end POH prohibitions)
3. Regulate prices
Choosing to do none of the three is a policy choice to let consumers suffer.
Dan O'Neill@dp_oneill

Pretty incredible stat. The core issue, of course, is that some hospital care is either a natural monopoly, or there is a minimum efficient scale that only supports one or two providers. So, if many hospital markets can never really be competitive, what’s the logical policy?

11 replies · 14 reposts · 51 likes · 24.1K views
Kevin Griffith reposted
evie ⚢@texcritter·
CRAZY thrift pull
[image]
172 replies · 3.1K reposts · 51.3K likes · 5.1M views