Dan Caron⚡
@dancaron
843 posts
Tweets about Health AI, AGI, longevity. 🛸 CEO Health Universe (Kleiner Perkins) 🚀 Co-Founder @arrivehealth (Raised 32M) 🤖 Hacked insulin pump. FDA approved.
Latent Space · Joined December 2008
802 Following · 438 Followers
Bo Wang @BoWang87
Today we’re announcing X-Cell — Xaira’s first step toward a virtual cell. 🧬 A foundation model that predicts how gene expression changes under causal perturbations — across cell types, conditions, and even unseen biology. This is not trained on observational atlases. It is trained on interventions. 🧵👇
42 replies · 144 reposts · 940 likes · 148.1K views
Dan Caron⚡ retweeted
Denis Wirtz @deniswirtz
Federal funding for US biomedical research is moribund. Since October 1, 2025, NIH is down 80% in new grants and 70% in value (total dollars). Labs are closing down and researchers are leaving science. To what end?
43 replies · 346 reposts · 737 likes · 175.6K views
Healthcare AI Guy @HealthcareAIGuy
Are world models, rather than LLMs, the future of healthcare AI? What do you think?
7 replies · 2 reposts · 38 likes · 5K views
Dan Caron⚡ @dancaron
@WeidiXie @Nature Would love to host/implement this on Health Universe. Are y'all open to that?
0 replies · 0 reposts · 0 likes · 104 views
Weidi Xie @WeidiXie
Glad to share DeepRare, published on @Nature ! nature.com/articles/s4158… Super grateful for the news & views from Prof. Timo Lassmann, it really gives the best views on DeepRare. nature.com/articles/d4158… This is the first agentic system of its kind, designed to solve the complex puzzle of rare disease diagnosis. ✅ Outperforms current best methods by ~24% ✅ 95.4% expert agreement on reasoning chains ✅ Handles clinical notes, HPO terms & genetic data
13 replies · 39 reposts · 239 likes · 13.5K views
Dan Caron⚡ @dancaron
Of course Pearl is generally correct. However, models learn to generalize after seeing enough data, which makes them more than basic correlation machines. Deep neural networks form higher level structures that start to imply more generalized intelligence. Dario Amodei spoke to this in a recent podcast. That said, causal modeling, once the AI crowd gets a hold of it, will be a much more powerful approach.
0 replies · 0 reposts · 1 like · 157 views
Bo Wang @BoWang87
Professor Judea Pearl — the pioneer who invented causal reasoning in AI — says scaling won't save us. "Mathematical limitations that are not crossable by scaling up." The brutal truth: LLMs aren’t learning how the world works. They are learning how we describe the world. This resonates with most biologists: Drug discovery is hitting the same wall. We have mountains of genomic data, but most AI models just find patterns in published papers — not in the raw biology itself. They're learning what scientists think causes disease, not what actually does. Pearl's causal revolution? That's how we move from "this gene correlates with cancer" to "this gene causes cancer" — and finally design drugs that work. Until then, we're building very expensive parrots.
470 replies · 2.1K reposts · 7.5K likes · 795.4K views
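Pearl's "correlates with cancer" vs. "causes cancer" distinction can be made concrete with a toy structural causal model. The gene, confounder, and probabilities below are invented for illustration (they come from no cited work): a hidden confounder drives both gene expression and cancer, so the two are perfectly correlated in observational data even though intervening on the gene changes nothing.

```python
import random

random.seed(0)

def sample(do_gene=None):
    """One draw from a toy structural causal model: a hidden confounder
    drives BOTH gene expression and cancer; the gene itself has no
    causal effect on cancer."""
    confounder = random.random() < 0.5
    gene_on = do_gene if do_gene is not None else confounder
    cancer = confounder
    return gene_on, cancer

# Observational data: gene expression and cancer look perfectly linked.
obs = [sample() for _ in range(10_000)]
p_cancer_given_gene = sum(c for g, c in obs if g) / sum(g for g, _ in obs)

# Interventional data: force the gene on, i.e. do(gene = on).
intv = [sample(do_gene=True) for _ in range(10_000)]
p_cancer_do_gene = sum(c for _, c in intv) / len(intv)

print(p_cancer_given_gene)  # 1.0: "this gene correlates with cancer"
print(p_cancer_do_gene)     # ~0.5: intervening on the gene changes nothing
```

A pattern-finding model trained only on the observational rows would flag the gene as a drug target; only the interventional distribution reveals it is a bystander.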
Joshua Liu @joshuapliu
Think AI will reduce demand for doctors? Consider this: Claude writes 100% of its own code, yet Anthropic's engineering team is exploding. We aren't looking at the end of physicians, but the birth of Medical Orchestrators.

When asked why Anthropic is still aggressively hiring, Boris Cherny, the creator of Claude Code, said: "Someone has to prompt the Claudes, talk to customers, coordinate with other teams, decide what to build next. Engineering is changing and great engineers are more important than ever."

The need for software engineers isn't going away, but the role of an engineer is evolving. This offers a blueprint for how to think about the future physician. Even as Ambient AI evolves from scribing to assessing and ordering, this doesn't mean health systems need fewer physicians. But it does mean that physicians will need to evolve into "Medical Orchestrators": doctors who can direct AI clinical agents, validate AI clinical outputs, and exercise good judgment about when and how AI outputs should be integrated into direct patient care, while STILL talking to patients and managing the people dynamics of collaborating with the broader care team.

In some ways, being an Orchestrator is not a new concept to physicians. Academic physicians experience this daily: they lead a team of fellows, residents, and medical students to care for a large number of patients. In the same way that you orchestrate a team with varying levels of skill and experience, you'll do the same managing a crew of AI clinical agents with varying levels of reliability and utility. Soon the AI scribe will be near perfect and you'll treat it autonomously like a fellow, but AI draft orders are still in the early phases and you'll review their output very closely, just as you would closely review the work of a 3rd-year medical student.

In other words, the value of a physician will increasingly shift from the ability to perform every task to the wisdom of knowing when and how to integrate AI output into direct patient care. It may feel as if AI is reducing work through all this automation, but in fact the limitations of AI in its current state will reveal the next white space that clinicians will finally have the time and mental bandwidth to focus on. For decades, physicians have been unable to optimize patient care because they were buried under the basic requirements for each patient. When AI automates documentation, chart summarization, discharge summaries, and paperwork, it creates bandwidth for physicians to tackle new and important challenges that AI can't, e.g. complex care coordination, re-engineering care delivery models, and making the most difficult clinical and ethical judgments for patients. Even if AI further evolves from automating administrative tasks to automating diagnosis, we will only see an explosion in demand for physicians, not only to grow actual care delivery but to play a critical role in elevating the care we deliver.
20 replies · 49 reposts · 136 likes · 19K views
Dan Caron⚡ @dancaron
@SebastianCaliri Agree. Industry working groups can fill the void, an approach that has been used many times in the past.
0 replies · 0 reposts · 0 likes · 34 views
Sebastian Caliri @SebastianCaliri
None of our institutions that surround medicine - journals, state boards, accreditors, the CPT Editorial Panel - were designed for a world where medicine changes at the speed of software development. It's a cycle-time mismatch: service innovation and testing are slow, while AI innovation just keeps accelerating. So it is no surprise that journals are publishing on long-obsolete models. The same dynamic is also why we don't have, and will never get, CPT codes for AI. I am sure the Joint Commission will soon fall behind in accrediting hospitals that use AI in novel ways. This is not a comment about the inherent goodness or badness of the NEJM or other legacy institutions. It is simply a recognition that they were built for a world that no longer exists. The right posture to meet this moment is deregulatory: emphasize safety, but otherwise let the innovation happen. It's a mistake to regulate that which does not yet exist. Europe tried this approach with the tech sector and now they have no tech sector.
Kevin Roose @kevinroose

i am begging academics to study AI capabilities using frontier models. the models used in this study (which is going to be cited for years as proof that "AI is bad at health advice") are GPT-4o, Llama 3, and Command R+, two obsolete models and one i've never heard of.

12 replies · 14 reposts · 64 likes · 13.7K views
Dan Caron⚡ @dancaron
@kevinroose We help researchers in the health space publish using state of the art AI tools. It's hard for them to keep up.
0 replies · 0 reposts · 0 likes · 33 views
Kevin Roose @kevinroose
i buy the explanation that the publication process is slow. still, if researchers want their work to be relevant, they should find a different way to get it out! publish outside the journals, start a substack, something! x.com/masterlongevit…
masterlongevity @masterlongevity

@kevinroose @KelseyTuoc Part of the problem is process time for academic publications. 2 months research (min), 2 months writing, 6 months for peer review. All models change dramatically in 10 months

29 replies · 7 reposts · 293 likes · 36.1K views
Kevin Roose @kevinroose
i am begging academics to study AI capabilities using frontier models. the models used in this study (which is going to be cited for years as proof that "AI is bad at health advice") are GPT-4o, Llama 3, and Command R+, two obsolete models and one i've never heard of.
111 replies · 111 reposts · 1.6K likes · 329.3K views
Akari Asai @AkariAsai
Thrilled to share: OpenScholar, our work on scientific deep research agents for reliable literature synthesis, has been accepted to Nature! 🎉 Huge thanks to collaborators across institutions who made this possible!
35 replies · 230 reposts · 1.3K likes · 125.3K views
Derya Unutmaz, MD @DeryaTR_
This will be a very important study! I believe more frontier AI labs or AI health startups should do these kinds of randomized studies periodically to assess the impact of AI in medicine. We should have the first AI clinics by 2027, where patients are first seen by AI doctors!
Google Research @GoogleResearch

Does medical AI really work in the real world? It needs to be assessed carefully and responsibly. We will be launching a first-of-its-kind nationwide randomized study with Included Health to evaluate AI in real-world virtual care, to better understand its capabilities & limitations. This study is informed by years of foundational research across Google investigating the capabilities required for a helpful & safe medical AI.

11 replies · 44 reposts · 300 likes · 29.7K views
Sebastian Caliri @SebastianCaliri
Negotiating with health plans to cover clinical AI is a type of pain I would not wish upon my worst enemies. This is not a serious way to get AI into American healthcare. Rather, government needs to lead the way and encourage and incentivize private plans to follow suit. If we can align payment models across government and private payers, we have the conditions necessary for resources to flow into clinical AI. I want the best people in the country working on fixing our healthcare system. OpenAI can pay them a lot of money to work at OAI instead. We won't get the investment and talent we need in healthcare if we do not make reimbursement - the rewards for innovation - very clear to the technology and VC community.
Sebastian Caliri @SebastianCaliri

Cost-plus contracting in the Department of Defense created the bloated, ineffective defense primes. Cost-plus contracting is exactly how we pay for healthcare today, through AMA-defined CPT codes and RVUs. We cannot repeat the sins of the past in designing payments for clinical AI. Our CMS Innovation Center director, Abe Sutton, believes cost-plus is the wrong path for the future of American healthcare. New payment models like ACCESS pay for outcomes. When we have AI that can manage heart failure, can we link payment to making people healthier? Doing so would unleash the might of American techno-capitalism on exactly the thing we care about in society. Reimbursement policy for clinical AI sets the stage for the next decades of American healthcare. Investment flows to enterprises that generate returns, returns require revenue, and deciding what we award revenue for in the age of medical AI is of enormous consequence.

13 replies · 9 reposts · 96 likes · 12.1K views
DeepLearning.AI @DeepLearningAI
OpenAI and Anthropic both launched healthcare-focused AI products, each aimed at a different audience:
- OpenAI introduced ChatGPT Health for consumers to understand medical data and care instructions (with some features for practitioners)
- Anthropic released Claude for Healthcare, a set of tools designed to help clinicians search medical databases and reduce administrative work (with some features for consumers)
Learn more in The Batch: deeplearning.ai/the-batch/open…
25 replies · 51 reposts · 386 likes · 31.2K views
Health Universe @healthuniverse_
The full patient story, delivered in seconds. We've partnered with @Kno2 to bring nationwide record retrieval + AI clinical intelligence to health systems. Records from any connected provider → structured insights in minutes. linkedin.com/feed/update/ur…
1 reply · 1 repost · 4 likes · 192 views
Dan Caron⚡ @dancaron
I built some causal models directly from LLM data earlier this year. People have no idea how powerful this technique is. There are only a handful of folks in the world thinking about this rn. Huge opportunity for those that understand it. Will definitely change the world.
Connor Davis @connordavis_ai

Holy shit… this paper might be the most important shift in how we use LLMs this entire year. "Large Causal Models from Large Language Models."

It shows you can grow full causal models directly out of an LLM: not approximations, not vibes, but actual causal graphs, counterfactuals, interventions, and constraint-checked structures. And the way they do it is wild. Instead of training a specialized causal model, they interrogate the LLM like a scientist:
→ extract a candidate causal graph from text
→ ask the model to check conditional independencies
→ detect contradictions
→ revise the structure
→ test counterfactuals and interventional predictions
→ iterate until the causal model stabilizes

The result is something we've never had before: a causal system built inside the LLM using its own latent world knowledge. Across benchmarks (synthetic, real-world, messy domains), these LCMs beat classical causal discovery methods because they pull from the LLM's massive prior knowledge instead of just local correlations.

And the counterfactual reasoning? Shockingly strong. The model can answer "what if" questions that standard algorithms completely fail on, simply because it already "knows" things about the world those algorithms can't infer from data alone.

This paper hints at a future where LLMs aren't just pattern machines. They become causal engines: systems that form, test, and refine structural explanations of reality. If this scales, every field that relies on causal inference (economics, medicine, policy, science) is about to get rewritten. LLMs won't just tell you what happens. They'll tell you why.

0 replies · 0 reposts · 1 like · 57 views
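The interrogation loop the quoted thread describes (extract a candidate graph, check conditional independencies, detect contradictions, revise, iterate) can be sketched schematically. This is not the paper's code: the LLM is replaced by a hard-coded stub, and the smoking/tar/cancer questions and answers are invented for illustration.

```python
def ask_llm(question):
    """Stand-in for a real LLM call; returns canned yes/no answers."""
    answers = {
        "Does smoking cause tar deposits?": True,
        "Do tar deposits cause cancer?": True,
        "Does smoking cause cancer directly?": True,
        # This answer contradicts the direct smoking -> cancer edge:
        "Is smoking independent of cancer given tar deposits?": True,
    }
    return answers[question]

def build_causal_graph(max_rounds=5):
    # Step 1: extract a candidate causal graph from the model's knowledge.
    edges = set()
    if ask_llm("Does smoking cause tar deposits?"):
        edges.add(("smoking", "tar"))
    if ask_llm("Do tar deposits cause cancer?"):
        edges.add(("tar", "cancer"))
    if ask_llm("Does smoking cause cancer directly?"):
        edges.add(("smoking", "cancer"))

    # Steps 2-4: test implied conditional independencies, drop edges
    # that contradict them, and repeat until the structure stabilizes.
    for _ in range(max_rounds):
        if ("smoking", "cancer") in edges and ask_llm(
            "Is smoking independent of cancer given tar deposits?"
        ):
            # Contradiction: independence given tar implies mediation,
            # so drop the direct edge and re-check.
            edges.discard(("smoking", "cancer"))
            continue
        break  # no contradictions left: the graph is stable
    return edges

print(sorted(build_causal_graph()))  # [('smoking', 'tar'), ('tar', 'cancer')]
```

In the real system the stub would be an actual model call and the questions would be generated from the current graph, but the control flow (propose, test, revise, converge) is the core idea.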
Dan Caron⚡ @dancaron
@awxjack People have no idea how much work it takes. Great story, thanks for sharing.
0 replies · 0 reposts · 4 likes · 71 views
Jack Zhang @awxjack
Stripe offered to acquire us for $1.2 billion when we had $2M in revenue. Today, we've raised $330M at an $8B valuation and reached $1B ARR. We could've died three times during this journey. This is the story I've never told anyone before:
1.5K replies · 2K reposts · 29.9K likes · 46.9M views
Marlon @drmarlonperalta
I’ve bought from 10+ peptide vendors the last 2 months. I wrote up a peptide sourcing guide and a review of 10 vendors (some with pretty cheap reta) If you want it, reply below and I’ll send it. (This isn’t automated so it might take me a min)
727 replies · 7 reposts · 319 likes · 39.3K views
Andrew Ng @AndrewYNg
Releasing a new "Agentic Reviewer" for research papers. I started coding this as a weekend project, and @jyx_su made it much better. I was inspired by a student who had a paper rejected 6 times over 3 years. Their feedback loop, waiting ~6 months for feedback each time, was painfully slow. We wanted to see if an agentic workflow can help researchers iterate faster. When we trained the system on ICLR 2025 reviews and measured Spearman correlation (higher is better) on the test set:
- Correlation between two human reviewers: 0.41
- Correlation between AI and a human reviewer: 0.42
This suggests agentic reviewing is approaching human-level performance. The agent grounds its feedback by searching arXiv, so it works best in fields like AI where research is freely published there. It's an experimental tool, but I hope it helps you with your research. Check it out here: paperreview.ai
248 replies · 1.1K reposts · 6.3K likes · 1.1M views
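Spearman correlation, the metric Ng reports, is just the Pearson correlation computed on ranks (with tied values sharing their average rank), so a 0.42 AI-human score can be compared directly against the 0.41 human-human baseline. A self-contained sketch; the six review scores below are hypothetical, not from the ICLR data:

```python
def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def ranks(vals):
        # Assign 1-based ranks, giving ties their average rank.
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical 1-10 review scores for six papers from two reviewers.
human = [3, 7, 5, 8, 2, 6]
ai    = [4, 6, 5, 9, 3, 5]
print(round(spearman(human, ai), 2))
```

Because it only uses rank order, the metric rewards a reviewer who sorts papers the same way as the reference reviewer even if their absolute scores are calibrated differently.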
Bo Wang @BoWang87
🚀 AI meets CRISPR: a new era of genome editing is here. We're excited to announce the publication of our review, "Harnessing artificial intelligence to advance CRISPR-based genome editing technologies," in @NatureRevGenet! In this work, we and many thought leaders explore how AI-driven methods, from deep learning to language models, are powering the next generation of genome editing: optimizing guide RNAs, engineering novel enzymes, and ultimately accelerating therapeutics and functional biology.
🔬 Highlights:
- AI-powered design and prediction tools for CRISPR/Cas systems (nuclease, base, prime editing)
- Virtual cell/organ models to guide target selection and outcome prediction
- Roadmap for integrating AI into editing pipelines, from tool discovery to clinical translation
📘 Full paper → nature.com/articles/s4157…
Proud to contribute to this intersection of computational biology, machine learning, and gene therapy; the future of precision medicine is being built today. Shoutout to all the co-authors for their valuable contributions! Particularly my friend and colleague @BowenLi_Lab for his leadership! @UHN @UofT
7 replies · 91 reposts · 339 likes · 37.6K views
David Finsterwalder | eu/acc @DFinsterwalder
I vibecoded this neural network visualization for my students and open sourced it. It shows a simple MLP trained on MNIST handwritten digits at several training steps. The visualization is using @threejs and it comes with training code in @PyTorch . Link + repo 👇
151 replies · 788 reposts · 8.5K likes · 482.5K views
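For readers who want the idea behind such a visualization without pulling in three.js or PyTorch, here is a framework-free sketch of the MLP forward pass an animation like this depicts: input → hidden layer (ReLU) → output (softmax). The layer sizes and weights below are toy stand-ins, not the repo's MNIST model (which would be on the order of 784 inputs, a hidden layer, and 10 digit classes).

```python
import math

def relu(v):
    return [max(0.0, x) for x in v]

def softmax(v):
    m = max(v)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in v]
    s = sum(exps)
    return [e / s for e in exps]

def linear(x, weights, bias):
    # weights: one row of input weights per output unit
    return [sum(wi * xi for wi, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def mlp_forward(x, w1, b1, w2, b2):
    hidden = relu(linear(x, w1, b1))        # the activations a visualization draws
    return softmax(linear(hidden, w2, b2))  # class probabilities

# Toy 4-pixel "image", 3 hidden units, 2 classes (all values illustrative).
x  = [0.0, 1.0, 1.0, 0.0]
w1 = [[0.5, -0.2, 0.1, 0.0],
      [0.0, 0.3, 0.3, 0.0],
      [-0.1, 0.0, 0.2, 0.4]]
b1 = [0.1, 0.0, -0.05]
w2 = [[1.0, -1.0, 0.5],
      [-0.5, 1.0, 0.0]]
b2 = [0.0, 0.1]

probs = mlp_forward(x, w1, b1, w2, b2)
print([round(p, 3) for p in probs])
```

Training would adjust `w1`/`w2` by backpropagation; the visualization then replays snapshots of this forward pass at several training steps to show the activations changing.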