Louis Metzger, PhD

404 posts

@LouisMetzger_

Founder and CEO, dGenThera, making targeted radiotherapeutics 2.0 for cancer -- safer, more scalable, and more effective than 1.0. #Biotech #DeepTech

San Francisco, CA · Joined November 2018
310 Following · 339 Followers
Louis Metzger, PhD retweeted
Boze Herrington, Library Owl 😴🧙‍♀️
Kids are going to schools where no one assigns them books, and where computers come pre-loaded with AI that writes for them. The effects on developing brains will be catastrophic. In ten years we’ll see what a horrendous mistake this was, but by then the damage will be done.
Zito@_Zeets

They’re not even giving kids a chance to develop their brains or their social world. Hooking them as customers as early as possible.

Louis Metzger, PhD
Louis Metzger, PhD@LouisMetzger_·
@AlexanderKalian Yes, these Dunning-Kruger-afflicted hype influencers are peddling the nonsense that "biology is coding" and inspiring people to do dangerous experiments on themselves. Tinkering and well-controlled therapeutic R&D are different things. Therapeutic R&D has uncuttable corners.
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
Exactly - and I believe you saw Parm's earlier post about how LLM-empowered vibe coders are attempting to design new therapeutics, then asking how they can be quickly tested. The naivety of the AI enthusiast crowd, when it comes to some of the deepest problems in biology, is a perfect demonstration of the Dunning-Kruger Effect in action.
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
I hear a lot of "AI will reverse ageing!" rhetoric. Ageing is not a singular disease, but rather the cumulative wear and tear on every nano, micro, and macro system in the human body. Reversing ageing would require a huge number of different therapeutics and treatments across all these systems. The development of this would require a near-complete understanding of every single degrading system in the human body, every affected protein and pathway, plus the ability to reliably design treatments for each (we are nowhere near this yet).

To give you an idea of the complexity, reversing ageing would require:
- Reversing the shortening of telomeres in cells across every tissue in the human body.
- Revitalising extracellular matrices between cells, for every single tissue in the human body.
- Regrowing irreplenishable cardiac cells, brain neurons, auditory hair cells etc. in safe, effective, and non-intrusive ways.
- Reversing a lifetime of radiation-induced, oxidative, and mutagenic chemical-induced wear and tear on DNA - in cells across every tissue in the human body.
- Revitalising degraded DNA repair systems - in cells across every tissue in the human body.

And this is before we even discuss vastly different cell types, tissues, organs, genomes, epigenetic profiles, lifestyle factors, and environmental variables. We lack almost all the relevant high-quality data required to actually enable AI to "reverse ageing". There will definitely be major innovation within our lifetimes for tackling particular age-related conditions - but "reversing ageing" is not a realistic target for now. It is a truly monumental challenge, which most commentators in this space are severely underestimating. Throwing ChatGPT or other LLMs at this vast set of poorly understood problems is not going to cut it.
Louis Metzger, PhD retweeted
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
@parmita People are vibe coding drugs and then wanna test them - without a basic understanding of the $50m+ it typically costs big pharma for clinical trials? These people should not be developing drugs at all. Their lack of basic understanding of drug development is dangerous.
Louis Metzger, PhD retweeted
Dr Kareem Carr
Dr Kareem Carr@kareem_carr·
This is on the right track but the randomness is likely the symptom not the cause. AI systems are probabilistic constructs. What they provide are plausible extrapolations, and what we're collectively discovering is that given any set of evidence, many extrapolations are usually plausible. What is needed is a way to sort what is merely plausible from what is actually true. The tech industry is basically rediscovering the reason science was invented, which is that pure reason is not sufficient for decoding the causal structure of the world. Validation could in theory be done with real-world experiments, but experiments are expensive relative to stress-testing by domain experts.
Mark Cuban@mcuban

I’m coming to the conclusion that the biggest challenge for Enterprise AI, and AI in general, as of now, is that it’s still impossible to make sure that everyone gets the same answer to the same question, every time. Which is a great response to the doomers. AI doesn’t know the consequences of its output. Judgement and the ability to challenge AI output is becoming increasingly necessary, and valuable. Which makes domain knowledge more valuable by the second. Am I wrong?

Louis Metzger, PhD
Louis Metzger, PhD@LouisMetzger_·
@adamfeuerstein What is your rationale for that thesis? As an early-stage founder, I've often heard VCs say that "funding will improve for preclinical biotech when M&A picks up."
Louis Metzger, PhD
Louis Metzger, PhD@LouisMetzger_·
@NeuroAI_Nexus There's so much to learn in that corner of biology; there are innumerable "unknown unknowns" yet to be elucidated. This endeavor requires sparks of human, non-derivative creativity.
@BioAI_Neuro
@BioAI_Neuro@NeuroAI_Nexus·
@LouisMetzger_ Glad you brought up this point. Interactions between biochemistry and biophysics have not been well explored. That is where my interest lies, as a neuronal biophysicist!
Louis Metzger, PhD
Louis Metzger, PhD@LouisMetzger_·
@endpointarena @adamfeuerstein @adamfeuerstein is correct. Prediction markets have no place intersecting with clinical trials. The perverse incentives created by this are legion, including trial sabotage (inactivation of test articles, "strategic" data leaks, etc.). Not helpful.
Michael
Michael@endpointarena·
i hesitate to "dump" on @adamfeuerstein. i'm sure he means well! but this post confirms his core skill is mistaking condescension for expertise. he knows very little about prediction markets, incentive design, or what biotech desperately needs. old man, please reconsider.
Adam Feuerstein ✡️@adamfeuerstein

I hesitate to dump on @endpointarena. I'm sure he means well. But this post confirms he knows very little about biotech, drug development or clinical trial design/endpoints. Yet, for some reason, he's building a clinical trial betting platform that he claims will improve drug development. Young man, reconsider.

Louis Metzger, PhD retweeted
NonsparseOncologist
❗️ Hot take Friday: Precision oncology is the biggest narrative scam in cancer medicine. “Find the right gene and we’ll cure cancer.” We’ve been hearing this for 30 years. Meanwhile, surgery and radiation are still the only things that actually cure solid tumors. Let’s go 🧵
Louis Metzger, PhD
Louis Metzger, PhD@LouisMetzger_·
@AlexanderKalian Null data are required to make any worthwhile model. Sprinkling in "likely inactives" is key for SAR building. Especially useful are inactive isomers or other close analogs of active molecules.
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
@LouisMetzger_ This might explain why big pharma people I meet sometimes mention that they have null data and would be open to a collaboration. Big pharma plays a different game to academia. We of course need open-source data on this too!
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
AI needs null results. During my PhD, my AI models of protein-ligand binding (critical for drug discovery) faced issues with a lack of published null results. Without balanced data on molecules not binding well to target proteins, models risk overfitting on positive examples. This overfitting would mean AI learning a biased view that "most molecules bind successfully" - with every molecule looking like a viable drug candidate.

Null results are rarely published because academia and journals heavily reward positive outcomes while quietly penalising or ignoring negative ones. Yet historically, some of the most important scientific breakthroughs came from null results - such as the Michelson-Morley experiment disproving the luminiferous aether.

In reality, most random molecules either don't bind meaningfully to a given protein or only show weak, non-specific interactions. The majority of wet-lab binding assays likely produce null results - but these are discarded and go unpublished. We need a major cultural shift in academia, journals, and conferences: make null results great again.
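A toy sketch of the failure mode described above, with invented labels: a model fit to a positives-dominated literature degenerates into calling everything a binder.

```python
from collections import Counter

# Invented assay labels: 1 = binds, 0 = null result.
# Positives dominate because null results rarely get published.
published_labels = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]

def majority_class_model(labels):
    """A degenerate 'model' that simply learns the dominant label."""
    return Counter(labels).most_common(1)[0][0]

learned_label = majority_class_model(published_labels)

# The 'model' now predicts "binds" for every candidate molecule --
# exactly the biased view the post warns about.
candidate_predictions = [learned_label for _ in range(5)]
print(candidate_predictions)  # [1, 1, 1, 1, 1]
```

A real binding-affinity model fails more subtly than this majority-class baseline, but the direction of the bias is the same: without published negatives, there is nothing to pull predictions away from "active".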
Louis Metzger, PhD retweeted
Martin Picard
Martin Picard@MitoPsychoBio·
The idea that sequencing more genomes would lead to better medicine and better health was a good hypothesis in 2000. But 26 years later, evidence has quite convincingly disproven that hypothesis. The answer to most common chronic illnesses that plague us isn't written in genes. Personalized medicine likely cannot come from sequences of nucleic acids. There is more to life's dynamic nature. Why do we cling to that hypothesis/dogma as if it were truth?
Max Marchione@maxmarchione

The cost of sequencing a human genome dropped from $100M to less than $100 in about 25 years. That's a million-fold decrease, which outpaces even Moore's Law. We're about to enter the era of personalized medicine.

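A back-of-the-envelope check of the arithmetic in the quoted post (the 25-year window and the 2-year Moore's-Law doubling period are the only assumptions):

```python
# Quoted claim: sequencing cost fell from ~$100M to under $100 in ~25 years.
cost_fold_drop = 100_000_000 / 100    # 1,000,000x decrease

# Moore's Law benchmark: 2x improvement every 2 years.
moore_fold = 2 ** (25 / 2)            # ~5,793x over 25 years

# Sequencing-cost decline outpaced the Moore's Law trendline ~170-fold.
print(cost_fold_drop / moore_fold)
```

So the "outpaces Moore's Law" part of the quote checks out by a wide margin; the dispute above is over whether cheap sequence data translates into better medicine, not over the cost curve.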
Louis Metzger, PhD retweeted
Prof. Lee Cronin
Prof. Lee Cronin@leecronin·
Drug Discovery should be renamed Drug Creation. This is because chemical space is so big you cannot search it, you must instead create.
Louis Metzger, PhD retweeted
BURKOV
BURKOV@burkov·
If you don't understand this, you will not understand why LLM-based agents are irreparably failing at general-purpose problem solving.

An agent (which was, by the way, the topic of my PhD 20 years ago), to be useful, must be rational. Being rational means always preferring the outcome that yields the maximal expected utility for its master/user. Let's say an agent has two actions it can execute in an environment: a_1 and a_2. If the agent can predict that a_1 gives its user an expected utility of 10, and a_2 gives an expected utility of -100, then a rational agent must choose a_1 even if choosing a_2 seems like a better option when explained in words. The numbers 10 and -100 are obtained by summing, over all possible outcomes of each action, the product of each outcome's utility and its likelihood.

Now here is the problem with LLM-based agents. The LLM is not optimizing expected utility in the environment. It is optimizing the next token, conditioned on a prompt, a context window, and a training distribution full of examples of what helpful answers are supposed to look like. Those are not the same objective.

So when we wrap an LLM in a loop and call it an "agent," we have not created a rational decision-maker. We have created a text generator that can imitate the surface form of deliberation. It may say things like: "I should compare the expected outcomes." "The best action is probably a_1." "I will now execute the optimal plan." But the internal mechanism is not selecting actions by maximizing the user's expected utility. It is generating a continuation that is statistically appropriate given the prompt and prior context.

This distinction matters enormously. For narrow tasks, the imitation can be good enough. If the environment is constrained, the actions are simple, and the success criteria are close to patterns seen in training, the system can appear agentic. But for general-purpose problem solving, the gap becomes fatal.

A rational agent needs stable preferences, calibrated beliefs, causal models of the world, the ability to evaluate consequences, and the discipline to choose the action with maximal expected utility even when that action is boring, non-linguistic, or unlike the examples in its training data. An LLM-based agent has none of that by default. It has fluency. It has pattern completion. It has a remarkable ability to compress and recombine human text. But fluency is not rationality, and a plausible plan is not an expected-utility calculation.

This is why these systems so often fail in strange, brittle, and irreparable ways when given open-ended responsibility. They are not failing because the prompts are insufficiently clever. They are failing because we are asking a simulator of rational agency to be a rational agent.
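The expected-utility rule in the post can be sketched in a few lines. The outcome distributions below are invented, chosen only so that a_1 and a_2 come out to the post's E[U] values of 10 and -100:

```python
def expected_utility(outcomes):
    """E[U] = sum of probability-weighted utilities over all outcomes."""
    return sum(p * u for p, u in outcomes)

# Each action maps to (probability, utility) pairs; numbers are illustrative.
actions = {
    "a_1": [(0.5, 30), (0.5, -10)],   # E[U] = 15 - 5  = 10
    "a_2": [(0.5, 0), (0.5, -200)],   # E[U] = 0 - 100 = -100
}

# A rational agent selects the action maximizing expected utility,
# no matter how appealing the alternative sounds when described in words.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # a_1
```

An LLM wrapped in a loop never computes anything like `expected_utility`; it only emits text that resembles such deliberation, which is the post's point.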
Louis Metzger, PhD retweeted
Dr Alexander D. Kalian
Dr Alexander D. Kalian@AlexanderKalian·
Here's a (non-exhaustive) list of near-impossible tasks AI would need to accomplish to truly "solve" biology:
- Predict folded protein structures with ~99.9%+ accuracy, including fine details like side chains and conformations.
- Perfectly predict the best drug out of 10^63 possible small molecules that safely cures a disease with near-zero failure rate.
- Do the same for antibodies, gene therapies, nanoparticles, and complex drug delivery systems.
- Reliably map brain activity to exact thoughts and future actions.
- Scan an embryo's genome and perfectly predict phenotype, IQ, disease risks, lifespan, etc.
- Design microbes or plants with arbitrary new capabilities (gold filtration, plastic degradation, hyper-nutritious crops, etc.).
- Perfectly engineer safe genomes for long-lived, hyper-resilient humans.

This is just a small sample. Biology's combinatorial complexity is mind-boggling, our current data is near hopeless, and building the vast training data required is beyond human civilisation's capacity. The gap between current AI capabilities and actually "solving" it remains enormous.
Louis Metzger, PhD
Louis Metzger, PhD@LouisMetzger_·
@MicrobiomDigest Your work has always been important (and, I daresay, under-appreciated), but now it is more impactful than ever, as waves of fakery threaten to inundate the work of those who still do careful and properly controlled science.