Daniel Bryan Goodman

2.5K posts


@dbgoodman

Immuno/Synth Bio/ML. PI @PennMedicine/@parkerici. Using DNA synth, multiplex assays & generative models to understand & engineer immune cells. @geochurch alum.

Harvard/MIT ➡️ UCSF ➡️ UPenn · Joined May 2009
2.2K Following · 2.3K Followers
Daniel Bryan Goodman reposted
Justin Eyquem @j_eyquem
I am so excited to share our new paper in @Nature: the first programmable, site-specific integration of a large DNA payload into T cells in vivo. A single IV injection results in therapeutic levels of TRAC-targeted CAR T cells in multiple models. nature.com/articles/s4158… a 🧵
21 replies · 143 reposts · 537 likes · 30.5K views
Daniel Bryan Goodman reposted
Linda Goodman @LindaBGoodman
Excited to share this milestone from @FaunaBio - the first Target Designation in our obesity collaboration with Lilly. Efforts spanning AI platform design, target selection, and in vivo validation uncovered genuinely novel biology from animals to treat human disease 🚀 businesswire.com/news/home/2026…
0 replies · 3 reposts · 6 likes · 763 views
Daniel Bryan Goodman reposted
Aaron Ring @aaronmring
How specific are therapeutic monoclonal antibodies, really? In our new paper, @Yile_Dai led a collaboration with Adimab to profile 174 FDA-approved and clinical-stage mAbs against 6,172 human extracellular proteins. What we found surprised us.🧵 sciencedirect.com/science/articl…
16 replies · 116 reposts · 399 likes · 56K views
Daniel Bryan Goodman @dbgoodman
@hhlee I've always wanted to give my kids the Young Lady's Illustrated Primer from the Diamond Age. Maybe we're almost there!
0 replies · 0 reposts · 3 likes · 126 views
Henry Lee @hhlee
It’s time to homeschool
1 reply · 0 reposts · 0 likes · 316 views
Daniel Bryan Goodman reposted
Egan Peltan @EganPeltan
Ok this is ridiculous. Everything here could have been done without ChatGPT

1. The dog is on conventional immunotherapy with the mRNA vax
2. It appears the mRNA vaccine started WITH ICI, so we can’t know if the vax had ANY additional effect
3. The team can’t say what ~~AI~~ identified the neoantigens (no, not AF3). It sounds like they used existing sequence homology workflows.
4. No evidence of antigen-specific effect from mRNA vax - the authors need to prove this before anyone can believe the neoantigen selection had any effect
5. The in-kind contributions here are ~$20-50k. Custom cancer vax isn’t cheap

Custom mRNA cancer vaccines have been in development for years! None have been clear, resounding successes (yet). Once we have Phase 3 PFS/OS, not N=1 anecdotes, we can start having arguments about people being denied access to effective treatments by unnecessary regulation.

Right now, the only thing people are denied is the opportunity to fork over $50k for hope and dreams. Should we really make it easier for people exploiting desperation to sell unproven remedies?

If you don’t want to think before you tweet, ask the fucking AGI if any of these claims are remotely plausible and what evidence you’d need to believe them
vittorio @IterIntellectus

this is actually insane

> be tech guy in australia
> adopt cancer riddled rescue dog, months to live
> not_going_to_give_you_up.mp4
> pay $3,000 to sequence her tumor DNA
> feed it to ChatGPT and AlphaFold
> zero background in biology
> identify mutated proteins, match them to drug targets
> design a custom mRNA cancer vaccine from scratch
> genomics professor is “gobsmacked” that some puppy lover did this on his own
> need ethics approval to administer it
> red tape takes longer than designing the vaccine
> 3 months, finally approved
> drive 10 hours to get rosie her first injection
> tumor halves
> coat gets glossy again
> dog is alive and happy
> professor: “if we can do this for a dog, why aren’t we rolling this out to humans?”

one man with a chatbot, and $3,000 just outperformed the entire pharmaceutical discovery pipeline. we are going to cure so many diseases. I dont think people realize how good things are going to get

49 replies · 57 reposts · 411 likes · 71.8K views
Daniel Bryan Goodman @dbgoodman
@hhlee @Penn @ChildrensPhila 8 months in and I’m still floored by the brilliant, humble, mission-focused scientists at Penn and CHOP. It’s the most collaborative and collegial environment I’ve seen. Everyone is pulling together for patients and the future of medicine. Excited to be part of what comes next.
0 replies · 1 repost · 3 likes · 89 views
Daniel Bryan Goodman reposted
Gennady Gorin @GorinGennady
The empty drops you threw out in your single-cell RNA sequencing analysis might be hiding mysterious things 👀 check out our bioRxiv preprint! The empty drops do not contain cells. Yet we can still use them to learn interesting things about biology and technology. 1/
bioRxiv Bioinfo @biorxiv_bioinfo

Empty drops in scRNA-seq uncover the surprising prevalence of sequestered neuropeptide mRNA and pervasive sequencing artifacts biorxiv.org/content/10.648… #biorxiv_bioinfo

1 reply · 20 reposts · 79 likes · 10.5K views
Daniel Bryan Goodman reposted
tobi lutke @tobi
OK this thing is totally insane. Before going to bed I...

* used try to make a new qmdresearcher directory
* told my pi to read this github repo and make a version of that for the qmd query-expansion model with the goal of highest quality score and speed. Get training data from tobi/qmd github.
* woke up to +19% score on a 0.8b model (higher than previous 1.6b) after 8 hours and 37 experiments.

I'm not a ML researcher of course. I'm sure way more sophisticated stuff is being done by real researchers. But it's mesmerizing to just read it reasoning its way through the experiments. I learned more from that than months of following ml researchers.

I just asked it to also make a new reranker and it's already got higher base than the previous one. Incredible.
Andrej Karpathy @karpathy

I packaged up the "autoresearch" project into a new self-contained minimal repo if people would like to play over the weekend. It's basically nanochat LLM training core stripped down to a single-GPU, one file version of ~630 lines of code, then:

- the human iterates on the prompt (.md)
- the AI agent iterates on the training code (.py)

The goal is to engineer your agents to make the fastest research progress indefinitely and without any of your own involvement. In the image, every dot is a complete LLM training run that lasts exactly 5 minutes. The agent works in an autonomous loop on a git feature branch and accumulates git commits to the training script as it finds better settings (of lower validation loss by the end) of the neural network architecture, the optimizer, all the hyperparameters, etc. You can imagine comparing the research progress of different prompts, different agents, etc.

github.com/karpathy/autor…

Part code, part sci-fi, and a pinch of psychosis :)

120 replies · 243 reposts · 4.8K likes · 794.1K views
Sri Kosuri @srikosuri
“There is a tide in the affairs of men
Which, taken at the flood, leads on to fortune;
Omitted, all the voyage of their life
Is bound in shallows and in miseries.
On such a full sea are we now afloat;
And we must take the current when it serves,
Or lose our ventures.”
4 replies · 3 reposts · 23 likes · 2.3K views
Daniel Bryan Goodman reposted
Martin Borch Jensen @MartinBJensen
Finding new medicines is getting more and more expensive, and AI won't help much unless we can generate physiological data at scale. In our new preprint, @GordianBio extends the progress of the functional genomics community to run pooled in vivo screens at scale, in a way that answers questions about physiology and therapeutic potential. We show screens in mice and horses, fibrotic and degenerative disease, with a framework for physiological predictions validated in human ex vivo tissues. Very proud of @v_sontake, @vkartha88, Neety and the rest of the team. Tweetorial follows:
9 replies · 58 reposts · 200 likes · 37.5K views
Daniel Bryan Goodman reposted
John Connolly @John_E_Connolly
Aldehyde and seek: Toxic aldehydes drive exhaustion in tumor-infiltrating T cells | Science Immunology science.org/doi/10.1126/sc…
0 replies · 2 reposts · 12 likes · 931 views
Daniel Bryan Goodman reposted
Niko McCarty. @NikoMcCarty
Sometimes, when you think you are ten years "ahead" in biotechnology, you are actually ten years behind.

DARPA recently announced a ~$40M program to synthesize DNA directly in living cells, using pulses of light. This is a great idea! DNA synthesis is way too expensive ($0.07 per base, or thereabouts) and is the limiting ingredient for most experiments. If we could coax cells to make arbitrary sequences in situ, then we could skip the ~5 days it currently takes to order DNA from a company and clone it into cells. We could run biology experiments much faster.

But the more you look into this idea, the more you'll discover that biohackers have been talking about it for, like, several decades. And not just light-based approaches, either, but also really nice ideas centered around electrical pulses or other ways to make proteins without using DNA or RNA whatsoever.

We should be funding more "out of the box" ideas. Biotechnology has followed quite a linear, status quo funding cycle for the last 10+ years at least. I'm sure many of these ideas won't work out, but this whole website is a goldmine. I'm feeling inspired!: diyhpl.us/wiki/polymeras…
16 replies · 90 reposts · 667 likes · 40.6K views
Daniel Bryan Goodman reposted
Andrej Karpathy @karpathy
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow.

Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago but today it’s something you kick off and forget about for 30 minutes.

As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now.
It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
1.6K replies · 4.8K reposts · 37.3K likes · 5.1M views
Daniel Bryan Goodman reposted
Alex Bradu @alex_bradu
I'm really thrilled to share what I've been working on the past year! We hope this method represents a step towards democratizing single-cell screens, and helps to make it possible to run genome-wide scale experiments routinely. Check out our tweetorial and the preprint here 👇
Rahul Satija @satijalab

Excited to share VIPerturb-seq! New tech from my lab which aims to improve the cost, data quality, and efficiency of single-cell CRISPR screens so that they are accessible to any lab - even at genome-wide scale Preprint and 🧵 (1/): biorxiv.org/content/10.648…

0 replies · 8 reposts · 28 likes · 4.2K views