Vali Barsan, MD 🇺🇸

1.8K posts


@vbarsan

Physician scientist

Palo Alto, CA · Joined March 2011
2.1K Following · 558 Followers
Vali Barsan, MD 🇺🇸 retweeted
Jeff Huber 🇺🇸@jhuber·
I've always been kind of annoyed I wasn't better at Rubik's cube solving. So a Gemma 4 design session while on a plane w/ no internet + a Claude Code ~one shot yielded this Rubik's solver/trainer. Create your cube or paste a flat-net image of your scramble → it reads the colors, finds a 22-or-fewer-move solution, and animates the 3D cube through every turn. No backend. 100% in the browser. Run it: jeffhuber.github.io/rubiks-solver Code: github.com/jeffhuber/rubi…
32 replies · 143 reposts · 1.1K likes · 266.9K views
Vali Barsan, MD 🇺🇸 retweeted
NEJM@NEJM·
A recent study provides proof of principle that a single genetic edit can overcome the effect of nonsense variants in different genes, akin to a one-size-fits-many model. Learn more in the Clinical Implications of Basic Research article “Editing tRNA Genes to Broaden Nonsense Therapeutics” by John D. Lueck, PhD (@littlelueck), from the University of Rochester School of Medicine and Dentistry (@URochester_SMD): nejm.org/doi/full/10.10…
2 replies · 31 reposts · 83 likes · 11.3K views
Brian Armstrong@brian_armstrong·
Some of the most underinvested areas in frontier biology that could accelerate civilizational progress:
- Cheap, large-scale DNA synthesis (writing entire chromosomes or full organisms)
- Real-time, non-destructive RNA sequencing in living cells
- Highly accurate AI-powered polygenic scores for complex traits (disease risk, cognition, longevity) → enabling full genome design
- Ultra-precise, multiplex genome editing (far beyond CRISPR) with minimal off-target effects, scalable across millions of cells
- Safe, efficient, tissue-specific in vivo delivery systems
- Safe and effective human germline engineering
- Accelerated clinical trials via testing on decedents (with consent)
- Next-gen human enhancement: muscle, cognition, mood — beyond GLP-1s
- Ectogenesis / artificial wombs
Who’s actually building in these areas? Drop names, companies, or researchers below 👇
394 replies · 288 reposts · 2.3K likes · 319.7K views
Personalis, Inc.@PersonalisInc·
🧬 Ultrasensitive #ctDNA detection predicts early RCC recurrence after nephrectomy. Dr. Amaral & Dr. @alantanmd used NeXT Personal® to monitor high-risk localized kidney cancer post-nephrectomy:
✅ All ctDNA-negative patients stayed disease-free during follow-up.
✅ Every recurrence caught ahead of radiologic detection.
✅ 67% of initial detections were sub-20 PPM.
Low-shedding tumors benefit from the use of ultrasensitive ctDNA testing technology. #PrecisionOncology #PersonalizedMedicine #KidneyCancer #ASCO #ASCOGU #RCC #RenalCellCarcinoma
2 replies · 3 reposts · 21 likes · 3.4K views
Vali Barsan, MD 🇺🇸 retweeted
Justin Eyquem@j_eyquem·
I am so excited to share our new paper in @Nature: the first programmable, site-specific integration of a large DNA payload into T cells in vivo. A single IV injection results in therapeutic levels of TRAC-targeted CAR T cells in multiple models. nature.com/articles/s4158… a 🧵
21 replies · 143 reposts · 541 likes · 32K views
Vali Barsan, MD 🇺🇸 retweeted
Gaurav Ahuja@gauravahuja·
One of these two groups is mispriced.

Private AI labs: OpenAI valued around $840B, Anthropic north of $600B on secondaries. Both at 30x+ ARR.

Public giants: Microsoft at ~$3T on 23x forward earnings. Amazon at ~$2.3T on 28x.

Microsoft likely owns ~25% of OpenAI. Amazon likely owns ~15% of Anthropic and ~5% of OpenAI.

If private investors are pricing these labs for a $5T+ venture-style outcome, then Microsoft’s implied stake in a $5T OpenAI is $1.25T embedded inside a $3T company, and Amazon’s combined stakes embed roughly $1T inside a $2.3T company.

Publics too cheap on AI exposure? Or privates/secondaries in bubble territory? Which breaks first?
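The tweet's implied-stake arithmetic can be checked directly. A minimal sketch, using only the tweet's own estimates (the ownership percentages and the assumed $5T outcomes are the author's figures, not verified data):

```python
# All inputs are the tweet's estimates, not verified ownership data.
TRILLION = 1e12
openai_outcome = 5 * TRILLION      # assumed venture-style outcome for OpenAI
anthropic_outcome = 5 * TRILLION   # assumed venture-style outcome for Anthropic

# Microsoft's implied stake: ~25% of a $5T OpenAI.
msft_stake = 0.25 * openai_outcome

# Amazon's combined implied stakes: ~15% of Anthropic plus ~5% of OpenAI.
amzn_stake = 0.15 * anthropic_outcome + 0.05 * openai_outcome

print(f"Microsoft implied stake: ${msft_stake / TRILLION:.2f}T")  # ~$1.25T
print(f"Amazon implied stakes:   ${amzn_stake / TRILLION:.2f}T")  # ~$1.00T
```

Both numbers match the tweet: $1.25T embedded in a ~$3T Microsoft and roughly $1T embedded in a ~$2.3T Amazon.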
64 replies · 41 reposts · 929 likes · 181.4K views
Vali Barsan, MD 🇺🇸 retweeted
Aaron Ring@aaronmring·
How specific are therapeutic monoclonal antibodies, really? In our new paper, @Yile_Dai led a collaboration with Adimab to profile 174 FDA-approved and clinical-stage mAbs against 6,172 human extracellular proteins. What we found surprised us.🧵 sciencedirect.com/science/articl…
16 replies · 118 reposts · 405 likes · 57.3K views
Vali Barsan, MD 🇺🇸 retweeted
Satya Nadella@satyanadella·
We’ve trained a multimodal AI model to turn routine pathology slides into spatial proteomics, with the potential to reduce time and cost while expanding access to cancer care.
460 replies · 1.9K reposts · 11.2K likes · 2.8M views
Vali Barsan, MD 🇺🇸 retweeted
Andrej Karpathy@karpathy·
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly well hand-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually: you come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I do daily for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat. Among the bigger things, e.g.:
- It noticed an oversight that my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that the AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
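The loop the tweet describes (propose a change, run an experiment, keep it only when validation loss improves) can be sketched as a greedy search. This is a hypothetical toy, not nanochat's actual autoresearch code; `evaluate()` here is a stand-in proxy metric, and the candidate tweaks are made-up:

```python
def evaluate(config):
    # Stand-in for "train a small model, return validation loss".
    # Pretend loss is minimized at weight_decay=0.1 and qk_scale=1.0.
    return (config["weight_decay"] - 0.1) ** 2 + (config["qk_scale"] - 1.0) ** 2

def autoresearch(config, candidates):
    """Greedily accept each candidate tweak that lowers the proxy loss."""
    best_loss = evaluate(config)
    accepted = []
    for key, value in candidates:
        trial = {**config, key: value}      # apply one change at a time
        loss = evaluate(trial)
        if loss < best_loss:                # keep only improvements
            config, best_loss = trial, loss
            accepted.append((key, value))
    return config, best_loss, accepted

config = {"weight_decay": 0.0, "qk_scale": 0.5}
candidates = [("weight_decay", 0.1), ("qk_scale", 2.0), ("qk_scale", 1.0)]
tuned, loss, accepted = autoresearch(config, candidates)
```

Accepting changes one at a time against the current best is what makes the improvements "additive" in the tweet's sense; an agent swarm would run many such trials in parallel and use the experiment history to propose the next candidates.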
962 replies · 2.1K reposts · 19.5K likes · 3.6M views
Vali Barsan, MD 🇺🇸 retweeted
Bruce Booth@LifeSciVC·
With the approval of tovorafenib by $DAWN, it’s worth celebrating the long journey of R&D in our business. Others have written on this better (@JacobPlieth @SFBIZronleuty), but here’s the short story:

The molecule was discovered in a collaboration between Sunesis & Biogen that started 20 years ago. Biogen deprioritized oncology under CEO Scangos and stopped working on RAF in 2010, reverting rights back to Sunesis. But tovo was covered by key patents from the collaboration, which were assigned to Sunesis. Fwiw, two of the co-inventors of tovo later became Atlas entrepreneurs: Gnanasambandam Kumar Kumaravel (Padlock) and Alexey Lugovsky (Diagonal). Small world of both the Biogen diaspora and the @atlasventure network.

Takeda licensed it from Sunesis in 2011, and over the next 6-7 years its clinical work in melanoma failed to deliver. Physician-scientist/pediatric oncologist @drsam co-founded Day One with @JuliePapanek in 2019 to focus on pediatric cancer. They licensed tovo from Takeda in 2020, and my partner @MikeNGladstone convinced Atlas to join Canaan Partners in the $60M Series A, as did Access Industries. The company went public in a 2021 IPO.

Sunesis retained some economics in tovo. It did a reverse merger in 2021 and became Viracta. It sold its tovo milestones/royalties to Xoma later that year.

After the success of Firefly-1 in pediatric LGG, tovo (Ojemda) was granted accelerated approval, with the Phase 3 Firefly-2 study ongoing. Amazing journey, with many teams participating along the way. It takes a village... or at least a collaborative ecosystem... to bring drugs to patients.
7 replies · 16 reposts · 151 likes · 34.2K views
Vali Barsan, MD 🇺🇸
I am deeply saddened by the passing of Phil Low. I saw firsthand that he wasn't just a brilliant innovator, he was a true visionary who never lost sight of the patient. Grateful for the time we spent together and for his immense contributions to our field. purdue.edu/newsroom/2026/…
0 replies · 0 reposts · 1 like · 61 views
Vali Barsan, MD 🇺🇸 retweeted
Senior Official Jeremy Lewin@UnderSecretaryF·
For the avoidance of doubt, the OpenAI - @DeptofWar contract flows from the touchstone of “all lawful use” that DoW has rightfully insisted upon & xAI agreed to. But as Sam explained, it references certain existing legal authorities and includes certain mutually agreed upon safety mechanisms. This, again, is a compromise that Anthropic was offered, and rejected.

Even if the substantive issues are the same, there is a huge difference between (1) memorializing specific safety concerns by reference to particular legal and policy authorities, which are products of our constitutional and political system, and (2) insisting upon a set of prudential constraints subject to the interpretation of a private company and CEO.

As we have been saying, the question is fundamental—who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems. It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI, have reached the patriotic and correct answer here 🇺🇸
Senior Official Jeremy Lewin@UnderSecretaryF

The axios article doesn’t have much detail and this is DoW’s decision, not mine. But if the contract defines the guardrails with reference to legal constraints (e.g. mass surveillance in contravention of specific authorities) rather than based on the purely subjective conditions included in Anthropic’s TOS, then yes. This, btw, was a compromise offered to—and rejected by—Anthropic.

134 replies · 218 reposts · 1.6K likes · 908.4K views
Vali Barsan, MD 🇺🇸 retweeted
Under Secretary of War Emil Michael
Timeline of events:
- Today, 9:04pm: No response yet to my calls or messages to @DarioAmodei.
- Today, 8:25pm: @AnthropicAI writes “we have not received direct communication from the Department of War.”
- Today, 5:14pm: SecWar tweets supply chain risk designation.
- Today, 5:02pm: I call Dario’s business partner asking to speak to Dario because he hasn’t gotten back to me. She is typing while we speak and likely has lawyers in the room with no notification to me (that’s a guess! CA is a two-party consent law so am hoping no laws were broken!). Says she “will try to locate him.”
- Today, 5:01pm: I call Dario. No answer. I message Dario asking to talk as well.
- Today, 3:51pm: Dario sends an email with redlines.
- Today, 3:47pm: @POTUS sends Truth.
- Yesterday, 10:54pm: I email Dario comments.
Does this sound like GOOD FAITH?
Anthropic@AnthropicAI

A statement on the comments from Secretary of War Pete Hegseth. anthropic.com/news/statement…

456 replies · 466 reposts · 4.7K likes · 1.1M views
Vali Barsan, MD 🇺🇸 retweeted