Jake Silberg

115 posts

@JakeSilberg

Biomedical Data Science PhD student @Stanford

Joined September 2020
539 Following · 133 Followers
The Book Club
The Book Club@bookclubpodhq·
🚨NEW EPISODE🚨 🇺🇸THE GREAT GATSBY🇺🇸 Tabby & @dcsandbrook discuss: 📚Why it's considered the greatest American novel 🎷Was the jazz age really so glamorous? 🤔What is Gatsby really up to?
9 replies · 8 reposts · 41 likes · 8.1K views
Jake Silberg retweeted
Nitya Thakkar
Nitya Thakkar@nityathakkar_·
Excited to share that our paper has been published in Nature Machine Intelligence! We conducted a randomized controlled trial at ICLR 2025 with 20,000+ reviews to test whether LLM feedback improves peer review quality. Link: nature.com/articles/s4225…
3 replies · 24 reposts · 115 likes · 33.1K views
Jake Silberg
Jake Silberg@JakeSilberg·
@arpitrage Doesn't upzoning square the circle? Allowing 4 units on a SFH lot means each individual unit is cheaper for new buyers/renters, but the plot as a whole has higher value for the seller?
0 replies · 0 reposts · 3 likes · 147 views
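The exchange above is easier to see with numbers. A toy sketch of the upzoning arithmetic, where every price and cost is a made-up round figure for illustration, not data:

```python
def upzoning_example(sfh_price, units, price_per_unit, build_cost):
    """One lot, two scenarios: sell as a single-family home, or
    redevelop into several units and sell each one."""
    return {
        "buyer_pays_before": sfh_price,      # one buyer buys the whole house
        "buyer_pays_after": price_per_unit,  # each buyer pays per unit
        "seller_gets_before": sfh_price,
        "seller_gets_after": units * price_per_unit - build_cost,
    }

r = upzoning_example(sfh_price=1_000_000, units=4,
                     price_per_unit=450_000, build_cost=600_000)
# Each unit is cheaper for an individual buyer, while the lot as a
# whole nets the seller more -- the "square the circle" claim.
```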
Arpit Gupta
Arpit Gupta@arpitrage·
One way to square the circle of wanting higher house prices, but lower rents, would be to try to adjust cap rates (ie the discount rate): Lower prop taxes, maintenance, interest rates, insurance costs; or higher resell expectations compatible with lower rents and higher prices
5 replies · 1 repost · 17 likes · 4.2K views
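Both tweets lean on the standard income-capitalization identity, price ≈ annual net rent / cap rate. A minimal sketch of the cap-rate lever, assuming that identity and using illustrative numbers:

```python
def price(annual_rent: float, cap_rate: float) -> float:
    """Income-capitalization identity: price = annual net rent / cap rate."""
    return annual_rent / cap_rate

# Baseline: $24,000/yr net rent at a 6% cap rate -> roughly $400k.
baseline = price(24_000, 0.06)

# Lower rent, but a lower cap rate too (cheaper financing, lower
# taxes/insurance, higher resale expectations): the price still rises.
adjusted = price(21_600, 0.045)  # roughly $480k despite 10% lower rent
```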
Jake Silberg
Jake Silberg@JakeSilberg·
@DdelAlamo I have a pet theory that a GAN-style discriminator auxiliary head for post-training a diffusion model could be helpful, given some of the differences between generated and natural proteins (see the distances on page 5 arxiv.org/pdf/2506.08365) but haven't tested this yet
0 replies · 0 reposts · 1 like · 31 views
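A loose illustration of the pet theory above, untested just as the tweet says and not anyone's actual implementation: a toy discriminator scores samples as "natural", and its negative log-probability is added as an auxiliary term during post-training. The linear discriminator and all names here are hypothetical simplifications of what would be a learned auxiliary head:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discriminator(x, w):
    """Toy linear discriminator: P(sample looks like a natural protein).
    In practice this would be a trained auxiliary head on the model."""
    return sigmoid(x @ w)

def aux_loss(x_gen, w):
    """Non-saturating GAN-style generator term: penalize generated
    samples the discriminator confidently flags as non-natural."""
    p_natural = discriminator(x_gen, w)
    return -np.log(p_natural + 1e-9).mean()

# total = denoising_loss + lam * aux_loss(x_gen, w)   # lam: weighting knob
```

The idea is that the auxiliary term nudges generated samples toward the statistics of natural proteins, targeting exactly the generated-vs-natural gaps the tweet cites.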
Jake Silberg retweeted
Caleb Lareau
Caleb Lareau@CalebLareau·
To make a long story short, we uncover dozens of regions of our genome that control whether the virus persists or is cleared quickly. Further, we show that persistent EBV may serve as a biomarker of complex diseases, from respiratory disease to autoimmunity.
2 replies · 4 reposts · 32 likes · 3.1K views
Jake Silberg
Jake Silberg@JakeSilberg·
@CalebLareau @MSKCancerCenter Congrats on the awesome work! This is a fascinating read. I see you found associations with RA and SLE. Just curious, did you look for an association with Celiac as well?
0 replies · 0 reposts · 1 like · 176 views
Fred Zhangzhi Peng
Fred Zhangzhi Peng@pengzhangzhi1·
PAPL is accepted at ICLR 2026! A simple tweak to your DLM training that lets it learn the generation order you will use at sampling time, with ONE line of code changed. Shoutout to Zach, Anru, @ShuibaiZ69721 @jarridrb @AlexanderTong7 @mmbronstein @bose_joey #ICLR2026
Fred Zhangzhi Peng@pengzhangzhi1

🚨 New paper! We introduce a planner-aware training tweak to diffusion language models. ⚡ One-line-of-code change to the loss 💡 Fixes training–inference mismatch 📈 Strong gains in protein, text, and code generation arxiv.org/abs/2509.23405 (1/n)

4 replies · 8 reposts · 45 likes · 18.3K views
Jake Silberg
Jake Silberg@JakeSilberg·
@NielsRogge @ericzakariasson Do you notice a pattern in when this happens? My proposal: after every compaction it should re-read its claude.md, where I tell it which env to use, or something like that. I find it will randomly forget the env name later in long conversations
0 replies · 0 reposts · 1 like · 292 views
Niels Rogge
Niels Rogge@NielsRogge·
Opus 4.5 forgot to activate the virtual environment 3 times today :/ cc @ericzakariasson
5 replies · 0 reposts · 25 likes · 7.9K views
Jake Silberg
Jake Silberg@JakeSilberg·
@bcherny Does Claude Code re-read its Claude.md (or some equivalent) after compacting? I find it can forget some odd things during a long convo (e.g., which conda env it should be using)
0 replies · 0 reposts · 0 likes · 24 views
Boris Cherny
Boris Cherny@bcherny·
👋 Hi I'm Boris and I work on Claude Code. I am going to start being more active here on X, since there are a lot of AI and coding related convos happening here. Feel free to tag me with Claude Code feedback or bug reports. Love to hear how y'all are using Claude Code, and what we can do to make it even better.
994 replies · 231 reposts · 9.3K likes · 1.2M views
Jake Silberg
Jake Silberg@JakeSilberg·
@ludocomito Really nice visualizations, especially the walkthrough of the N and O codes!
1 reply · 0 reposts · 1 like · 134 views
Jake Silberg
Jake Silberg@JakeSilberg·
@sedielem Another fun tweak here is intentionally biasing the training distribution: e.g., SolubleMPNN is trained only on soluble proteins, so that "natural" structures passed through the model intentionally come out more soluble than the original input
0 replies · 0 reposts · 1 like · 63 views
Sander Dieleman
Sander Dieleman@sedielem·
Generative modelling used to be about capturing the training data distribution. Interestingly, this stopped being the case when we started actually using them🤔 We tweak temps, use classifier-free guidance and post-train to get a distribution better than the training data.
17 replies · 14 reposts · 270 likes · 44.2K views
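One of the tweaks Sander mentions, classifier-free guidance, has a standard one-line form: extrapolate from the unconditional prediction toward the conditional one. A minimal sketch (the variable names are my own):

```python
import numpy as np

def cfg(eps_uncond, eps_cond, scale):
    """Classifier-free guidance: scale=1 recovers the conditional
    prediction; scale>1 extrapolates past it, sharpening the
    conditional distribution relative to the training data."""
    return eps_uncond + scale * (eps_cond - eps_uncond)
```

This is exactly the sense in which the sampled distribution is no longer the training distribution: for scale > 1 the model deliberately overshoots what it learned.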
Marianne Arriola
Marianne Arriola@mariannearr·
🚨In our NeurIPS paper, we bring encoder-decoders back.. for diffusion language models! ⚡️Encoder-decoders make diffusion sampling fast: a small (fast) decoder denoises tokens progressively and a large (slower) encoder represents clean context.
11 replies · 36 reposts · 263 likes · 31.5K views
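A back-of-the-envelope cost model for why the encoder-decoder split in the tweet above speeds up sampling: the large encoder runs once over clean context, while the small decoder runs at every denoising step. The per-call costs here are hypothetical units, not numbers from the paper:

```python
def sampling_cost(steps, encoder_cost, decoder_cost):
    """Total cost of one sample: encoder runs once on clean context,
    decoder runs at every denoising step."""
    return encoder_cost + steps * decoder_cost

# One large model called at every step vs. a large encoder called once
# plus a small decoder called every step (illustrative unit costs).
dense = sampling_cost(steps=64, encoder_cost=0.0, decoder_cost=10.0)
encdec = sampling_cost(steps=64, encoder_cost=10.0, decoder_cost=1.0)
```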
Stefano Ermon
Stefano Ermon@StefanoErmon·
Tired of chasing references across dozens of papers? This monograph distills it all: the principles, intuition, and math behind diffusion models. Thrilled to share!
Chieh-Hsin (Jesse) Lai@JCJesseLai

Tired of going back to the original papers again and again? Our monograph: a systematic and fundamental recipe you can rely on! 📘 We're excited to release "The Principles of Diffusion Models", with @DrYangSong, @gimdong58085414, @mittu1204, and @StefanoErmon. It traces the core ideas that shaped diffusion modeling and explains how today's models work, why they work, and where they're heading. 🧵You'll find the link and a few highlights in the thread. We'd love to hear your thoughts and join some discussions! ⚡ Stay tuned for our markdown version, where you can drop your comments!

13 replies · 132 reposts · 1.1K likes · 126.5K views
Brian L Trippe
Brian L Trippe@brianltrippe·
🚨New paper! Generative models are often “miscalibrated”. We calibrate diffusion models, LLMs, and more to meet desired distributional properties. E.g. we finetune protein models to better match the diversity of natural proteins. arxiv.org/abs/2510.10020 github.com/smithhenryd/cgm
3 replies · 45 reposts · 202 likes · 20.2K views