Mladen Stojanovic, PhD
@Xydja
So sick, fair play man. When beginning, what do you focus on, and in what order? Like, first start with avatars (how many?), then initial angles (how many?), then a batch of creatives (mostly videos or mostly statics?), and just one PDP? What daily budget did you begin with, and how much did it increase?

Today’s Figma MCP update makes it one of the strongest integrations with Claude Code I’ve seen. You can now use Claude Code to design in Figma with the full context of your design systems.

A dual-branch transformer predicts how drugs reshape gene expression.

Drug discovery is shifting from "one drug, one target" to "one drug, multiple targets." But mapping how a compound ripples through the transcriptome, across different doses, exposure times, and cellular contexts, remains experimentally prohibitive. Most cell-drug combinations have never been measured.

Yue Guo and coauthors introduce XPert, a transformer-based model that predicts drug-induced transcriptional changes by separately encoding pre-perturbation cellular states (via self-attention) and post-perturbation effects (via cross-attention). This dual-branch design lets the model disentangle intrinsic gene-gene interactions from the regulatory shifts triggered by chemical perturbation.

A key innovation is bridging chemical and biological spaces. Because structurally similar drugs don't always produce similar effects, XPert builds a heterogeneous knowledge graph connecting drug-target interactions, protein-protein interactions, and structural similarity. The result: drugs with the same mechanism of action cluster in the learned embedding space, even when their chemical structures diverge.

The model also encodes dose and time as learnable condition tokens, capturing nonlinear pharmacodynamic relationships that one-hot encoding misses entirely.

On the L1000 benchmark, XPert achieves 36.7% higher correlation and 78.2% lower error than the next-best model when generalizing to unseen cell lines. The authors trace this gap to a fundamental limitation of VAE-based approaches: the denoising that helps with reconstruction erases the cellular context needed for out-of-distribution prediction.

When pretrained on large-scale preclinical screens and fine-tuned on clinical data, XPert improves patient-specific response predictions by up to 15%, and identifies resistance biomarkers invisible to standard differential expression analysis.
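To make the dual-branch idea concrete, here is a minimal, untrained numpy sketch of the architecture described above. This is an illustrative assumption, not XPert's actual implementation: the gene embedding dimension, the random projection weights, and the function names (`attention`, `dual_branch_predict`) are all hypothetical stand-ins for learned components. Self-attention over gene tokens plays the role of the pre-perturbation branch; cross-attention from the cell state to drug/dose/time condition tokens plays the role of the perturbation branch, with dose and time entering as continuous tokens rather than one-hot bins.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: (n_q, d), (n_k, d) -> (n_q, d)
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def dual_branch_predict(expr, drug_emb, dose, time, rng):
    """Untrained sketch of a dual-branch perturbation model.
    expr: (G,) baseline expression; drug_emb: (d,) drug vector
    (in XPert this would come from the knowledge-graph encoder)."""
    d = drug_emb.shape[0]
    w_in = rng.standard_normal((1, d)) * 0.1        # per-gene embedding (would be learned)
    tokens = expr[:, None] @ w_in                    # (G, d) gene tokens
    # Branch 1: self-attention encodes the pre-perturbation cell state
    # (intrinsic gene-gene interactions).
    cell_state = attention(tokens, tokens, tokens)
    # Dose and time are continuous condition tokens, not one-hot categories,
    # so nearby doses get nearby representations.
    cond = np.stack([drug_emb,
                     dose * np.ones(d),
                     time * np.ones(d)])             # (3, d) condition tokens
    # Branch 2: cross-attention injects the perturbation's regulatory shift.
    shift = attention(cell_state, cond, cond)
    w_out = rng.standard_normal((d, 1)) * 0.1
    return ((cell_state + shift) @ w_out).ravel()    # (G,) predicted expression change
```

With 978 genes (the L1000 landmark set), `dual_branch_predict(expr, drug_emb, dose, time, rng)` returns a length-978 vector of predicted expression changes; separating the two branches is what lets a trained model reuse the same cell-state encoding across drugs, doses, and time points.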
The upshot: by combining attention mechanisms with biological prior knowledge, it's now possible to predict transcriptional responses to drugs that have never been tested in a given cellular context, opening a path toward in silico pharmacodynamics and mechanism-of-action discovery at scale. Paper: nature.com/articles/s4225…



@DTCMidas Do you have any Google Sheets templates?
