Clayton Mellina
@pumpikano

590 posts

chronicling the phenomenology of information superfluidity \ cto @transcriptabio — hyperscaled transcriptomics

San Francisco, CA 🌎 · Joined January 2010
1.3K Following · 217 Followers
Clayton Mellina @pumpikano
@metaphdor Have you checked out marimo? I'm using both but tending towards marimo lately due to this
English
1
0
1
12
Stacey S10a (🌎/accelebrate)
basic Claude ergonomics tip if you're working with notebooks a lot: ask for just the contents of one cell in the terminal. Also less context drift/slopcode compared to having the whole notebook in scope (quality/clarity of code in cells is much weaker than in scripts, IME)
1
0
0
98
Clayton Mellina reposted
Fil Aronshtein @FilArons
Lately I've been seeing a very interesting major shift: large, man-made things that used to be designed and build-planned like architecture are being designed and built like manufactured products. Ships and data centers.

Historically, these systems were "architected". What does that mean? For the sake of brevity, I'm going to be overly reductive. There are 4 major "CAD" companies that people use to design and plan "big assemblies with lots of parts". 3 focus on manufacturing (Siemens, PTC, Dassault -- actual CAD); 1 focuses on architecture (Autodesk -- called BIM).

Historically, ships were "architected". To this day, the person responsible for the design and for managing the build of a ship or submarine is called a "Naval Architect". When software came along, ships mostly either stayed on paper (ouch!) or made their way into the same software as buildings -- architecture-oriented CAD (BIM).

Similarly, data centers have been designed and planned as buildings. This is somewhat understandable if you consider them to be one-offs, as they've often historically been. Thus, they too have lived entirely in the BIM/architecture world -- until now.

We're seeing two massive surges occur simultaneously: the AI boom demanding more more more data centers, and the defense boom demanding more more more ships. To go from bespoke build (architecture) to modular, repeatable, scaled production, I've been seeing data center companies and maritime companies make a massive push: all of them are migrating their designs away from BIM/architecture software (Autodesk) and onto manufacturing software (Siemens, PTC, Dassault). We're seeing a migration away from a "bespoke, architected" built world to a more "modular, repeatable, scalable" built world.

To achieve the product volume their customers now demand, companies building ships and data centers have moved to standardize and modularize their products so they can achieve economies of scale, allowing their systems and subsystems to be mass-manufactured with consistency and reliability across different locations. This lets them be built quickly and repeatably, with the expectation that their subsystems have reliable interoperability and composability.
Manhattan, NY 🇺🇸
73
249
2.4K
229.1K
Clayton Mellina reposted
Halo CME @halocme
Gorgeous eruption that has become a halo CME on its way to Earth (likely arriving on December 9). Quite a strong shock, given that proton flux has started to rise.
14
219
917
133.7K
Dr. Xja @xja1993
@burkov Is there a git with examples?
2
0
1
973
BURKOV @burkov
When you train a model on one dataset, it usually performs poorly on data from a different source - even if the task is the same. This paper shows how to train models that automatically learn features that work across different datasets by forcing the network to be unable to tell which dataset any given example came from.

The method is simple enough that you can add it to almost any neural network with just a few lines of code, yet it consistently improves performance when you need your model to work on data it wasn't originally trained on.

This paper is an exemplary work on machine learning: well-motivated, original, well-explained, with both theoretical and practical results. Reading it was a pure pleasure after three years of "we tried this, we tried that, here's a 2% improvement on some benchmarks."

Read it by talking to it on ChapterPal: chapterpal.com/s/386d57f4/dom…
Download the PDF as our forefathers did: arxiv.org/pdf/1505.07818
10
75
811
63.6K
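The "few lines of code" the tweet refers to is the gradient-reversal layer from the linked paper (arXiv:1505.07818): an identity on the forward pass that flips and scales gradients on the backward pass, so the feature extractor learns to confuse a domain classifier. A minimal NumPy mock-up of just that layer, for illustration; the class and parameter names are mine, not the paper's code.

```python
import numpy as np

class GradientReversal:
    """Identity forward; multiplies incoming gradients by -lam backward."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between task loss and domain confusion

    def forward(self, x):
        # Features pass through unchanged on the forward pass.
        return x

    def backward(self, grad_output):
        # Flip the sign of the domain-classifier gradient, so the upstream
        # feature extractor is pushed to make domains indistinguishable.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
assert np.allclose(grl.forward(x), x)
assert np.allclose(grl.backward(np.ones(3)), -0.5 * np.ones(3))
```

In a real framework this would be a custom autograd op inserted between the feature extractor and the domain classifier; the rest of the network trains normally.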
Clayton Mellina @pumpikano
@kchonyc Are you vibe coding an app to explore your medical records?
1
0
0
10.3K
Clayton Mellina reposted
Michael Levin @drmichaellevin
New paper with @robertchisciure ! "Cognition all the way down 2.0: neuroscience beyond neurons in the diverse intelligence era" link.springer.com/article/10.100…

"This paper formalizes biological intelligence as search efficiency in multi-scale problem spaces, aiming to resolve epistemic deadlocks in the basal “cognition wars” unfolding in the Diverse Intelligence research program. It extends classical work on symbolic problem-solving to define a novel problem space lexicon and search efficiency metric. Construed as an operationalization of intelligence, this metric is the decimal logarithm of the ratio between the cost of a random walk and that of a biological agent. Thus, the search efficiency measures how many orders of magnitude of dissipative work an agentic policy saves relative to a maximal-entropy search strategy. Empirical models for amoeboid chemotaxis and barium-induced planarian head regeneration show that, under conservative (i.e., intelligence-underestimating) assumptions, even ‘simple’ organisms are from two-hundred- to sextillion-fold more efficient in problem space exploration. In this sense, the deep insights of neuroscience are not about neurons per se, but about the policies and patterns of physics and mathematics that function as a kind of “cognitive glue” binding parts toward higher levels of collective intelligence in wholes of highly diverse composition and origin. Therefore, our synthesis argues that the “mark of the cognitive” is perhaps better sought in the measurable efficiency with which living systems, from single cells to complex organisms, traverse energy and information gradients to tame combinatorial explosions - one problem space at a time."
37
101
586
42.5K
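The abstract's metric is just the decimal log of (random-walk cost) / (agent cost). A hypothetical numeric sketch with made-up costs, to make the "orders of magnitude" reading concrete: a two-hundred-fold saving is about 2.3, a sextillion-fold (1e21) saving is 21.

```python
import math

def search_efficiency(cost_random_walk, cost_agent):
    """Decimal log of the cost ratio: orders of magnitude of dissipative
    work the agent's policy saves over a maximal-entropy random search."""
    return math.log10(cost_random_walk / cost_agent)

# Illustrative values only (the paper's actual costs are model-derived).
assert abs(search_efficiency(200.0, 1.0) - math.log10(200)) < 1e-12   # ~2.3
assert abs(search_efficiency(1e21, 1.0) - 21.0) < 1e-9                # 21 orders
```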
Clayton Mellina @pumpikano
@srikosuri As a transplant from tech to bio, I totally agree. The root of the similarity is whether you're running experiment loops to understand something, and many more analogies blossom from there.
0
0
0
31
Sri Kosuri @srikosuri
It’s kind of interesting, but in some ways AI is making CS much more like biology. We can observe and tweak what it does, but we really don’t know how it’s doing it. A science and engineering discipline rooted in empiricism over theory. I wonder if this trend continues.
27
53
460
38.8K
Clayton Mellina reposted
Anshul Kundaje @anshulkundaje
This is a really exciting leap forward for genomic sequence-to-activity gene regulation models. It is a genuine improvement over pretty much all SOTA models, spanning a wide range of regulatory, transcriptional and post-transcriptional processes. 1/
Pushmeet Kohli@pushmeet

Happy to introduce AlphaGenome, @GoogleDeepMind's new AI model for genomics. AlphaGenome offers a comprehensive view of the human non-coding genome by predicting the impact of DNA variations. It will deepen our understanding of disease biology and open new avenues of research.

2
48
268
42.3K
Clayton Mellina reposted
Bo Wang @BoWang87
🚀 Xaira Therapeutics has just dropped a game-changer for AI-driven biology. Today, we unveiled X-Atlas/Orion, the largest publicly available genome-wide Perturb-seq dataset to date—spanning 8.4 million single cells with perturbations across all ~20,000 human protein-coding genes. This release is not just about scale—it’s about enabling a new era of causal, mechanistic foundation models for biology.

📝 Preprint on bioRxiv: biorxiv.org/content/10.110…
📂 Dataset on Figshare: doi.org/10.25452/figsh…

🔍 What makes X-Atlas/Orion special:
📈 Unprecedented scale & quality: each cell profiled with deep (~16k UMIs) transcriptomics and rich metadata
🧪 Quantitative dose-response modeling: thanks to high-fidelity sgRNA detection and ~4 guides per gene, allowing continuous modeling of genetic effects
🧬 FiCS platform: a fully industrialized single-cell perturbation system enabling rapid, reproducible experiments at massive throughput

🧠 This isn’t just “data.” It’s the biological substrate for building virtual cell models that can generalize, predict, and ultimately power AI-native drug discovery.

💬 My final take: this is a foundational moment for the field. The ability to model how genes affect cell state—quantitatively, causally, and at scale—is what we need to unlock predictive biology. Kudos to the incredible team at Xaira for open-sourcing this resource so the entire community can build on it.

#PerturbSeq #SingleCell #Genomics #VirtualCell #FoundationModels #AIForBiology #Xaira #DrugDiscovery #SyntheticBiology #CausalAI

Press releases:
🔗 GEN article: genengnews.com/topics/artific…
🔗 BusinessWire: businesswire.com/news/home/2025…
17
130
506
126.7K
Clayton Mellina reposted
Sergey Levine @svlevine
I always found it puzzling how language models learn so much from next-token prediction, while video models learn so little from next frame prediction. Maybe it's because LLMs are actually brain scanners in disguise. Idle musings in my new blog post: sergeylevine.substack.com/p/language-mod…
51
170
1.3K
314.4K
Clayton Mellina reposted
Keller Cliffton @Keller
Most people have no idea that the @WhiteHouse recently published a groundbreaking Exec Order to massively accelerate manufacturing and scale of drones in the US. Here's what's going to change 🧵 whitehouse.gov/presidential-a…
18
48
466
62.5K
Clayton Mellina @pumpikano
with tools like cursor, coding tasks actually take about as long as I think they will for the first time ever
0
0
0
16
Clayton Mellina @pumpikano
if you say aspirin like aspirin' it sounds like a modern drug brand name
0
0
0
27
Clayton Mellina reposted
Dr. Tamitha Skov @TamithaSkov
This eruption opened up the Sun like a zipper! Between the big flare from Region 4100 and the entangled filament launch that slithered like a dark snake above the region and curving southwest, the sheer length of this #solarstorm launch is as wide as the Sun itself. Along with the previous solar storm eruption from an M-flare earlier today, we have likely TWO, if not three structures headed towards Earth.
44
562
2K
136.1K
Clayton Mellina reposted
Jure Leskovec @jure
Announcing Biomni — the first general-purpose biomedical AI agent. Biomni is a free web platform where biomedical scientists can immediately delegate their tasks to Biomni, starting today! Biomni automates literature reviews, hypothesis generation, protocol design, bioinformatics analysis, clinical reasoning, and much more — scaling biomedical expertise for 100× the number of discoveries.

Key results:
➡️ Designed a cloning experiment with real-world wet-lab validation; on par with a 5+ year expert in a blind test
➡️ Ran a 458-file wearable bioinformatics analysis in 35 minutes vs. 3 weeks (800x faster) for a human expert
➡️ Uncovered a novel hypothesis: new TFs regulating skeletal lineages on large scRNA+scATAC data
➡️ Human-level performance on LAB-bench DbQA and SeqQA, with SOTA at Humanity’s Last Exam and across 8 new biomedical tasks — ranging from GWAS and rare disease diagnosis to microbiology and drug repurposing

Powered by:
➡️ Biomni-E1 – the first unified environment designed for a biomedical agent — encompassing 150 tools, 59 databases, 106 software packages — systematically curated from 2,500+ bioRxiv papers
➡️ Biomni-A1 – a generalist agent with retrieval, planning, and code as action

Biomni is an open-source initiative: we invite the community to build on it and advance biomedical research at scale.
- Try it now: biomni.stanford.edu
- Paper: biomni.stanford.edu/paper.pdf
- Code: github.com/snap-stanford/…
- Join the community: tinyurl.com/biomni-slackWi…

Amazing team and collaborators @StanfordAILab @StanfordMed @StanfordCancer @genentech @arcinstitute @UCSF @UW @PrincetonAInews @KexinHuang5 @serena2z @hcwww_ @YuanhaoQ @mintaylu @yusufroohani @RyanLi0802 @LinQiu0128 Gavin Junze Di Shruti Jennefer Xin Zhou @MWheelerMD Jon Bernstein @MengdiWang10 @PengHeAtlas @SnyderShot @lecong Aviv Regev
9
86
370
57.8K
Clayton Mellina reposted
Kexin Huang @KexinHuang5
📢 Introducing Biomni - the first general-purpose biomedical AI agent. Biomni is built on the first unified environment for a biomedical agent, with 150 tools, 59 databases, and 106 software packages, and a generalist agent design with retrieval, planning, and code as action. This enables Biomni to perform a wide range of research tasks - from literature review, hypothesis generation, protocol design, data analysis, clinical reasoning, and much more - across subfields like genomics, microbiome, physiology, and beyond.

Some key results:
🔬 Designed a molecular cloning experiment validated in wet lab, matching the performance of a >5-year expert in a blinded test
📊 Completed a wearable bioinformatics analysis across 458 messy files in 35 min vs. 3 weeks by a human
🧠 Uncovered novel transcription factor hypotheses driving skeletal lineage regulation

We built a web platform where biomedical scientists can immediately delegate their tasks to the agent today, completely free!
🧪 Try it now: biomni.stanford.edu
📄 Paper: biomni.stanford.edu/paper.pdf
💻 Code: github.com/snap-stanford/… (will be fully open-sourced very soon!)
💬 Join the community: tinyurl.com/biomni-slack

Biomni is an open-source initiative: we invite the community to build on it and advance biomedical research at scale.

With amazing collaborators @StanfordAILab @StanfordMed @StanfordCancer @genentech @arcinstitute @UCSF @UW @PrincetonAInews @serena2z @hcwww_ @YuanhaoQ @mintaylu @yusufroohani @RyanLi0802 @LinQiu0128 Gavin Junze Di Shruti Jennefer Xin Zhou @MWheelerMD Jon Bernstein @MengdiWang10 @PengHeAtlas @SnyderShot @lecong Aviv Regev @jure
25
118
444
74.7K
Captain Pleasure, Andrés Gómez Emilsson
Boss-level computational functionalists meeting at the CIMC (California Institute for Machine Consciousness). Discussing what it would be like to have a bigger brain.

Wolfram: the big surprise of LLMs is that they're able to do things we thought were uniquely human. People think we need new physics to understand how human minds work. For me, LLMs put the last "nail in the coffin" for the idea that you need new physics. Observational evidence is that you don't need anything different beyond digital computation.

Joscha: New physics might only give you more speed. But that's not what's in the way of understanding or modeling consciousness.
9
6
119
8.8K
Clayton Mellina reposted
Nic Fishman @njwfish
🚨 New preprint 🚨 We introduce Generative Distribution Embeddings (GDEs) — a framework for learning representations of distributions, not just datapoints. GDEs enable multiscale modeling and come with elegant statistical theory and some miraculous geometric results! 🧵
6
136
756
80.8K