Andrew Leduc

1.9K posts


@_AndrewLeduc

Post-doc @slavovlab Interests: Non-canonical protein sequences Protein degradation Single cell analysis

Boston, MA · Joined February 2020
317 Following · 790 Followers
Pinned Tweet
Andrew Leduc @_AndrewLeduc:
The big one is finally out!! In this paper, we set out to provide insight into a fundamental question: how do individual cells from complex tissues regulate their proteomes? Brief summary of our findings 👇 biorxiv.org/content/10.110…
6 replies · 39 reposts · 203 likes · 41K views
Andrew Leduc @_AndrewLeduc:
@arjunrajlab In each scenario you described, aren't you dictating what you would like to do just the same? 😅
0 replies · 0 reposts · 2 likes · 374 views
Arjun Raj @arjunrajlab:
Basically, LLMs have gotten good enough that for most things, if I can tell an LLM to do it, it means I can also tell an LLM to tell an LLM to do it. The abilities that are enabled are profound.
3 replies · 4 reposts · 43 likes · 8.9K views
Andrew Leduc @_AndrewLeduc:
@anshulkundaje This makes me sick; introspection is the most important trait I value in people I associate with
0 replies · 0 reposts · 0 likes · 81 views
Anshul Kundaje @anshulkundaje:
Pathological megalomania may provide some kind of LFG advantage ... until it doesn't. At some point, u start making stupid decisions if u don't introspect. I think this is becoming quite clear with many self-proclaimed "great men" in the tech and VC space.
David Senra @davidsenra:

Great men of history had little to no introspection. The personality that builds empires is not the same personality that sits around quietly questioning itself. @pmarca and I discuss what we both noticed but no one talks about:

David: You don't have any levels of introspection?
Marc: Yes, zero. As little as possible.
David: Why?
Marc: Move forward. Go! I found people who dwell in the past get stuck in the past. It's a real problem, and it's a problem at work and it's a problem at home.
David: So I've read 400 biographies of history's greatest entrepreneurs, and someone asked me what the most surprising thing I've learned from this was [and I answered] they have little or zero introspection. Sam Walton didn't wake up thinking about his internal self. He just woke up and was like: I like building Walmart. I'm going to keep building Walmart. I'm going to make more Walmarts. And he just kept doing it over and over again.
Marc: If you go back 400 years, it never would've occurred to anybody to be introspective. All of the modern conceptions around introspection and therapy, and all the things that result from that, are a kind of manufacture of the 1910s and 1920s. Great men of history didn't sit around doing this stuff. The individual runs and does all these things and builds empires and builds companies and builds technology. And then this kind of guilt-based whammy showed up from Europe, a lot of it from Vienna in the 1910s and 1920s, Freud and that entire movement. It turned all that inward and basically said: okay, now we need to second-guess the individual. We need to criticize the individual. The individual needs to self-criticize, needs to feel guilt, needs to look backwards, needs to dwell in the past. It never resonated with me.

4 replies · 6 reposts · 48 likes · 4.1K views
Andrew Leduc @_AndrewLeduc:
@minilek The account you are replying to is clearly some bot or troll account designed to sow discord and piss people off
0 replies · 0 reposts · 0 likes · 93 views
Andrew Leduc @_AndrewLeduc:
@lpachter I don't even understand engaging with the original account. It's clearly some troll account designed to sow discord
0 replies · 0 reposts · 2 likes · 76 views
Lior Pachter @lpachter:
This list of accomplishments by Berkeley profs is impressive, but I think the most impressive stat is that 32% of students are Pell grant recipients, and they get to take classes and do research with profs thanks to federal funding. BTW, that's more Pell students than the Ivy League combined.
Jelani Nelson @minilek:

OP says UC Berkeley 'fleeced' the taxpayer out of $419 million. Here's what some of that fleecing has gotten our country lately ... 1) CRISPR (gene-editing therapy invented at Berkeley) used to provide personalized medical care to a baby with a rare and fatal genetic disease. 1/

1 reply · 13 reposts · 74 likes · 10.7K views
Andrew Leduc @_AndrewLeduc:
I will be at the CSHL Global Regulation of Gene Expression meeting next week! Looking forward to presenting our results on the crucial aspects of gene expression that occur after transcription :) Feel free to reach out if you're also attending and would like to chat!
0 replies · 3 reposts · 7 likes · 1.2K views
Eduardo T Jurado-Cobena @JuradoCobena:
@slavov_n From the same thread: "The simulations did not have size effects, so normalization overcorrected for a factor that was not there." Conversely, if a size effect is there, normalization is warranted...
1 reply · 0 reposts · 1 like · 88 views
Gennady Gorin @GorinGennady:
The correlations get scrambled, even the highest-magnitude ones, and many of them flip! This finding is profoundly troubling. Standard procedures, used in thousands of papers, tremendously distort one of the strongest signals in the data. 9/
4 replies · 2 reposts · 17 likes · 10.9K views
Gennady Gorin @GorinGennady:
After years of work, the centerpiece of my PhD is published in @NatureMethods! Read it to learn about the biophysical insights we can get from single-cell data! But first, I would like to talk a bit about RNA velocity and normalization. 1/
Nature Methods @naturemethods:

Monod fits biophysically motivated models to single-cell transcriptomics data, providing insights into gene expression dynamics. @goringennady @lpachter @mariacarilli @johnjvastola nature.com/articles/s4159…

2 replies · 39 reposts · 199 likes · 24.1K views
Andrew Leduc reposted
Gennady Gorin @GorinGennady:
I am looking for a new industry role in computational biology! Check out my portfolio of genomics, statistics, ML, and biophysics work at gennadygorin.github.io, and reach out if you have any suggestions or open roles!
5 replies · 16 reposts · 75 likes · 16.4K views
Andrew Leduc @_AndrewLeduc:
@anshulkundaje Totally. I think I have just been seeing so many vague warnings that I thought it would be useful to highlight a very concrete example of a failure mode :)
0 replies · 0 reposts · 1 like · 9 views
Anshul Kundaje @anshulkundaje:
I get that this is a huge competitive advantage at the moment. But it would be a mistake to not rapidly impart this knowledge to others in your field. There is true value for science (which is a team sport) to lift all boats. 7/7
2 replies · 0 reposts · 13 likes · 1.3K views
Anshul Kundaje @anshulkundaje:
The lack of calibrated confidence of AI output & a lack of understanding of how this confidence changes in different domains with every update is one of the biggest risks of using AI for science at scale especially by those without deep expertise. 1/
Andrew Leduc @_AndrewLeduc:

Just encountered a fun example of primitive LLM reasoning. I have been doing some proteogenomic analysis of CPTAC data, quantifying missense mutations. The proteomics samples are multiplexed, so ~10 different samples are run together... 🧵

2 replies · 5 reposts · 66 likes · 16.2K views
Andrew Leduc reposted
Anshul Kundaje @anshulkundaje:
Also want to encourage AI scientist companies to actively accumulate and frequently publish failures, what guard rails they provide (when they fail), guides for calibration of confidence wrt different use cases etc 1/
Anshul Kundaje @anshulkundaje:

The lack of calibrated confidence of AI output & a lack of understanding of how this confidence changes in different domains with every update is one of the biggest risks of using AI for science at scale especially by those without deep expertise. 1/

2 replies · 7 reposts · 42 likes · 5.2K views
Andrew Leduc reposted
Anshul Kundaje @anshulkundaje:
There is a sliver of junior & senior scientists who have figured out the intuition & strategies to calibrate to the unreliability and truly maximize the capability in each domain of science. I encourage them to write more about their process with ample examples of failures. 6/
3 replies · 1 repost · 12 likes · 1.5K views
Andrew Leduc @_AndrewLeduc:
To my surprise, it gave a totally wrong answer about how the mutations caused the peptides to ionize poorly, reducing their detectability. This is the standard answer you would generally expect for a question about "detectability," but it was very clearly not the case here!
0 replies · 0 reposts · 12 likes · 713 views
Andrew Leduc @_AndrewLeduc:
In the next prompt, I asked it to contrast the overall detection rate of mutations with the WT sequences and then explain why our detection rate for missense mutations was lower (already knowing that the answer related to the lower fraction of the mutations compared to the WT).
1 reply · 0 reposts · 4 likes · 728 views
Andrew Leduc @_AndrewLeduc:
Just encountered a fun example of primitive LLM reasoning. I have been doing some proteogenomic analysis of CPTAC data, quantifying missense mutations. The proteomics samples are multiplexed, so ~10 different samples are run together... 🧵
1 reply · 3 reposts · 15 likes · 14.4K views