Aaron Jesse O'Hare-Lewis 🏳️‍🌈

65 posts

@aaron_jesse_

Consultant @BCG, previously PhD @Cambridge_Uni @MRC_LMB, BS @Yale MB&B.

London, England · Joined December 2012
472 Following · 338 Followers
Pinned Tweet
Aaron Jesse O'Hare-Lewis 🏳️‍🌈
As it turns out, the secretion channel Sec61αβγ has a fourth subunit, RAMP4—long overlooked because it falls off in detergent. It holds the pore open and completes the lumenal funnel. This plus exciting structures of TRAP, uL22, & more are now online: biorxiv.org/content/10.110…
[image]
2 replies · 28 retweets · 112 likes · 16K views
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
Fèlix Campelo
Fèlix Campelo@felixmendu·
Do read the @elife assessment (and the paper too!) of this new story by @aaron_jesse_ et al. "Structural analysis of the dynamic ribosome-translocon complex" doi.org/10.7554/eLife.… Now, tell me you prefer to be judged by where you publish rather than by the quality of your work
1 reply · 2 retweets · 8 likes · 1.1K views
Aaron Jesse O'Hare-Lewis 🏳️‍🌈
We've expanded our RAMP4/TRAP/uL22/etc study with new data confirming biochemically that RAMP4 is part of the secretory translocon (thanks Frank Zhong & @keenan_lab!). Read it at eLife with an insightful commentary from @FriedrichForst1 & D Vismpas: elifesciences.org/articles/98548…
2 replies · 1 retweet · 9 likes · 811 views
Haoxi Wu
Haoxi Wu@Haoxi_xiaozzxi·
@aaron_jesse_ A series of careful and in-depth analyses! Very insightful and impressive. 🔥
1 reply · 0 retweets · 2 likes · 323 views
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
LMBLib
LMBLib@LMB_Library·
NEW #LMBpublications EMC rectifies the topology of multipass membrane proteins ift.tt/WqNr4QA
0 replies · 2 retweets · 3 likes · 1K views
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
Harry Low’s Lab
Harry Low’s Lab@thelowlab·
The missing piece in the bacterial #ESCRT-III story... #Vipp1 forms dynamic spiral filaments on membranes! Spirals are springs that drive 3D ring formation in the spiral centre! 😎😵‍💫 Wonderful collaboration with @Colom_D @RouxLab… 1/2 biorxiv.org/content/10.110…
18 replies · 136 retweets · 562 likes · 108.6K views
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
AK
AK@_akhaliq·
Language Modeling Is Compression

paper page: huggingface.co/papers/2309.10…

It has long been established that predictive models can be transformed into lossless compressors and vice versa. Incidentally, in recent years, the machine learning community has focused on training increasingly large and powerful self-supervised (language) models. Since these large language models exhibit impressive predictive capabilities, they are well-positioned to be strong compressors. In this work, we advocate for viewing the prediction problem through the lens of compression and evaluate the compression capabilities of large (foundation) models. We show that large language models are powerful general-purpose predictors and that the compression viewpoint provides novel insights into scaling laws, tokenization, and in-context learning.

For example, Chinchilla 70B, while trained primarily on text, compresses ImageNet patches to 43.4% and LibriSpeech samples to 16.4% of their raw size, beating domain-specific compressors like PNG (58.5%) or FLAC (30.3%), respectively. Finally, we show that the prediction-compression equivalence allows us to use any compressor (like gzip) to build a conditional generative model.
[image]
41 replies · 368 retweets · 1.9K likes · 774.7K views
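The percentages quoted above are simply compressed size divided by raw size. A minimal Python sketch of that baseline measurement using gzip (the paper's actual LLM-as-compressor pairs model probabilities with arithmetic coding, which this does not attempt to reproduce):

```python
import gzip
import os

def compression_ratio(data: bytes) -> float:
    """Compressed size as a fraction of raw size (lower is better)."""
    return len(gzip.compress(data)) / len(data)

# Toy inputs: repetitive bytes compress well, random bytes do not.
print(compression_ratio(b"abc" * 10_000))     # far below 1.0
print(compression_ratio(os.urandom(30_000)))  # ~1.0 (incompressible)
```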
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
davidad 🎇
davidad 🎇@davidad·
Remember when `gzip` and kNN bested an LLM at classifying text in a previously unseen language? It turned out the defeated “large language model” was a BERT with ~100M params. Chinchilla-70B can beat PNG at compressing photos. And FLAC at compressing audio.
[Quoted tweet from AK@_akhaliq: "Language Modeling Is Compression", shown in full above]
9 replies · 24 retweets · 333 likes · 69.3K views
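The gzip-and-kNN result davidad recalls (Jiang et al., Findings of ACL 2023) rests on normalized compression distance: texts that compress well when concatenated are deemed similar. A minimal 1-nearest-neighbour sketch of that idea (toy training pairs of my own invention; the paper uses full benchmark corpora and k > 1):

```python
import gzip

def clen(s: str) -> int:
    """Length of gzip-compressed UTF-8 text."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: lower means more shared structure."""
    ca, cb, cab = clen(a), clen(b), clen(a + " " + b)
    return (cab - min(ca, cb)) / max(ca, cb)

def classify(query: str, labeled: list[tuple[str, str]]) -> str:
    """1-NN under NCD, a toy stand-in for the paper's kNN classifier."""
    return min(labeled, key=lambda tl: ncd(query, tl[0]))[1]

train = [
    ("the ribosome translates mRNA into protein", "bio"),
    ("gradient descent minimises the training loss", "ml"),
]
# Expected: "bio" (NCD is noisy on very short strings).
print(classify("tRNA delivers amino acids to the ribosome", train))
```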
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
VanniLab
VanniLab@LabVanni·
Let’s start the week in style: just out on bioRxiv, one of the most important results from my lab, with the Reinisch lab @YaleMed: “Lipid scrambling is a general feature of protein insertases”. The title pretty much sums it up. Short 🧵(1/13). biorxiv.org/content/10.110…
7 replies · 47 retweets · 143 likes · 39.3K views
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
Sami Chaaban
Sami Chaaban@sami_chaaban·
Some simple EM processing tasks can be overly complicated, so I wrote starparser to help me out. Sharing it here in case others find it useful. For example, you can remove particles with specific orientations, among 40+ other command-line options. github.com/sami-chaaban/s…
0 replies · 28 retweets · 118 likes · 10.6K views
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
Markus Höpfler
Markus Höpfler@MarkusHopfler·
We usually think of mRNAs as carrying the information needed to make a protein (the central dogma...). But how can the encoded protein control the fate of its encoding mRNA? 🤔 Check out our review on the topic 👇 cell.com/molecular-cell…
2 replies · 46 retweets · 155 likes · 14.8K views
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
Diego del Alamo
Diego del Alamo@DdelAlamo·
How did I miss this? "A conserved ribosomal protein has entirely dissimilar structures in different organisms" from September of last year biorxiv.org/content/10.110…
[image]
2 replies · 24 retweets · 148 likes · 19.8K views
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
c0nc0rdance
c0nc0rdance@c0nc0rdance·
No-one has ever been able to replicate Gregor Mendel's observations of pea plants. They're a little "too perfect", lacking even the random statistical noise that would be expected from such small sample sizes. Was it scientific fraud?
DNA&RNA Universe 𝕏@DNA_RNA_Uni

Happy Birthday, Johann Gregor Mendel! Your groundbreaking work in genetics has paved the way for countless discoveries in the field. Your legacy continues to inspire generations of scientists today. 🫛🧬 #FatherOfGenetics #CzechRepublic #Science 20 July 1822 – 6 January 1884

44 replies · 174 retweets · 1.5K likes · 985.7K views
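Fisher's classic statistical critique, which this thread echoes, is easy to illustrate: under a true 3:1 Mendelian ratio, small samples should scatter visibly around 75%. A minimal simulation sketch (hypothetical sample sizes, not Mendel's actual counts):

```python
import random

def simulate_dominant_fraction(n_plants: int) -> float:
    """Fraction of plants showing the dominant trait under a true 3:1 ratio."""
    return sum(random.random() < 0.75 for _ in range(n_plants)) / n_plants

# With ~100-plant experiments, runs routinely land a few percentage
# points away from 0.75; the charge against Mendel is that his reported
# counts hug the expectation more tightly than such noise allows.
print([round(simulate_dominant_fraction(100), 2) for _ in range(10)])
```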
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
The Zhou Lab
The Zhou Lab@zhou_lab·
Check out our cryo-EM structure of the mRNA editosome in Trypanosoma brucei! Awesome collaboration with the Aphasizhev lab from Boston U. Many congrats to Bruce @ShihengLiu and other coauthors for another wonderful piece of work! science.org/doi/10.1126/sc…
3 replies · 17 retweets · 63 likes · 8.8K views
Aaron Jesse O'Hare-Lewis 🏳️‍🌈 retweeted
Miao Gui
Miao Gui@miao_gui·
I’m delighted to share our work @alanbrownhms on the structures of the axoneme, one of the largest macromolecular machines in nature! This is really a dream come true! Now we have an atomic view of the core of the cilia skeleton. nature.com/articles/s4158…
[image]
19 replies · 94 retweets · 475 likes · 95.1K views