Ciro Ramirez Suastegui

3K posts

Ciro Ramirez Suastegui

@Cramsuig

🧬📚👨🏾‍💻 Science~Husband+Guerrerense+Genomics | he/they

Cambridge, England · Joined February 2014
647 Following · 322 Followers
Pinned Tweet
Ciro Ramirez Suastegui @Cramsuig
My first day at @sangerinstitute was rainy but very exciting nonetheless. As a @Cambridge_Uni PhD student I will be venturing into other genomics niches, even away from human medicine. My first project in the Tree of Life is extremely interesting and inspiring!
0 replies · 0 retweets · 14 likes · 784 views
Ciro Ramirez Suastegui retweeted
Mo Lotfollahi @mo_lotfollahi
(1/12) Over a year ago, we launched a new project to explore whether the tissue microenvironment could predict cellular behaviour and whether reprogramming might unlock new therapeutic avenues. 🧬 Spatial transcriptomics provides deep insights into tissue organisation. We wanted to take this further by building a model that predicts how the tissue microenvironment rewires cells, and to predict in silico how reprogramming influences the diseased microenvironment. Today, in collaboration with @Muzz_Haniffa at @sangerinstitute, we’re excited to share the preprint for Mintflow, including two novel disease datasets. Link to preprint: shorturl.at/alFUO 🧵
4 replies · 52 retweets · 203 likes · 20.6K views
Ciro Ramirez Suastegui retweeted
Mo Lotfollahi @mo_lotfollahi
(1/n) The first paper from our lab is now out in @NatureGenet! We tried to tackle a key challenge in spatial biology: quantitatively characterizing cellular niches by learning spatial gene programs with NicheCompass, led by @SebastianBirk_. Paper: shorturl.at/b4Hjk
8 replies · 53 retweets · 242 likes · 21.4K views
Ciro Ramirez Suastegui retweeted
Fabian Theis @fabian_theis
1/ 🧬 Single-cell genomics reveals biological variations beyond cell types. Unveiling these in separate latent dimensions is known as disentanglement. Led by @AmirAliMoinfar, we introduce DRVI to learn nonlinear, disentangled & interpretable latent spaces. biorxiv.org/content/10.110…
2 replies · 50 retweets · 198 likes · 21.5K views
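The DRVI tweet above names the goal (disentangled, interpretable latent dimensions) without saying how such a property is usually encouraged. As a rough, generic sketch only, here is a beta-VAE-style objective in PyTorch that pushes latent dimensions toward independence; this is not DRVI's actual model or loss (the preprint defines that), and the function name and beta value are invented for the example.

import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term: how well the decoder reproduces the input profile.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL divergence to an isotropic Gaussian prior; up-weighting it with beta
    # pressures each latent dimension toward independence, the usual route to
    # a more disentangled (and hence more interpretable) latent space.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

In practice, a larger beta trades reconstruction fidelity for more independent latent dimensions; consult the linked preprint for what DRVI actually optimises.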
Sasha Gusev @SashaGusevPosts
The issue with well-intentioned reforms like this is that they highlight many disparate and often *contradictory* problems with the existing institutions and then just ... assume that abolition will organically lead to something better. Some examples from this post ...
8 replies · 45 retweets · 284 likes · 53.5K views
Ciro Ramirez Suastegui retweeted
Megan Gozzard @PhDood1es
What geneticists fear the most: The Cambridge Plot.
9 replies · 139 retweets · 837 likes · 97.5K views
Ciro Ramirez Suastegui retweeted
Jürgen Schmidhuber @SchmidhuberAI
The #NobelPrizeinPhysics2024 for Hopfield & Hinton rewards plagiarism and incorrect attribution in computer science. It's mostly about Amari's "Hopfield network" and the "Boltzmann Machine."

1. The Lenz-Ising recurrent architecture with neuron-like elements was published in 1925 [L20][I24][I25]. In 1972, Shun-Ichi Amari made it adaptive such that it could learn to associate input patterns with output patterns by changing its connection weights [AMH1]. However, Amari is only briefly cited in the "Scientific Background to the Nobel Prize in Physics 2024." Unfortunately, Amari's net was later called the "Hopfield network." Hopfield republished it 10 years later [AMH2], without citing Amari, not even in later papers.

2. The related Boltzmann Machine paper by Ackley, Hinton, and Sejnowski (1985) [BM] was about learning internal representations in hidden units of neural networks (NNs) [S20]. It didn't cite the first working algorithm for deep learning of internal representations by Ivakhnenko & Lapa (Ukraine, 1965) [DEEP1-2][HIN]. It didn't cite Amari's separate work (1967-68) [GD1-2] on learning internal representations in deep NNs end-to-end through stochastic gradient descent (SGD). Not even the later surveys by the authors [S20][DL3][DLP] nor the "Scientific Background to the Nobel Prize in Physics 2024" mention these origins of deep learning. ([BM] also did not cite relevant prior work by Sherrington & Kirkpatrick [SK75] & Glauber [G63].)

3. The Nobel Committee also lauds Hinton et al.'s method for layer-wise pretraining of deep NNs (2006) [UN4]. However, this work neither cited the original layer-wise training of deep NNs by Ivakhnenko & Lapa (1965) [DEEP1-2] nor the original work on unsupervised pretraining of deep NNs (1991) [UN0-1][DLP].

4. The "Popular information" says: "At the end of the 1960s, some discouraging theoretical results caused many researchers to suspect that these neural networks would never be of any real use." However, deep learning research was obviously alive and kicking in the 1960s-70s, especially outside of the Anglosphere [DEEP1-2][GD1-3][CNN1][DL1-2][DLP][DLH].

5. Many additional cases of plagiarism and incorrect attribution can be found in the following reference [DLP], which also contains the other references above. One can start with Sec. 3: [DLP] J. Schmidhuber (2023). How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23, Swiss AI Lab IDSIA, 14 Dec 2023. people.idsia.ch/~juergen/ai-pr…

See also the following reference [DLH] for a history of the field: [DLH] J. Schmidhuber (2022). Annotated History of Modern AI and Deep Learning. Technical Report IDSIA-22-22, IDSIA, Lugano, Switzerland, 2022. Preprint arXiv:2212.11279. people.idsia.ch/~juergen/deep-… (This extends the 2015 award-winning survey people.idsia.ch/~juergen/deep-…)
210 replies · 1.2K retweets · 5.4K likes · 1.2M views
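Since the thread above is about who originated the recurrent associative-memory architecture rather than how it works, here is a minimal NumPy sketch of that architecture (the Amari-style net later called the Hopfield network). This is my own illustration, not code from any of the cited works; the function names and toy patterns are invented for the example.

import numpy as np

def train(patterns):
    # patterns: (num_patterns, num_units) array with entries in {-1, +1}.
    # Hebbian outer-product rule: each stored pattern reinforces the weights
    # between units that are active together.
    n_units = patterns.shape[1]
    W = patterns.T @ patterns / n_units
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, state, steps=200, seed=0):
    # Asynchronous updates: repeatedly pick a random unit and set it to the
    # sign of its total input, so the state settles into a stored pattern.
    rng = np.random.default_rng(seed)
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(state.size)
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # first pattern with its last bit flipped
print(recall(W, noisy))                  # recovers the first pattern: [ 1 -1  1 -1  1 -1]

Learning here means changing the connection weights from examples, which is the adaptive step the thread attributes to Amari (1972); the 1925 Lenz-Ising model had the recurrent structure but no learning rule.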
Ciro Ramirez Suastegui retweeted
Eric Topol @EricTopol
This week in key Covid publications:
- Young, healthy individuals with mild cases had objective cognitive deficits at 1-year that they did not perceive
- Older, hospitalized adults with severe Covid had an equivalent of 20 years of decline in cognitive performance at 1-year follow-up
- A nasal vaccine (2 doses) blocked infections for at least 3 months and was effective across 10 different variants
- Reinfections increase the risk of #LongCovid
- mRNA shots don't induce long-lived plasma cells, but we don't know if this is true with other vaccine platforms
Reviewed in the new, updated edition of Ground Truths (link in my profile)
74 replies · 1.7K retweets · 4.3K likes · 703.8K views
Ciro Ramirez Suastegui retweeted
Prof. Nikolai Slavov @slavov_n
Publish houses of brick, not mansions of straw. Unfortunately, this breadth often compromises depth. Perverse incentives arise from linking acceptance to a preordained result.
2 replies · 9 retweets · 59 likes · 6.4K views
Syed Ali Raza (Ali) @Ali_SyedRaza
@ItaiYanai I thought that characterization was put forth by Freeman Dyson. Oh well, now that I think about it, Dyson actually spoke about Birds and Frogs: Birds seeking generalization/unification, Frogs studying the details of special topics, covering more topics.
1 reply · 0 retweets · 1 like · 445 views
Itai Yanai @ItaiYanai
Scientists can be classified as either Foxes or Hedgehogs; borrowing Isaiah Berlin’s famous use of the saying that “a fox knows many things, but a hedgehog knows one big thing” Hedgehogs: Darwin, Monod, McClintock, Dawkins, Kimura.. Foxes: Wallace, Jacob, Brenner, Gould, Ohta..
7 replies · 11 retweets · 63 likes · 14.9K views
Ciro Ramirez Suastegui retweeted
Marieke Kuijjer @mkuijjer
Our tool 𝐫𝐞𝐭𝐫𝐢𝐞𝐯𝐞𝐫 can be used to identify candidate drug targets based on tumor RNA-Seq profiles from individual patients. A new version of the 🐕 manuscript is now out on BioRxiv: biorxiv.org/content/10.110…
1 reply · 12 retweets · 59 likes · 6.4K views