Samuel G. Huete

1.7K posts

@MicroBioMol

🧪 Molecular microbiologist 🧫 at @einlabryc ☣️ Infectious diseases and 🧬 evolution 🔬 PhD from @InstitutPasteur 🌐 Head of @JISEM_SEM 🎶 Composer & ⛰️hiker!

Somewhere in La Mancha... · Joined October 2015
604 Following · 466 Followers
Pinned Tweet
Samuel G. Huete @MicroBioMol
"Democracy cannot succeed unless those who express their choice are prepared to choose wisely..."
Samuel G. Huete retweeted
SEIMC @SEIMC_
📩 🦠 @el_pais is running this letter to the editor, in which a doctor describes herself as an "infectious disease specialist without the title", regarding the absence of a #EnfermedadesInfecciosas MIR specialty in our country 🗣 "It doesn't exist in #España, yet another thing that makes us unique," explains Elisa Ruiz Arabí #SíEspecialidadEI #SpainhasnoIDspeciality
Samuel G. Huete @MicroBioMol
An interesting study, but one whose abstract (the only part I've read so far), in my view, over-interprets the concept of "bacterial life". What is a "living"/"dead" bacterium? We don't know. Therefore, we don't know whether it can be "resurrected" or not. Mind how you use these concepts!
Jose Ramos Vivas @joseramosvivas

😱😱 Science fiction or reality? 🧪🧟‍♂️ Scientists have managed to "revive" dead bacteria 🦠✝️! A new study from the J. Craig Venter Institute marks a milestone in synthetic biology. This isn't just gene editing, it's cellular "reanimation". 🧵 ↘️ 1️⃣ "Zombie" cells: The researchers killed bacteria (M. capricolum) by chemically inactivating their DNA, leaving only the cell structure intact 🧫. 2️⃣ Total transplant: They inserted a complete synthetic genome into these dead bacteria 🧬. 3️⃣ The awakening: Upon receiving the new DNA, the "dead" bacterium came back to life and began to grow and divide as a new species ⚡. 4️⃣ No antibiotics: Unlike older methods, this system needs no selection markers, making it far more efficient for building made-to-order organisms 🛠️. This opens the door, for example, to designing microbes that produce clean fuels or specific drugs. 🌍💊 🔗 @SEMicrobiologia @COSCEorg @ANIH_1 @microBIOblog #BiologíaSintética #Ciencia #Genética #Innovación #CraigVenter #Biotecnología #Microbiología 👇 biorxiv.org/content/10.648…

Samuel G. Huete retweeted
JISEM-SEM @JISEM_SEM
Hello! JISEM is pleased to present, together with Conexión-MICROBIOMA CSIC, MICROPATHS 2026 🦠🔬 If you are a young scientist studying the microbiome, or know someone who might be interested, this thread has all the information about this unmissable event 🧵
Samuel G. Huete retweeted
Ruslan Rust @rust_ruslan
I currently have three papers in review at "high impact" journals. One of them has been sitting there for two years. In that time my daughter was born and learned how to walk, but apparently publishing a PDF was still not possible for me.

For another one, after four months in review the editor told me they cannot find a second reviewer and asked me to suggest more reviewers.

A third one sent me a message in 2026 saying the PDF I uploaded was larger than 10 MB and that I should please reupload everything to make the file smaller.

All of this just to eventually pay between 7,000 and 12,000 USD per paper so someone can officially approve that the science we do is "legitimate". Reminder: not a single reviewer will be compensated here.

I still don't understand how we as scientists can collectively be so smart when doing science and still tolerate a system like this when it comes to sharing our findings. We should move to preprints plus open review, whether human or AI, asap. So frustrated about it.

I'd suggest sharing your work on bioRxiv or medRxiv, reading and reviewing preprints when you can, and highlighting good research, especially if it is still a preprint. Try platforms like ResearchHub (that pay for peer review) and experiment with AI-based reviewers for faster feedback.

Instead I read this as a proposed "revolutionary" measure:
Samuel G. Huete retweeted
Jaime Huerta-Cepas @jhcepas
📢 We are hiring! PhD Position in Metagenomics at the CGMLab. We are looking for a motivated researcher to join our newly launched project exploring the Unknown Microbial Biosphere, focusing on uncultivated eukaryotic microbes and novel functions. linkedin.com/feed/update/ur…
Samuel G. Huete retweeted
Emilio Palomares @PalomaresEj
Writing an ERC PoC now, I have some thoughts I would like to share here and open (why not?) a discussion.👇🏼
Samuel G. Huete @MicroBioMol
@BalbontinLab Sure! I agree, but great experiments must come from great ideas. Or better: a great idea (in research) isn't one if it cannot be validated experimentally!
Bacterial Genetics and Evolution Lab
@MicroBioMol In experimental sciences, ideas get you max 50% of the way; the other 50% relies on their experimental confirmation/refutation. Great ideas "validated" by sloppy experiments that turn out to be wrong slow down scientific progress way more than peer-review does.
Samuel G. Huete @MicroBioMol
"Papers are judged less by the originality of the idea and more by the volume of data, the sophistication of statistics, and the beauty of the figures. Science risks becoming data-rich but idea-poor." Ideas, fascinating and great ideas, are the true motor of scientific progress
Iñigo San Millán @doctorinigo

For decades, peer review has been treated as the gold standard of scientific validation. Yet many scientists know the reality: the system is far from perfect. Peer review is broken and sometimes even corrupted. The process can be slow, inconsistent, and vulnerable to bias. Reviewers are sometimes asked to judge work outside their true expertise. In other cases, they may be evaluating ideas that challenge the very paradigm in which they were trained. And occasionally, reviewers are simply competitors.

Ironically, the most prestigious journals can also be the most conservative. Truly new ideas are often met with skepticism, while safer work that fits the current narrative moves more easily through the system. Increasingly, papers are judged less by the originality of the idea and more by the volume of data, the sophistication of statistics, and the beauty of the figures. Science risks becoming data-rich but idea-poor.

But there is an important reality to remember: journals do not ultimately decide the impact of scientific work. Impact is decided later, by the community. By the scientists who read it, test it, debate it, and cite it. In the end, citations and ideas determine the legacy of a paper, not the impact factor of the journal that first published it.

Science has always advanced by questioning assumptions. Perhaps it is time we also question the system that filters scientific ideas.

Samuel G. Huete @MicroBioMol
Update March 2026: the grant got funded so I guess I'm not that bad at selling my ideas. Or were they good enough so they didn't need much cosmetics? I guess I'll never know!
Samuel G. Huete @MicroBioMol
The perennial temptation of the researcher is to forget this: our job is thinking, thinking well, thinking high-quality. So, if we ain't thinking, we ain't doing our job!! Grab then your coffee mug and sit for a while in silence, no screens, just Science. And it will show up.
Samuel G. Huete @MicroBioMol
I'm currently writing a scientific grant and sometimes wonder how much effort we invest making our research ideas "attractive" to reviewers instead of focusing on having good ideas. I wish we could get funded after a good and calm conversation with an evaluator over some coffee
Samuel G. Huete @MicroBioMol
Remember this call! It's a great opportunity from @SEMicrobiologia for the training of pre- and postdocs! @JISEM_SEM
JISEM-SEM @JISEM_SEM

📢 Attention! 🔬 Remember that the call for the @SEMicrobiologia César Nombela Mobility Grants Programme 2026 is open 👩‍🔬👨‍🔬 A great opportunity for short stays for pre- and postdocs 🦠 - Deadline: 8 March!!! 🔗 All the info here: semicrobiologia.org/becas/programa…

Samuel G. Huete retweeted
JISEM-SEM @JISEM_SEM
📢 Attention! 🔬 Remember that the call for the @SEMicrobiologia César Nombela Mobility Grants Programme 2026 is open 👩‍🔬👨‍🔬 A great opportunity for short stays for pre- and postdocs 🦠 - Deadline: 8 March!!! 🔗 All the info here: semicrobiologia.org/becas/programa…
Samuel G. Huete retweeted
Prof. Nikolai Slavov @slavov_n
An important reminder: Academia is not about publishing papers. It’s about creating knowledge and teaching it to the world. It’s about asking big bold questions and mentoring the next generation of intellectual leaders. The difference makes a huge difference.
Samuel G. Huete retweeted
Quant Beckman @quantbeckman
Complexity is overrated
Samuel G. Huete retweeted
Miguel A. Méndez-Rojas @nanoprofe
Guillermo López Lluch says about research: When I was still a predoc, the administrations' great push to quantify the "success" and "profitability" of scientific research began. We had entered the technocracy of numbers and, therefore, of metrics of scientific "quality" and of rankings.

Around that time I was told the story of a very old scientist at an English university who was sitting in his office. Along comes the administrator to find out what the scientist was doing, to which he replied: "I am thinking." The administrator left, perhaps pondering the depth of the answer. Some time later, perhaps not long, maybe that same month or the next, the administrator returns and again asks what the scientist is doing, to which he replies: "I am still thinking."

Nowadays we are not allowed to think. Productivity reports, CVs in assorted formats, numbers of projects, papers, and conferences per year, per three-year period, per five-year period, and so on... We are pelted with demands to repeat the same thing over and over to appear in our institutions' annual, triennial, or quinquennial reports, while the most important thing, having time to think, reflect, and focus on results and their consequences, is stolen from us amid quantifiable evidence.

They still haven't realized that the scientist is always working: surveying the world to discover gravity upon watching an apple fall, inventing PCR while driving down a desert road, or finding a way to edit genes by thinking about bacteria that grow in very inhospitable places. The value of thinking does not matter to a technocracy that buries you in reports, emails, meetings (online or otherwise), and surveys about every process, service, and system.
Samuel G. Huete @MicroBioMol
I'm not against it, @OIntegridadEsp, but shouldn't we attack the root of the problem, i.e. the enormous pressure to publish at any cost, rather than its symptoms? Get rid of MDPI/Frontiers and others will emerge to keep exploiting publish-or-perish to make money.
Manuel Ansede @manuelansede

"The special issues of MDPI and Frontiers are a huge loophole. ANECA must explicitly exclude these papers. Universities and public research bodies must identify the careers built on special issues and penalize them," says @isidroaguillo elpais.com/ciencia/2026-0…

Samuel G. Huete retweeted
Microbiología Clínica RyC 🧫🔬🦠
The Healthcare Reputation Monitor (Monitor de Reputación Sanitaria) annually assesses the reputation of the main actors in the SNS. This year it includes the #Microbiología specialty for the first time, and we have been ranked in 3rd place, a recognition that drives us to keep striving for excellence.
Samuel G. Huete retweeted
Niko McCarty. @NikoMcCarty
There's a recent blog from @OpenAI where they used GPT-5 to optimize a common biology experiment called Gibson Assembly. I've seen criticisms online from people who say things like, "Who cares? A human totally could have done that" or whatever. And that's true. But I still think this blog is nice for a couple of reasons.

First, faster iterations / more reliable experiments is one of the best ways to accelerate biotechnology progress more broadly! Experiments take much too long, and are often much too unreliable, for scientists to move quickly. Therefore, we should invest more resources toward optimizing and improving common methods that seem "mundane".

Second, this is a simple experimental system in which to test AI; indeed, that's the whole point! Gibson Assembly has been around for nearly two decades, is widely used, and only requires three enzymes. It is therefore a natural fit for AI companies to benchmark their models on biological questions. (The parameter space is not too large!)

To understand what OpenAI actually did, I first need to tell you about Gibson Assembly, a common method biologists use to stitch DNA molecules together. Originally developed in 2009, most scientists use Gibson because it's dead simple: everything works at one temperature (50°C) and it requires only three enzymes.

The DNA molecules to be joined together are designed such that they have 15-40 nucleotides, at either end, which overlap with the other DNA molecule. All the DNA is then added to a tube and an enzyme, exonuclease, "chews back" several dozen nucleotides from the 5' ends of each molecule, leaving behind long single-stranded "arms." These arms float around in the liquid, collide with a matching arm in another DNA sequence, and hug each other tightly. A second enzyme, DNA polymerase, runs along these touching DNA strands and fills in parts of the arms that don't overlap or are still single-stranded. Finally, DNA ligase seals the "nick" and heals the strands, thus forming a newly assembled, double-stranded piece of DNA.

OpenAI collaborated with a new biosecurity startup, Red Queen Bio (co-founded by Hannu Rajaniemi, an excellent science fiction writer), to build the evaluation framework. The metric they settled on is called cloning efficiency, which just means this: for a fixed amount of input DNA (like one picogram) transformed into cells, how many colonies successfully grow and contain the correctly assembled DNA molecule?

By the end of their blog post, the OpenAI team claims that they were able to boost this number 79x relative to a "baseline protocol" from New England Biolabs, or NEB, a common purveyor of the Gibson enzymes. An important note is that OpenAI says no humans were involved in optimizing the reaction; all the humans did was carry out protocols generated by GPT-5 and upload experimental results back into the model. They repeated this several times, coaxing the model to iterate each time. Their Gibson Assembly was remarkably simple, involving just two DNA molecules: a gene encoding a fluorescent protein and a plasmid to hold the gene.

(The OpenAI team, intriguingly, also set up a robot to automate the Gibson Assembly and transformation, but couldn't get it to work as well as a human. "We compared the robot's work to human-performed experiments at each step. The robot successfully handled the transformation process…When compared directly with human-performed transformations, the robot generated similar quality data with equivalent improvements over baseline, showing early potential for automating and accelerating biological experiment optimization." However, "while the fold-changes between the robot and human experiments were similar, absolute colony counts from the robot were approximately ten-fold lower than manual execution.")

After several rounds of iteration, the model made two notable proposals. First, it added two additional enzymes to the normal Gibson Assembly reaction: "the recombinase RecA from E. coli, and phage T4 gene 32 single-stranded DNA-binding protein (gp32)." The blog continues: "Working in tandem, gp32 smooths and detangles the loose DNA ends, and RecA then guides each strand to its correct match." This tweak improved the "cloning efficiency" metric by 14x over the standard NEB protocol.

Second, it made a subtle change to how the assembled DNA molecules were inserted into living cells. Specifically, GPT-5 told the humans to spin down cells in a centrifuge, thus forming a pellet, prior to transforming them. This is typically not recommended because competent cells are "fragile," but the OpenAI team writes that "the cells tolerated concentration well and the increased molecular collisions boosted transformation efficiency substantially (>30-fold on final validation)."

Now, recall that at the start of this little blog I said I really liked this experiment! (Do not crucify me, ye AI optimists.) But no internet commentary is truly complete without some nitpicking, so here goes.

One criticism is that the largest improvement made by the model was not related to Gibson Assembly at all! It was related to how the DNA gets delivered into cells. And, indeed, prior studies have shown something similar. (This research paper, for example, says that one of the best ways to improve transformation is to concentrate cells beforehand. Fair play to the OpenAI team for linking to this in their blog post.) And if you are a human reading this blog, and you are planning to spin down your competent cells before transformation, just be sure to aliquot everything into small tubes first; repeated spins will, over time, kill everything.

Another issue is that adding RecA and gp32 to a Gibson Assembly reaction complicates things quite a bit. For a normal Gibson reaction, everything comes in a single kit from NEB with the enzymes, and the whole experiment is done at one temperature: 50°C! But doing a Gibson Assembly this way would require one to buy purified RecA and gp32, and also change incubation temperatures to get everything working (RecA and gp32 work best at 37°C). This is more expensive and more complicated, but maybe worthwhile in some cases.

And lastly, the selected metric, namely how many colonies one gets from a given amount of DNA, doesn't actually seem all that useful in most scenarios. A scientist stitching together two strands of DNA doesn't actually care if they only get five colonies because, often, they only need to get ONE colony that works, and then they can grow up those cells in large beakers and extract a huge amount of the plasmid. A more useful metric might be to increase the total number of unique DNA strands that can be joined together in a single Gibson Assembly reaction, without reducing overall quality.

Still, I liked this blog post as a whole. I'm glad people are optimizing the "small" things, and I don't blame OpenAI for not trying to solve cancer, in its overwhelming magnitude of manifestations, on their first attempt! Gibson Assembly is a much better starting point.
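The cloning-efficiency metric the thread describes is simple enough to sketch in code. A minimal illustration, assuming the "correct colonies per fixed amount of input DNA" definition above; the function names and colony counts are hypothetical placeholders, not data from the OpenAI experiments, chosen only so the arithmetic reproduces the 79x framing:

```python
def cloning_efficiency(correct_colonies: int, input_dna_pg: float) -> float:
    """Colonies containing the correctly assembled plasmid, per picogram of input DNA."""
    return correct_colonies / input_dna_pg


def fold_change(optimized: float, baseline: float) -> float:
    """Improvement of an optimized protocol over a baseline protocol."""
    return optimized / baseline


# Hypothetical numbers (NOT the OpenAI data): the baseline yields 40 correct
# colonies and the optimized protocol 3160, each from 1 pg of input DNA.
baseline = cloning_efficiency(40, 1.0)
optimized = cloning_efficiency(3160, 1.0)
print(f"{fold_change(optimized, baseline):.0f}x improvement")  # → 79x improvement
```

Note that the metric normalizes by input DNA, not by total colonies, which is exactly why the thread's last criticism applies: a protocol can score low on it and still deliver the one correct colony a cloner actually needs.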
Samuel G. Huete retweeted
Kiko Llaneras @kikollan
🎯 What defines a "cuñado" (know-it-all)? They speak with total conviction and no knowledge. They are the archetype of something human: the less we know, the more categorical we are. Now a study in Science has measured something similar. They analyzed people with anti-scientific ideas (about vaccines, GMOs, homeopathy). And look at the chart: ↓↓ The people with the most contrarian ideas have LESS knowledge overall. ↑↑ Yet they believe they know more. It's the Dunning-Kruger effect, measured empirically. And the opposite of what would be logical: we should be cautious at first and categorical only as we study more. Montaigne summed it up centuries ago: "Nothing is so firmly believed as that which we least know."