Philippe A. Robert
@PRobertImmodels
3.5K posts

Computational Immunologist + Real-World QSP modeller
Basel, Switzerland · Joined September 2019
1.5K Following · 1.1K Followers

Pinned Tweet
Philippe A. Robert @PRobertImmodels·
Very happy to see our work published! Synthetic data power! Thanks to @NatComputSci for their support and to the 5 positive reviewers for their feedback! Thanks to @chevaliersf for writing a very cool News & Views about it too! Now, enjoy!
Nature Computational Science@NatComputSci

. @pandaisikit, @probertimmodels, @victorgreiff and colleagues introduce the Absolut! framework, which can generate synthetic 3D-antibody-antigen structures to assist machine learning and dataset construction for antibody design. nature.com/articles/s4358… 👉rdcu.be/c1UcJ

Replies 5 · Reposts 2 · Likes 30 · Views 4K
Philippe A. Robert retweeted
Cthulhu President @Cthulhu4Prez·
Good luck invading Greenland, guys.
Replies 203 · Reposts 1.9K · Likes 17.1K · Views 540.2K
Philippe A. Robert retweeted
Jorge Bravo Abad @bravo_abad·
Evaluating AI-generated drug molecules: when 3D shape matters as much as chemistry

AI models can now design drug-like molecules from scratch, placing atoms in 3D space to fit snugly inside a protein's binding pocket. But there's a catch: many generated molecules adopt physically impossible shapes—atoms crammed too close together, rings twisted unnaturally, or bonds frozen in high-energy orientations. These aren't minor cosmetic issues; they mean the molecule couldn't actually exist as designed.

How do you check millions of AI-generated structures for physical plausibility? Geometric rules (measuring distances and angles) are fast but miss anomalies nobody thought to define. Quantum-mechanical calculations are accurate but far too slow. We need something in between.

Fan and coauthors propose an elegant solution: use AI to evaluate AI, through two complementary tools.

The first, HEAD (High-Energy Atom Detector), computes the energy of every individual atom using a machine learning force field. The logic is powerful: any geometric anomaly—clashes, distortions, misplaced atoms—shows up as abnormally high atomic energy, even if that specific error type was never explicitly defined. Comparing each atom's energy against thresholds from known valid molecules, HEAD catches problems that geometric checks miss, 30× faster.

The second, TED (Torsional Energy Descriptor), examines each rotatable bond after basic structural cleanup. Even a geometrically sound molecule can have bonds stuck in energetically costly orientations—something drug designers care about deeply. TED predicts full rotational energy profiles using a neural network trained on millions of molecular fragments, achieving near-quantum-mechanical accuracy at a fraction of the cost.

Testing five recent generative models across 102 protein targets reveals that no model excels at everything, and only ~20% of molecules from the best models survive all quality filters. Strengths are complementary: some models build better shapes, others handle bond rotations more realistically.

The takeaway extends beyond drug design: as generative AI enters more scientific domains, we need evaluation tools as sophisticated as the generators—fast, physically grounded, and precise enough to tell developers exactly what to fix.

Paper: nature.com/articles/s4146…
[image]
Replies 5 · Reposts 19 · Likes 96 · Views 4.7K
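The HEAD idea described in the tweet above — flag any atom whose predicted per-atom energy is abnormally high relative to known-valid molecules — can be sketched in a few lines. This is a toy illustration, not the paper's code: the ML force field is replaced by made-up per-atom energies, and `element_thresholds` / `flag_high_energy_atoms` are hypothetical names.

```python
import statistics

def element_thresholds(reference, n_sigma=3.0):
    """Per-element energy cutoff: mean + n_sigma * stdev over known-valid atoms."""
    cutoffs = {}
    for elem, energies in reference.items():
        cutoffs[elem] = statistics.mean(energies) + n_sigma * statistics.pstdev(energies)
    return cutoffs

def flag_high_energy_atoms(molecule, cutoffs):
    """Indices of atoms whose energy exceeds the cutoff for their element.

    `molecule` is a list of (element, per_atom_energy) pairs, e.g. from an
    ML force field that decomposes total energy into atomic contributions.
    Any geometric anomaly (clash, ring distortion) surfaces as high energy.
    """
    return [i for i, (elem, e) in enumerate(molecule)
            if e > cutoffs.get(elem, float("inf"))]

# Made-up per-atom energies (arbitrary units) for known-valid molecules.
reference = {"C": [-1.0, -1.1, -0.9, -1.05], "H": [-0.5, -0.48, -0.52, -0.5]}
cutoffs = element_thresholds(reference)

# Candidate with one clashing carbon at anomalously high energy.
candidate = [("C", -1.0), ("C", 2.5), ("H", -0.5)]
print(flag_high_energy_atoms(candidate, cutoffs))  # -> [1]
```

The point of the energy-based test, as the tweet notes, is that it needs no catalogue of error types: anything physically wrong shows up on the same scale.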
Philippe A. Robert retweeted
Neil Renic @NC_Renic·
Flight attendant: “Is there a doctor on the plane?” PhD: “yes, are you hiring?”
Replies 51 · Reposts 583 · Likes 5.5K · Views 314.2K
Philippe A. Robert @PRobertImmodels·
To celebrate the 1st of November, I hope Sarkozy will also stay one more day per year in jail, like the "solidarity day" when he forced people to work one extra day for free every year!
Replies 0 · Reposts 0 · Likes 0 · Views 51
Philippe A. Robert retweeted
Jorge Bravo Abad @bravo_abad·
When protein design meets differentiable programming

Most protein design tools assume a stable 3D fold. But many biologically critical proteins are intrinsically disordered: they never adopt a single structure, instead flickering across vast ensembles. Standard ML approaches that predict a folded structure don’t apply.

Ryan K. Krueger, Michael P. Brenner, and Krishna Shrinivas present a differentiable framework that inverts molecular simulations. The key idea: represent a sequence as a continuous probability distribution over amino acids, run coarse-grained simulations to model its ensemble properties, and then backpropagate gradients directly through the physics. This turns sequence design into an end-to-end optimization problem, where objectives can be tuned for size, flexibility, responsiveness, or binding.

With this method, the authors design disordered proteins that act as loops or linkers, remain compact yet disordered, or function as sensors that expand or contract in response to salt, temperature, or phosphorylation. They even extend it to create candidate binders for other disordered targets—long viewed as one of the hardest problems in protein engineering.

The result is a general recipe for physics-grounded differentiable design: keep the molecular simulator, make it differentiable, define the right loss, and let optimization explore sequence space efficiently. For applied ML, it’s a blueprint showing how simulation and differentiable programming can merge to tackle problems beyond text or images—pushing generative design into the messy, high-dimensional space of biology.

Paper: nature.com/articles/s4358…
[image]
Replies 1 · Reposts 19 · Likes 144 · Views 9.3K
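The recipe in the tweet above — a continuous distribution over amino acids, a differentiable ensemble property, gradient-based optimization — can be sketched with a toy surrogate. This is not the authors' framework: the coarse-grained simulator is replaced by a trivial differentiable property (expected Kyte-Doolittle hydrophobicity over a 4-letter alphabet), and gradients come from finite differences rather than backpropagation through physics.

```python
import math

ALPHABET = ["A", "K", "L", "E"]          # tiny alphabet for illustration
HYDROPHOBICITY = [1.8, -3.9, 3.8, -3.5]  # Kyte-Doolittle values for A, K, L, E

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def loss(logits, target):
    """Squared error between the sequence's mean expected hydrophobicity
    (our stand-in for a simulated ensemble property) and a design target."""
    per_pos = [sum(p * h for p, h in zip(softmax(row), HYDROPHOBICITY))
               for row in logits]
    return (sum(per_pos) / len(per_pos) - target) ** 2

def design_by_gradient(logits, target, lr=0.3, steps=300, eps=1e-5):
    """Finite-difference gradient descent on the continuous sequence distribution."""
    n, k = len(logits), len(logits[0])
    for _ in range(steps):
        base = loss(logits, target)
        grad = [[0.0] * k for _ in range(n)]
        for i in range(n):
            for j in range(k):
                logits[i][j] += eps
                grad[i][j] = (loss(logits, target) - base) / eps
                logits[i][j] -= eps
        for i in range(n):
            for j in range(k):
                logits[i][j] -= lr * grad[i][j]
    return logits

logits = [[0.0] * len(ALPHABET) for _ in range(5)]  # 5 positions, uniform start
logits = design_by_gradient(logits, target=-3.0)    # aim for a hydrophilic linker
design = "".join(max(zip(row, ALPHABET))[1] for row in logits)
print(design, round(loss(logits, -3.0), 6))
```

The paper's contribution is making the simulator itself differentiable so the same loop works for ensemble observables like radius of gyration or binding; the loop structure above is the common skeleton.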
Philippe A. Robert @PRobertImmodels·
Correlation, association, hazard, or causality? Gypsies; copper stolen; no signal on the railway; train blocked in a cow field; big delay :p And it's not even Deutsche Bahn!
Replies 0 · Reposts 0 · Likes 0 · Views 134
Philippe A. Robert retweeted
Kyle Tretina, Ph.D. @AllThingsApx·
DL affinity model inter-protein scoring noise problem: When comparing binding affinity predictions across proteins, scores aren’t calibrated on the same scale. Example: On IVS (1,001 actives / 14 proteins), Boltz-2’s top-1 target ID ≈ random, with strong pocket bias.
[image]
Replies 1 · Reposts 7 · Likes 42 · Views 2.8K
Philippe A. Robert @PRobertImmodels·
@DdelAlamo @GSK oh wow! congrats!! sad we didn't manage to catch a coffee before! wishing you good luck!
Replies 0 · Reposts 0 · Likes 1 · Views 59
Diego del Alamo @DdelAlamo·
This upcoming week is my last week at @GSK. I’m going to move on to greener pastures but first a few weeks off to … close on a house! Go for some bike rides! Paint 40k models! Etc.!
Replies 6 · Reposts 0 · Likes 22 · Views 3.8K
Philippe A. Robert retweeted
Mathieu @miniapeur·
[image]
Replies 9 · Reposts 82 · Likes 1K · Views 35.1K
Philippe A. Robert retweeted
Marios Georgakis @MariosGeorgakis·
A very comprehensive effort to develop a single-cell atlas of human atherosclerosis based on 79 plaque samples from 3 vascular beds and >250K cells👇
[image]
Replies 2 · Reposts 37 · Likes 181 · Views 12.2K
Philippe A. Robert retweeted
Biology+AI Daily @BiologyAIDaily·
A new benchmark for deep learning based affinity prediction: Solving the inter-protein scoring noise problem

1. Researchers from the University of Münster have introduced a novel benchmark for evaluating deep learning models' ability to predict protein-ligand binding affinities, addressing the inter-protein scoring noise issue that plagues classical scoring functions. This benchmark aims to assess whether models can accurately identify the correct protein target for a given active molecule, a critical capability for reliable binding affinity prediction.

2. The study utilized the LIT-PCBA dataset, which contains known active compounds for various protein targets, to create an inverse virtual screening benchmark. By applying the Boltz-2 model to predict binding affinities across all targets, the researchers tested its ability to distinguish the correct target among decoys. Surprisingly, despite Boltz-2's claims of high accuracy, it failed to consistently identify the correct protein target, highlighting the ongoing challenge of generalizing protein-ligand interactions.

3. The results showed that Boltz-2's predictions were not significantly better than random selection, with only a few targets showing enrichment of correct predictions. This suggests that the model may struggle with the variability in binding pockets across different proteins, a problem analogous to the inter-protein scoring noise in traditional scoring functions. The findings emphasize the need for more robust and generalizable models in the field of protein-ligand docking.

4. The study also explored various settings and parameters of the Boltz-2 model, including different diffusion sampling steps and molecular weight corrections, but none of these adjustments led to substantial improvements in the benchmark performance. This indicates that the issue may be rooted in the model's fundamental approach to predicting binding affinities, rather than specific computational settings.

5. The authors concluded that while deep learning models like Boltz-2 have shown promise in certain applications, there is still much work to be done to achieve accurate and generalizable binding affinity predictions. The proposed benchmark provides a valuable tool for evaluating and improving future models, emphasizing the importance of addressing protein flexibility, desolvation, and other factors that influence protein-ligand binding.

📜Paper: doi.org/10.26434/chemr…
#DeepLearning #ProteinLigandDocking #Benchmarking #BindingAffinityPrediction
[image]
Replies 0 · Reposts 12 · Likes 59 · Views 3.9K
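The inverse-virtual-screening readout described above — score each active against every target and check whether the true target ranks first — is simple to state in code. A hypothetical sketch: `biased_score` is a stand-in predictor with deliberate per-target ("pocket") bias and almost no ligand signal, not Boltz-2 itself.

```python
import random

def top1_target_id_rate(score, actives, targets):
    """Fraction of actives whose true target receives the highest score.

    With scores that are not calibrated across proteins, this collapses
    toward 1/len(targets), i.e. random target identification.
    """
    hits = 0
    for ligand, true_target in actives:
        best = max(targets, key=lambda t: score(ligand, t))
        hits += (best == true_target)
    return hits / len(actives)

# Hypothetical predictor dominated by a per-target ("pocket") offset --
# the failure mode the benchmark is designed to expose.
random.seed(0)
targets = [f"T{i}" for i in range(14)]
bias = {t: random.gauss(0.0, 1.0) for t in targets}

def biased_score(ligand, target):
    return bias[target] + random.gauss(0.0, 0.1)  # little ligand-specific signal

actives = [(f"lig{i}", random.choice(targets)) for i in range(100)]
rate = top1_target_id_rate(biased_score, actives, targets)
print(rate)  # near chance level: the same biased targets win regardless of ligand
```

The same metric applied to a well-calibrated predictor would approach 1.0; applied to a pocket-biased one, it hovers near 1/number-of-targets, which is the observation reported for Boltz-2 in both of the tweets above.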
Philippe A. Robert @PRobertImmodels·
The Julia language could have been great... if they hadn't let people make packages with 2000 incompatible versions, like in Python. Even ChatGPT gets confused by syntax that was only valid once, when Pluto aligned with Mars.
Replies 0 · Reposts 0 · Likes 1 · Views 111
Philippe A. Robert retweeted
Biology+AI Daily @BiologyAIDaily·
TorchANI-Amber: Bridging neural network potentials and classical biomolecular simulations

1. The introduction of TorchANI-Amber marks a significant stride in integrating machine learning potentials with traditional molecular dynamics simulations. This interface seamlessly incorporates ANI-style neural network potentials into the widely-used Amber software suite, supporting both sander and pmemd engines. It not only implements all published ANI models but is also extensible to other energy predicting potentials through a simple mechanism, requiring no knowledge of Amber’s codebase.

2. One of the core innovations of TorchANI-Amber is its optimized CUDA implementation for computing the ANI models’ feature vectors. This enables simulations of systems with hundreds of thousands of atoms at the neural network’s level of theory, approaching DFT accuracy. The interface is designed to be fully compatible with all Amber capabilities, allowing users to leverage ANI potentials instead of traditional force fields for large biomolecular systems.

3. The study demonstrates the stability and energy conservation of ANI potentials through molecular dynamics simulations on various biomolecular systems, including ubiquitin and Trp-cage proteins in explicit solvent. The results show that ANI models can maintain molecular connectivity and structural stability throughout the simulations, with comparable energy conservation to classical force fields.

4. TorchANI-Amber also showcases its versatility in enhanced sampling techniques. The T-REMD simulations of Met-enkephalin in vacuo illustrate the compatibility of ANI potentials with well-established enhanced sampling methods. The increased diversity of sampled conformational space under T-REMD dynamics highlights the potential of ANI potentials for exploring complex biomolecular systems.

5. Performance benchmarks reveal that TorchANI-Amber achieves high performance on large systems, with ANI ensembles capable of running at up to 1 ns/day for systems of 25,000 atoms. The interface also supports the Amber-NB scheme, which offloads inter-molecular terms to the underlying MD engine, maintaining comparable performance while addressing the short-range nature of NN-IPs.

6. Future work on TorchANI-Amber will focus on refining the ANI architecture to include long-range interactions and extending the interface to support multi-GPU evaluation. This will further enhance the capabilities of ANI-style potentials within Amber and expand their utility for simulating large biomolecular systems.

📜Paper: doi.org/10.26434/chemr…
#TorchANI #Amber #NeuralNetworkPotentials #MolecularDynamics #BiomolecularSimulations #MachineLearning
[image]
Replies 0 · Reposts 9 · Likes 28 · Views 2K
Philippe A. Robert retweeted
Ming "Tommy" Tang @tangming2005·
Analysis of 10,478 cancer genomes identifies candidate driver genes and opportunities for precision oncology nature.com/articles/s4158…
[image]
Replies 3 · Reposts 28 · Likes 191 · Views 14.9K
Philippe A. Robert retweeted
Biology+AI Daily @BiologyAIDaily·
LABind: Identifying Protein Binding Ligand-Aware Sites via Learning Interactions Between Ligand and Protein @NatureComms

1. LABind is a novel structure-based method that predicts protein-ligand binding sites in a ligand-aware manner, utilizing a graph transformer and cross-attention mechanism to capture binding patterns and learn distinct interactions between proteins and ligands. This approach significantly improves the prediction accuracy for both seen and unseen ligands compared to existing methods.

2. The study addresses the limitations of traditional experimental methods and existing computational approaches by proposing a unified model that incorporates ligand information explicitly. LABind outperforms other advanced methods on multiple benchmark datasets, demonstrating its superior ability to generalize to new ligands and maintain robust performance even with predicted protein structures.

3. LABind’s architecture includes a graph converter module that encodes protein structures into graphs, capturing spatial features essential for binding site prediction. The cross-attention mechanism allows the model to effectively integrate ligand properties, enhancing its ability to distinguish between different ligands and their corresponding binding sites.

4. The application of LABind extends beyond binding site prediction to tasks like binding site center localization and molecular docking. It shows strong potential in improving docking accuracy and can be applied to proteins without experimentally determined structures by leveraging predicted structures from tools like ESMFold.

5. The study includes comprehensive experiments and ablation studies that validate the importance of each component in LABind, such as the protein representation and ligand features. The visualization of residue representations highlights how LABind captures crucial information about protein-ligand interactions, leading to more accurate predictions.

6. LABind demonstrates practical applicability by accurately predicting binding sites for the SARS-CoV-2 NSP3 macrodomain with unseen ligands, showcasing its potential in real-world scenarios. This method provides a valuable tool for understanding protein functions and aiding drug design efforts.

📜Paper: nature.com/articles/s4146…
💻Code: github.com/ljquanlab/LABi…
#ProteinLigandBinding #ComputationalBiology #MachineLearning #DrugDiscovery #Bioinformatics
[image]
Replies 1 · Reposts 14 · Likes 57 · Views 3.9K
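The cross-attention step described above — residue queries attending over ligand-atom keys/values to produce ligand-aware residue representations — can be sketched minimally. Assumptions: plain dot-product attention without the learned W_q/W_k/W_v projections or the graph-transformer encoder LABind actually uses; all matrices are toy values.

```python
import math

def matmul(A, B):
    """Naive matrix product for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

def softmax_rows(M):
    out = []
    for row in M:
        m = max(row)
        exps = [math.exp(x - m) for x in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

def cross_attention(protein, ligand):
    """Residues (queries) attend over ligand atoms (keys/values).

    protein: n_res x d residue embeddings; ligand: n_atom x d atom embeddings.
    Returns n_res x d ligand-aware residue representations. Learned projections
    (W_q, W_k, W_v) are omitted for brevity.
    """
    d = len(protein[0])
    scores = matmul(protein, transpose(ligand))                  # Q K^T, n_res x n_atom
    scores = [[s / math.sqrt(d) for s in row] for row in scores]
    attn = softmax_rows(scores)                                  # rows sum to 1
    return matmul(attn, ligand)                                  # weighted sum of values

# Toy example: 3 residues, 2 ligand atoms, embedding dim 4.
protein = [[1.0, 0.0, 0.0, 0.0],
           [0.0, 1.0, 0.0, 0.0],
           [0.5, 0.5, 0.0, 0.0]]
ligand  = [[2.0, 0.0, 0.0, 0.0],
           [0.0, 2.0, 0.0, 0.0]]
out = cross_attention(protein, ligand)
print(out[0])  # residue 0's representation leans toward ligand atom 0
```

Because the ligand enters as keys and values, changing the ligand changes every residue's representation — which is what makes the predicted binding sites ligand-aware rather than ligand-agnostic.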
Philippe A. Robert retweeted
Samuel Hume @DrSamuelBHume·
Donidalorsen just got FDA-approved for hereditary angioedema – it becomes the 10th approved antisense oligonucleotide (and the first for hereditary angioedema)! It targets and degrades prekallikrein mRNA to interrupt the cascade that causes swelling attacks in hereditary angioedema.
[two images]
Replies 2 · Reposts 7 · Likes 54 · Views 4.4K
Philippe A. Robert retweeted
Daniel Gould, MD, PhD @DJGould94·
Everyone is welcome to criticise, and legitimate criticism is legitimate criticism, but I've often found that the harshest and most vocal critics of research are those who do the least of it.
[image]
Replies 1 · Reposts 4 · Likes 27 · Views 917