sk
@compchemm
355 posts
Computational Structural Biologist
Joined April 2017
440 Following · 484 Followers

Pinned Tweet
sk @compchemm ·
Rebuilt the latest PyRosetta4 (v2026-02-06), finally fixed the GIL (Global Interpreter Lock) github.com/ullahsamee/PyR…
[image]
4 replies · 4 reposts · 70 likes · 3K views
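A quick way to check whether an interpreter is actually a free-threaded ("no-GIL") build like the one this rebuild targets is to ask the interpreter itself. This is a stdlib-only sketch, not part of the author's repo: `sys._is_gil_enabled()` exists on CPython 3.13+ free-threaded builds, and `Py_GIL_DISABLED` is the build-config flag.

```python
import sys
import sysconfig

# A free-threaded CPython build (3.13+) sets Py_GIL_DISABLED=1 at compile
# time and exposes sys._is_gil_enabled(). On a standard build the GIL is
# always on, and the attribute may not exist at all.
free_threaded = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
print(f"free-threaded build: {free_threaded}, GIL currently enabled: {gil_enabled}")
```

Note that a free-threaded build can still run with the GIL re-enabled (e.g. via `PYTHON_GIL=1`), which is why the build flag and the runtime check are queried separately.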
༈༈ @Shirinsmit ·
What is extremely unhygienic but everyone seems to do it anyway???
2.1K replies · 304 reposts · 26.7K likes · 7.6M views
sk @compchemm ·
Here's an illustration from Brian explaining why Contact Molecular Surface (CMS) is better than Rosetta SASA or Shape Complementarity. You can install it with: pip install py-contact-ms
[2 images]
1 reply · 0 reposts · 4 likes · 144 views
sk @compchemm ·
@AsimovPress @btnaughton I think the AdaptyvBio competition used ipsaemin, not ipsae (Dunbrack's), right?
2 replies · 0 reposts · 2 likes · 718 views
Asimov Press @AsimovPress ·
A few years ago, designing an antibody on the computer was extremely difficult. Today, there are several open-source tools that allow anyone to design antibodies from home. Out today: a step-by-step guide to antibody design, by @btnaughton.
6 replies · 75 reposts · 305 likes · 46.5K views
sk @compchemm ·
@novikoff @zerocam_app On the P9 Pro I see only 1x and 2x. That's it! No telephoto😂
[image]
1 reply · 0 reposts · 1 like · 91 views
Dimitri Novikov 🇺🇦 @novikoff ·
Zerocam II for Android is all about pure camera power and the absence of complexity. Available very soon.
[4 images]
5 replies · 1 repost · 48 likes · 3.4K views
Daniel @kustom_ai ·
Yo @zerocam_app just released 2.0 for Android, excited to give it a try on my @nothing Phone (3) 🔥
[image]
6 replies · 1 repost · 49 likes · 4.6K views
sk @compchemm ·
PDB ID: 7HORSE has been withdrawn due to its low atomic resolution😂
[image]
0 replies · 0 reposts · 1 like · 176 views
sk @compchemm ·
@BioGeek Thank you! Just running some more accurate benchmarks and preparing docs, so I'll upload it ASAP.
[image]
0 replies · 0 reposts · 1 like · 58 views
sk @compchemm ·
Tested on a 24-core Intel Ultra 9 275HX: a huge +30% speed gain in PyRosetta when the GIL is disabled. Sorry for the captions; I generated this figure quickly (via Gemini). Will upload the Python wheel soon; link in the GitHub repo.
[image]
1 reply · 0 reposts · 0 likes · 258 views
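The kind of multi-core gain described above can be probed with a minimal CPU-bound threading benchmark. This is a stdlib-only sketch under my own assumptions, not the author's PyRosetta benchmark: on a GIL-enabled build the threads serialize, while a free-threaded build lets them run in parallel.

```python
import threading
import time

def spin(n):
    # Pure-Python CPU-bound work; it only parallelizes across
    # threads when the GIL is disabled.
    x = 0
    for _ in range(n):
        x += 1
    return x

def timed(num_threads, n=2_000_000):
    # Wall-clock time to run `num_threads` copies of spin() concurrently.
    threads = [threading.Thread(target=spin, args=(n,)) for _ in range(num_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

t1, t4 = timed(1), timed(4)
# On a standard (GIL) build, 4 threads take roughly 4x the 1-thread time;
# on a free-threaded build with enough cores, t4 approaches t1.
print(f"1 thread: {t1:.3f}s, 4 threads: {t4:.3f}s")
```

The ratio `t4 / t1` is a crude but portable proxy for the speedup a GIL-free PyRosetta build could see on embarrassingly parallel workloads.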
sk @compchemm ·
Comes with the latest beta_jan25, an updated energy function that improves on beta_nov16: better performance on protein-protein interface prediction and fewer atomic clashes. biorxiv.org/content/10.648…
[image]
1 reply · 0 reposts · 2 likes · 199 views
sk @compchemm ·
@jvarga92 You can explore those loops better if you increase the sampling in Protenix-v1, which also gives better pLDDT quality than Boltz2. But I assume it could vary from target to target.
0 replies · 0 reposts · 0 likes · 52 views
Julia Varga @jvarga92 ·
@compchemm Actually, the loop in the Boltz prediction seems to be better modeled - AFAIK, it is flexible and this flexibility is captured better.
1 reply · 0 reposts · 0 likes · 59 views
sk @compchemm ·
Run Protenix-v1 on a Blackwell NVIDIA RTX 5090 with the latest PyTorch 2.10 and CUDA 13.0: github.com/bytedance/Prot… Boltz2 (1st img) versus Protenix-v1.
[2 images]
3 replies · 12 reposts · 110 likes · 5.9K views
Isomorphic Labs @IsomorphicLabs ·
Today we share a technical report demonstrating how our drug design engine achieves a step-change in accuracy for predicting biomolecular structures, more than doubling the performance of AlphaFold 3 on key benchmarks and unlocking rational drug design even for examples it has never seen before. Head to the comments to read our blog.
[image]
65 replies · 523 reposts · 3K likes · 1.3M views
sk @compchemm ·
@GregPreibisch Not shown here, but when aligned on the crystal: Boltz2 0.4 Å, Protenix-v1 0.3 Å. The top 3 lowest-RMSD Boltz2 models have that yellow, very-low-pLDDT loop. Boltz2 was run with RECYCLING_STEPS=10, DIFFUSION_SAMPLES=25, SAMPLING_STEPS=200, MAX_MSA_SEQS=8192.
0 replies · 0 reposts · 2 likes · 259 views
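Settings like the ones quoted above are typically passed to Boltz as command-line flags. The sketch below only assembles a candidate command string; the lower-case flag spellings are my assumption mirrored from the tweet's upper-case names, and `input.yaml` is a placeholder, so verify both against `boltz predict --help` for your installed version.

```python
# Hedged sketch: flag names are assumed from the tweet's settings
# (RECYCLING_STEPS, DIFFUSION_SAMPLES, ...); check `boltz predict --help`.
settings = {
    "recycling_steps": 10,
    "diffusion_samples": 25,
    "sampling_steps": 200,
    "max_msa_seqs": 8192,
}
# "input.yaml" is a hypothetical placeholder for your prediction input file.
cmd = ["boltz", "predict", "input.yaml"]
for flag, value in settings.items():
    cmd += [f"--{flag}", str(value)]
print(" ".join(cmd))
```

Building the command as a list (rather than one string) keeps it ready to hand to `subprocess.run` without shell-quoting issues.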
Hanna Hajishirzi @HannaHajishirzi ·
🚀 Olmo 3.1 is here, earlier than expected! 32B Think: 3 extra weeks of RL training = steady gains and significant improvements. 32B Instruct: our 7B recipe scaled to 32B, tuned for short chat + function calling. Olmo 3 keeps leveling up! Details in the latest version of the paper. As always, kudos to the whole team! New Olmo 3.1 artifacts: huggingface.co/collections/al… Paper (arxiv soon): allenai.org/papers/olmo3 Demo: playground.allenai.org
[image]
Quoting Ai2 @allen_ai:
Olmo 3.1 is here. We extended our strongest RL run and scaled our instruct recipe to 32B, releasing Olmo 3.1 Think 32B & Olmo 3.1 Instruct 32B, our most capable models yet. 🧵
15 replies · 42 reposts · 431 likes · 35.5K views