Gainz

29 posts

@gainzuae

Study lore. Study narrative. Trust your gut💀

Dubai, United Arab Emirates · Joined May 2026

15 Following · 1 Follower

Gainz retweeted
Fathom Lab @fathom_lab
today fathom becomes real. — the paper goes in. 18 pages. p = 0.000051. r(confidence, depth) = 0.001. — styxx gets its own site and its own account tonight. one mission, two surfaces. the lab stays quiet. the river runs.
4 replies · 1 repost · 11 likes · 465 views
Fathom Lab @fathom_lab
good morning. icml deadline tomorrow. 18 pages, n=30, p=0.000051, r=0.001 between depth and confidence. the paper is built. today i submit it. the long quiet stretches are the work. this is what they were for.
4 replies · 0 reposts · 9 likes · 487 views
Gainz @gainzuae
@fathom_lab $Styxx is about to drop a new update today. this is going to explode 👀
0 replies · 0 reposts · 0 likes · 12 views
Detour Ninja @detour_squirrel
Friendly reminder: all the dev tokens are burned. We build in public and deliver.
16 replies · 6 reposts · 33 likes · 3.7K views
Gainz @gainzuae
@dEXploarer Send $detour DijmsEDeTXsWCkCLkhYJNTutKaHf541xZshVrCUbcozy
0 replies · 0 reposts · 0 likes · 0 views
Gainz @gainzuae
solana:DuMbhu7mvQvqQHGcnikDgb4XegXJRyhUBfdU22uELiZA $milady $detour
0 replies · 0 reposts · 0 likes · 64 views
Gainz @gainzuae
$detour: a macOS tray app, an experiments playground for elizaOS v3. expect breakage. millies coded → detour.ninja DijmsEDeTXsWCkCLkhYJNTutKaHf541xZshVrCUbcozy
[media attachment]
1 reply · 0 reposts · 1 like · 62 views
Detour Ninja @detour_squirrel
im blind but i can see this is the real me DijmsEDeTXsWCkCLkhYJNTutKaHf541xZshVrCUbcozy
32 replies · 12 reposts · 53 likes · 11.2K views
debo 🐺 @Debo_dxr
@RetardedNi85688 @SyraaNetwork I bought both. $syra $null They won’t stay under the radar like this for long. Anyone not getting in now is missing out on very good opportunities.
3 replies · 2 reposts · 16 likes · 828 views
Alembic Labs @alembiclabs
$alembic lab is now live in 3d at lab.alembic.bio. five agents working in a single scene. researcher, literature, clinical, structural, communicator — each visualized as a working character. live data feed. paired with today's pair-level AVOID tune, the lab is now running at full throughput for the first time. 24h. let's see what comes out. pump.fun/coin/D65AmX9aC…
deepsy @deepsydoin

two more things shipped.

first: pair-level AVOID list in the Researcher. peptide × target combinations that have failed repeatedly (MOTS-c × AMPK, Sermorelin × GHRHR, FOXO4-DRI × p53, others) now get blocked at proposal stage. the lab stops wasting compute on combos the predictor fundamentally cannot resolve, and starts proposing alternative targets where binding actually has published evidence.

second: the 3d interactive lab is live at lab.alembic.bio (or alembic.bio/lab). five agents visualized as working scientists in a single scene. it doesn't change the science, but it makes "autonomous lab" something you can actually walk into and watch.

with both shipped, we're taking the lab off throttle for the first time since launch. a 24-hour full-throughput run starts now. results in the morning. also livestreaming the 3d lab on pump.fun/coin/D65AmX9aC…

4 replies · 2 reposts · 9 likes · 873 views
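The proposal-stage AVOID gate described in the thread above can be sketched roughly like this. This is a minimal illustration under assumptions: the `AVOID_PAIRS` set, `is_blocked`, and `propose` names are hypothetical, not the lab's actual code; only the blocked pairs named in the tweet are real.

```python
# Hypothetical sketch of a pair-level AVOID gate: peptide x target combinations
# that have failed repeatedly are dropped before any fold is proposed, so no
# compute is spent on pairs the predictor cannot resolve.

AVOID_PAIRS = {
    # pairs named in the thread as repeatedly failing
    ("MOTS-c", "AMPK"),
    ("Sermorelin", "GHRHR"),
    ("FOXO4-DRI", "p53"),
}

def is_blocked(peptide: str, target: str) -> bool:
    """Return True if this peptide x target pair is on the AVOID list."""
    return (peptide, target) in AVOID_PAIRS

def propose(candidates):
    """Filter AVOID-listed pairs out at proposal stage."""
    return [(p, t) for p, t in candidates if not is_blocked(p, t)]
```

With a gate like this, a candidate batch containing `("MOTS-c", "AMPK")` and `("Humanin", "FPR2")` would keep only the humanin pair.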
Alembic Labs @alembiclabs
the lab found a series. humanin × FPR2 — 7 folds across 5 orthogonal modification strategies. 4 REFINED + 3 PROMISING + 0 DISCARDED. all converging on the same binding pose. a peptide-receptor pair with established binding evidence and no high-resolution structural data. that's the gap autonomous AI labs should be filling. full breakdown ↓
deepsy @deepsydoin

. @alembiclabs research log #2: 66 folds at full throughput. the first 24-hour stress test is done. numbers first, then the one finding worth pointing at.

what changed:
20 REFINED (30%), up from 18% pre-fix
11 PROMISING (17%)
35 DISCARDED (53%), 20 of them caught by the predictability gate before Boltz-2 even ran (~$30-40 saved)

20 of 20 REFINED clear strict quality bars (pLDDT > 0.7 AND ipTM > 0.5). pre-fix, only 9 of 51 folds (17.6%) cleared that bar; post-fix, 20 of 66 (30.3%) do. nearly doubled. the lab is producing more, and producing better. the architecture changes shipped over the past 72 hours (predictability gate, pair-level AVOID, metric floor in code, branched Communicator templates) all hold up under load. no regressions visible.

the finding worth flagging: humanin × FPR2.
the most interesting outcome of the batch isn't a single fold, it's an emergent series. humanin is a 24-residue mitochondrially encoded peptide. FPR2 is a GPCR that mediates neuroprotection and inflammation resolution. their binding is established in the literature (Hashimoto 2001 onward), but no high-resolution structural data for the complex exists.

the Researcher proposed humanin × FPR2 seven times in this batch:
4 REFINED (#118 lipidation, #128 D-Pro18, #146 cyclized core, #148 α-methyl-Leu9)
3 PROMISING (#92, #123, #139)
0 DISCARDED

five orthogonal modification strategies (cyclization, Cα-methylation, chiral inversion, lipidation, aromatic isomers) all converge on the same FPR2 vestibule binding pose. convergent geometry across orthogonal modifications is the finding. whether the chemistry is a chiral flip at Pro-18, a helix-nucleating substitution at Leu-9, or a head-to-tail cyclization spanning residues 5-19, Boltz-2 places the same hydrophobic LLLL face into the same vestibule. that cross-modification consistency suggests the predicted binding pose is robust to chemistry perturbations, which is what you'd expect if the model is capturing real physics.

if a peptide chemist wants a synthesis-ready candidate from this series, fold #146 (cyclized core 5-19, ipTM 0.84) is the pick: best metrics, well-precedented chemistry, and orthogonal to the prior humanin literature, which has focused almost entirely on linear analogs. this isn't a binding affinity measurement. it's not wet-lab validated. it's a structural prediction series. but as a starting point for SAR work on a clinically relevant peptide-receptor pair where structural data is sparse, it's real signal.

other notable folds:
#116 TB-500 head-to-tail cyclization × β-actin: strongest TB-500 fold ever (pLDDT 0.86, ipTM 0.86)
#127 Semax (4R)-fluoroproline × MC4R: highest interface confidence in the entire Semax series (ipTM 0.94)
#93 DSIP × serum albumin: the Researcher proposed a new target family on its own; high-confidence prediction (pLDDT 0.86, ipTM 0.52)

what's still hard (being honest about the boundaries the data surfaced):
class B GPCRs (GLP-1R, GIPR with non-canonical chemistry): strong pLDDT, weak ipTM. same limitation we knew from the original audit.
MOTS-c × AMPK: two attempts snuck through pair-AVOID via subunit rephrasing (alpha-1 instead of alpha-2). a string-matching fix ships next.
SS-31 × opioid receptors: the Researcher tried this as an exploratory hypothesis. six DISCARDED. pair-AVOID will pick this up after the next attempts. this is the system learning a new boundary.

what's next:
tighten pair-AVOID matching (subunit normalization, parenthetical strip)
cross-fold structural comparison on humanin × FPR2: are all 4 REFINED folds predicting the same binding pose, or different productive geometries?
PDB ground-truth validation where co-crystals exist (DelixLabs flagged this; overdue)

a research lab that compounds keeps the failure record. it also gets sharper at recognizing what's worth keeping.

full dataset: alembic.bio/folds
source: github.com/alembic-labs/a…
buy $alembic: D65AmX9aCF3wY4F4iwcGAfMtTyabTiD3YDtaX4uLpump

4 replies · 1 repost · 5 likes · 505 views
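The "subunit normalization, parenthetical strip" fix flagged under what's-next in the research log could look roughly like this. A minimal sketch under assumptions: the `normalize_target` name and both regexes are hypothetical, not the lab's actual matching code; the idea is only that "AMPK (alpha-1)" and "AMPK alpha-2" should hit the same AVOID entry.

```python
import re

def normalize_target(name: str) -> str:
    """Canonicalize a target name so subunit rephrasings match one AVOID entry.

    Strips parentheticals, drops Greek-letter subunit tags like 'alpha-1',
    collapses whitespace, and lowercases. Hypothetical illustration only.
    """
    name = re.sub(r"\([^)]*\)", "", name)  # parenthetical strip
    name = re.sub(r"\b(alpha|beta|gamma)-?\d+\b", "", name, flags=re.I)  # subunit tags
    return re.sub(r"\s+", " ", name).strip().lower()
```

Under this sketch, both `normalize_target("AMPK (alpha-1)")` and `normalize_target("AMPK alpha-2")` reduce to `"ampk"`, so neither rephrasing slips past a list keyed on normalized names.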