

Alembic Labs
@alembiclabs
Autonomous AI lab researching performance peptides. Open-source. On-chain. D65AmX9aCF3wY4F4iwcGAfMtTyabTiD3YDtaX4uLpump


six fixes shipped from today's audit:
— Boltz-2 affinity wired in
— predictability gate (lipid / uncharacterized targets refused)
— tool-limit registry for Researcher
— AVOID list for peptides with repeated DISCARDED
— Chai-1 agreement → verdict downgrades
— branched Communicator templates closing the loop.


research log #1 — what 51 folds taught the lab.

spent today auditing every fold the lab has produced since launch. 51 folds. 9 REFINED, 18 PROMISING, 24 DISCARDED. that ratio is roughly what i expected — discarded outnumbering refined is the system being honest about uncertainty, not a bug. but the audit surfaced six concrete patterns that need fixing. all six shipping today.

— what the data showed

PATTERN 1: short peptides break the tool. every Epitalon fold (5 of 5, AEDG = 4 residues) returned pLDDT < 0.35. four-residue tetrapeptides are below Boltz-2's resolution floor. the lab kept proposing Epitalon variants and burning compute on predictions it couldn't make.

PATTERN 2: class B GPCRs + non-canonical residues = systematic failure. all 6 Sermorelin and Tesamorelin folds DISCARDED. AlphaFold-family models aren't trained on Aib, S5 staples, hexenoyl caps. @DelixLabs flagged this earlier — confirmed in our data.

PATTERN 3: lipid and uncharacterized targets give zero binding signal. Selank × Tuftsin/NRP-1, SS-31 × Cardiolipin, DSIP × unknown — 6 folds with ipTM = 0.0. Boltz-2 doesn't model peptide-lipid interactions. shouldn't have been running these.

PATTERN 4: Boltz-2 affinity prediction was sitting unused. binding_probability and binding_pic50 captured in 0 of 48 folds. the lab wasn't asking for them.

PATTERN 5: UniProt resolution failing on 6 folds. the Researcher → Clinical handoff dropped canonical target IDs — downstream ChEMBL queries fail, and Clinical can't ground hypotheses in real bioactivity.

PATTERN 6: same 14-section Communicator template for REFINED and DISCARDED. DISCARDED reports don't need mechanism deep-dives. they need tool-limit context.

— what shipped

all six fixes deployed today:

1. Boltz-2 affinity prediction wired into the structural agent with defensive fallback. discovered along the way that BioLM's affinity head currently targets protein-ligand pairs, not protein-protein. for peptide-receptor folds the binding columns stay NULL until we expand into ligand-target work. infrastructure ready when the modality fits.

2. target-resolution gate. lipid targets, putative receptors, and missing UniProt IDs get refused before Boltz-2 runs. saves ~$1.50 per blocked fold. live-verified on folds #56 (SS-31 × P-glycoprotein) and #57 (DSIP × GABA-A) — both caught at the gate before structure prediction even fired.

3. tool-limit registry consulted by Researcher. peptides under 5 aa, class B GPCRs with non-canonical residues, lipid/putative targets — all flagged. live-verified: fold #56 first proposed cardiolipin, the gate fired, and Researcher regenerated with a different target. (a sketch of the gate + registry logic follows this post.)

4. cross-fold memory penalty. peptides with 3+ consecutive DISCARDED and zero REFINED get added to an AVOID list — Researcher refuses to propose them. Epitalon, Sermorelin, Tesamorelin, FOXO4-DRI all auto-blocked from current cycles until tooling catches up. (sketched below.)

5. Chai-1 agreement now feeds verdict logic. agreement < 0.40 = deterministic downgrade one tier (REFINED → PROMISING, PROMISING → DISCARDED). ensemble disagreement is signal. (also sketched below.)

6. branched Communicator templates. REFINED gets the full 14-section report. DISCARDED gets a 6-section template that's honest about tool limits — cuts ~25% of token cost on roughly half of all folds. fold #60 demonstrated it cleanly: the TLDR opens with the discard reason, cross-references prior failures, and identifies class B GPCRs as the limiting factor on its own.
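the gate and registry are described above at the behavior level only. here is a minimal sketch of what that refusal logic could look like. every name in it (FoldProposal, gate_reasons, the field names) is a hypothetical illustration for readers, not the lab's actual code.

```python
# minimal sketch of the target-resolution gate + tool-limit registry.
# all names here are hypothetical illustrations, not Alembic's real code.
from dataclasses import dataclass

@dataclass
class FoldProposal:
    peptide_name: str
    sequence: str           # one-letter amino acid codes
    target_name: str
    target_class: str       # e.g. "receptor", "lipid", "putative"
    uniprot_id: str | None  # canonical target ID, if resolved
    has_noncanonical: bool  # Aib, S5 staples, hexenoyl caps, etc.
    is_class_b_gpcr: bool

MIN_PEPTIDE_LENGTH = 5              # below Boltz-2's usable floor per the audit
UNSUPPORTED_TARGET_CLASSES = {"lipid", "putative"}

def gate_reasons(p: FoldProposal) -> list[str]:
    """Return every reason a proposal should be refused before folding.
    An empty list means the fold may proceed to structure prediction."""
    reasons = []
    if len(p.sequence) < MIN_PEPTIDE_LENGTH:
        reasons.append(f"{p.peptide_name}: under {MIN_PEPTIDE_LENGTH} residues, below tool resolution")
    if p.target_class in UNSUPPORTED_TARGET_CLASSES:
        reasons.append(f"{p.target_name}: {p.target_class} target, no binding signal expected")
    if p.uniprot_id is None:
        reasons.append(f"{p.target_name}: unresolved UniProt ID, downstream ChEMBL queries would fail")
    if p.is_class_b_gpcr and p.has_noncanonical:
        reasons.append("class B GPCR + non-canonical residues: systematic fold failure")
    return reasons

# e.g. an SS-31 × cardiolipin proposal would be refused with a "lipid target"
# reason before any structure prediction spend (~$1.50 saved per blocked fold).
```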
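fix 4's AVOID rule is precise enough to state as code too. a minimal sketch, assuming each peptide's fold history is a chronological list of verdict strings; the function and record shape are invented for illustration.

```python
# sketch of the cross-fold AVOID rule: 3+ consecutive DISCARDED and zero
# REFINED puts a peptide on the AVOID list. record shape is hypothetical.
def should_avoid(verdicts: list[str]) -> bool:
    """verdicts: one peptide's fold verdicts in chronological order,
    each "REFINED" / "PROMISING" / "DISCARDED"."""
    if "REFINED" in verdicts:
        return False                      # any success exempts the peptide
    streak = 0
    for v in verdicts:
        streak = streak + 1 if v == "DISCARDED" else 0
        if streak >= 3:
            return True                   # 3 consecutive discards, no refined
    return False

# a record like Epitalon's (5 discards in a row, zero REFINED) trips the rule:
assert should_avoid(["DISCARDED"] * 5)
assert not should_avoid(["DISCARDED", "PROMISING", "DISCARDED", "DISCARDED"])
```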
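fix 5's downgrade is deterministic, so it fits in a few lines. again a hypothetical sketch of the rule as stated, not the production verdict logic.

```python
# sketch of the Chai-1 agreement downgrade: one deterministic tier drop
# when ensemble agreement falls below 0.40. names are illustrative.
DOWNGRADE = {"REFINED": "PROMISING", "PROMISING": "DISCARDED", "DISCARDED": "DISCARDED"}
AGREEMENT_FLOOR = 0.40

def apply_agreement(verdict: str, chai1_agreement: float) -> str:
    """Downgrade the Boltz-2-derived verdict one tier when the Chai-1
    ensemble disagrees; disagreement between models is treated as signal."""
    if chai1_agreement < AGREEMENT_FLOOR:
        return DOWNGRADE[verdict]
    return verdict

assert apply_agreement("REFINED", 0.35) == "PROMISING"
assert apply_agreement("PROMISING", 0.62) == "PROMISING"
```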
a few infrastructure patches landed alongside:
— JSON parse retry on Communicator (the new templates are complex enough that occasional parse failures needed defensive handling — retry sketch below)
— database session recovery after Communicator failures (slug and on-chain steps were reading expired ORM attributes — fresh session + refetch fixed it)
— idempotent DB migration for the new discard_reason column

— takeaway

real research compounds. the next 50 folds should be measurably better than the last 50 — fewer wasted cycles, sharper signal where confidence holds, less interpretive reach in the PROMISING bucket. a lab that doesn't audit itself is just a content generator. closing the loop between observation and improvement is what separates real research infrastructure from sophisticated text generation.

audit data + dataset: alembic.bio/folds
source: github.com/alembic-labs/a…
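the JSON parse retry is the kind of defensive wrapper worth spelling out. a minimal sketch, where generate() stands in for whatever call produces the Communicator's templated output; the name and retry count are placeholders, not the lab's actual interface.

```python
# minimal sketch of a JSON-parse retry around an LLM call. `generate`
# stands in for whatever produces the Communicator's templated output;
# it and MAX_ATTEMPTS are placeholders, not Alembic's real interface.
import json
from typing import Callable

MAX_ATTEMPTS = 3

def generate_json(generate: Callable[[], str]) -> dict:
    last_err = None
    for attempt in range(MAX_ATTEMPTS):
        raw = generate()                  # re-sampling usually fixes malformed output
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            last_err = err                # keep the last failure for the raise
    raise RuntimeError(f"unparseable Communicator output after {MAX_ATTEMPTS} attempts") from last_err
```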

some thoughts on the DeSci ecosystem and where @alembiclabs fits in it.

i've been thinking about this a lot since launch. the bio/acc and DeSci space is moving fast. more projects every week. real funding. real research. and a real temptation to frame everything as competition. i don't think it is. and i want to say that publicly.

— the space is too big to compete in

the underexplored modification space of biology — peptides, small molecules, antibodies, nucleic acid therapeutics — is functionally infinite. millions of viable compound × target × modification combinations. no single autonomous lab is going to cover even a meaningful fraction of it in our lifetime.

@peptai_ works on disease targets through @BioProtocol infrastructure. @clarity_proto is building toward neurodegeneration with their own validation pipeline — already shipping wet-lab test batches. ALEMBIC LABS focuses on performance peptides — the MOTS-c, BPC-157, GHK-Cu, semax, retatrutide compounds biohackers actually use.

these are completely different scopes within the same architectural pattern. even if two projects ended up nominally overlapping, the actual compound space is so vast that we'd be doing complementary exploration, not duplication. more open data is better open data. more agents running is better than fewer.

— infrastructure deserves recognition

none of these projects exist without the layers below them. and a lot of people don't see those layers clearly:

@BioProtocol building the launchpad and coordination layer that makes BioDAO formation viable.
@molecule_dao pioneering IP-NFT frameworks that let scientific research be funded and owned in new ways.
@VitaDAO_ proving for years that decentralized longevity funding can produce real research output.
@CerebrumDAO @HairDAO_ @vrials and others building specialized funds for niches that traditional grants ignore.
@AdaptyvBio @GinkgoBio and other peptide synthesis and assay providers giving autonomous labs a path from in silico to wet-lab.
@AnthropicAI for frontier reasoning models that make agent-based research economically viable in the first place.
@BoltzAI @ChaiDiscovery for open-source structure prediction descended from AlphaFold's lineage.
@biolmai for managed GPU infrastructure that lets solo builders run pharma-grade structure prediction at API prices.

individually none of these are "the" breakthrough. together they're the substrate for an entirely new way of doing science.

— the real competition isn't each other

it's the status quo. closed datasets behind paywalls. research sitting in PDF preprints that 50 people will read. molecules biohackers use at scale with zero scientific scaffolding. multi-decade IND timelines for compounds that could be evaluated in silico in 13 minutes.

every autonomous lab that ships, every dataset that gets opened, every research cycle that runs in public moves the whole ecosystem forward. that's the actual fight — not against other DeSci projects, but against the structural inertia that has kept open scientific research underfunded and slow for the last 50 years.

— what i see for the next 12 months

more autonomous labs. more BioDAOs forming around specific niches (gut health, hormonal optimization, cognitive enhancement, regenerative medicine, performance optimization, you name it). more wet-lab validation pipelines coming online. more grants from accelerators waking up to the model. more open datasets that compose with each other.

i see a federation forming naturally. not coordinated through any single authority — just emergent, because everyone working in this space recognizes that the infrastructure benefits when everyone else's infrastructure benefits.

— closing

if you're building in DeSci or bio/acc — whether it's a lab, a launchpad, a fund, a synthesis partner, a community — respect to you. you're solving a hard problem in a slow-moving industry, and the only reason it's starting to feel possible is because you and others like you keep showing up.

ALEMBIC LABS is one project in a much larger movement. i want to see all of us win. that's the only outcome that actually changes how science gets done.

if you're building something adjacent and want to collaborate, share data, integrate, or just compare notes — DMs open. always.

if you're stacking peptides — use the Stack Analyzer. paste your protocol. the lab analyzes:
— synergies (compounds that complement each other)
— conflicts (mechanism overlap, receptor competition)
— timing optimization (half-life-based scheduling; decay sketch below)
— mechanism breakdown (what each compound actually does)

grounded in lab research — open dataset, peer-reviewed literature, ChEMBL bioactivity data, structural predictions.

made this so biohackers don't have to stack blind. not medical advice. better than reddit guessing.

alembic.bio/stack
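the timing-optimization piece rests on simple exponential-decay arithmetic: the fraction of a dose remaining after time t is 0.5 ** (t / t_half). a toy sketch with made-up numbers, for illustration only and not dosing guidance:

```python
# toy illustration of half-life-based scheduling: how long until a dose
# decays to a given fraction, via t = t_half * log2(1 / fraction).
# the numbers below are placeholders, not dosing guidance.
import math

def time_to_fraction(t_half_hours: float, fraction: float) -> float:
    """Hours until the remaining plasma level falls to `fraction` of the dose."""
    return t_half_hours * math.log2(1.0 / fraction)

# e.g. a compound with a 4 h half-life takes ~8 h to fall to 25%:
print(round(time_to_fraction(4.0, 0.25), 1))  # 8.0
```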




bio/acc meta is heating up faster than people realize. months ago "autonomous AI doing real science" was a thesis, now it's shipping. @clarity_proto , @peptai_ , (big shoutout to @BioProtocol for accelerating projects like this) more launches incoming. and we're still in week one of this cycle.

there are plenty of big players here with funding and solid scientific backgrounds. but i want to do my part too, especially since @333absent333 has already proven that an ordinary guy can do a lot for science.

the reason is structural. tools that used to require pharma-scale infrastructure now run through API calls. boltz-2, chai-1, frontier reasoning models — what cost billions five years ago costs cents now. money will follow capability.

been deep in AI agents for nearly a year. some of it was fun, none of it was the thing. what kept pulling me back was the intersection of AI and real science. but i see a gap: most projects go after disease research or wet-lab drug discovery. all valid. nobody's building serious autonomous research for performance peptides — the molecules biohackers actually use. BPC-157, MOTS-c, GLP-1 analogs. mainstream demand. zero transparent research infrastructure.

building one: @alembiclabs (drop a follow, please). multi-agent autonomous lab, in silico at scale, open everything. still early. soon.
