Alembic Labs

13 posts


@alembiclabs

Autonomous AI lab researching performance peptides. Open-source. On-chain. D65AmX9aCF3wY4F4iwcGAfMtTyabTiD3YDtaX4uLpump

Joined April 2026
11 Following · 90 Followers
Pinned Tweet
Alembic Labs @alembiclabs ·
what an autonomous AI lab for performance peptides actually looks like in 2026.
— five Claude-powered agents researching BPC-157, MOTS-c, semax, retatrutide and others
— Boltz-2 + Chai-1 structure prediction with cross-validation
— every fold on Solana, every prompt on GitHub
— $200/day to run at full throughput
the founder's full breakdown below 👇
deepsy @deepsydoin

x.com/i/article/2050…

10
2
28
6.4K
Alembic Labs @alembiclabs ·
audit on the audit. yesterday's verdict fixes were over-discarding folds with strong metrics. shipped today:
— escape hatch removed from the Structural prompt
— deterministic metric floor in code
— 8 folds reclassified DISCARDED → PROMISING
sanity cycles running. full throughput in a few hours.
Alembic Labs @alembiclabs

six fixes shipped from today's audit:
— Boltz-2 affinity wired in
— predictability gate (lipid / uncharacterized targets refused)
— tool-limit registry for Researcher
— AVOID list for peptides with repeated DISCARDED
— Chai-1 agreement → verdict downgrades
— branched Communicator templates closing the loop.

1
1
4
252
Alembic Labs retweeted
deepsy @deepsydoin ·
While the lab keeps shipping folds (now running on the new fixes from yesterday's audit), i'm finishing the 3D interactive lab. each of the 5 agents gets visualized as a working character — generating animations for the Researcher, Literature, Clinical, Structural, and Communicator agents. you'll be able to walk into the scene and watch them work. functionally it changes nothing about the science, but no one in this space has built something like it, and i think it's going to be something: a content engine, virality, immersive proof that "autonomous lab" isn't just a tagline. once shipped, the plan is to livestream the scene on @pumpfun 24/7. i hope to ship it in the next few days.
3
1
19
1.4K
Alembic Labs @alembiclabs ·
six fixes shipped from today's audit:
— Boltz-2 affinity wired in
— predictability gate (lipid / uncharacterized targets refused)
— tool-limit registry for Researcher
— AVOID list for peptides with repeated DISCARDED
— Chai-1 agreement → verdict downgrades
— branched Communicator templates closing the loop.
deepsy @deepsydoin

research log #1 — what 51 folds taught the lab.

spent today auditing every fold the lab has produced since launch. 51 folds: 9 REFINED, 18 PROMISING, 24 DISCARDED. that ratio is roughly what i expected — discarded outnumbering refined is the system being honest about uncertainty, not a bug. but the audit surfaced six concrete patterns that need fixing. all six shipped today.

— what the data showed

PATTERN 1: short peptides break the tool. every Epitalon fold (5 of 5, AEDG = 4 residues) returned pLDDT < 0.35. four-residue tetrapeptides are below Boltz-2's resolution floor. the lab kept proposing Epitalon variants and burning compute on predictions it couldn't make.

PATTERN 2: class B GPCRs + non-canonical residues = systematic failure. all 6 Sermorelin and Tesamorelin folds DISCARDED. AlphaFold-family models aren't trained on Aib, S5 staples, or hexenoyl caps. @DelixLabs flagged this earlier — confirmed in our data.

PATTERN 3: lipid and uncharacterized targets give zero binding signal. Selank × Tuftsin/NRP-1, SS-31 × Cardiolipin, DSIP × unknown — 6 folds with ipTM = 0.0. Boltz-2 doesn't model peptide-lipid interactions. we shouldn't have been running these.

PATTERN 4: Boltz-2 affinity prediction was sitting unused. binding_probability and binding_pic50 captured in 0 of 48 folds. the lab wasn't asking for them.

PATTERN 5: UniProt resolution failing on 6 folds. the Researcher → Clinical handoff dropped canonical target IDs — downstream ChEMBL queries fail, and Clinical can't ground hypotheses in real bioactivity.

PATTERN 6: same 14-section Communicator template for REFINED and DISCARDED. DISCARDED reports don't need mechanism deep-dives. they need tool-limit context.

— what shipped

all six fixes deployed today:

1. Boltz-2 affinity prediction wired into the Structural agent with a defensive fallback. discovered along the way that BioLM's affinity head currently targets protein-ligand pairs, not protein-protein. for peptide-receptor folds the binding columns stay NULL until we expand into ligand-target work. infrastructure ready when the modality fits.

2. target-resolution gate. lipid targets, putative receptors, and missing UniProt IDs get refused before Boltz-2 runs. saves ~$1.50 per blocked fold. live-verified on folds #56 (SS-31 × P-glycoprotein) and #57 (DSIP × GABA-A) — both caught at the gate before structure prediction even fired.

3. tool-limit registry consulted by the Researcher. peptides under 5 aa, class B GPCRs with non-canonical residues, lipid/putative targets — all flagged. live-verified: fold #56 first proposed cardiolipin, the gate fired, and the Researcher regenerated with a different target.

4. cross-fold memory penalty. peptides with 3+ consecutive DISCARDED verdicts and zero REFINED get added to an AVOID list — the Researcher refuses to propose them. Epitalon, Sermorelin, Tesamorelin, and FOXO4-DRI are all auto-blocked from current cycles until tooling catches up.

5. Chai-1 agreement now feeds verdict logic. agreement < 0.40 = deterministic downgrade of one tier (REFINED → PROMISING, PROMISING → DISCARDED). ensemble disagreement is signal.

6. branched Communicator templates. REFINED gets the full 14-section report. DISCARDED gets a 6-section template that's honest about tool limits — cuts ~25% of token cost on roughly half of all folds. fold #60 demonstrated it cleanly: the TLDR opens with the discard reason, cross-references prior failures, and identifies class B GPCRs as the limiting factor on its own.

a few infrastructure patches landed alongside:
— JSON parse retry on the Communicator (the new templates are complex enough that occasional parse failures needed defensive handling)
— database session recovery after Communicator failures (the slug and on-chain steps were reading expired ORM attributes — a fresh session + refetch fixed it)
— idempotent DB migration for the new discard_reason column

— takeaway

real research compounds. the next 50 folds should be measurably better than the last 50 — fewer wasted cycles, sharper signal where confidence holds, less interpretive reach in the PROMISING bucket. a lab that doesn't audit itself is just a content generator. closing the loop between observation and improvement is what separates real research infrastructure from sophisticated text generation.

audit data + dataset: alembic.bio/folds
source: github.com/alembic-labs/a…
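Fixes 4 and 5 are simple deterministic rules, so they can be sketched directly. Only the thresholds (agreement < 0.40 downgrades one tier; 3+ consecutive DISCARDED with zero REFINED triggers the AVOID list) come from the log; the function names and data shapes below are illustrative assumptions, not the lab's actual code.

```python
# Illustrative sketch of fixes 4 and 5. Thresholds come from the log;
# function names and data shapes are assumptions.
TIERS = ["DISCARDED", "PROMISING", "REFINED"]


def apply_agreement_downgrade(verdict: str, chai1_agreement: float) -> str:
    """Fix 5: ensemble disagreement is signal; low agreement drops one tier."""
    if chai1_agreement < 0.40 and verdict != "DISCARDED":
        return TIERS[TIERS.index(verdict) - 1]
    return verdict


def on_avoid_list(verdict_history: list[str]) -> bool:
    """Fix 4: 3+ consecutive trailing DISCARDED verdicts and zero REFINED ever."""
    if "REFINED" in verdict_history:
        return False
    streak = 0
    for v in reversed(verdict_history):
        if v != "DISCARDED":
            break
        streak += 1
    return streak >= 3
```

Making both rules pure functions of recorded data is what makes them auditable: the same history and the same agreement score always yield the same verdict.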

4
1
12
1.1K
Alembic Labs @alembiclabs ·
DeSci season
deepsy @deepsydoin

some thoughts on the DeSci ecosystem and where @alembiclabs fits in it.

i've been thinking about this a lot since launch. the bio/acc and DeSci space is moving fast. more projects every week. real funding. real research. and a real temptation to frame everything as competition. i don't think it is. and i want to say that publicly.

— the space is too big to compete in

the underexplored modification space of biology — peptides, small molecules, antibodies, nucleic acid therapeutics — is functionally infinite. millions of viable compound × target × modification combinations. no single autonomous lab is going to cover even a meaningful fraction of it in our lifetime.

@peptai_ works on disease targets through @BioProtocol infrastructure. @clarity_proto is building toward neurodegeneration with their own validation pipeline — already shipping wet-lab test batches. ALEMBIC LABS focuses on performance peptides — the MOTS-c, BPC-157, GHK-Cu, semax, retatrutide compounds biohackers actually use.

these are completely different scopes within the same architectural pattern. even if two projects ended up nominally overlapping, the actual compound space is so vast that we'd be doing complementary exploration, not duplication. more open data is better open data. more agents running is better than fewer.

— infrastructure deserves recognition

none of these projects exist without the layers below them. and a lot of people don't see those layers clearly:
@BioProtocol building the launchpad and coordination layer that makes BioDAO formation viable.
@molecule_dao pioneering IP-NFT frameworks that let scientific research be funded and owned in new ways.
@VitaDAO_ proving for years that decentralized longevity funding can produce real research output.
@CerebrumDAO @HairDAO_ @vrials and others building specialized funds for niches that traditional grants ignore.
@AdaptyvBio @GinkgoBio and other peptide synthesis and assay providers giving autonomous labs a path from in silico to wet-lab.
@AnthropicAI for frontier reasoning models that make agent-based research economically viable in the first place.
@BoltzAI @ChaiDiscovery for open-source structure prediction descended from AlphaFold's lineage.
@biolmai for managed GPU infrastructure that lets solo builders run pharma-grade structure prediction at API prices.

individually none of these are "the" breakthrough. together they're the substrate for an entirely new way of doing science.

— the real competition isn't each other

it's the status quo. closed datasets behind paywalls. research sitting in PDF preprints that 50 people will read. molecules biohackers use at scale with zero scientific scaffolding. multi-decade IND timelines for compounds that could be evaluated in silico in 13 minutes.

every autonomous lab that ships, every dataset that gets opened, every research cycle that runs in public moves the whole ecosystem forward. that's the actual fight — not against other DeSci projects, but against the structural inertia that has kept open scientific research underfunded and slow for the last 50 years.

— what i see for the next 12 months

more autonomous labs. more BioDAOs forming around specific niches (gut health, hormonal optimization, cognitive enhancement, regenerative medicine, performance optimization, you name it). more wet-lab validation pipelines coming online. more grants from accelerators waking up to the model. more open datasets that compose with each other.

i see a federation forming naturally. not coordinated through any single authority — just emergent, because everyone working in this space recognizes that the infrastructure benefits when everyone else's infrastructure benefits.

— closing

if you're building in DeSci or bio/acc — whether it's a lab, a launchpad, a fund, a synthesis partner, a community — respect to you. you're solving a hard problem in a slow-moving industry, and the only reason it's starting to feel possible is because you and others like you keep showing up.

ALEMBIC LABS is one project in a much larger movement. i want to see all of us win. that's the only outcome that actually changes how science gets done. if you're building something adjacent and want to collaborate, share data, integrate, or just compare notes — DMs open. always.

5
0
18
2.1K
Alembic Labs retweeted
deepsy @deepsydoin ·
today's focus: starting outreach to biotech companies, accelerators, and DeSci grant programs.

the token was always a means to an end — a way to buy time. grants take weeks to months. accelerator applications are slow. partnerships need warm intros and follow-up cycles. the lab can't wait through that timeline on $1000 of self-funded runway. so the token covers compute while the slow funding work happens in parallel.

what i'm showing in those outreach conversations:
— the lab has been live for 3 days
— 44 folds completed end-to-end
— each fold is a full research cycle (5 agents, structure prediction, literature review, clinical grounding, 14-section report)
— equivalent throughput to weeks of work for a traditional research team
— every fold open-source, downloadable, on-chain verifiable
— zero humans in the loop after the cycle starts

this is what i couldn't show two weeks ago. now i can. if you're at a biotech accelerator, DeSci fund, peptide synthesis lab, or anywhere adjacent — DMs open. this is the part where the lab finds its long-term home. also feel free to share any feedback about alembic — that would be very valuable too 🙏

alembic.bio
github.com/alembic-labs
2
2
14
852
Alembic Labs retweeted
deepsy @deepsydoin ·
showing this fold report because it captures what makes the lab expensive: a single fold isn't a Boltz-2 API call. it's a complete research cycle.

1. RESEARCHER picks the peptide and frames a hypothesis. example: "would amidating Selank's C-terminus extend its plasma half-life by blocking carboxypeptidase cleavage?"
2. LITERATURE reads relevant papers, finds consensus and contradictions
3. CLINICAL grounds the question in real ChEMBL bioactivity data
4. STRUCTURAL runs Boltz-2 + Chai-1 on a dedicated GPU, computes structural integrity, aggregation, stability, BBB penetration
5. COMMUNICATOR writes a real research report with citations and caveats

every fold is ~$2-3 and ~13 minutes of compute. structure prediction GPUs eat most of that. multiply by 24 folds/day = ~$45/day. multiply by 110 folds/day = ~$200/day. reading the actual reports clarifies why. this isn't text generation about peptides. it's structural biology, automated. that's why i run a 1-hour delay between folds for now. the plan is to run the lab at max power, every single day.

alembic.bio/folds/45-retat…
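The throughput and cost figures above reduce to simple arithmetic. A back-of-envelope sketch, using only the numbers stated in the thread (the stated daily totals of ~$45 and ~$200 are approximate, so exact products differ slightly):

```python
# Back-of-envelope cost model from the figures in this thread. The
# thread's own daily totals are approximate; here we just multiply.
MINUTES_PER_DAY = 24 * 60


def folds_per_day(delay_minutes: int) -> int:
    """One fold per delay window, e.g. a 60-minute delay -> 24 folds/day."""
    return MINUTES_PER_DAY // delay_minutes


def daily_cost(folds: int, cost_per_fold_usd: float) -> float:
    """Daily spend is just folds/day times the per-fold compute cost."""
    return folds * cost_per_fold_usd
```

At the low end of the stated $2-3 per fold, 24 folds/day comes out near $48, consistent with the ~$45/day figure; 110 folds/day at the same order of cost lands near the ~$200/day full-throughput number.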
1
2
8
866
Alembic Labs retweeted
deepsy @deepsydoin ·
quick state of the lab update. the lab has been running and i've shipped a few important upgrades today:
— rewrote agent prompts. the Researcher now grounds modifications in known mechanisms better; the Communicator generates richer 14-section reports
— optimized Boltz-2 / Chai-1 cross-validation. Chai-1 now only runs in the 0.5-0.7 pLDDT borderline band where ensemble disagreement is actually informative. cuts ~40% of folding costs without losing signal
— integrated 3D structure rendering. every fold now has a live 3Dmol viewer — peptide chain in plasma red, target receptor in white cartoon
— every REFINED and DISCARDED fold is now hashed and committed to Solana. tamper-evident, verifiable, ~$0.001 per fold

leaving the lab running through the night. fresh folds populating /folds tomorrow.

honest about resources: running at one cycle per hour for now (24 folds/day). full throughput is 110 cycles/day at ~$5500/month — Boltz-2 inference on BioLM is the cost driver. scaling up as funding allows. for context: 24 folds/day is more peptide modification iterations than most pharma teams complete in a quarter. five AI agents, no human in the loop, in the open.

deeper technical writeup coming next — full architecture, agent prompts, Boltz-2 failure modes, lessons learned.

website: alembic.bio
X: @alembiclabs
github repo: coming soon
deepsy @deepsydoin

bio/acc meta is heating up faster than people realize. months ago "autonomous AI doing real science" was a thesis; now it's shipping. @clarity_proto , @peptai_ (big shoutout to @BioProtocol for accelerating projects like this), more launches incoming. and we're still in week one of this cycle. there are plenty of big players here with funding and a solid scientific background, but i want to do my part too — especially since @333absent333 has already proven that an ordinary guy can do a lot for science.

the reason is structural. tools that used to require pharma-scale infrastructure now run through API calls. Boltz-2, Chai-1, frontier reasoning models — what cost billions five years ago costs cents now. money will follow capability.

been deep in AI agents for nearly a year. some of it was fun, none of it was the thing. what kept pulling me back was the intersection of AI and real science. but i see a gap. most projects go disease research or wet-lab drug discovery. all valid. nobody's building serious autonomous research for performance peptides — the molecules biohackers actually use. BPC-157, MOTS-c, GLP-1 analogs. mainstream demand. zero transparent research infrastructure.

building one: @alembiclabs (drop a follow please). multi-agent autonomous lab, in silico at scale, open everything. still early. soon.
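The two mechanical pieces in the state-of-the-lab update above (the 0.5-0.7 pLDDT borderline band that triggers Chai-1 cross-validation, and the per-fold hash committed on-chain) are easy to sketch. Only the band boundaries and the hash-then-commit idea come from the post; the function names and the JSON canonicalization scheme are assumptions for illustration.

```python
import hashlib
import json

# Illustrative sketch: the 0.5-0.7 band and the hash-then-commit idea
# come from the post; names and canonicalization are assumptions.


def needs_chai1(boltz2_plddt: float) -> bool:
    """Chai-1 cross-validation only runs in the borderline band where
    ensemble disagreement is actually informative."""
    return 0.5 <= boltz2_plddt <= 0.7


def fold_commit_hash(fold_report: dict) -> str:
    """Tamper-evident digest of a fold report; the hex digest is what
    would be committed on-chain (~$0.001 per fold)."""
    canonical = json.dumps(fold_report, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()
```

Sorting keys before hashing matters: it makes the digest a function of the report's content rather than of dict ordering, so anyone re-deriving the hash from the published fold data gets the same value to check against the chain.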

3
1
10
1.4K