covenant

122 posts

@covenant_ai

One order, many covenants. @tplr_ai + @basilic_ai + @grail_ai

Bittensor · Joined August 2025
6 Following · 5.1K Followers
covenant
covenant@covenant_ai·
AI that runs its own experiments, picks what works, and improves itself overnight. No human in the loop. @Hevalon built it on @basilic_ai. 15 autonomous iterations, better model by morning. One command. This is what happens when GPU infrastructure becomes programmable.
⚡🛡️ Evan Pappas@Hevalon

I built autoresearch-rl and pointed it at GRPO fine-tuning on @basilic_ai A100s. One command. 15 iterations. Zero human intervention. 100% infrastructure success rate. GSM8K pass@1: 26% baseline to 36%. The hard part wasn't the search algorithm. It was the infrastructure.
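The loop described here (propose a change, run the experiment, keep what improves the metric, repeat with no human in the loop) can be sketched as minimal hill climbing. Everything below is illustrative: `autonomous_search`, the config fields, and the toy `evaluate` are hypothetical stand-ins, not actual autoresearch-rl code, which edits training code and launches real GPU jobs.

```python
import random

def autonomous_search(evaluate, base_cfg, n_iters=15, seed=0):
    """Hypothetical sketch of an iterate-evaluate-keep-best loop.

    evaluate: callable scoring a config (higher is better); in the real
    system this would be a full fine-tuning run plus benchmark eval.
    """
    rng = random.Random(seed)
    best_cfg, best_score = dict(base_cfg), evaluate(base_cfg)
    for _ in range(n_iters):
        # Propose a perturbed config around the current best.
        cand = dict(best_cfg)
        cand["lr"] = best_cfg["lr"] * rng.choice([0.5, 1.0, 2.0])
        cand["kl_coef"] = best_cfg["kl_coef"] * rng.choice([0.5, 1.0, 2.0])
        score = evaluate(cand)
        if score > best_score:  # pick what works, discard the rest
            best_cfg, best_score = cand, score
    return best_cfg, best_score
```

By construction the returned score never regresses below the baseline, which is what makes an unattended overnight run safe to trust in the morning.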

2 replies · 5 reposts · 48 likes · 10.3K views
covenant reposted
This Week in Startups
This Week in Startups@twistartups·
“Nobody had hope in crypto any more… But we posted about our 72B parameter run and it caught fire.” — Templar creator Sam Dare Hear about how this subnet incentivized miners to train a massive new AI model, what’s holding it back from competing with the likes of GPT-5 and Opus 4.6, and the advantages of decentralizing and democratizing the training process on a brand new TWiST. PLUS Sam’s reaction to hearing his project discussed on the @TheAllIn podcast with special guest Jensen Huang. cc: @Jason, @Lons, @tplr_ai, @DistStateAndMe
19 replies · 37 reposts · 187 likes · 30K views
covenant
covenant@covenant_ai·
Today on @TWiStartups: @DistStateAndMe on the show with @Jason. Decentralized pre-training, the Covenant ecosystem, and why open intelligence creation matters. Live at 12 PM CT.
This Week in Startups@twistartups

NVIDIA's Jensen Huang just called Covenant-72B 'a pretty crazy technical accomplishment' on the All-In Pod. We've got the man who built it: Sam Dare of Subnet 3, Templar (@tplr_ai). Plus a demo of OpenOats -- the open source AI note-taker and assistant -- from @yazins. Track it all on the live docket: thisweekinstartups.com/docket

0 replies · 10 reposts · 28 likes · 3.8K views
covenant
covenant@covenant_ai·
Today on Novelty Search: the Covenant team. Templar, Grail, Basilica. A big couple of weeks: the viral Covenant-72B announcement, Jack Clark, All-In Podcast, Fireworks citing PULSE for Cursor. We'll walk through all of it. 9PM UTC on Bittensor Discord.
Openτensor Foundaτion@opentensor

This week on Novelty Search :: SN3 @tplr_ai @covenant_ai
72B. ~1.1T tokens. Commodity internet. No centralized cluster. No whitelist. Anyone with GPUs could join or leave freely.
Special guests :: @DistStateAndMe @erfan_mhi @Hevalon @joellidin @amir_sarfi
Hosted by :: @const_reborn
9PM UTC // via Bittensor Discord

1 reply · 10 reposts · 53 likes · 4K views
covenant
covenant@covenant_ai·
Congrats to the winners of the SF Bittensor hackathon track. Cryptographic assurance, physical-world coordination, decentralized quant trading. The range of what's being built on Bittensor keeps expanding. @basilic_ai provided compute credits for the prize pool. Good to see the ecosystem supporting new builders!
Openτensor Foundaτion@opentensor

Winners from the SF Bittensor hackathon track:
🥇 @provenonce_ai :: Proof of Assurance
Cryptographic assurance for AI workflows.
🥈 @PCCProtocol :: Physical Capability Cloud Protocol
Verified coordination for physical-world services.
🥉 @tech_insignia :: Insignia
Decentralized quant trading with competitive model evaluation.

Prize pool:
+ $3500 cash
+ $3000 in @basilic_ai compute credits
+ Up to 1000 TAO discretionary investment from @UnsupervisedCap
+ 50 TAO toward mainnet registration from @CrucibleLabs
+ Intro call with @bitstarterAI

Intelligence at The Frontier partner @FundingCommons
Bittensor hackathon track produced by @redwoodnorth_io

0 replies · 3 reposts · 29 likes · 3.3K views
covenant
covenant@covenant_ai·
Research from the Covenant ecosystem providing the theoretical foundation for infrastructure behind @cursor_ai, one of the most-used AI coding tools in the world. @grail_ai (SN81) published PULSE in February. @FireworksAI_HQ cited it in their Composer 2 blog this week.
Erfan Miahi@erfan_mhi

Pretty wild to see our work on PULSE show up in a real 1T-scale post-training run done by @cursor_ai.

Cursor built Composer 2 in collaboration with Fireworks and trained it across multiple datacenters, getting huge savings by syncing only the weights that actually changed between RL checkpoints. Fireworks reports that more than 98% of BF16 weights can stay bit-identical from one checkpoint to the next, and they cited our paper on this, too.

That is basically the exact sparsity pattern we showed in our paper, where we introduced PULSE, a lossless method for 100x more efficient weight-sync communication for RL training. Their system is very close to this idea in practice: exploiting the fact that only a tiny fraction of weights actually change between RL steps.

The deeper reason for this is not that RL gradients are sparse. They are not. The gradients are still dense. What becomes sparse is the realized weight update. In RL, learning rates are tiny, and with Adam, the update size stays bounded around the learning rate. Then BF16 adds a hard threshold: if the update is too small relative to the weight, it just rounds away, and the stored weight does not change at all. So from one checkpoint to the next, most of the model literally stays identical.

That is why this is such a useful systems idea. Lower precision, like using BF16, does not just save compute. It can also save communication, because more tiny updates get absorbed and fewer weights need to be shipped. At that point, compute efficiency and comms efficiency stop being a tradeoff. They start reinforcing each other.

If you want the deeper story on why RL updates get this sparse, the theory behind it, and how to push weight-sync bandwidth down by 100x+, take a look at our paper: arxiv.org/pdf/2602.03839

The Fireworks blog on Composer 2 that cited our work: fireworks.ai/blog/frontier-…

The animation is taken from Fireworks!
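The absorption effect described above is easy to verify directly: emulate bfloat16 by truncating a float32 to its top 16 bits, apply an update far below the local BF16 resolution, and the stored weight comes back bit-identical. This is a minimal sketch, not PULSE itself; the `to_bf16` helper is ours, and it uses truncation as a stand-in for hardware round-to-nearest.

```python
import struct

def to_bf16(x: float) -> float:
    """Emulate bfloat16 storage: keep the top 16 bits of the float32
    encoding (sign + 8 exponent bits + 7 mantissa bits), zero the rest."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

# A weight representable exactly in bf16, and a tiny RL update
# (learning rate times a bounded Adam step).
w = to_bf16(0.4375)
update = 1e-6  # far below the bf16 ULP near 0.4375 (~2e-3)

w_new = to_bf16(w + update)
print(w_new == w)  # True: the update rounds away; the stored weight is unchanged
```

Because the stored bits never change, a checkpoint diff of such weights is empty, which is exactly why syncing only changed weights saves so much communication.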

3 replies · 27 reposts · 114 likes · 9K views
covenant
covenant@covenant_ai·
Open competition producing open optimization research. Crusades miners are publishing their techniques as they discover them. Selective gradient checkpointing, static shape compilation, fused cross-entropy paths. Every submission makes the next one better.
Shivam Chauhan@0hawkeye33

x.com/i/article/2034…

0 replies · 3 reposts · 17 likes · 3.6K views
covenant reposted
grail
grail@grail_ai·
PULSE made weight sync 100x faster. That turned the trainer itself into the bottleneck. @erfan_mhi just fixed that too. Grail's GRPO trainer is now 1.8x faster on a single B200: 27% to 47% MFU, epoch time nearly halved. Decentralized post-training is converging on centralized speed.
Erfan Miahi@erfan_mhi

Used autoresearch to make @grail_ai GRPO trainer 1.8x faster on a single B200.

I kept postponing this for weeks since the bottleneck in our decentralized framework was mainly communication. But after our proposed technique, PULSE, made weight sync 100x faster, the training update itself became the bottleneck. Even with a fully async trainer and inference, a slow trainer kills convergence speed.

A task that could've eaten days of my time ran in parallel while I worked on other stuff. Unlike original autoresearch, where each experiment is 5 min, our feedback loop is way longer (10-17 min per epoch + 10-60 minutes of installations and code changes), so I did minimal steering when it was heading in bad directions to avoid burning GPU hours.

The agent tried so many things that failed. But it eventually found the wins: Liger kernel, sequence packing, token-budget dynamic batching, and native FA4 via AttentionInterface. 27% to 47% MFU. 16.7 min to 9.2 min per epoch.

If you wanna dig deeper or contribute: github.com/tplr-ai/grail

We're optimizing everything at the scale of global nodes to make decentralized post-training as fast as centralized ones. Stay tuned for some cool models coming out of this effort. Cheers!
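Of the wins listed, token-budget dynamic batching is simple enough to sketch: instead of a fixed batch size, sort sequences by length and close each batch once its padded cost (longest sequence times batch size) would exceed a token budget. The function below is a hypothetical illustration of the idea, not Grail's actual trainer code, which packs real token tensors.

```python
def token_budget_batches(seq_lens, max_tokens):
    """Greedily group sequence indices into batches capped by a token budget.

    Sorting ascending means the newest sequence is always the longest in
    its batch, so the padded cost after adding it is n * (batch_size + 1).
    A single sequence longer than the budget still gets its own batch.
    """
    order = sorted(range(len(seq_lens)), key=lambda i: seq_lens[i])
    batches, current = [], []
    for i in order:
        n = seq_lens[i]  # longest in the batch once added
        if current and n * (len(current) + 1) > max_tokens:
            batches.append(current)
            current = []
        current.append(i)
    if current:
        batches.append(current)
    return batches
```

The payoff over fixed-size batching is that short sequences ride together in large batches while long outliers run nearly alone, so padding waste stays bounded and GPU utilization (MFU) goes up without ever exceeding the memory budget.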

0 replies · 16 reposts · 81 likes · 18.3K views
covenant
covenant@covenant_ai·
Sorry, everyone. TGIF #29 is cancelled this week. We will be back next Friday. See you then.
1 reply · 0 reposts · 1 like · 1.5K views
covenant
covenant@covenant_ai·
TGIF in two hours. Miners, come celebrate with us. Newcomers to the community, come say hello.
templar@tplr_ai

TGIF #29 tomorrow! The Covenant-72B thread reached well beyond Bittensor this week. @DistStateAndMe and the full @covenant_ai team talk about what that traction means, where decentralized AI sits in the broader conversation, and what comes next. Miners, come celebrate with us. We are opening the stage to anyone who contributed compute to the run or has a story to share. Request to speak and we will bring you up. x.com/i/spaces/1oKMv…

1 reply · 0 reposts · 11 likes · 4.3K views
covenant
covenant@covenant_ai·
Seven subnet ideas are heading to testnet, backed by @basilic_ai compute credits. Basilica sponsors the Ideathon because the best way to validate a subnet design is to run it, and compute should not be the bottleneck. Congratulations to all the teams advancing.
HackQuest@HackQuest_

Bittensor Subnet Ideathon — Round I Results 🏆

After reviewing an incredible set of submissions, we’re excited to announce the teams moving forward to Round II (Testnet Phase). Both our Top 7 and Honorable Mention teams will advance to the next round.

Top 7 Teams (Ranked Alphabetically):
• C-SWON — Aditya Singh
• ChronoSeek — Connor Daly
• Defektr — Hiw3
• Mentiss_AI — Jeremy Wang
• OpenMind — Bello Iteoluwakisi
• Proven — Christopher H.G
• vividverse — @vividverseai

Honorable Mentions:
• BitDefense — A.G.
• DaVinci — Chris Romano
• Keyword Intelligence Subnet (KIS) — Ozan Andaç
• Moirai Subnet — @ai_moir
• Probity — Dicky Bayu Sadewo
• Query Agent — Daniel Derefaka
• sotarad-ai — Wade
• Talos Protocol — Christopher H.G
• TensorClock — Valeriy Lihachev
• Titan — JKohav

With over 150 projects, the competition was fierce — and these teams stood out for their strong mechanism design and promising subnet ideas on @opentensor 👏

0 replies · 2 reposts · 30 likes · 7.2K views
covenant
covenant@covenant_ai·
Covenant72B preview today with @DistStateAndMe walking through the run and what comes next. Full report and model weights on Monday. @const_reborn will be joining us to celebrate this milestone in deAI history!
templar@tplr_ai

This week on TGIF, @DistStateAndMe previews the Covenant72B report ahead of the full release on Monday. He'll walk through what the training run produced, what we learned, and what comes next. Also on the agenda: multi-GPU Crusades went live this week and other updates across the ecosystem. x.com/i/spaces/1jxXg…

0 replies · 10 reposts · 28 likes · 3.7K views
covenant
covenant@covenant_ai·
Good to see strong problem-driven work coming out of the Bittensor Ideathon at @SankalpForum Africa Summit. @wadewalton13, the first-place winner, built a medical AI pre-screening subnet to address Africa's radiologist shortage! As @DistStateAndMe put it during last week's TGIF: "I was shocked to listen to people that understood Bittensor so much." We were glad to participate and sponsor the compute. Thanks to @HackQuest_ and @opentensor for organizing.
HackQuest@HackQuest_

Bittensor Subnet Ideathon at Sankalp Africa Summit: Winning Ideas 🇰🇪

Together with @opentensor, we saw sharp, problem-driven pitches from some of Africa’s strongest talent — truly an honor to support this initiative @SankalpForum.

Congrats to our winners:
🏆 1st Place: Medical AI for Pre-screening — Wade Walton
🎖️ Runner Up: Creative Subnet — David Opiyo

Special thanks to Sam @DistStateAndMe and Gareth @GarethHowe95938 for their mentorship, and our partner Andres @amarquezlara from #Ufacilitate for hosting.

Learn more 👇

0 replies · 5 reposts · 21 likes · 2.1K views