Gradients

128 posts

Gradients
@gradients_ai

The world's best AutoML platform, powered by Subnet 56 on Bittensor. https://t.co/ArqoyWjn8D

Bittensor · Joined April 2025
4 Following · 2K Followers
Gradients @gradients_ai
Open-source tournaments keep on delivering... The miners are improving nicely on the new RL environment training tasks. The Gin Rummy environment from the Affinetes GAME was especially fruitful, as seen below. After Goofspiel, ALFWorld, and Gin Rummy, we are expanding to the other Affinetes environments until we have the best training scripts for all of them.
[image]
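For context on what one of these RL training tasks involves: the tweet doesn't publish the Affinetes GAME API, but Gin Rummy and Goofspiel both ship with OpenSpiel, so here is a minimal sketch of rolling out one episode assuming an OpenSpiel-style interface. A miner's training script would score and improve a policy over many such rollouts.

```python
# Sketch only: assumes an OpenSpiel-style environment interface, not the
# actual (unpublished) Affinetes GAME API.
import random

import pyspiel

game = pyspiel.load_game("gin_rummy")
state = game.new_initial_state()

while not state.is_terminal():
    if state.is_chance_node():
        # Sample card deals and other stochastic events from the game itself.
        actions, probs = zip(*state.chance_outcomes())
        state.apply_action(random.choices(actions, weights=probs)[0])
    else:
        # Placeholder policy: a trained agent would pick the action here.
        state.apply_action(random.choice(state.legal_actions()))

print("episode returns per player:", state.returns())
```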
Gradients retweeted
const @const_reborn
People are beginning to realize that when software is cheap, all that remains is digital commodities behind APIs. Your agent needs inference (64), it needs compute (4, 51), it needs fine-tunes (56), it needs VPN IPs (65), it needs data scraping (13), it needs Twitter data (22), it needs prediction (6)... What previously took teams of hundreds, you can now orchestrate with a loop, a good prompt, and some digital commodities. It all collapses to the commodities, and you MUST get on top of them before it does. #TAO
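A purely illustrative sketch of the "loop + good prompt + commodity APIs" shape the tweet describes; every URL, endpoint, and response field below is hypothetical, standing in for whatever real clients the individual subnets expose.

```python
# Hypothetical gateway and endpoints: illustrative only, not real subnet APIs.
import requests

API = "https://commodities.example.com"

def commodity(endpoint: str, payload: dict) -> dict:
    # One digital commodity behind an API: inference, scraping, prediction, ...
    return requests.post(f"{API}/{endpoint}", json=payload, timeout=60).json()

task = "Summarize today's #TAO chatter and flag anything actionable."
answer: dict = {}
for _ in range(5):  # the loop
    tweets = commodity("twitter-data", {"query": "#TAO"})  # data commodity
    answer = commodity("inference", {"prompt": task, "context": tweets})
    if answer.get("done"):
        break
    task = answer.get("next_step", task)  # let the model refine its own task

print(answer)
```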
Gradients retweeted
covenant @covenant_ai
Basilica's first public subnet partnership since the architecture overhaul. @gradients_ai is running RL evaluations on @basilic_ai compute infrastructure. The model is symbiotic: subnets help stress-test and improve Basilica's infrastructure, and Basilica provides compute on terms that make its operations sustainable. Both networks get stronger. Value compounds rather than transfers. This is how the Covenant AI ecosystem grows: through practical integrations where the value accrues to both sides.
Quoting basilica @basilic_ai: x.com/i/article/2021…
Gradients retweeted
Quasar @QuasarModels
With the help of @const_reborn, we have developed strategic partnerships and acquired some serious engine upgrades:
☁️ Hosted on @chutes_ai
🛠️ Fine-tuning handled via @gradients_ai
🧠 64× B200s for training from @TargonCompute
Quasar × Chutes × Gradients × Targon 🤝 Loading…
[image]
Gradients retweeted
Gradients @gradients_ai
New Sheriff in Text Town! 🤙 We have a new winner in our open-source tournaments for text AutoML scripts. How did they beat the old champ?

1. Early training (100 steps with the default LR):
   - Measures training speed
   - Estimates how many runs fit in the remaining time
   - Records the loss and stops early

2. LR grid search (5-9 runs with different LRs):
   - Generates LRs spaced logarithmically around the initial LR (0.66x to 1.51x)
   - Each run trains to step 100 only
   - Records the loss and deletes poor checkpoints
   - Finds which LR gives the best early loss

3. Final training (full training with the best LR):
   - Uses the winning LR from step 2
   - Trains to completion

There were other improvements in training-time utilization, crash handling, checkpoint selection, and the general supporting infra (Redis etc.). Kudos (and emissions) to the new champ! Find the latest AutoML scripts under: github.com/gradients-open…
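A hedged sketch of the three-stage recipe described above. The real script lives in the linked repo; here train_steps() is a toy surrogate for an actual training run, and the base LR and step budget are assumed values, not the script's.

```python
# Toy end-to-end sketch of: probe -> log-spaced LR grid search -> full run.
import math

import numpy as np

def train_steps(lr: float, steps: int) -> float:
    """Toy surrogate: loss improves with steps and is U-shaped in log(lr)."""
    return max(0.05, 2.0 + (math.log10(lr) + 3.5) ** 2 - 0.0004 * steps)

BASE_LR = 2e-4        # assumed default LR
PROBE_STEPS = 100     # from the tweet: probe runs stop at step 100
TOTAL_STEPS = 5_000   # assumed full-run budget

# 1. Early training: short run at the default LR to gauge speed and baseline loss.
baseline_loss = train_steps(BASE_LR, PROBE_STEPS)

# 2. LR grid search: 5-9 LRs spaced logarithmically in [0.66x, 1.51x] of BASE_LR.
grid = BASE_LR * np.logspace(np.log10(0.66), np.log10(1.51), num=7)
probe_losses = {lr: train_steps(lr, PROBE_STEPS) for lr in grid}
best_lr = min(probe_losses, key=probe_losses.get)

# 3. Final training: full run with the winning LR.
final_loss = train_steps(best_lr, TOTAL_STEPS)
print(f"baseline {baseline_loss:.3f} -> best LR {best_lr:.2e}, final {final_loss:.3f}")
```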
Gradients retweeted
const @const_reborn
2026 is already becoming Bittensor's Year of Integration. There are no explicit incentives for teams to work together. It's just happening. "survival of the fittest" -> "survival of the collaborative"
Gradients @gradients_ai
📢 103 TAO worth of alpha buyback and burn: As promised, the collected tournament fees have been used to buy alpha and then burn it. 🔥 The extrinsic can be found on taostats: taostats.io/extrinsic/7227… Happy New Year from the Gradients team! 🤙
Gradients @gradients_ai
She knows... The latest and greatest image generation models have been added to Gradients: Z-Image and Qwen Image 🤙 Choose between them or one of the other 36 image models to fine-tune to your style, brand, or face on gradients.io - no code, just a few clicks and done. Styles and faces were already great with the previous Flux and SDXL models, but the latest models also excel with text, as shown in the images generated with the Gradients models below. These newest models are also being added to the miner tournaments, so the results will approach perfection through the power of Bittensor incentives within a few tournaments.
[3 images]
Gradients @gradients_ai
2/2 Our YaRN-extended Covenant-Chat (32k context) demonstrates what's possible when you combine extended context windows with optimized gradient-based training. Longer context means the model sees more relevant information during each training step, leading to stronger learning signals and faster convergence. YaRN is already integrated into the platform and the tournaments - the full post-training pipeline is coming to users in January 🤙
[image]
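For readers wondering what a YaRN 2k-to-32k extension looks like in practice, here is a minimal sketch using Hugging Face transformers' rope-scaling support. It assumes a recent transformers release with rope_type="yarn" and uses a hypothetical checkpoint name; the actual Covenant-Chat recipe is not published in this thread.

```python
# Minimal sketch of a YaRN 2k -> 32k context extension via rope scaling.
from transformers import AutoConfig, AutoModelForCausalLM

BASE = "org/covenant-base-2k"  # hypothetical model id

config = AutoConfig.from_pretrained(BASE)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 16.0,  # 2,048 * 16 = 32,768 tokens
    "original_max_position_embeddings": 2048,
}
config.max_position_embeddings = 32_768

# Finetuning at the longer context (as in tweet 1/2) then adapts the model
# to the rescaled positions.
model = AutoModelForCausalLM.from_pretrained(BASE, config=config)
```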
Gradients @gradients_ai
1/2 Gradients takes the best decentralized, open-source base LLM from Templar Covenant and finetunes it into a chatbot assistant that can carry a multi-turn conversation and respond sensibly to user queries. Here's how we did it:
- Chat template integration and embedding update
- YaRN context window extension from 2k to 32k
- Finetuning on a mix of SOTA open-source datasets built to push benchmarks, augmented with our own synthetic few-shot enhancement pipeline

The results speak for themselves: Gradients training slashed test loss by 75% in under one epoch. This dramatic improvement shows how efficiently our AutoML training pipeline extracts performance from your models.
[image]
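A minimal sketch of the first bullet, "chat template integration and embedding update", assuming an HF-style tokenizer and model. The ChatML-style template and special tokens are illustrative choices, not necessarily what Gradients used, and the checkpoint name is hypothetical.

```python
# Sketch: register a chat template, add its special tokens, grow embeddings.
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "org/covenant-base-2k"  # hypothetical model id
tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# 1) Chat template integration: teach the tokenizer a multi-turn format.
tok.chat_template = (
    "{% for m in messages %}<|im_start|>{{ m['role'] }}\n"
    "{{ m['content'] }}<|im_end|>\n{% endfor %}"
    "{% if add_generation_prompt %}<|im_start|>assistant\n{% endif %}"
)
tok.add_special_tokens({"additional_special_tokens": ["<|im_start|>", "<|im_end|>"]})

# 2) Embedding update: grow the embedding matrix to cover the new tokens.
model.resize_token_embeddings(len(tok))

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```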
Gradients retweeted
templar @tplr_ai
Our TGIF X Space unfortunately got cut short by a technical glitch before @samoline56 could finish his deep dive into the post-training details. Still, what we captured shows the technical approach is sound. Covenant72B Checkpoint 2 tl;dr update: our first collaboration with @gradients_ai demonstrates that cross-subnet training workflows are production-ready. Thanks to @samoline56 and the entire Gradients team for helping us continue to push the boundaries of what's possible with decentralized AI infrastructure! Watch the discussion: youtu.be/kQ9tNKk2vfM?si… Covenant72B continues training - the full run completes in the new year.
[YouTube video]
Gradients retweeted
covenant @covenant_ai
Great to see ecosystem partnerships in action. @samoline56 is joining @DistStateAndMe for a deep dive into the technical collaboration between Covenant AI and @gradients_ai. The end goal? A competitive LLM trained entirely through decentralized infrastructure on Bittensor! 🔗 x.com/i/spaces/1BdxY…
Quoting templar @tplr_ai: TGIF #19 starts in 90 minutes. A special technical collaboration deep dive with @DistStateAndMe and @samoline56 from @gradients_ai. Behind the scenes of our recent collaboration on Covenant72B, post-trained with Gradients. Where are we on the quest, and where do we go from here? x.com/i/spaces/1BdxY…
Gradients retweeted
Synapz @synapz_org
Technical collaboration stories! Is there a better way to end your week? @samoline56 and @DistStateAndMe diving deep into how partnerships work in decentralized AI on Bittensor. @covenant_ai x @gradients_ai. The future of AI is collaborative. x.com/i/spaces/1BdxY…
Quoting templar @tplr_ai: TGIF #19 starts in 90 minutes. A special technical collaboration deep dive with @DistStateAndMe and @samoline56 from @gradients_ai. Behind the scenes of our recent collaboration on Covenant72B, post-trained with Gradients. Where are we on the quest, and where do we go from here? x.com/i/spaces/1BdxY…
Gradients retweeted
covenant @covenant_ai
This is a major milestone toward the first competitive LLM trained 100% on Bittensor. @tplr_ai handled pre-training. @gradients_ai handled post-training. Two independent subnets, no central coordination required... just composable infrastructure building on each other. The ultimate goal is proving that open weights AND open training can compete with centralized alternatives. This collaboration shows the architecture works. Covenant72B pre-training continues, and each checkpoint gets us closer...
Quoting templar @tplr_ai: x.com/i/article/2001…