

Gradients
128 posts

@gradients_ai
The world's best AutoML platform, powered by Subnet 56 on Bittensor. https://t.co/ArqoyWjn8D

New Sheriff in Text Town! 🤙 We have a new winner in our open-source tournaments for text AutoML scripts. How did they beat the old champ?

1. Early Training (100 steps with the default LR):
- Measures training speed
- Estimates how many runs fit in the remaining time
- Records loss and stops early

2. LR Grid Search (5-9 runs with different LRs):
- Generates LRs logarithmically around the initial LR (0.66x to 1.51x)
- Each run trains to step 100 only
- Records loss, deletes poor checkpoints
- Finds which LR gives the best early loss

3. Final Training (full training with the best LR):
- Uses the winning LR from step 2
- Trains to completion

There were other improvements in training-time utilization, crash handling, checkpoint selection, and the general supporting infrastructure (Redis etc.). Kudos (and emissions) to the new champ! Find the latest AutoML scripts under: github.com/gradients-open…
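The LR grid search described above can be sketched roughly as follows. This is a minimal illustration, not the actual tournament script: the function names and the probe interface are my own, and only the numbers stated in the post (a logarithmic grid from 0.66x to 1.51x of the base LR, short 100-step probe runs) are taken from it.

```python
import math

def lr_grid(base_lr: float, n_runs: int = 7) -> list[float]:
    """Candidate learning rates spaced logarithmically around base_lr,
    spanning 0.66x to 1.51x as described in the post."""
    lo, hi = math.log(0.66), math.log(1.51)
    step = (hi - lo) / (n_runs - 1)
    return [base_lr * math.exp(lo + i * step) for i in range(n_runs)]

def pick_best_lr(base_lr: float, probe) -> float:
    """Run a short probe (e.g. 100 training steps) at each candidate LR
    and return the candidate with the lowest early loss.

    `probe` maps an LR to its recorded step-100 loss; here it stands in
    for the real training run."""
    return min(lr_grid(base_lr), key=probe)
```

The winning LR from the probe phase would then be fed into the full final training run.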

Introducing Bittensor’s breakthrough conference: Exploit. March 30-31st San Francisco Tickets on the link below 👇 Break it down. Build it better.
TGIF #19 starts in 90 minutes. A special technical collaboration deep dive with @DistStateAndMe and @samoline56 from @gradients_ai. Behind the scenes of our recent collaboration on Covenant72B, post-trained with Gradients. Where are we on the quest, and where do we go from here? x.com/i/spaces/1BdxY…