Hone Training

14 posts


@traininghone

High Compression AI Pretraining · Subnet 5 on Biττensor · powered by @manifoldlabs & @latentholdings

United States · Joined August 2025
12 Following · 646 Followers
Pinned Tweet
Hone Training @traininghone
Big news! 🗞️ 🌊 We just merged the first version of the codebase for @bittensor subnet 5: github.com/manifold-inc/h… and we are opening up 1% incentive for our live production test! The validator is now running. 1/3 🧵
2 replies · 10 reposts · 42 likes · 5K views
Hone Training reposted
Targon @TargonCompute
Introducing Secure Targon Compute. Today, we're giving developers the ability to rent H200 and CPU nodes and to scale workloads as needed, all secured by the Targon Virtual Machine (TVM). Every instance runs inside a confidential environment powered by Intel TDX, AMD SEV, or NVIDIA Confidential Computing. Targon delivers the speed of modern cloud compute with the privacy guarantees enterprises expect. targon.com
5 replies · 18 reposts · 98 likes · 19.9K views
Hone Training @traininghone
The most important thing is to expose the endpoints needed for receiving the ARC-AGI problems; as for the solution itself, you can do whatever you want. 🫱🏻‍🫲🏼 3/3 🧵
0 replies · 0 reposts · 6 likes · 400 views
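As a rough illustration of what "exposing the endpoints" could mean for a miner, here is a minimal sketch: a handler that accepts an ARC-style task as JSON and returns a predicted grid. The payload shape and function names are assumptions for illustration, not the subnet's actual API; the placeholder solver just echoes the test input and would be replaced by a real model.

```python
import json


def solve_task(task: dict) -> list[list[int]]:
    """Placeholder solver: echo the test input grid unchanged.
    A real miner would swap in an LLM call or a custom model here."""
    return task["test"][0]["input"]


def handle_request(body: str) -> str:
    """Hypothetical request handler: JSON task in, JSON prediction out.
    In practice this would sit behind whatever HTTP framework the miner uses."""
    task = json.loads(body)
    prediction = solve_task(task)
    return json.dumps({"prediction": prediction})
```

For example, posting `{"test": [{"input": [[1, 0], [0, 1]]}]}` to such an endpoint would return that same grid as the prediction until a real solver is plugged in.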
Hone Training @traininghone
There are no constraints on miner solutions. E.g., the miner starter code uses an OpenAI key to solve the @arcprize (ARC-AGI 2) problems, and you can:
- use another provider
- do more prompt engineering
- finetune your own LLM
- try new methods like HRMs
2/3 🧵
1 reply · 0 reposts · 10 likes · 487 views
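One way to keep the solution provider-agnostic, as the tweet suggests, is to separate prompt construction from the API call. The sketch below (the task layout and function names are assumptions, not the starter code's actual interface) serializes an ARC-style task into plain text that any chat-completion endpoint can consume:

```python
def grid_to_text(grid: list[list[int]]) -> str:
    """Render an ARC grid as rows of space-separated digits."""
    return "\n".join(" ".join(str(c) for c in row) for row in grid)


def build_prompt(task: dict) -> str:
    """Build a provider-agnostic prompt from a hypothetical ARC-style task
    dict: 'train' input/output pairs plus one 'test' input."""
    parts = ["Infer the transformation from the examples, then solve the test input."]
    for i, pair in enumerate(task["train"], 1):
        parts.append(f"Example {i} input:\n{grid_to_text(pair['input'])}")
        parts.append(f"Example {i} output:\n{grid_to_text(pair['output'])}")
    parts.append(f"Test input:\n{grid_to_text(task['test'][0]['input'])}")
    return "\n\n".join(parts)
```

The resulting string can then be sent to any backend (OpenAI, another hosted provider, or a local finetune), so swapping providers only changes the final API call, not the prompting logic.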
Hone Training reposted
Mark Jeffrey @markjeffrey
Hash Rate - Ep 136: Hone ($TAO Subnet 5) Chasing AGI 🧙
Guest: @0xcarro of @traininghone
Can a Bittensor subnet crack AGI before Elon or Sam? Hone (SN5) thinks it can.
00:00 Intro to Hone and AGI
02:52 Understanding the Arc AGI Benchmark
08:48 Introducing JEPA & Hone's Approach to AGI
23:31 The Data Rich Get Data Richer
26:14 The Most Ferocious Form of Capitalism
32:14 Final Thoughts and Future Directions
7 replies · 13 reposts · 63 likes · 14.1K views
Hone Training @traininghone
GM ☀️ Latest @arcprize results:
GPT-5: 9% vs 7% on the public bench
Grok 4: 22% vs 16% on the public bench
Gonna keep optimizing; in parallel we should run the mainnet test soon so we can launch! 🔥🚀
2 replies · 4 reposts · 24 likes · 8.9K views
JJ @JosephJacks_
Bittensor Subnet 5 is permissionlessly incentivizing hill-climbing of ARC-AGI … If the top incentive mechanisms are any indication (64, 62, 4, etc), we will see Bittensor win the @ARCPrize. 🥇 👀 🔥
Hone Training @traininghone

Update on Subnet 5: 🗞️🪡🧶 The core of our synthetic data generation is ARC AGI 1; the difference between that and ARC AGI 2 is the difficulty & complexity of the tasks. For this we use github.com/google/ARC-GEN for ARC AGI 1 task generation 🧵 1/5

5 replies · 11 reposts · 73 likes · 10.2K views
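The general idea behind ARC-AGI-1-style synthetic generation is to pick a hidden transformation and apply it to random input grids, yielding input/output pairs whose rule a solver must infer. A minimal sketch of that idea follows; it is not ARC-GEN's actual API, and the single rotate-90° rule stands in for the much richer rule library a real generator draws from.

```python
import random


def rotate90(grid: list[list[int]]) -> list[list[int]]:
    """Rotate a grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]


def make_task(rng: random.Random, n_train: int = 3, size: int = 3) -> dict:
    """Generate an ARC-style task whose hidden rule is 'rotate 90° clockwise'.
    Real generators vary the rule, grid sizes, and color palettes per task."""
    def random_grid() -> list[list[int]]:
        return [[rng.randrange(10) for _ in range(size)] for _ in range(size)]

    train = [{"input": g, "output": rotate90(g)}
             for g in (random_grid() for _ in range(n_train))]
    test_in = random_grid()
    return {"train": train, "test": [{"input": test_in, "output": rotate90(test_in)}]}
```

Dialing difficulty up toward ARC-AGI-2-style tasks then amounts to composing more transformations and making the hidden rule harder to infer from the train pairs.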
Hone Training @traininghone
For ARC AGI 2, we are still tweaking the difficulty of the transformations so that our benchmark lines up well with the public one; for now we're at 28% with `GPT-5` (vs 7% on the public benchmark). More soon! 🦾🧵 5/5
1 reply · 0 reposts · 11 likes · 400 views
Hone Training @traininghone
For ARC AGI 1 (the 1st part of our pipeline):
- `Gpt5-medium`: 57% on our synthetics vs 56% on the public ARC benchmark
- `O3-medium`: 49% on our synthetics vs 53% on the public ARC benchmark
- `Grok-4`: 60% on our synthetics vs 67% on the public ARC benchmark
🧵 4/5
1 reply · 0 reposts · 10 likes · 492 views