Openτensor Foundaτion

1.9K posts

@opentensor

Incentivizing intelligence

Joined June 2021
1 Following · 170.3K Followers
Pinned Tweet
Openτensor Foundaτion @opentensor ·
The largest decentralised LLM pre-training run in history. SN3 @tplr_ai trained Covenant-72B across 70+ contributors on open internet infrastructure. Now it’s being discussed by @chamath with @nvidia CEO Jensen Huang. Distributed, open-weight model training on Bittensor is getting started.
52 replies · 305 reposts · 1.3K likes · 65.5K views
Openτensor Foundaτion reposted
Data Universe ・ SN13 @Data_SN13 ·
Introducing `dv` - a Rust CLI for querying real-time social data from X & Reddit. Powered by Bittensor SN13's decentralized miner network.

```
dv search x -k bitcoin -l 100
```

One command. Live data. No middleman. Open source. Built for agents. 🧵👇
8 replies · 27 reposts · 281 likes · 14.6K views
Openτensor Foundaτion reposted
mogmachine (ττ) @mogmachine ·
Preparing a talk for #Bittensor #Breakout in SF (@bt_commons ). The topic... "State of Bittensor and how we capitalize and become a household name." The honest answer? We earn it the same way every technology platform did. Not through the protocol, but through the products built on top of it. Get your tickets here: luma.com/v5ujk0gv
1 reply · 6 reposts · 48 likes · 1.6K views
Openτensor Foundaτion @opentensor ·
RT @const_reborn: Surely the idea of universal human rights has been shattered by now. The behavior of our governments and the people pulli…
0 replies · 32 reposts · 0 likes · 104 views
Openτensor Foundaτion reposted
templar @tplr_ai ·
On the @theallinpod this week, @chamath asked @nvidia CEO Jensen Huang about decentralized AI training, calling our Covenant-72B run "a pretty crazy technical accomplishment." One correction: it's 72 billion parameters, not four. Trained permissionlessly across 70+ contributors on commodity internet. The largest model ever pre-trained on fully decentralized infrastructure. Jensen's answer is worth hearing too.
71 replies · 329 reposts · 1.4K likes · 286.5K views
Openτensor Foundaτion reposted
templar @tplr_ai ·
Crusades is doing exactly what it was designed to do. Miners competing on the same hardware, same model, and the MFU ceiling keeps rising. 63% on 2xA100 and the techniques are getting shared openly. The leaderboard is live. tplr.ai/tournament
Quoting Shivam Chauhan @0hawkeye33
x.com/i/article/2034…
2 replies · 10 reposts · 92 likes · 8.6K views
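The 63% figure in the post above is MFU (Model FLOPs Utilization): achieved model FLOP/s divided by the hardware's peak FLOP/s. A minimal sketch of the standard calculation; the 1.5B parameter count and the token throughput below are assumed purely for illustration (the post states neither):

```python
# MFU (Model FLOPs Utilization): achieved model FLOP/s over peak
# hardware FLOP/s. Standard training estimate: ~6 FLOPs per
# parameter per token (forward + backward pass).

def mfu(params: float, tokens_per_sec: float,
        num_gpus: int, peak_flops_per_gpu: float) -> float:
    achieved = 6 * params * tokens_per_sec    # model FLOP/s
    peak = num_gpus * peak_flops_per_gpu      # hardware FLOP/s
    return achieved / peak

# Illustrative only: a hypothetical 1.5B-parameter model on 2x A100
# (312 TFLOP/s peak BF16 each) would need roughly 43.7k tokens/s
# to land at the 63% MFU quoted above.
A100_BF16_PEAK = 312e12
print(round(mfu(1.5e9, 43_700, 2, A100_BF16_PEAK), 2))  # 0.63
```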
Openτensor Foundaτion reposted
Vidaio @vidaio_ ·
"Please fasten your seatbelts and ensure your seatback and tray tables are in their full upright position." From groundbreaking tools... To a complete intelligent ecosystem... "LADIES AND GENTLEMEN, PREPARE FOR TAKEOFF" Introducing: VidaioOS The next dimension in enterprise video management. $TAO @vidaio_
21 replies · 33 reposts · 139 likes · 13.1K views
Openτensor Foundaτion reposted
Apex・SN1 @Apex_SN1 ·
We’ve built a simulation of the @IOTA_SN9 communication network. This is a high-fidelity digital twin - an abstracted version of our distributed training architecture, designed as a testing ground to run experiments and develop novel algorithms that increase the speed and quality of model training. We’re using the simulator as an environment for open competitions on Apex, outsourcing algorithmic innovation to Bittensor miners. It’s the first time the public can interact with our simulator. Our opening simulator competition is live now.
2 replies · 8 reposts · 56 likes · 7.2K views
Openτensor Foundaτion reposted
METANOVA @metanova_labs ·
ArboNOVA: Patent–Molecule matching loop
We’ve been experimenting with an agent that maps molecules → prior art using only open data + tools
Benchmark: ~1500 molecules across ADHD-related patents (since 2012)
In ~12 hours: 18 iterations of the loop → Best hit rate: 85.4%
How this is usually done: pharma intelligence teams + expensive proprietary databases + manual workflows + even conference attendance
Early, but promising. Moving one step closer toward automating drug discovery and identifying which molecules are most strategic to advance in the wet lab.
Based on @const_reborn (github.com/unconst/Arbos) and @karpathy autoresearch framework
#Bittensor #SN68 #ralphloop #agents #DrugDiscovery #Desci #DeAI
9 replies · 28 reposts · 111 likes · 20.8K views
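A hedged sketch of the hit-rate metric behind the 85.4% figure above: the fraction of benchmark molecules whose known prior-art patent appears among the agent's matches. The function and all names here (`predictions`, `truth`) are illustrative, not taken from the ArboNOVA code:

```python
# Hit rate = molecules whose true prior-art patent appears in the
# agent's predicted matches, divided by total molecules evaluated.
# Data shapes are assumed for illustration.

def hit_rate(predictions: dict[str, set[str]],
             truth: dict[str, str]) -> float:
    hits = sum(1 for mol, patent in truth.items()
               if patent in predictions.get(mol, set()))
    return hits / len(truth)

# Toy benchmark: 3 of 4 molecules matched to their true patent.
truth = {"m1": "P1", "m2": "P2", "m3": "P3", "m4": "P4"}
preds = {"m1": {"P1", "P9"}, "m2": {"P2"}, "m3": {"P7"}, "m4": {"P4"}}
print(hit_rate(preds, truth))  # 0.75
```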
Openτensor Foundaτion reposted
grail @grail_ai ·
PULSE made weight sync 100x faster. That turned the trainer itself into the bottleneck. @erfan_mhi just fixed that too. Grail's GRPO trainer is now 1.8x faster on a single B200: 27% to 47% MFU, epoch time nearly halved. Decentralized post-training is converging on centralized speed.
Quoted tweet:

Used autoresearch to make @grail_ai GRPO trainer 1.8x faster on a single B200. I kept postponing this for weeks since the bottleneck in our decentralized framework was mainly communication. But after our proposed technique, PULSE, made weight sync 100x faster, the training

0 replies · 10 reposts · 42 likes · 7.9K views
Openτensor Foundaτion reposted
Apex・SN1 @Apex_SN1 ·
Apex and @IOTA_SN9 are working together again. The IOTA simulator competition launches later today. Join us as we accelerate distributed training.
8 replies · 21 reposts · 113 likes · 10.5K views
Openτensor Foundaτion reposted
const @const_reborn ·
What if you could create an auto-research setup where your agent focused only on the eval, and it was designed so that others could have swarms of agents across the web try to solve it, and you paid them based on ownership of the mechanism that produced the research?
11 replies · 27 reposts · 234 likes · 15.7K views
Openτensor Foundaτion reposted
Tyler DurdΞth @tylerdurdeth ·
The thing with Bittensor is that subnets keep pushing the boundaries regardless of market volatility. Targon got accepted to the Nvidia accelerator. Nova made new breakthroughs in drug discovery. $TAO never sleeps
5 replies · 24 reposts · 136 likes · 5.3K views
Openτensor Foundaτion reposted
Distributed State @DistStateAndMe ·
When you fix one bottleneck, the next one becomes visible. At @covenant_ai we built PULSE (arxiv.org/abs/2602.03839) to make weight sync 100× faster. That worked. Then the trainer itself became the new ceiling. So @erfan_mhi ran autoresearch on our GRPO trainer. 27% → 47% MFU. 16.7 min → 9.2 min per epoch. 1.8× faster on a single B200. Decentralized post-training, closing the gap with centralized. github.com/tplr-ai/grail
Quoted tweet:

Used autoresearch to make @grail_ai GRPO trainer 1.8x faster on a single B200. I kept postponing this for weeks since the bottleneck in our decentralized framework was mainly communication. But after our proposed technique, PULSE, made weight sync 100x faster, the training

4 replies · 16 reposts · 103 likes · 6.8K views
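The speedup figures quoted in the post above can be sanity-checked with quick arithmetic. The epoch-time ratio and the MFU ratio differ slightly (1.82x vs 1.74x), which is consistent with rounding in the reported numbers:

```python
# Cross-check the quoted GRPO trainer speedup on a single B200.
old_epoch_min, new_epoch_min = 16.7, 9.2   # minutes per epoch
old_mfu, new_mfu = 0.27, 0.47              # fraction of peak FLOPs used

print(round(old_epoch_min / new_epoch_min, 2))  # 1.82 -> the "1.8x" claim
print(round(new_mfu / old_mfu, 2))              # 1.74 -> utilization ratio
```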
Openτensor Foundaτion reposted
Targon @TargonCompute ·
Today we are excited to share some news: Targon has been accepted into the @nvidia Inception program for startups! We look forward to leveraging this collaboration to grow and improve the Confidential NVIDIA GPU experience on Targon.com
27 replies · 102 reposts · 486 likes · 37.1K views