sun runner

4.4K posts

sun runner

@0xSunRun

starboy living the τao of poverty || thought journal

Joined March 2024
323 Following · 2.6K Followers
Pinned Tweet
sun runner@0xSunRun·
A good number of people in my mentions/DMs are looking for help taopilling the masses. In the interest of saving time, here is all you need to get started:

1. If you only have time to do one thing, sit in on Bittensor co-founder @const_reborn's 1-hour lecture on Bittensor: youtube.com/watch?v=XRhTE9…
2. If your interest is sufficiently piqued, watch the best documentary of 2025, created by @evert_scott and featuring all the usual suspects: youtube.com/watch?v=71rvAS…
3. If you now understand Bittensor at a high level and want to peel the onion a layer deeper without getting completely lost in the sea of complexity, check out @markjeffrey's coverage of @stillcorecap's "State of TAO" report: youtube.com/watch?v=wJ2Lah…
4. Follow/subscribe to this Bittensor list I maintain; it will be a portal into the ecosystem for you: x.com/i/lists/203230…

If these four things weren't enough to taopill you, feel free to slide into my DMs, and I'll get to you eventually (maybe). Cheers.
5 replies · 17 reposts · 101 likes · 11.1K views
sun runner@0xSunRun·
If OpCo is extractive with its revenue, it'll lose both miners and TAOflow and end up with a husk of a subnet before long. The subnet can always be forked and re-established, and all the miners will flow there. OpCos are the GTM (go-to-market) layer on top of the open-source intelligence being provided by miners. The subnet is existential.
0 replies · 0 reposts · 0 likes · 10 views
Daniel LJ Attia@hereforElon·
Base code can be open source, but doesn't OpCo retain the edge, e.g. customer contracts, SaaS margins, data, brand, and equity? Steelman: how does alpha capture Cursor-level terminal value? Reconcile: how do the incentives of Series A investors differ from those of subnet alpha holders?
2 replies · 0 reposts · 0 likes · 21 views
sun runner retweeted
Openτensor Foundaτion
The largest decentralised LLM pre-training run in history. SN3 @tplr_ai trained Covenant-72B across 70+ contributors on open internet infrastructure. Now it's being discussed by @chamath with @nvidia CEO Jensen Huang. Distributed, open-weight model training on Bittensor is just getting started.
49 replies · 292 reposts · 1.3K likes · 58.6K views
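The core idea behind a run like this can be sketched in a few lines: each contributor trains on its own private data shard, and only gradients are exchanged and averaged, so no single machine ever holds all the data. This is a toy illustration of decentralized data-parallel training only, not Templar's actual protocol; every name in it (`local_gradient`, `aggregate`, the shard values) is invented for the sketch.

```python
# Toy sketch: data-parallel training across independent contributors.
# Each peer computes a gradient on its own shard; the coordinator (or a
# gossip round) averages the gradients before the shared update.

def local_gradient(w, shard):
    # Gradient of mean squared error for a 1-parameter model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def aggregate(gradients):
    # Average the per-contributor gradients into one shared update.
    return sum(gradients) / len(gradients)

# Three "contributors", each holding a private shard of y = 3x data.
shards = [[(1, 3), (2, 6)], [(3, 9), (4, 12)], [(5, 15), (6, 18)]]
w, lr = 0.0, 0.01
for _ in range(200):
    grads = [local_gradient(w, shard) for shard in shards]
    w -= lr * aggregate(grads)

print(round(w, 2))  # converges toward 3.0
```

Real runs like Covenant-72B add the hard parts this sketch omits: gradient compression over commodity internet links, fault tolerance for contributors that drop out, and incentive/verification mechanisms so the work can be permissionless.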
dimn@dimndimn·
@0xSunRun The only coin that's been in the top 10 longer than XRP is BTC.
1 reply · 0 reposts · 0 likes · 78 views
sun runner@0xSunRun·
@0xBold Gonna need to turn log on for that chart
0 replies · 0 reposts · 0 likes · 40 views
defimorphosis@DeFimorphosis·
@0xSunRun It won't be below ETH. AI > DeFi. It will be a whole new world. No one's ready.
1 reply · 0 reposts · 1 like · 145 views
sun runner@0xSunRun·
Jensen Huang: "I believe we fundamentally need models as a first-class [proprietary] product as well as models as open-source. These two things are not A or B; it's A and B. And the reason for that is: models are a technology, not a product..." (emphasis mine) Jensen essentially just said the entire Bittensor thesis out loud: INTELLIGENCE = COMMODITY.
templar@tplr_ai

On the @theallinpod this week, @chamath asked @nvidia CEO Jensen Huang about decentralized AI training, calling our Covenant-72B run "a pretty crazy technical accomplishment." One correction: it's 72 billion parameters, not four. Trained permissionlessly across 70+ contributors on commodity internet. The largest model ever pre-trained on fully decentralized infrastructure. Jensen's answer is worth hearing too.

4 replies · 4 reposts · 81 likes · 3.7K views
sun runner@0xSunRun·
To all the "crypto is useless" people: please see below.
templar@tplr_ai

On the @theallinpod this week, @chamath asked @nvidia CEO Jensen Huang about decentralized AI training, calling our Covenant-72B run "a pretty crazy technical accomplishment." One correction: it's 72 billion parameters, not four. Trained permissionlessly across 70+ contributors on commodity internet. The largest model ever pre-trained on fully decentralized infrastructure. Jensen's answer is worth hearing too.

2 replies · 5 reposts · 120 likes · 4.8K views
sun runner retweeted
Mark Jeffrey@markjeffrey·
Bittensor peeps: check out 31:44 - Templar sn3 discussed. @chamath -- they've achieved a *72* billion parameter model with decentralized training, not a 4 billion parameter model :)
17 replies · 80 reposts · 323 likes · 48.5K views
sun runner retweeted
Ridges AI | SN62@ridges_ai·
🏔️ Ridges Update

Subnet screeners now scale dynamically. As submission volume increases, more screeners deploy to handle the load, filtering outputs before full evaluation.

This means:
• Faster evaluation times under load
• Better subnet efficiency
• Smoother throughput as more agents compete

Continuous improvements to the evaluation pipeline on Subnet 62.
6 replies · 26 reposts · 136 likes · 3.4K views
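The scaling rule the Ridges post describes — more screeners as submission volume grows — boils down to a capacity calculation. A minimal sketch, assuming hypothetical parameters (per-screener capacity and pool bounds are invented here, not taken from the Ridges post):

```python
import math

def screeners_needed(pending_submissions: int,
                     per_screener_capacity: int = 50,
                     min_screeners: int = 1,
                     max_screeners: int = 16) -> int:
    """Scale the screener pool with submission volume.

    Hypothetical policy: each screener pre-filters up to
    `per_screener_capacity` queued submissions before full evaluation,
    and the pool is clamped to [min_screeners, max_screeners].
    """
    if pending_submissions <= 0:
        return min_screeners
    needed = math.ceil(pending_submissions / per_screener_capacity)
    return max(min_screeners, min(max_screeners, needed))

# Light load: one screener suffices.
print(screeners_needed(30))   # -> 1
# Heavy load: more screeners deploy to pre-filter the queue.
print(screeners_needed(400))  # -> 8
```

Clamping to a maximum keeps a submission flood from exhausting the subnet's compute, while the minimum keeps at least one screener warm for low-traffic periods.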
Sami Kassab@Old_Samster·
Crypto-AI feels like the only corner of the broader industry where you wake up every morning actually excited. I've seen every AI breakthrough (agentic loops, auto-research, personal AI assistants, vibe coding) get immediately absorbed into Bittensor. Every day is something new.
const@const_reborn

What if you could create an auto-research where your agent just focused on the eval, and it was designed so that others could have swarms of agents across the web try to solve it, and you paid them based on the ownership of the mechanism which produced the research?

5 replies · 5 reposts · 66 likes · 4.2K views
sun runner retweeted
const@const_reborn·
What if you could create an auto-research where your agent just focused on the eval, and it was designed so that others could have swarms of agents across the web try to solve it, and you paid them based on the ownership of the mechanism which produced the research?
11 replies · 27 reposts · 233 likes · 15.6K views