⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor

5.3K posts


@Steve_too

Not your A.I. 🤖 not your intelligence 🧠

Joined July 2009
1.8K Following · 315 Followers
⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor reposted
blockmachine @blockmachine_io
blockmachine mainnet is live on Bittensor. Sign up and start making requests today. A decentralized RPC marketplace where competing miners drive pricing down and every storage query is cryptographically verified via Merkle proof. Subnet 19. Open for business.
Try it: blockmachine.io
Mine it: github.com/taostat/blockm…
Verify it: github.com/taostat/blockm…
Built on @bittensor
blockmachine tweet media
8 replies · 34 reposts · 133 likes · 39.5K views
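The "cryptographically verified via Merkle proof" claim works, in general, like this: the responder returns the queried chunk plus a path of sibling hashes, and the client recomputes the root and compares. A minimal sketch, assuming SHA-256 and simple left-to-right pairing with odd-node duplication; the subnet's actual commitment scheme is not specified in the tweet:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a Merkle root from leaf data, pairing left-to-right."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to verify leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1               # sibling is the paired node
        proof.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof, root: bytes) -> bool:
    """Recompute the root from a leaf and its proof path."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

chunks = [b"block-0", b"block-1", b"block-2", b"block-3", b"block-4"]
root = merkle_root(chunks)
proof = merkle_proof(chunks, 2)
assert verify(b"block-2", proof, root)       # honest response passes
assert not verify(b"tampered", proof, root)  # altered data fails
```

The point of the construction is that the client only needs the (tiny) root to audit any single chunk; the proof size grows logarithmically with the number of stored chunks.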
tic toc @TicTocTick
LOL
9 replies · 12 reposts · 162 likes · 28.3K views
⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor reposted
Andrew Kang @Rewkang
A big insight from this podcast is that intelligence will be commoditized and open source. It is an existential risk to Nvidia for intelligence to be closed source, as that would make them captive to a few model companies who could choose to optimize on a different hardware stack. Nvidia needs the most-used models to be optimized for their stack.

It explains why Nvidia is putting so much effort into research, and there is a clear trajectory for their open-source models to saturate the biggest capability benchmarks that matter: SWE, general knowledge work, autonomous driving, physical AI. There are still a lot of domains where the frontier of capabilities can continuously expand and where closed-source models can be dominant. However, the vast majority of tokens produced in the future will be produced by open-source models.
Dwarkesh Patel @dwarkesh_sp

The Jensen Huang episode.
0:00:00 – Is Nvidia’s biggest moat its grip on scarce supply chains?
0:16:25 – Will TPUs break Nvidia’s hold on AI compute?
0:41:06 – Why doesn’t Nvidia become a hyperscaler?
0:57:36 – Should we be selling AI chips to China?
1:35:06 – Why doesn’t Nvidia make multiple different chip architectures?
Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!

24 replies · 27 reposts · 323 likes · 119.2K views
⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor reposted
Hippius @hippius_subnet
AWS S3 charges for egress. Google Cloud charges for egress. Azure Blob charges for egress. You pay every time you access your own data.
Hippius doesn't charge for bandwidth. Store once. Access as often as you want.
hippius.com/pricing
#S3 #DevOps #Web3
Hippius tweet media
23 replies · 33 reposts · 174 likes · 10.9K views
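The pricing gap the tweet points at is easy to put numbers on. A toy calculation, using an illustrative $0.09/GB internet-egress rate as an assumption; real cloud pricing is tiered and varies by region and destination:

```python
def monthly_egress_cost(dataset_gb: float, reads_per_month: int,
                        rate_per_gb: float) -> float:
    """Bandwidth bill for repeatedly reading back the same stored data."""
    return dataset_gb * reads_per_month * rate_per_gb

# Illustrative only: 100 GB dataset, read in full once a day, at $0.09/GB.
cost = monthly_egress_cost(dataset_gb=100, reads_per_month=30, rate_per_gb=0.09)
print(f"${cost:,.2f}/month just to read your own data back")
# Under a no-egress-fee model the same access pattern adds $0 in bandwidth.
```

The takeaway is that egress scales with access frequency, not with storage size, so read-heavy workloads are where per-GB bandwidth pricing dominates the bill.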
⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor reposted
nordin.eth @nordin_eth
The founder of $TAO just called Bitcoin a single subnet on Bittensor. "BTC is the equivalent of one SN on bittensor. We're a much more complex ecology - even BTC wasn't decentralized at 1st - it was 1 guy for 3 years. bittensor is 5 years in & still building" Great point. $TAO @ParisBlockweek @okx
51 replies · 65 reposts · 529 likes · 85.5K views
⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor reposted
@jason @Jason
Bought another ~$50k in $tao
Current exposure is ~$750k
My price target is $500 in 2026
My thesis is subnets are shipping product
This is not investing advice; this is me asking you to savage my trade so I learn
#jaytrading
280 replies · 181 reposts · 2.1K likes · 472.1K views
The TAO Daily @taodaily_io
What if you could deploy a $1,300/day automated $TAO strategy in under 10 minutes? @WhatSayLew used Claude Code to create a system that:
→ Monitors prices in real-time
→ Executes trades automatically
→ Sends Telegram alerts on every move
✍️ The full walkthrough: taodaily.io/1300-day-autom…
2 replies · 5 reposts · 32 likes · 1.6K views
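The monitor/execute/alert loop described above can be sketched at a high level. Everything here is a hypothetical placeholder — the price endpoint, bot token, chat ID, and 2% threshold are not from the walkthrough; the only concrete dependency is Telegram's real Bot API `sendMessage` method:

```python
import json
import time
import urllib.request

# Hypothetical placeholders -- substitute a real price feed and bot credentials.
PRICE_URL = "https://example.com/api/tao/price"
TELEGRAM_URL = "https://api.telegram.org/bot<TOKEN>/sendMessage"
CHAT_ID = "<CHAT_ID>"

def pct_move(last: float, price: float) -> float:
    """Signed percentage move since the last alerted price."""
    return (price - last) / last * 100.0

def fetch_price() -> float:
    with urllib.request.urlopen(PRICE_URL) as resp:
        return float(json.load(resp)["price"])

def send_alert(text: str) -> None:
    body = json.dumps({"chat_id": CHAT_ID, "text": text}).encode()
    req = urllib.request.Request(
        TELEGRAM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

def run(threshold_pct: float = 2.0, interval_s: int = 60) -> None:
    """Poll the price feed and alert on moves beyond the threshold."""
    last = fetch_price()
    while True:
        time.sleep(interval_s)
        price = fetch_price()
        move = pct_move(last, price)
        if abs(move) >= threshold_pct:
            send_alert(f"TAO moved {move:+.2f}% to {price:.2f}")
            last = price  # re-anchor so alerts track cumulative drift
```

Order execution is deliberately omitted here; a real system would add exchange API calls, error handling, and rate limiting around this skeleton.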
⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor reposted
TAO Institute @TAOInstitute_
Bittensor is preparing for institutional capital.
dTAO turned 128 subnets into 128 discrete markets, each one an AI startup you can underwrite directly.
Treasury vehicles are accumulating TAO. Trillion-parameter training runs are in the pipeline. Compliance rails are being laid.
What’s missing is the terminal.
TAO Institute, live today.
43 replies · 105 reposts · 638 likes · 133.1K views
Caleb @CalebSol
Tag a project you fully trust. I’m buying.
700 replies · 27 reposts · 437 likes · 60.2K views
⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor reposted
Targon @TargonCompute
The teams at @AskVenice & @dphnAI just developed the most uncensored version of Mistral 24B using confidential compute on Targon. We look forward to continuing to collaborate with teams building the future of decentralized AI. ☁️
Venice @AskVenice

Venice Uncensored 1.2 is now live. Developed with @dphnAI, this model delivers the most uncensored version of Mistral 24B. Upgraded with vision support, a 4x larger context window, and stronger tool-use capabilities. Trained on Bittensor Subnet 4 @TargonCompute.

10 replies · 77 reposts · 444 likes · 38.5K views
⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor reposted
Erik Voorhees @ErikVoorhees
We've started working with some Bittensor subnets. New Venice Uncensored 1.2 model was tuned on Targon/subnet4
Venice @AskVenice

Venice Uncensored 1.2 is now live. Developed with @dphnAI, this model delivers the most uncensored version of Mistral 24B. Upgraded with vision support, a 4x larger context window, and stronger tool-use capabilities. Trained on Bittensor Subnet 4 @TargonCompute.

114 replies · 255 reposts · 1.6K likes · 266.9K views
⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor reposted
Bitcast | SN93 @Bitcast_network
Bitcast has teamed up with SN98 @forevermoney_ai. Their miners compete to find the best LP ranges for xSN93, helping LPs earn more while keeping impermanent loss in check. Liquidity ends up where it actually matters, so traders get better fills and less slippage. They’re also keeping pricing aligned between Finney and Base pools.
Bitcast | SN93 tweet media
1 reply · 15 reposts · 54 likes · 3.1K views
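For context on the impermanent-loss trade-off those miners are optimizing: for a full-range constant-product (xy = k) position, IL versus simply holding depends only on the price ratio. This is the standard AMM formula, not Bitcast's or Forever Money's actual model, and concentrated ranges amplify the effect, which is exactly why range selection matters:

```python
import math

def impermanent_loss(price_ratio: float) -> float:
    """IL of a full-range constant-product LP position relative to holding,
    for price ratio r = p_new / p_old. Always <= 0; 0 when r == 1."""
    r = price_ratio
    return 2 * math.sqrt(r) / (1 + r) - 1

# A 2x price move costs a full-range LP about 5.7% versus holding.
print(f"{impermanent_loss(2.0):.2%}")  # → -5.72%
```

A concentrated position behaves like a leveraged version of this curve inside its range (higher fees, higher IL), so the miners' job amounts to picking ranges where expected fee income outweighs the amplified loss.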
Diego @diegoxyz
Train Your Trading Agent With Real Market Data
@krakenfx just released "Futures Paper Trading" on its Kraken CLI. You can now test derivatives strategies with real market data, without risking real capital. Just connect it to GitHub and let your agent run.
If you are building AI trading agents, this is pure gold.
17 replies · 30 reposts · 390 likes · 33.4K views
⚡️Ⓢ τ.𝕖𝕧𝕖⚡️| #bitcoin | #biττensor reposted
Macrocosmos @MacrocosmosAI
Training frontier models over the internet requires new techniques. Today, we present ResBM, a residual encoder-decoder bottleneck architecture that enables 128x activation compression for low-bandwidth distributed pipeline parallel training. Developed for @IOTA_SN9, we show SOTA compression without significant loss in convergence rates, increases in memory, or compute overhead. Expect the full paper release in the next 72 hours.
Macrocosmos tweet media
14 replies · 45 reposts · 213 likes · 48.1K views
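For intuition on what a 128x activation bottleneck means at the tensor level, here is a shape-only sketch with random linear projections. This is purely illustrative: the real ResBM is a residual encoder-decoder that is actually trained, and its architecture is in the forthcoming paper, but the bandwidth arithmetic between pipeline stages is the same:

```python
import numpy as np

hidden, factor = 4096, 128
bottleneck = hidden // factor                 # 32 dims cross the network

rng = np.random.default_rng(0)
# Illustrative random projections standing in for trained encoder/decoder.
W_enc = rng.standard_normal((hidden, bottleneck)) / np.sqrt(hidden)
W_dec = rng.standard_normal((bottleneck, hidden)) / np.sqrt(bottleneck)

def encode(acts: np.ndarray) -> np.ndarray:
    """Runs on the sending pipeline stage: compress before transmission."""
    return acts @ W_enc

def decode(code: np.ndarray) -> np.ndarray:
    """Runs on the receiving stage: reconstruct the full-width activations."""
    return code @ W_dec

acts = rng.standard_normal((8, 512, hidden))  # (batch, seq, hidden)
code = encode(acts)
assert code.shape == (8, 512, bottleneck)
print(acts.nbytes / code.nbytes)              # → 128.0, the bandwidth saving
```

The training problem ResBM addresses is making `decode(encode(x))` lose as little as possible, so that compressing inter-stage traffic 128x does not noticeably hurt convergence, memory, or compute.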