ConvictionX

13K posts

ConvictionX banner
ConvictionX
@daddAIXXX

DOGE TAO FET ONDO PLUME FIL ZEC

Believe in something · henchOG · Joined October 2023
7.3K Following · 673 Followers
Pinned Tweet
ConvictionX
ConvictionX@daddAIXXX·
AI is the whole world's hope, and it is also crypto's last hope! As AI moves from narrative to real deployment, it will be wilder than in any previous era! Don't dismiss the AI bubble: the dot-com bubble likewise produced great companies (Google, Amazon, Alibaba, Tencent), and entry timing matters enormously! The AI sector will soon see explosive growth and account for at least one third of total crypto market cap! Right now all AI projects combined are worth less than $30 billion. Do you really grasp the weight of that statement? Waiting for the OpenAI IPO! $TAO
Openτensor Foundaτion@opentensor

The largest decentralised LLM pre-training run in history. SN3 @tplr_ai trained Covenant-72B across 70+ contributors on open internet infrastructure. Now it’s being discussed by @chamath with @nvidia CEO Jensen Huang. Distributed, open-weight model training on Bittensor is getting started.

Chinese
1
1
1
167
ConvictionX
ConvictionX@daddAIXXX·
@Filecoin @filecoinbeam I think we need real technology that can quickly solve genuine pain points for people and enterprises!
English
0
0
0
7
ConvictionX
ConvictionX@daddAIXXX·
@Filecoin @filecoinbeam TurboQuant (2026): a high-performance technique presented by Google at ICLR 2026. By rotating KV-cache vectors and applying a 1-bit corrector, it cuts a large model's memory footprint by 6x and improves performance by 8x.
English
1
0
0
7
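The tweet above gives only a one-line summary of TurboQuant (rotate KV-cache vectors, then apply a 1-bit corrector); the actual algorithm is not described in this thread. As a generic illustration of what "1 bit per element" quantization means, here is a minimal sign-plus-scale sketch. The function names and the mean-absolute-value scale are my own illustration, not TurboQuant itself:

```python
import math
import random

def quantize_1bit(v):
    """Compress a vector to 1 bit per element (its sign) plus one float
    scale. The mean absolute value is the least-squares-optimal scalar
    scale for a sign code."""
    scale = sum(abs(x) for x in v) / len(v)
    signs = [1.0 if x >= 0 else -1.0 for x in v]
    return scale, signs

def dequantize_1bit(scale, signs):
    return [scale * s for s in signs]

random.seed(0)
v = [random.gauss(0.0, 1.0) for _ in range(4096)]  # stand-in for a KV-cache row
scale, signs = quantize_1bit(v)
approx = dequantize_1bit(scale, signs)

# Storage drops from 32 bits per element to ~1 bit per element plus one scale.
rel_err = math.sqrt(sum((a - b) ** 2 for a, b in zip(v, approx))) / \
          math.sqrt(sum(x * x for x in v))
```

For Gaussian data the relative error of a plain sign code is about sqrt(1 - 2/pi) ≈ 0.60, which is exactly why practical schemes layer rotations, group-wise scales, or residual correctors on top of the raw 1-bit code.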
Filecoin
Filecoin@Filecoin·
Storing data is the easy part. Retrieval at scale, fast, reliable, and globally distributed, is where most decentralized storage falls short. @filecoinbeam puts a global retrieval layer on top of Filecoin. Any dataset, one URL, production-grade access.
Filecoin tweet media
English
3
5
39
1.4K
ConvictionX reposted
Openτensor Foundaτion
Bittensor SN3 :: @tplr_ai // @covenant_ai 72B

"If intelligence is the most powerful thing, then decentralized training is humanity's last dance. It's the same thing we fought for, for ages."

"It's what the internet tried to do. It's what Bitcoin tried to do - how do we reclaim agency? It was always: can we renegotiate the social contract with the Leviathan."

"We can turn the Internet into a data center... But what for? The fight is to create optionality..."

-@DistStateAndMe

Novelty Search is a weekly community call hosted by Bittensor co-founder @const_reborn
English
18
57
283
8.9K
ConvictionX reposted
The TAO Daily
The TAO Daily@taodaily_io·
The $TAO premine FUD clarified by @const_reborn:
• Zero premine, zero insider allocations
• ~600K TAO sold OTC (2021–2023) at $18 avg, all personally mined
• Early buyers: Firstmark, DCG, Polychain
• Binance holdings = user deposits, not exchange-owned
$TAO is earned by participation. ✍️: taodaily.io/const-sets-the…
English
6
41
211
10.1K
ConvictionX
ConvictionX@daddAIXXX·
@SBF_FTX When will AI projects with a $100 billion market cap appear in crypto?
English
0
0
0
12
SBF
SBF@SBF_FTX·
Two wins for homebuyers in two days:
1) Fannie Mae to accept crypto-backed mortgages
2) Trump admin to streamline permitting
English
188
27
353
67.6K
ConvictionX reposted
Not Jerome Powell
Not Jerome Powell@alifarhat79·
Who did this lmao
English
677
6.5K
27.1K
2.4M
ConvictionX reposted
DRUSKI
DRUSKI@druski·
How Conservative Women in America act 😂🇺🇸
English
25.5K
147.8K
1.2M
166.7M
ConvictionX reposted
Tanaka
Tanaka@Tanaka_L2·
AI Crypto Tier List 2026 (practical view) 🧵
If I had to tier AI projects right now, this would be my selection based on real usage and real capital flows:

S Tier:
– $TAO | @opentensor → Bitcoin of AI, #1 decentralized ML network. Subnets competing on inference & agent tasks. PoI consensus, 256+ subnets.
– $NEAR | @NEARProtocol → Full pivot to Blockchain for AI Agents. Shipped Agent Market, IronClaw runtime, confidential GPU. Highest adoption among agent-native chains.
– $VIRTUAL | @virtuals_io → Shopify of AI Agents. 18K+ agents deployed, Agent Commerce Protocol live, multi-chain (Base + Solana). Agent GDP > $450M.

A Tier:
– $FET | @ASI_Alliance → Merger of Fetch.ai + SingularityNET + Ocean + CUDOS. Agentverse + ASI-1 Mini LLM + ASI Compute.
– $RENDER | @rendernetwork → Decentralized GPU compute (5,600+ nodes). Core infrastructure for AI inference & rendering.
– $ICP | @dfinity → Onchain compute for AI apps & canisters. Scalable hosting for agents directly on-chain.

B Tier:
– $OLAS | @autonolas → Agent-native framework, verifiable AI (Proof of Inference). Widely used in prediction markets & DeFi agents.
– $AKT | @akashnet → DePIN compute marketplace. Low-cost, scalable for AI workloads.
– $TRAC | @origin_trail → Decentralized Knowledge Graph solving hallucination for agents (1.32B+ knowledge assets).

C Tier:
– @Gaianet_AI, @ritualnet: Verifiable on-chain inference, still scaling nodes.
– @iEx_ec: Monetizing computing power for AI.
– @0G_labs: Verifiable DA & provable compute (addressing black-box AI).
– @elizaOS ecosystem (on Solana): Foundational OS for agents, strong adoption but still early.

D Tier:
– @grass, @AlloraNetwork: DePIN data/compute but still thin adoption.
– Agent memes / Solana-native agents (@pippinlovesyou, @openclaw, @bankrbot): High utility, small caps, extreme volatility.
– @MorpheusAIs, @AIOZNetwork: Community-driven but unclear product-market fit.

Key insight:
– Capital → infra
– Users → agents
→ The winner will be the one that captures both.
I’m prioritizing S + A. So I’m betting on $TAO leading the AI narrative across the entire market.
Tanaka tweet media
English
76
31
264
22K
ConvictionX reposted
Crypto Caesar
Crypto Caesar@CryptoCaesarTA·
Realize: $TAO is officially entering the big leagues and is literally being discussed by @chamath and Jensen Huang. Mainstream AI attention is slowly forming around it. This might be the play of the next run.
English
6
15
192
25.1K
ConvictionX reposted
Crypto Patel
Crypto Patel@CryptoPatel·
The Guy Who Made Millions From Uber Says This $300 Coin Could Hit $32,000

Jason Calacanis Predicts 200x for $TAO: Here's What You Need to Know

Early Uber investor Jason Calacanis just made a bold call on #TAO during his "This Week In Startups" podcast. He believes TAO could deliver a 200x return over the next 5-10 years, targeting a $500 billion market cap. He called it the "better Bitcoin" and compared its potential to Ethereum and Solana.

But here's what most people are missing:
➤ Calacanis is NOT a neutral observer. He has invested ~$500K in TAO personally and is a consulting partner at Stillcore Capital, a fund built specifically around Bittensor.
➤ This is a classic "talking his own book" situation.

What makes TAO interesting regardless:
➤ Fixed supply of 21M tokens (same as Bitcoin)
➤ First halving completed in Dec 2024
➤ Nvidia CEO Jensen Huang recently endorsed the decentralized AI model
➤ Currently trading around ~$300 with ~$3B market cap
➤ Ranked #32 on CoinGecko

What the CryptoPatel community already did:
➤ We shared the #TAOUSDT spot entry setup with a chart around $150-$160
➤ Already delivered 160%+ profit from our entry
➤ Our community was positioned well before mainstream attention

A 200x from here means a $500B market cap. For context, that's roughly where Ethereum sits today. Possible? Yes. Guaranteed? Absolutely not. Always check who benefits from a prediction before you act on it.

TA Only. Not Financial Advice. ALWAYS DYOR.
Crypto Patel tweet media
English
43
68
426
23K
Grey BTC
Grey BTC@greybtc·
What's your $TAO price prediction for this year?
Grey BTC tweet media
English
87
5
198
42.7K
ConvictionX reposted
Cointelegraph
Cointelegraph@Cointelegraph·
⚡️ JUST IN: Elon Musk is reportedly considering allocating up to 30% of SpaceX IPO shares to retail investors, far above the typical 5–10%, to boost demand and stabilize trading.
Cointelegraph tweet media
English
59
66
436
45K
ConvictionX reposted
Hasheur
Hasheur@PowerHasheur·
Something interesting is happening with $TAO right now: +113% over 30 days. As often, part of this rise is just the crypto market: when AI comes back into the conversation, TAO rallies mechanically. And when it gains value, that creates FOMO.

Right now in crypto, #bitcoin and the rest tend to correlate with macro. It's self-reinforcing, and people like seeing one or two assets that "stand out" from that "Trump moves the market" mechanic, even momentarily.

But this time, there's something behind it that I like a bit more: the Templar subnet (SN3) just pulled off something historic: training a 72-billion-parameter LLM in a fully decentralized way. 70 independent nodes, no data center, only public data. The model is called Covenant-72B and it scores 67.1 on MMLU, comparable to the best models of 2024.

OK, it's not GPT-4o. It's not Claude either. But that's not the point here. The point is that, for the first time, a model of this size was trained transparently and verifiably. We know exactly how it was built. No black box. No hidden political, ideological, or commercial influence. The potential biases of large models are a real issue, and neutrality in training is a challenge for the future.

Jensen Huang (Nvidia CEO) said it himself at GTC 2026: open and decentralized models will coexist with proprietary ones. Chamath (major venture capital investor) highlighted it on "All-In" as a concrete example of decentralized AI finally moving beyond theory. 🔗 reddit.com/r/bittensor_/c…

The subnet (SN3) is up 194% in 7 days... and mechanically, when a subnet gains value, it pushes $TAO up too. All this, a few months after its first halving (10.5 million supply, half of the 21M total, with a fixed supply).
Mechanically, that reduces emission and compresses the available supply. Does that justify +113%? I don't know. Is the rise sustainable? That's another question too 🤣. But for once, there's substance.
Hasheur tweet media
French
58
69
620
64.4K
ConvictionX reposted
Easy
Easy@NotSoEasyMoney·
For context, this would put TAO at $72,000, higher than Bitcoin's current price.
English
62
27
234
48K
ConvictionX reposted
Mark Jeffrey
Mark Jeffrey@markjeffrey·
I did say that :) I used LeadPoet subnet 71's product and it produced great results -- far better than I was guessing it would, and with a richly detailed 'dossier' on each lead, and why it fit my criteria.
Alchemist - τ@SubnetSummerT

10/ 📣Product launch went viral The recent launch didn't just get attention - it validated demand. Real users tested the product and immediately saw quality. As @markjeffrey put it: "Holy crap they're GOOD LEADS. This is GREAT" Not speculation. Not promises. Real users reacting to real value. And when that happens publicly - demand compounds fast. Product Launch Link: x.com/LeadpoetAI/sta…

English
8
11
68
8K
ConvictionX reposted
grail
grail@grail_ai·
The theoretical foundation for this was published in February by our very own @erfan_mhi. @FireworksAI_HQ just confirmed it works at 1T scale across multiple data centres for @cursor_ai's Composer 2. The PULSE paper: arxiv.org/abs/2602.03839
Erfan Miahi@erfan_mhi

Pretty wild to see our work on PULSE show up in a real 1T-scale post-training run done by @cursor_ai. Cursor built Composer 2 in collaboration with Fireworks and trained it across multiple datacenters, getting huge savings by syncing only the weights that actually changed between RL checkpoints.

Fireworks reports that more than 98% of BF16 weights can stay bit-identical from one checkpoint to the next, and they cited our paper on this, too. That is basically the exact sparsity pattern we showed in our paper, where we introduced PULSE, a lossless method for 100x more efficient weight-sync communication for RL training. Their system is very close to this idea in practice: exploiting the fact that only a tiny fraction of weights actually change between RL steps.

The deeper reason for this is not that RL gradients are sparse. They are not. The gradients are still dense. What becomes sparse is the realized weight update. In RL, learning rates are tiny, and with Adam, the update size stays bounded around the learning rate. Then BF16 adds a hard threshold: if the update is too small relative to the weight, it just rounds away, and the stored weight does not change at all. So from one checkpoint to the next, most of the model literally stays identical.

That is why this is such a useful systems idea. Lower precision, like using BF16, does not just save compute. It can also save communication, because more tiny updates get absorbed and fewer weights need to be shipped. At that point, compute efficiency and comms efficiency stop being a tradeoff. They start reinforcing each other.

If you want the deeper story on why RL updates get this sparse, the theory behind it, and how to push weight-sync bandwidth down by 100x+, take a look at our paper: arxiv.org/pdf/2602.03839

The Fireworks blog on Composer 2 that cited our work: fireworks.ai/blog/frontier-…

The animation is taken from Fireworks!

English
8
11
89
11.2K
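The BF16-absorption effect described above can be demonstrated without any ML stack. The sketch below is my own toy, not the PULSE implementation: it emulates bfloat16 rounding by keeping the top 16 bits of the float32 bit pattern, applies Adam-scale updates (~1e-4) to random weights, and counts how many stored weights come out bit-identical.

```python
import random
import struct

def bf16(x: float) -> float:
    """Round a Python float to the nearest bfloat16 value by
    round-to-nearest-even on the upper 16 bits of its float32
    bit pattern (NaN/inf not handled)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    rounded = (bits + 0x7FFF + ((bits >> 16) & 1)) >> 16
    return struct.unpack('<f', struct.pack('<I', (rounded & 0xFFFF) << 16))[0]

random.seed(0)
weights = [bf16(random.gauss(0.0, 1.0)) for _ in range(10_000)]

# One RL-style step: the "gradients" are dense, but the effective
# step size is tiny, so most updates fall below half a bf16 ulp of
# the weight and round away, leaving the stored value bit-identical.
lr = 1e-4
updated = [bf16(w - lr * random.gauss(0.0, 1.0)) for w in weights]

unchanged = sum(w == u for w, u in zip(weights, updated))
frac_identical = unchanged / len(weights)
```

With bf16's 7-bit mantissa, a weight near 1.0 has an ulp of about 0.008, so an update of magnitude ~1e-4 simply disappears; only weights very close to zero (where the ulp is small) actually change, which is the sparsity pattern the quoted thread attributes to PULSE and the Fireworks run.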