ᴸᴵᴸ€ryp (τ,τ)
@lilcryptm

5.1K posts

crypto ~ class of 2020 - $BTC - $TAO

Metaverse · Joined September 2016
857 Following · 891 Followers

Pinned post
ᴸᴵᴸ€ryp (τ,τ) @lilcryptm
$TAO is the $SOL of this cycle. 2021 = L1s ---> 2025 = AI
[image]
18 replies · 32 reposts · 230 likes · 22.3K views
ᴸᴵᴸ€ryp (τ,τ) reposted
Chutes @chutes_ai
Does your coding agent run on Claude Sonnet 4.6, costing you $3.00 per million input tokens and $15.00 per million output? MiniMax M2.5 on Chutes costs just $0.19 input and $1.15 output, all while running inside a secure and private TEE.

M2.5 scores 80.2% on SWE-Bench Verified while Sonnet 4.6 scores 79.6%. You might be paying 15x more per input token and 13x more per output token for a model that scores lower on the benchmark most teams use to evaluate coding agents. M2.5 also scores 51.3% on Multi-SWE-Bench (multi-repo tasks) and 76.3% on BrowseComp (agentic search). MiniMax trained it across 200,000+ real-world coding environments in 10+ languages.

The TEE variant on Chutes means your prompts and outputs stay inside a hardware-secured enclave; Claude's API has no equivalent option. Just swap the model string and run your eval suite. Compare and see for yourself the power of open-source models on Chutes. 🔗 chutes.ai/app/chute/ce6a…
[image]
20 replies · 63 reposts · 285 likes · 14.1K views
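Since "swap the model string" is the only setup change the post describes, here is a minimal sketch of what that swap looks like against an OpenAI-compatible endpoint. The base URL, model identifier, and environment variable name are illustrative assumptions, not confirmed Chutes values; check the chutes.ai docs for the real ones.

```python
# Minimal sketch: point an OpenAI-compatible client at a different provider
# by swapping the base URL and model string, then re-run your eval suite.
# The base_url, model id, and env var below are assumptions for illustration.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.chutes.ai/v1",   # assumed endpoint
    api_key=os.environ["CHUTES_API_KEY"],  # assumed env var name
)

resp = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.5",        # assumed model identifier
    messages=[{"role": "user", "content": "Refactor this function to be pure."}],
)
print(resp.choices[0].message.content)
```

Because the pricing gap quoted above is per token ($3.00 vs. $0.19 input, $15.00 vs. $1.15 output), the only meaningful comparison is on your own workload: same prompts, same eval harness, different model string.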
Mark Jeffrey @markjeffrey
Spread these words to all the nerds.
[image]
7 replies · 31 reposts · 230 likes · 6.9K views
ᴸᴵᴸ€ryp (τ,τ) reposted
const @const_reborn
Everyone should know about what Chutes is doing. Fully permissionless inference mining. Fully end-to-end encrypted. Fully private TEE machines. You could safely send private keys over the wire and know it was fully private. The entire stack is encrypted from your machine to the LLM and back. The fact that this happens on top of a permissionless network, with infra run by god knows who, is nothing short of mind-boggling.

Quoting Jon Durbin @jon_durbin
Longer write-up about the end-to-end encryption we launched a few weeks ago 👀 This is one of those things that really should be ubiquitous across AI inference providers: TEE + full end-to-end (attestable) encryption. I also saw @NEARProtocol and @PhalaNetwork have launched a similar E2EE system now too (and @AskVenice via near/phala), which is awesome! Demand better privacy!

26 replies · 104 reposts · 577 likes · 29.6K views
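The "attestable" part is what makes the end-to-end claim meaningful: the client first checks a hardware attestation proving the enclave runs the expected code, and only then encrypts its prompt to a key held inside that enclave. The sketch below illustrates that client-side flow in generic terms; `verify_attestation` and the wire format are hypothetical placeholders, not the actual Chutes, NEAR, or Phala protocol.

```python
# Conceptual sketch of client-side "attest, then encrypt" for TEE inference.
# verify_attestation() and the message format are hypothetical placeholders;
# real providers define their own attestation documents and wire formats.
import os
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def verify_attestation(attestation_doc: bytes, expected_measurement: bytes) -> x25519.X25519PublicKey:
    """Placeholder: validate the hardware quote/signature chain, check that the
    enclave's code measurement matches what we expect, and return the enclave's
    ephemeral public key embedded in the attestation document."""
    raise NotImplementedError("provider-specific")

def encrypt_prompt(prompt: str, enclave_pub: x25519.X25519PublicKey):
    # Ephemeral ECDH against the attested enclave key, then AES-GCM on the prompt.
    client_priv = x25519.X25519PrivateKey.generate()
    shared = client_priv.exchange(enclave_pub)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"tee-e2ee-demo").derive(shared)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, prompt.encode(), None)
    client_pub = client_priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    # The enclave derives the same key from client_pub and decrypts inside the TEE.
    return client_pub, nonce, ciphertext
```

The design point is that the provider's host machines only ever see ciphertext plus an attested public key; decryption keys never leave the enclave.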
ᴸᴵᴸ€ryp (τ,τ) reposted
Clemente @Chilearmy123
The average $TAO holder be like
207 replies · 91 reposts · 1.3K likes · 211.9K views
ᴸᴵᴸ€ryp (τ,τ) @lilcryptm
@AlexFinn Since you're on top of things related to AI - have you checked out any of the Bittensor subnets for your overall setup?
0 replies · 0 reposts · 0 likes · 2 views
Alex Finn @AlexFinn
This is potentially the biggest news of the year. Google just released TurboQuant, an algorithm that makes LLMs smaller and faster without losing quality. Meaning that 16GB Mac mini can now run INCREDIBLE AI models. Completely locally, free, and secure.

This also means:
• Much larger context windows possible with way less slowdown and degradation
• You'll be able to run high-quality AI on your phone
• Speed and quality up. Prices down.

The people who made fun of you for buying a Mac mini now have major egg on their face. This pushes all of AI forward in such a MASSIVE way. It can't be stated enough: props to Google for releasing this for all. They could have gatekept it for themselves like I imagine a lot of other big AI labs would have. They didn't. They decided to advance humanity. 2026 is going to be the biggest year in human history.

Quoting Google Research @GoogleResearch
Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI

332 replies · 880 reposts · 9.7K likes · 1.5M views
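For context on where that memory goes: a transformer's KV cache scales with layers × KV heads × head dimension × sequence length × 2 (keys and values) × bytes per element, so shrinking bytes-per-element is exactly where cache-compression schemes like the one quoted above save memory. The sketch below is generic back-of-the-envelope arithmetic plus a naive int8 round-trip; it illustrates KV-cache quantization in general, with made-up model dimensions, and is not TurboQuant's actual algorithm.

```python
# Back-of-the-envelope KV-cache sizing plus a naive per-tensor int8 quantizer.
# Generic illustration of KV-cache quantization; NOT the TurboQuant algorithm.
import numpy as np

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem):
    # 2x for keys and values.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical mid-size model shape, 32K context.
fp16 = kv_cache_bytes(layers=32, kv_heads=8, head_dim=128, seq_len=32_768, bytes_per_elem=2)
low  = kv_cache_bytes(layers=32, kv_heads=8, head_dim=128, seq_len=32_768, bytes_per_elem=0.25)
print(f"fp16 cache: {fp16 / 2**30:.2f} GiB, ~2-bit cache: {low / 2**30:.2f} GiB "
      f"({fp16 / low:.0f}x smaller)")

def quantize_int8(x: np.ndarray):
    # Naive symmetric per-tensor quantization: store int8 values plus one scale.
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

kv = np.random.randn(8, 128).astype(np.float32)
q, s = quantize_int8(kv)
print("max abs round-trip error:", np.abs(dequantize(q, s) - kv).max())
```

The arithmetic is why a smaller per-element footprint translates directly into longer usable context windows on fixed-memory machines like a 16GB Mac mini.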
ᴸᴵᴸ€ryp (τ,τ) reposted
@jason @Jason
This Bittensor Subnet Could Cut Drug Discovery Costs in Half x.com/i/broadcasts/1…
44 replies · 86 reposts · 520 likes · 57.4K views
ᴸᴵᴸ€ryp (τ,τ) @lilcryptm
@AlgodTrading Tried setting up my miner with openclaw x Kimi; it's been insufficient. Claude Code seems to not even know what TAO actually is, let alone subnets and how to mine them efficiently. Any tips/tricks on which subnets to mine with agents, and what models you use / your setup?

Quoting Algod @AlgodTrading
If you're in Bittensor, use Claude Code or set up an openclaw instance and try to find holes or outcompete miners on subnets. The better the miner output, the faster Bittensor gets full-blown adoption. Everyone can mine now, just be creative.

0 replies · 0 reposts · 1 like · 90 views
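Before pointing a coding agent at a subnet, it helps to pull the subnet's current state so the agent has real context about incentives and competition. Below is a minimal sketch using the `bittensor` Python SDK; the netuid and network name are placeholders, and exact attribute names can differ between SDK versions.

```python
# Minimal sketch: inspect a Bittensor subnet's metagraph before mining on it.
# NETUID and network are placeholders; attribute names may vary by SDK version.
import bittensor as bt

NETUID = 3  # placeholder subnet id; use the subnet you actually target

subtensor = bt.subtensor(network="finney")      # main network
metagraph = subtensor.metagraph(netuid=NETUID)  # current subnet state

print(f"subnet {NETUID}: {metagraph.n} neurons")

# Rank incentive scores to see how concentrated miner rewards currently are.
top = sorted(zip(metagraph.uids.tolist(), metagraph.incentive.tolist()),
             key=lambda t: t[1], reverse=True)[:10]
for uid, incentive in top:
    print(f"uid {uid:4d}  incentive {incentive:.4f}")
```

A flat incentive distribution suggests room for a new miner; a few UIDs holding nearly all incentive suggests the subnet is already dominated and harder to break into.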
@jason @Jason
[ NOT FINANCIAL ADVICE ] [ NOT FINANCIAL ADVICE ]
Family member: "What's this $tao you keep talking about on the pod?"
Me: "Sell half your $btc, buy $tao"
[ NOT FINANCIAL ADVICE ] [ NOT FINANCIAL ADVICE ]
#remindmeofthistweet in six, 12, 18 and 24 months. What logical sense does it make to buy $mstr instead of $btc directly?

Quoting Michael Saylor @saylor
The Orange March Continues.

258 replies · 140 reposts · 901 likes · 404.3K views
ᴸᴵᴸ€ryp (τ,τ) reposted
@jason @Jason
$tao > $btc
654 replies · 466 reposts · 2.4K likes · 1.3M views
ᴸᴵᴸ€ryp (τ,τ) @lilcryptm
@AlexFinn Which local model are you using on your Macs and DGX? And what specific LoRA tool/program for fine-tuning? And while you're at it, any tips/tricks for how to fine-tune properly and efficiently? Thanks
0 replies · 0 reposts · 0 likes · 7 views
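For readers wondering what a LoRA fine-tuning setup actually looks like in code, here is a minimal sketch using Hugging Face `transformers` plus `peft`. The model name, target modules, and hyperparameters are illustrative assumptions, not whatever setup the person being asked actually runs.

```python
# Minimal LoRA fine-tuning skeleton with transformers + peft.
# Model id, target_modules, and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-7B-Instruct"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto", device_map="auto")

lora = LoraConfig(
    r=16,                 # adapter rank: more capacity, more VRAM
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base weights

# From here, train with your usual Trainer/SFTTrainer loop on your dataset,
# then save only the small adapter: model.save_pretrained("my-lora-adapter")
```

The practical upside of LoRA is that only the adapter weights train and ship, so the same base model can carry many task-specific adapters on modest hardware.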
Alex Finn @AlexFinn
My mind is so blown. I have my own personal AI research lab running 24/7/365. I'm just one dude with an entire team of AI agents training models and doing R&D.

I think this is the biggest opportunity right now: taking Karpathy's Autoresearch framework and applying it to everything. I have a team of AI agents running experiments all day and night on system prompts, local models, and LoRAs. I also have them doing R&D on my new project. They spend all day discussing my app, coming up with new ideas, then debating each other. An entire organization of autonomous agents continuously improving my business 24/7/365. I feel like I have unlimited power.

Right now they are all running on ChatGPT 5.4, but today I will move them to local models running on my 3 Mac Studios and DGX Spark, so this will all become free. Free, local superintelligence working for me at all times. 10-year-old me would think this is sci-fi.

Do this immediately:
1. Ask your agent about Karpathy's Autoresearch. Deeply understand it.
2. Ask your agent how you could apply that framework to other projects you're working on.
3. Download a local model. Doesn't matter what computer you have. There is a model you can run on it.
4. Just get used to how it works. Learn from it.
5. Push yourself to get uncomfortable every day and try new things.

There has never been a better/more profitable time to be a tinkerer.
[image]
252 replies · 226 reposts · 2.3K likes · 164.8K views
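Step 3 in the list above, "download a local model", is the one concrete action in the thread, so here is a minimal sketch of doing it with Hugging Face `transformers`. The model id is a placeholder small model chosen to fit modest hardware; swap in whatever your machine can hold (llama.cpp, Ollama, or MLX are common alternatives on Macs).

```python
# Minimal sketch: download and run a small local model with transformers.
# The model id is a placeholder; pick one sized for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-1.5B-Instruct"  # placeholder ~1.5B model, runs on CPU if needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize what a LoRA adapter is in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Once this runs, the rest of the list (steps 4 and 5) is just iterating: trying different models, prompts, and adapters on the same local loop.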
ᴸᴵᴸ€ryp (τ,τ) reposted
Openτensor Foundaτion @opentensor
The largest decentralised LLM pre-training run in history. SN3 @tplr_ai trained Covenant-72B across 70+ contributors on open internet infrastructure. Now it's being discussed by @chamath with @nvidia CEO Jensen Huang. Distributed, open-weight model training on Bittensor is getting started.
76 replies · 380 reposts · 1.7K likes · 117.1K views
ᴸᴵᴸ€ryp (τ,τ) reposted
templar @tplr_ai
On the @theallinpod this week, @chamath asked @nvidia CEO Jensen Huang about decentralized AI training, calling our Covenant-72B run "a pretty crazy technical accomplishment." One correction: it's 72 billion parameters, not four. Trained permissionlessly across 70+ contributors on commodity internet. The largest model ever pre-trained on fully decentralized infrastructure. Jensen's answer is worth hearing too.
102 replies · 403 reposts · 1.7K likes · 456.5K views
ᴸᴵᴸ€ryp (τ,τ) reposted
nordin.eth @nordin_eth
This is huge. $TAO is now part of the conversation between @chamath and @NVIDIA CEO Jensen Huang. Read that again. Credit: @theallinpod
120 replies · 359 reposts · 2.1K likes · 329.8K views