ᴸᴵᴸ€ryp (τ,τ)

5.1K posts


@lilcryptm

crypto ~ class of 2020 - $BTC - $TAO

Metaverse · Joined September 2016
858 Following · 891 Followers
Pinned Tweet
ᴸᴵᴸ€ryp (τ,τ)
ᴸᴵᴸ€ryp (τ,τ)@lilcryptm·
$TAO is the $SOL of this cycle.
2021 = L1s → 2025 = AI
ᴸᴵᴸ€ryp (τ,τ) tweet media
18 replies · 32 reposts · 230 likes · 22.3K views
ᴸᴵᴸ€ryp (τ,τ) reposted
Algod
Algod@AlgodTrading·
Bittensor obviously has flaws:
- a few issues with the incentive mechanism
- overall subnet quality could be better
- multi-consensus could open up more use cases
That being said, the quality is increasing at a rapid pace, and many people from frontier labs and elsewhere are starting to build and compete. One of the few narratives in crypto that actually makes sense. Show me another project in crypto with the same potential and development activity as Bittensor.
49 replies · 72 reposts · 479 likes · 28K views
ᴸᴵᴸ€ryp (τ,τ)
@AlexFinn Wdym by "until it's too late"? How will it ever be too late to own my own models? How would anyone take away open models?
0 replies · 0 reposts · 0 likes · 30 views
Alex Finn
Alex Finn@AlexFinn·
I told you so. For months I've been telling you to buy Mac Minis, Mac Studios, and DGX Sparks. I told you AI companies were going to ban you, reduce limits, increase prices. Now it's happening, all while local models get 100x better. My DMs are now filled with messages like this. I don't care that Anthropic banned OpenClaw. Right now I have 3 Mac Studios, a Mac Mini, and a DGX Spark running incredible local models. You can never take those away from me. This isn't even close to over, either. Tokens will only get more expensive. Local models will only get better and smaller. The clock's ticking. Own your intelligence before it's too late.
Alex Finn tweet media
Alex Finn@AlexFinn

It's over. Anthropic just banned OpenClaw. Uncensored thoughts:
1. Massive mistake that will come back to bite them
2. Open source needs to win. If you have a local model running on your Mac mini, no corporation will ever be able to ban you
3. ChatGPT 5.4 is the best model. But it sucks compared to Opus in OpenClaw. I will continue to pay for the Anthropic API
4. I have no doubt the next OpenAI model will be optimized for OpenClaw and be excellent
5. In 6 months the local models will be as good as Opus 4.6 and all of this will be forgotten
6. It feels like, from a consumer-sentiment perspective, things have flipped for OpenAI and Anthropic. They were the darlings when Opus 4.5 came out
7. Going to the Kanye concert right now, please don't spoil the stage or set list in the replies
8. The best OpenClaw setup is now Opus as the orchestrator, then much cheaper models as the execution layer. If you do this properly you won't be paying much more than $200 a month. I'm using Gemma 4 and Qwen 3.5 for execution on my DGX Spark and Mac Studio

140 replies · 37 reposts · 650 likes · 98K views
ᴸᴵᴸ€ryp (τ,τ) reposted
Chutes
Chutes@chutes_ai·
Does your coding agent run on Claude Sonnet 4.6, costing you $3.00 per million input tokens and $15.00 per million output? MiniMax M2.5 on Chutes costs just $0.19 input and $1.15 output, all while running inside a secure and private TEE.

M2.5 scores 80.2% on SWE-Bench Verified while Sonnet 4.6 scores 79.6%. You might be paying 15x more per input token and 13x more per output token for a model that scores lower on the benchmark most teams use to evaluate coding agents. M2.5 also scores 51.3% on Multi-SWE-Bench (multi-repo tasks) and 76.3% on BrowseComp (agentic search). MiniMax trained it across 200,000+ real-world coding environments in 10+ languages.

The TEE variant on Chutes means your prompts and outputs stay inside a hardware-secured enclave. Claude's API has no equivalent option. Just swap the model string and run your eval suite. Compare and see for yourself the power of open source models on Chutes. 🔗 chutes.ai/app/chute/ce6a…
Chutes tweet media
20 replies · 62 reposts · 284 likes · 14.5K views
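The pricing claim above is simple arithmetic worth checking. A quick sketch using the per-million-token prices quoted in the tweet; the 10M-input / 2M-output monthly workload is an assumed illustration, not anything from the post:

```python
# Per-million-token prices quoted in the tweet (USD).
SONNET_IN, SONNET_OUT = 3.00, 15.00   # Claude Sonnet 4.6
M25_IN, M25_OUT = 0.19, 1.15          # MiniMax M2.5 on Chutes

def cost(tokens_in: int, tokens_out: int, price_in: float, price_out: float) -> float:
    """Total USD cost for a token budget, given prices per 1M tokens."""
    return tokens_in / 1e6 * price_in + tokens_out / 1e6 * price_out

# Ratios line up with the tweet's "15x input / 13x output" framing (rounded down).
input_ratio = SONNET_IN / M25_IN      # ≈ 15.8x
output_ratio = SONNET_OUT / M25_OUT   # ≈ 13.0x

# Hypothetical agent workload: 10M input + 2M output tokens per month.
sonnet_monthly = cost(10_000_000, 2_000_000, SONNET_IN, SONNET_OUT)
m25_monthly = cost(10_000_000, 2_000_000, M25_IN, M25_OUT)

print(f"{input_ratio:.1f}x input, {output_ratio:.1f}x output")
print(f"Sonnet: ${sonnet_monthly:.2f}/mo vs M2.5: ${m25_monthly:.2f}/mo")
```

At that assumed volume the gap is roughly $60/month versus about $4/month, which is the shape of the argument the tweet is making.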
Mark Jeffrey
Mark Jeffrey@markjeffrey·
Spread these words to all the nerds.
Mark Jeffrey tweet media
7 replies · 31 reposts · 230 likes · 6.9K views
ᴸᴵᴸ€ryp (τ,τ) reposted
const
const@const_reborn·
Everyone should know about what Chutes is doing. Fully permissionless inference mining. Fully end-to-end encrypted. Fully private TEE machines. You could safely send private keys over the wire and know it was fully private. The entire stack is encrypted from your machine to the LLM and back. The fact that this happens on top of a permissionless network, with infra run by god knows who, is nothing short of mind-boggling.
Jon Durbin@jon_durbin

Longer write-up about the end-to-end encryption we launched a few weeks ago 👀 This is one of those things that really should be ubiquitous across AI inference providers. TEE + full end-to-end (attestable) encryption. I also saw @NEARProtocol and @PhalaNetwork have launched a similar E2EE system now too (and @AskVenice via near/phala), which is awesome! Demand better privacy!

26 replies · 103 reposts · 574 likes · 29.8K views
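The core idea behind "encrypted from your machine to the LLM and back" is authenticated encryption: seal the prompt under a shared key, verify a MAC before decrypting. Below is a toy stdlib-only sketch of the encrypt-then-MAC pattern; it is not Chutes' actual protocol, and the hand-rolled SHA-256 keystream is for illustration only and must never be used in place of a real cipher (AES-GCM, ChaCha20-Poly1305):

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream. Illustration only, NOT secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC: returns nonce || ciphertext || tag."""
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(key: bytes, sealed: bytes) -> bytes:
    """Verify the tag first; only then decrypt."""
    nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed: message was tampered with")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))

key = secrets.token_bytes(32)  # in a real TEE setup, derived via an attested key exchange
prompt = b"my private prompt"
assert open_sealed(key, seal(key, prompt)) == prompt
```

In the TEE setting the key exchange itself is bound to a hardware attestation, so the client can check it is talking to genuine enclave code before any prompt leaves the machine.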
ᴸᴵᴸ€ryp (τ,τ) reposted
Clemente
Clemente@Chilearmy123·
The average $TAO holder be like
207 replies · 91 reposts · 1.3K likes · 212.3K views
ᴸᴵᴸ€ryp (τ,τ)
ᴸᴵᴸ€ryp (τ,τ)@lilcryptm·
@AlexFinn Since you're on top of things related to AI, have you checked out any of the Bittensor subnets for your overall setup?
0 replies · 0 reposts · 0 likes · 2 views
Alex Finn
Alex Finn@AlexFinn·
This is potentially the biggest news of the year. Google just released TurboQuant, an algorithm that makes LLMs smaller and faster without losing quality. Meaning that a 16GB Mac Mini can now run INCREDIBLE AI models. Completely locally, free, and secure.

This also means:
• Much larger context windows possible with way less slowdown and degradation
• You'll be able to run high-quality AI on your phone
• Speed and quality up. Prices down.

The people who made fun of you for buying a Mac Mini now have major egg on their face. This pushes all of AI forward in such a MASSIVE way. It can't be stated enough: props to Google for releasing this for all. They could have gatekept it for themselves like I imagine a lot of other big AI labs would have. They didn't. They decided to advance humanity. 2026 is going to be the biggest year in human history.
Google Research@GoogleResearch

Introducing TurboQuant: Our new compression algorithm that reduces LLM key-value cache memory by at least 6x and delivers up to 8x speedup, all with zero accuracy loss, redefining AI efficiency. Read the blog to learn how it achieves these results: goo.gle/4bsq2qI

333 replies · 874 reposts · 9.7K likes · 1.5M views
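The quoted post gives no algorithmic detail, but the general mechanism behind KV-cache compression claims is quantization: store cache entries at lower precision and dequantize on read. Here is a toy per-tensor symmetric int8 sketch in plain Python; this is not Google's algorithm, and plain fp16→int8 only buys 2x (well short of the 6x claimed):

```python
def quantize_int8(values):
    """Map floats to int8 codes with one shared scale (symmetric quantization)."""
    scale = max(abs(v) for v in values) / 127 or 1.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 codes."""
    return [v * scale for v in q]

kv_cache = [0.8, -1.5, 0.02, 3.1, -0.4]   # stand-in for fp16 KV-cache entries
q, scale = quantize_int8(kv_cache)
restored = dequantize(q, scale)

# int8 stores 1 byte per entry vs 2 bytes for fp16: a 2x memory reduction,
# at the cost of rounding error bounded by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(kv_cache, restored))
assert max_err <= scale / 2 + 1e-9
```

Real systems push further with per-channel scales, sub-byte codes, and error correction, which is presumably where any claimed 6x-with-no-accuracy-loss figure would have to come from.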
ᴸᴵᴸ€ryp (τ,τ) reposted
@jason
@jason@Jason·
This Bittensor Subnet Could Cut Drug Discovery Costs in Half x.com/i/broadcasts/1…
44 replies · 85 reposts · 519 likes · 57.6K views
ᴸᴵᴸ€ryp (τ,τ)
ᴸᴵᴸ€ryp (τ,τ)@lilcryptm·
@AlgodTrading Tried setting up my miner w/ OpenClaw x Kimi, but it's been insufficient... Claude Code doesn't seem to even know what TAO actually is, let alone subnets and how to mine them efficiently. Any tips/tricks on which subnets to mine with agents, and what models you use / your setup?
Algod@AlgodTrading

If you're in Bittensor, use Claude Code or set up an OpenClaw instance, and try to find holes or outcompete miners on subnets. The better the miner output, the faster Bittensor gets full-blown adoption. Everyone can mine now, just be creative.

0 replies · 0 reposts · 1 like · 92 views
@jason
@jason@Jason·
[ NOT FINANCIAL ADVICE ] [ NOT FINANCIAL ADVICE ]
Family member: "What's this $tao you keep talking about on the pod?"
Me: "Sell half your $btc, buy $tao"
[ NOT FINANCIAL ADVICE ] [ NOT FINANCIAL ADVICE ]
#remindmeofthistweet in six, 12, 18 and 24 months
What logical sense does it make to buy $mstr instead of $btc directly?
Michael Saylor@saylor

The Orange March Continues.

258 replies · 139 reposts · 893 likes · 404.6K views
ᴸᴵᴸ€ryp (τ,τ)
ᴸᴵᴸ€ryp (τ,τ)@lilcryptm·
@bleighky @Jason You think if he was bullish on something he wouldn't invest? Or what are you implying with this?
1 reply · 0 reposts · 0 likes · 63 views
ᴸᴵᴸ€ryp (τ,τ) reposted
@jason
@jason@Jason·
$tao > $btc
654 replies · 464 reposts · 2.4K likes · 1.3M views
ᴸᴵᴸ€ryp (τ,τ)
ᴸᴵᴸ€ryp (τ,τ)@lilcryptm·
@AlexFinn Which local model are you using on your Macs and DGX? And what specific LoRA tool/program for fine-tuning? And while you're at it, any tips/tricks for how to fine-tune properly and efficiently? Thanks
0 replies · 0 reposts · 0 likes · 7 views
Alex Finn
Alex Finn@AlexFinn·
My mind is so blown. I have my own personal AI research lab running 24/7/365. I'm just one dude with an entire team of AI agents training models and doing R&D.

I think this is the biggest opportunity right now: taking Karpathy's Autoresearch framework and applying it to everything. I have a team of AI agents running experiments all day and night on system prompts, local models, and LoRAs. I also have them doing R&D on my new project. They spend all day discussing my app, coming up with new ideas, then debating each other. An entire organization of autonomous agents continuously improving my business 24/7/365. I feel like I have unlimited power.

Right now they are all running on ChatGPT 5.4, but today I will move them to local models running on my 3 Mac Studios and DGX Spark, so this will all become free. Free, local superintelligence working for me at all times. 10-year-old me would think this is sci-fi.

Do this immediately:
1. Ask your agent about Karpathy's Autoresearch. Deeply understand it
2. Ask your agent how you could apply that framework to other projects you're working on
3. Download a local model. Doesn't matter what computer you have. There is a model you can run on it.
4. Just get used to how it works. Learn from it.
5. Push yourself to get uncomfortable every day and try new things.

There has never been a better or more profitable time to be a tinkerer.
Alex Finn tweet media
252 replies · 225 reposts · 2.3K likes · 165.2K views
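For anyone attempting a setup like the one described, local servers such as Ollama and llama.cpp expose an OpenAI-compatible `/v1/chat/completions` endpoint, so "moving agents to local models" is mostly a matter of pointing requests at localhost. A minimal stdlib-only sketch; the port is Ollama's default, and the model tag is an assumption you should swap for whatever your server has pulled:

```python
import json
import urllib.request

LOCAL_BASE = "http://localhost:11434/v1"   # Ollama's default port (assumption)
MODEL = "qwen2.5:7b"                       # example tag; use any model you have pulled

def build_request(messages, model=MODEL, temperature=0.2):
    """Build an OpenAI-style chat-completions payload for a local server."""
    return {"model": model, "messages": messages, "temperature": temperature}

def chat(messages):
    """POST the payload to the local endpoint and return the reply text."""
    payload = build_request(messages)
    req = urllib.request.Request(
        f"{LOCAL_BASE}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running local server):
# chat([{"role": "user", "content": "Summarize today's experiment log."}])
```

Because the request shape matches hosted OpenAI-compatible APIs, the same agent code can flip between a paid endpoint and a local one by changing only the base URL and model string.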
ᴸᴵᴸ€ryp (τ,τ) reposted
Openτensor Foundaτion
Openτensor Foundaτion@opentensor·
The largest decentralised LLM pre-training run in history. SN3 @tplr_ai trained Covenant-72B across 70+ contributors on open internet infrastructure. Now it's being discussed by @chamath with @nvidia CEO Jensen Huang. Distributed, open-weight model training on Bittensor is just getting started.
76 replies · 375 reposts · 1.7K likes · 117.4K views
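At its core, a multi-contributor pre-training run like the one described works by having each contributor compute gradients on its own data shard, averaging those gradients across the network, and only then applying a weight update. A toy sketch of that averaging step in plain Python; the real run's topology, gradient compression, and incentive/validation machinery are far beyond this and are not shown:

```python
def average_gradients(contributor_grads):
    """All-reduce by averaging: one gradient vector (list of floats) per contributor."""
    n = len(contributor_grads)
    dim = len(contributor_grads[0])
    return [sum(g[i] for g in contributor_grads) / n for i in range(dim)]

def sgd_step(weights, grads, lr=0.1):
    """One plain SGD update using the averaged gradient."""
    return [w - lr * g for w, g in zip(weights, grads)]

# Three contributors compute gradients on their own data shards.
grads = [[0.2, -0.4], [0.4, 0.0], [0.0, -0.2]]
avg = average_gradients(grads)        # ≈ [0.2, -0.2]
weights = sgd_step([1.0, 1.0], avg)   # ≈ [0.98, 1.02]
```

Because every contributor applies the same averaged gradient, all copies of the model stay synchronized without any single party hosting the training run.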