Veighτ | TuDudes

279 posts

@0xVeight

Half @0xTuDudes | Half @chutes_ai | #DecriminalizeNFTs

web3 · Joined April 2023
337 Following · 186 Followers
Pinned Tweet
Veighτ | TuDudes@0xVeight·
More than excited to take Chutes into the stratosphere!
Chutes@chutes_ai

Chutes Team Expansion Update: We're excited to share that @0xTuDudes has officially joined Chutes.ai to bring extra firepower across development, sales, marketing, and support. The team has been working hard behind the scenes to scale up and push Chutes forward, and this marks a big step in that direction. Say hello to @0xVeight, @0xsirouk, @0xAlgowary + more in our Discord, here on X, or in some new channels listed below 👇

0 replies · 0 reposts · 2 likes · 326 views
Veighτ | TuDudes retweeted
Nous Research@NousResearch·
How it feels to discover Hermes
Nous Research tweet media
65 replies · 69 reposts · 999 likes · 37.2K views
Veighτ | TuDudes retweeted
Chutes@chutes_ai·
We think you should know exactly who's shipping your inference. No CEO. No board. No single person who can pull the plug. Ten contributors, a smart contract, and 100B+ tokens processed daily.
@jon_durbin, Backend
@airesearch12 (Florian), Backend + Research
@KyleWidmann, Backend, built our TEE
@0xsirouk (Chris), Backend
@0xveight (Vince), Frontend
@0xAlgowary (Timon), Sales & Customer Success
@Fezicles_eth, Marketing & Community
@RCSKBD (Rykorb), Technical Staff
Cxmplex, Backend
Vonkaiser, Technical Staff
Our funds live in a smart contract that pays staking rewards to team members. No one person controls the money, the code, or the direction. Now you know. $TAO
Chutes tweet media
33 replies · 70 reposts · 436 likes · 24.4K views
Veighτ | TuDudes retweeted
Jon Durbin@jon_durbin·
Chutes is and always will be a Bittensor project; chutes is bittensor, and bittensor is chutes 🫶 Reminder too that the Chutes team operates as a group of independent corporations with no CEO, and our funds are locked into a smart contract which pays out staking rewards to fund the team members. If any subnet teams want assistance setting up a similar smart contract arrangement, or anything else, we are happy to help.
37 replies · 131 reposts · 820 likes · 53K views
Veighτ | TuDudes retweeted
Raleigh, CA | Taoshi@Raleigh_CA·
Why $TAO is about to break out.
55 replies · 120 reposts · 619 likes · 61.8K views
Veighτ | TuDudes retweeted
Chutes@chutes_ai·
State of the art on SWE-Bench Pro is now open source. @Zai_org's GLM-5.1 is live on Chutes. 58.4 on SWE-Bench Pro, ahead of Claude Opus 4.6 (57.3), GPT-5.4 (57.7), and Gemini 3.1 Pro (54.2). The first open-weights model to lead that benchmark outright.
It also leads:
• CyberGym: 68.7 (best in class)
• BrowseComp: 68.0 (best in class)
• Terminal-Bench 2.0 with Claude Code: 66.5
• τ³-Bench: 70.6
• HLE with tools: 52.3
• NL2Repo: 42.7
754B params. MIT licensed. Built to hold its edge across hundreds of rounds and thousands of tool calls without plateauing.
Running inside a TEE on Chutes: the GPU operators serving the model can't see your prompts or outputs. Confidential compute at the hardware level, on a decentralized GPU network.
$0.95 in / $3.15 out per million tokens. Try it now 🔗 chutes.ai/app/chute/b048…
Chutes tweet media
13 replies · 28 reposts · 205 likes · 9.8K views
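The quoted per-token pricing invites a quick cost check. A minimal sketch in Python: only the $0.95/$3.15 rates come from the tweet; the token counts are illustrative assumptions.

```python
# Rates quoted in the tweet above (USD per million tokens).
PRICE_IN = 0.95
PRICE_OUT = 3.15

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one session at the quoted GLM-5.1 rates."""
    return input_tokens / 1e6 * PRICE_IN + output_tokens / 1e6 * PRICE_OUT

# Illustrative workload (token counts are assumptions, not from the tweet):
print(f"${session_cost(1_000_000, 1_000_000):.2f}")  # → $4.10
```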
Veighτ | TuDudes retweeted
Chutes@chutes_ai·
OpenRouter has updated our provider status after verifying our privacy policy, thanks to our recent updates. Chutes is in their default routing now! openrouter.ai/provider/chutes
Chutes tweet media
15 replies · 70 reposts · 326 likes · 41.4K views
Veighτ | TuDudes retweeted
Alex DRocks@DrocksAlex2·
Bittensor subnet 64 is a central piece of the $TAO network. For context, this is subnet 11 in the screenshot.
Alex DRocks tweet media
3 replies · 6 reposts · 55 likes · 2.6K views
Veighτ | TuDudes retweeted
Alex DRocks@DrocksAlex2·
Chutes revenues are growing @chutes_ai, even with a leaner GPU inventory and no $TAO emissions.
Higher revenue = more $ captured per hosted GPU
Fewer GPUs = more efficient usage of what's available
These are things you want to see when aiming for long-term profitability. The net result is more value staying within the Bittensor ecosystem.
Chutes has never burned miner emissions since day 1. The price pumped in the early dTao days and then trended back down. That could have killed the project, but in fact it's standing tall and stronger than before. When the price was way up, the network was much less efficient and didn't even have the privacy E2EE + TEE services yet. Now it's battle-tested and keeps improving, all while revenues are going in the right direction.
You either hate it or love it, but those are verifiable facts, on-chain and in the open-source code at github.com/chutesai. This isn't just me saying it; you can verify it yourself now with an AI agent.
Alex DRocks tweet media
9 replies · 28 reposts · 113 likes · 4K views
Veighτ | TuDudes retweeted
Chutes@chutes_ai·
Does your coding agent run on Claude Sonnet 4.6, costing you $3.00 per million input tokens and $15.00 per million output? MiniMax M2.5 on Chutes costs just $0.19 input and $1.15 output, all while running inside a secure and private TEE.
M2.5 scores 80.2% on SWE-Bench Verified while Sonnet 4.6 scores 79.6%. You might be paying 15x more per input token and 13x more per output token for a model that scores lower on the benchmark most teams use to evaluate coding agents.
M2.5 also scores 51.3% on Multi-SWE-Bench (multi-repo tasks) and 76.3% on BrowseComp (agentic search). MiniMax trained it across 200,000+ real-world coding environments in 10+ languages.
The TEE variant on Chutes means your prompts and outputs stay inside a hardware-secured enclave. Claude's API has no equivalent option.
Just swap the model string and run your eval suite. Compare and see for yourself the power of open-source models on Chutes. 🔗 chutes.ai/app/chute/ce6a…
Chutes tweet media
20 replies · 60 reposts · 281 likes · 15.1K views
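The "15x / 13x" claim above is simple arithmetic on the quoted prices; a two-line check, using only the numbers from the tweet:

```python
# Prices per million tokens quoted in the tweet above (USD).
sonnet_in, sonnet_out = 3.00, 15.00
m25_in, m25_out = 0.19, 1.15

in_ratio = sonnet_in / m25_in    # 3.00 / 0.19 ≈ 15.8
out_ratio = sonnet_out / m25_out # 15.00 / 1.15 ≈ 13.0
print(f"{in_ratio:.1f}x input, {out_ratio:.1f}x output")  # → 15.8x input, 13.0x output
```

So "15x" is rounded down slightly and "13x" matches the quoted rates.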
Veighτ | TuDudes retweeted
const@const_reborn·
Everyone should know about what Chutes is doing. Fully permissionless inference mining. Fully end-to-end encrypted. Fully private TEE machines. You could safely send private keys over the wire and know it was fully private. The entire stack is encrypted from your machine to the LLM and back. The fact that this happens on top of a permissionless network, with infra run by god knows who, is nothing short of mind-boggling.
Jon Durbin@jon_durbin

Longer write-up about the end-to-end encryption we launched a few weeks ago 👀 This is one of those things that really should be ubiquitous across AI inference providers. TEE + full end-to-end (attestable) encryption. I also saw @NEARProtocol and @PhalaNetwork have launched a similar E2EE system now too (and @AskVenice via near/phala), which is awesome! Demand better privacy!

26 replies · 104 reposts · 571 likes · 30.6K views
Veighτ | TuDudes retweeted
Jon Durbin@jon_durbin·
Neat, we are regularly hitting > 1 million total tokens (in+out) per second on average across LLMs.
Jon Durbin tweet media
11 replies · 22 reposts · 170 likes · 10.4K views
Veighτ | TuDudes@0xVeight·
Actually huge
Chutes@chutes_ai

AI inference should not require trust in infrastructure. chutes.ai/news/end-to-en…
On March 2nd we shipped end-to-end encrypted transport on Chutes. Here's how it actually works under the hood.
Your data is encrypted on your machine, directly to the GPU instance running inside a Trusted Execution Environment. It stays encrypted through our API, load balancers, and the network. Decryption only happens inside TEE-protected hardware where memory is isolated from the host. Impossible for anyone to see, including us.
The key exchange uses ML-KEM-768, a NIST-standardized post-quantum key encapsulation mechanism. Every request gets a fresh ephemeral keypair. Forward secrecy by default. Resistant to future quantum attacks.
Full technical breakdown in the blog: chutes.ai/news/end-to-en…
If you want to try it:
→ Python: pip install chutes-e2ee
→ Any language: docker run parachutes/e2ee-proxy:latest
github.com/chutesai/chute…
github.com/chutesai/e2ee-…

0 replies · 0 reposts · 0 likes · 10 views
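The per-request ephemeral-keypair flow the quoted tweet describes can be sketched in miniature. This is not the chutes-e2ee client (its API isn't shown here), and it uses a toy finite-field Diffie-Hellman in place of ML-KEM-768, purely to show why a fresh keypair per request gives forward secrecy.

```python
import secrets

# Toy Diffie-Hellman over the 127-bit Mersenne prime. This stands in for
# the ML-KEM-768 key encapsulation the tweet describes; it illustrates
# per-request ephemeral keys only, and is nowhere near production crypto.
P = 2**127 - 1  # prime modulus (far too small for real use)
G = 3

def ephemeral_keypair():
    """Mint a fresh secret/public pair: one per request, never reused."""
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return sk, pk

def shared_secret(their_pk: int, my_sk: int) -> int:
    return pow(their_pk, my_sk, P)

# Request 1: client and TEE each mint ephemeral keys, derive one secret.
c_sk, c_pk = ephemeral_keypair()
t_sk, t_pk = ephemeral_keypair()
assert shared_secret(t_pk, c_sk) == shared_secret(c_pk, t_sk)

# Request 2: brand-new keypairs yield an unrelated secret, so compromising
# one request's keys reveals nothing about any other request.
c_sk2, c_pk2 = ephemeral_keypair()
t_sk2, t_pk2 = ephemeral_keypair()
assert shared_secret(t_pk2, c_sk2) != shared_secret(t_pk, c_sk)
```

A real deployment would use ML-KEM-768 encapsulation plus an authenticated cipher for the payload, with the decapsulation key living only inside the TEE; the forward-secrecy argument is the same.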