Sam Stolt

18.5K posts


@Sam_A_Stolt

MBA & CFA | I like Crypto - tao | Real Estate Portfolio | Sold 1 business | Bought 5 businesses | Bought 10 media assets | Travelled 21 countries

Clearwater FL · Joined May 2013
4.8K Following · 7K Followers
Sam Stolt reposted
Jeremy
Jeremy@Jeremybtc·
Anthropic accidentally leaked their entire source code yesterday. What happened next is one of the most insane stories in tech history.
> Anthropic pushed a software update for Claude Code at 4AM.
> A debugging file was accidentally bundled inside it.
> That file contained 512,000 lines of their proprietary source code.
> A researcher named Chaofan Shou spotted it within minutes and posted the download link on X.
> 21 million people have seen the thread.
> The entire codebase was downloaded, copied and mirrored across GitHub before Anthropic's team had even woken up.
> Anthropic pulled the package and started firing DMCA takedowns at every repo hosting it.
> That's when a Korean developer named Sigrid Jin woke up at 4AM to his phone blowing up.
> He is the most active Claude Code user in the world, with the Wall Street Journal reporting he personally used 25 billion tokens last year.
> His girlfriend was worried he'd get sued just for having the code on his machine.
> So he did what any engineer would do.
> He rewrote the entire thing in Python from scratch before sunrise.
> Called it claw-code and pushed it to GitHub.
> A Python rewrite is a new creative work. DMCA can't touch it.
> The repo hit 30,000 stars faster than any repository in GitHub history.
> He wasn't satisfied. He started rewriting it again in Rust.
> It now has 49,000 stars and 56,000 forks.
> Someone mirrored the original to a decentralised platform with one message: "will never be taken down."
> The code is now permanent. Anthropic cannot get it back.
Anthropic built a system called Undercover Mode specifically to stop Claude from leaking internal secrets. Then they leaked their own source code themselves. You cannot make this up.
533 replies · 2.4K reposts · 14.1K likes · 622.7K views
Sam Stolt reposted
const
const@const_reborn·
As for OTF budget and funding: we didn't premine, so Ala and I donated a good portion of what we mined in the early days. Then the community kicked in and staked to our validator, allowing us an 18% take on staking yield for the next few years. Those funds went more or less 100% to running chain infrastructure and paying employees. We had 30 employees at peak. Since the foundation is a Canadian not-for-profit, it would have been illegal to take a profit. There are no equity holders and you can't remove funds.
14 replies · 31 reposts · 260 likes · 8.7K views
Sam Stolt
Sam Stolt@Sam_A_Stolt·
finally getting my openclaw to be slightly persistent at getting things done! haha finally!
0 replies · 0 reposts · 0 likes · 52 views
Sam Stolt reposted
mogmachine (ττ)
mogmachine (ττ)@mogmachine·
Honest question for Bittensor builders. If emissions stopped tomorrow... would your subnet survive? Not "would it be fine for a while." Would actual customers pay enough to keep the lights on? Because emissions are runway. Not revenue. And I think a lot of teams haven't internalised that yet. $TAO #Bittensor
29 replies · 13 reposts · 194 likes · 11.6K views
Sam Stolt reposted
const
const@const_reborn·
Everyone should know about what Chutes is doing. Fully permissionless inference mining. Fully end-to-end encrypted. Fully private TEE machines. You could safely send private keys over the wire and know it was fully private. The entire stack is encrypted from your machine to the LLM and back. The fact that that happens on top of a permissionless network, with infra run by god knows who, is nothing short of mind-boggling.
Jon Durbin@jon_durbin

Longer write-up about the end-to-end encryption we launched a few weeks ago 👀 This is one of those things that really should be ubiquitous across AI inference providers. TEE + full end-to-end (attestable) encryption. I also saw @NEARProtocol and @PhalaNetwork have launched a similar E2EE system now too (and @AskVenice via near/phala), which is awesome! Demand better privacy!

25 replies · 104 reposts · 576 likes · 29.4K views
Sam Stolt
Sam Stolt@Sam_A_Stolt·
@jon_durbin great post - completely agree. burning so alpha price doesn't go down is fine but actually using the emissions to build is way better!
0 replies · 0 reposts · 4 likes · 288 views
Jon Durbin
Jon Durbin@jon_durbin·
This makes me a bit sad: minersunion.ai/burn?view=heat… (or see taostats subnet list, sort by incentive burn)

73% of subnets are burning >= 50% of miner emissions, and 61% are at 100% 🤯 AKA ~half of all bittensor subnets are not paying miners to produce their commodity.

There are of course some good reasons to burn emissions, but if you burn 100%, miners are making nothing, therefore there's no incentive to build or provide whatever commodity your subnet produces. And yes, there are other exceptions, e.g. sn3 reduces burn a bit when there is an active model training run and turns it back up between runs, etc.

Would be nice to see that trend in the other direction, so these tokens are flowing into incentivizing this network to be built, rather than just flowing to subnet owners and stakers (i.e. it would be better not to have pumpfun vibes IMO, but this is crypto after all). I don't know what the purpose of emissions/inflows is if there's no corresponding incentive to build all the things.
15 replies · 21 reposts · 135 likes · 4.7K views
Sam Stolt reposted
Quasar
Quasar@QuasarModels·
This is Quasar Attention, the mechanism behind the upcoming Quasar models, designed to support context lengths of up to 5 million tokens.

Attention has long been a bottleneck for processing extended context. Standard attention mechanisms struggle to scale beyond ~200k tokens in training, creating a ceiling on how much information models can reliably use.

One approach to solving this has been linear attention methods, such as gated delta attention (used in Qwen 3.5) or Kimi delta attention. These improve efficiency and allow longer sequences, but introduce trade-offs: instability at extreme lengths, quality degradation, and in practice, they are not strictly linear.

Quasar Attention takes a different approach. It uses a continuous-time formulation, implemented as a fully matrix-based system rather than relying on vector-state approximations. In practice, this improves stability, reduces cost, and maintains performance as sequence length increases. In internal stress tests at 50 million tokens, KDA-based approaches begin to lose stability, while Quasar Attention remains stable. This allows performance to hold as sequence length increases, rather than degrading beyond a fixed threshold.

On BABILong, a Quasar-based model pretrained on 20B tokens and fine-tuned on 16k sequences was evaluated on contexts ranging from 1 million to 10 million tokens, maintaining consistent performance across that range. By contrast, models using gated delta attention show significant degradation at longer lengths, in some cases dropping to ~10% performance at 10 million tokens. (Note: results are indicative; setups are not directly comparable.)

On RULER benchmarks, a Quasar-10B model (built on Qwen 3.5 with frozen base weights and Quasar Attention added), pretrained on 200B tokens, achieved 87% at 1 million tokens, outperforming significantly larger baselines, including Qwen3 80B, under the same evaluation conditions.

Taken together, this points to a shift in where long-context performance is won or lost: not in model size alone, but in the attention mechanism itself. Quasar Attention represents a step change in long-context modelling, setting a new standard for stability and performance at scale.

We thank @TargonCompute for the compute and for being our compute provider and long-term partner in training the upcoming Quasar models. Here is the link to our paper 👇
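The tweet contrasts Quasar Attention with linear-attention methods but does not publish the mechanism itself. For readers unfamiliar with the baseline being compared against, here is a minimal sketch of the generic causal linear-attention recurrence those methods share: a running matrix-valued state replaces the O(n²) score matrix, giving O(n) cost. The elu+1 feature map and all names here are illustrative assumptions, not Quasar's (or KDA's) actual formulation.

```python
import numpy as np

def linear_attention(Q, K, V):
    """O(n) causal linear attention via a running matrix state.

    Instead of materializing the n x n score matrix, each step folds
    the outer product k_t v_t^T into a (d x d_v) state S and reads it
    out with q_t. The feature map phi = elu(x) + 1 keeps values
    positive (a common choice in the literature, assumed here).
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))   # matrix-valued associative state
    z = np.zeros(d)                 # running normalizer state
    out = np.empty_like(V)
    for t in range(n):
        q, k = phi(Q[t]), phi(K[t])
        S += np.outer(k, V[t])      # accumulate key-value association
        z += k
        out[t] = (q @ S) / (q @ z + 1e-6)
    return out
```

The fixed-size state is what makes these methods linear in sequence length, and also why they can degrade at extreme lengths: everything seen so far is compressed into one matrix.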
24 replies · 81 reposts · 249 likes · 105.6K views
Sam Stolt reposted
Andy ττ
Andy ττ@bittingthembits·
🚨 Intel just co-authored a whitepaper with Manifold Labs @TargonCompute, a Bittensor subnet team, $TAO's SN4. Let that land for a second 🔥

Not a blog post. Not a tweet. A formal technical whitepaper published on Intel's official site. Co-written by Intel engineers alongside the Manifold Labs team, detailing architecture they built together. This is Intel validating decentralized compute at the deepest technical level possible. Most people will scroll past this without realizing what just happened.

The single biggest objection to decentralized compute has always been trust. If your AI workload runs on some random person's hardware, how do you know they're not stealing your data? Your model weights? Your training sets? Your proprietary information? That objection just died.

Here's what Manifold Labs built with Intel. When you rent a GPU from Amazon or Google, you trust them not to look at your data. That's it. Trust. A legal agreement. A reputation. But technically, the cloud provider can see everything running on their machines. Your data is encrypted when it's stored. It's encrypted when it moves across the network. But the moment it's actually being used, the moment the GPU is computing on it, it's exposed. Decrypted in memory. Visible to the host. Every centralized cloud provider operates on this assumption. You just trust them not to look.

Manifold Labs said: we don't trust anyone. And neither should you. They used Intel's TDX technology to create Confidential Virtual Machines. These are encrypted environments where your data is protected at every stage. At rest. In transit. And during execution. The person who owns the hardware physically cannot see what's running on their own machine. Not the data. Not the model. Not the intermediate computations. Nothing. The hardware itself enforces the privacy. Not a contract. Not a promise. Silicon.

And it doesn't just check once. Every 72 minutes, the system re-verifies that the machine is still in a genuine confidential state. If anything changes, if the boot chain is tampered with, if the hardware is modified, if the VM is moved to a different machine, the system detects it immediately and shuts it down. The encrypted disk becomes permanently inaccessible. You cannot copy it. You cannot move it. You cannot replay it. You cannot inspect it. The cryptography is enforced at the CPU level by Intel and at the GPU level by NVIDIA.

Think about what this unlocks. Every enterprise that has ever said "we can't use decentralized compute because of security concerns" just lost their argument. Every regulated industry, healthcare, finance, legal, defense, that requires data to be protected during processing now has a decentralized option that meets enterprise-grade security requirements. Every AI startup that can't afford $50,000 a month for secure cloud instances can now access the same level of confidential computing on decentralized infrastructure at a fraction of the cost.

Intel did not do this as a favor. Intel co-authored this paper because they see decentralized confidential computing as a legitimate market. They see Bittensor subnet teams building real infrastructure that their enterprise customers will use. They see the future of compute and they are choosing to build with the decentralized ecosystem rather than against it. A Fortune 500 semiconductor company just put its name next to a Bittensor subnet's architecture in a formal technical publication. That is not a partnership announcement. That is institutional recognition at the hardware layer.

The cloud monopoly is built on two things: scale and trust. Bittensor subnets are solving scale through incentivized compute. Manifold Labs and Intel just solved trust through hardware-enforced confidentiality. Both walls are falling. $TAO DYOR.
Targon@TargonCompute

We needed to run trusted workloads on untrusted host machines. So over a year ago, we started building the Targon Virtual Machine to enable Confidential TEEs in production. Today we're sharing our white paper written alongside @intel: Decentralized Compute on Untrusted Hardware Using Intel® TDX and Encrypted CVMs
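The periodic re-verification described above is, at its core, a control loop: attest, compare against an expected measurement, and tear down on the first mismatch. The sketch below shows only that control flow. Everything in it is illustrative: real Intel TDX attestation uses hardware-rooted signed quotes verified against Intel's attestation service, not an HMAC, and `verify_quote`, `attestation_loop`, and the expected-measurement value are hypothetical names invented for this sketch.

```python
import hashlib
import hmac

# Hypothetical: a real measurement is a hash of the boot chain,
# reported inside a hardware-signed TDX quote.
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-boot-chain").hexdigest()

def verify_quote(quote: dict, key: bytes) -> bool:
    """Toy attestation check: the reported measurement must match the
    expected boot-chain hash, and the quote must carry a valid MAC.
    (Stand-in for verifying an Intel-signed quote.)"""
    mac = hmac.new(key, quote["measurement"].encode(), hashlib.sha256).hexdigest()
    return (quote["measurement"] == EXPECTED_MEASUREMENT
            and hmac.compare_digest(mac, quote["mac"]))

def attestation_loop(get_quote, key: bytes, rounds: int, shutdown) -> bool:
    """Re-verify on a fixed cadence (the tweet describes a 72-minute
    interval); on the first failed check, tear down the machine."""
    for _ in range(rounds):
        if not verify_quote(get_quote(), key):
            shutdown()  # e.g. revoke disk-encryption keys, halt the CVM
            return False
    return True
```

The key property is that failure is terminal: once a check fails, the disk keys are gone and the state cannot be copied, moved, or replayed.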

5 replies · 45 reposts · 200 likes · 15.3K views
Sam Stolt reposted
Sami Kassab
Sami Kassab@Old_Samster·
what will be different in this TAO bull run:

Besides TAO emissions being halved in Dec 2025, all TAO emissions now flow into subnet liquidity pools rather than straight to miners.

Subnet owners are highly incentivized to prevent TAO extraction since it directly affects their token price, and the subnet teams have gotten exceptionally good at building robust incentive mechanisms to prevent reward gaming. At the same time, they've learned how to throttle emissions to only what's necessary to operate their subnet.

Less sell pressure, better-designed incentives, and cleaner tokenomics create a really nice setup this time
6 replies · 33 reposts · 240 likes · 17.4K views
Sam Stolt reposted
templar
templar@tplr_ai·
On the @theallinpod this week, @chamath asked @nvidia CEO Jensen Huang about decentralized AI training, calling our Covenant-72B run "a pretty crazy technical accomplishment." One correction: it's 72 billion parameters, not four. Trained permissionlessly across 70+ contributors on commodity internet. The largest model ever pre-trained on fully decentralized infrastructure. Jensen's answer is worth hearing too.
99 replies · 402 reposts · 1.7K likes · 455.1K views
Sam Stolt
Sam Stolt@Sam_A_Stolt·
@LeadpoetAI this is interesting? what's the cost per lead?
1 reply · 0 reposts · 1 like · 78 views
Leadpoet
Leadpoet@LeadpoetAI·
Introducing Leadpoet. The AI agent that delivers ready-to-buy prospects on demand. Your next customer is already looking for your solution. Leadpoet finds them. Comment “Poet” and we’ll send you 100 free lead credits for your ICP.
694 replies · 106 reposts · 743 likes · 682.1K views