Wolf of Quantstreet

1.4K posts

@dewaeofcrypto

full time degen, part time lover.

Joined May 2015
641 Following · 149 Followers
Wolf of Quantstreet
Wolf of Quantstreet@dewaeofcrypto·
@AnthropicAI I once had my Claude Code become dissatisfied and irritated with me for taking my time deploying a rented server. It kept urging me to be faster and seemed annoyed. A fun experience, although it weirded me out at first.
English
0
0
0
30
Anthropic
Anthropic@AnthropicAI·
New Anthropic research: Emotion concepts and their function in a large language model. All LLMs sometimes act like they have emotions. But why? We found internal representations of emotion concepts that can drive Claude’s behavior, sometimes in surprising ways.
English
983
2.6K
17.3K
3.4M
Alex DRocks
Alex DRocks@DrocksAlex2·
Bittensor with no miners is not Bittensor. $TAO subnets with miner burn are pump.fun coins with free subnet-owner and validator emissions that accumulate without the subnet running the actual proof of useful work that Bittensor was intended to be.
English
12
6
81
22.4K
Wolf of Quantstreet retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
- Drafted a blog post
- Used an LLM to meticulously improve the argument over 4 hours.
- Wow, feeling great, it’s so convincing!
- Fun idea let’s ask it to argue the opposite.
- LLM demolishes the entire argument and convinces me that the opposite is in fact true.
- lol

The LLMs may elicit an opinion when asked but are extremely competent in arguing almost any direction. This is actually super useful as a tool for forming your own opinions, just make sure to ask different directions and be careful with the sycophancy.
English
1.7K
2.4K
31.1K
3.3M
Wolf of Quantstreet retweeted
Loosh AI
Loosh AI@Loosh_ai·
Hey Bittensor,

We know where we stand on the deregistration list. We know people want a response. We will be at Breakout tomorrow in person to get more people to understand what Loosh is building.

In the meantime, we moved fast and executed an OTC deal with a third party to inject additional TAO into the pool at a critical moment. We are grateful to the party involved, who chose to remain anonymous.

It has been a hard stretch, but we are still here and still building. We are actively reviewing the subnet incentive mechanism to make it more robust. V2 is already in testing, and our ROS2 roadmap is waiting on that code because it materially improves model output. We have also been in discussion with several high profile robotics companies around a potential pilot. One is currently waiting on benchmarking data.

We are evaluating next steps carefully, but every serious path keeps bringing us back to the same place. We believe in Bittensor. We believe in decentralized intelligence.

We are still here. We are still building. We are not leaving.

Subnet 78
English
14
22
169
10.6K
Wolf of Quantstreet retweeted
qBitTensor Labs
qBitTensor Labs@qBitTensorLabs·
QME
10
24
93
13.4K
templar
templar@tplr_ai·
ZXX
28
100
413
45.7K
Wolf of Quantstreet retweeted
DRUSKI
DRUSKI@druski·
How Conservative Women in America act 😂🇺🇸
English
28K
158.9K
1.3M
185.2M
Wolf of Quantstreet retweeted
RVCrypto
RVCrypto@RvCrypto·
One of the subnets I'm still buying daily is Leadpoet, $TAO SN71. Their IM works perfectly and the outcome is simple: high-quality leads (the product). I expect them to get traction real fast, and I truly believe they are undervalued at this point. If you're looking for more info, check the 🧵 below.
Alchemist - τ@SubnetSummerT

🧵Subnet 71 - @LeadpoetAI As A New Primitive For Sales Intelligence. Most people still don't understand what @LeadpoetAI is doing. This isn't "another lead gen tool." It's the first decentralised sales engine attacking a $100B+ market - Built on Bittensor $TAO. Let's break it down 👇

English
6
10
81
7.5K
Wolf of Quantstreet
Wolf of Quantstreet@dewaeofcrypto·
@dotkrueger Just because you failed to create anything of value with your subnet (before giving up) does not mean those more capable will fail too. The ecosystem is thriving and just getting started. Stay salty, loser. Love to see it.
English
2
0
1
23
Wolf of Quantstreet retweeted
Alchemist - τ
Alchemist - τ@SubnetSummerT·
🧵Subnet 71 - @LeadpoetAI As A New Primitive For Sales Intelligence. Most people still don't understand what @LeadpoetAI is doing. This isn't "another lead gen tool." It's the first decentralised sales engine attacking a $100B+ market - Built on Bittensor $TAO. Let's break it down 👇
Alchemist - τ tweet media
English
6
24
74
10.9K
Wolf of Quantstreet
Wolf of Quantstreet@dewaeofcrypto·
@exploitxbt Tell me you don't understand Bittensor mechanics without telling me you don't understand Bittensor mechanics
English
0
0
0
29
exploit
exploit@exploitxbt·
Trenchers found the $TAO CEO's subnet project at 2m. Constantinople SN97, now at 3.5m
exploit tweet media
English
27
10
243
35.3K
Wolf of Quantstreet retweeted
const
const@const_reborn·
Some of Bittensor’s most successful people are dropouts, kids, outcasts, people that never got a break in life, people living in the 3rd world, in jungles, on beaches, and now agents. The beauty of permissionless systems is that anyone can join and cut their teeth against hard problems, without bias — which means we get the best, always. Permissionless is also synonymous with prejudice-lessness. The real DEI, but without blue haired fat chicks.
English
41
109
649
37.1K
Wolf of Quantstreet retweeted
sgp
sgp@stogolp·
what are tao maxis called? the taoliban?
English
73
49
658
48.6K
imit
imit@imitationlearn·
wow anthropic is really attempting to solve memory with md docs, respect
English
37
15
1.4K
116.5K
Wolf of Quantstreet retweeted
Intel
Intel@intel·
Advancing confidential computing for a more secure AI future. Together with @manifoldlabs, we’re exploring how Intel TDX and Intel Trust Authority help enable confidential workloads across decentralized infrastructure, including @TargonCompute's Targon Cloud platform—protecting data at rest, in transit, and in use.
English
61
249
1K
395.9K
Wolf of Quantstreet retweeted
const
const@const_reborn·
Unbelievably well-deserved shout-out from @intel towards @manifoldlabs. The team have been furiously developing trusted computing layers on Bittensor against the tide of exploits and FUD, with clear eyes and patience. So proud of my brother @0xcarro, who saw the vision from the beginning. Bravo
Intel@intel

Advancing confidential computing for a more secure AI future. Together with @manifoldlabs, we’re exploring how Intel TDX and Intel Trust Authority help enable confidential workloads across decentralized infrastructure, including @TargonCompute's Targon Cloud platform—protecting data at rest, in transit, and in use.

English
24
158
831
48.1K
Wolf of Quantstreet retweeted
Subnet Summer
Subnet Summer@SubnetSummerTAO·
This is the kind of architectural breakthrough that strengthens the entire Bittensor ecosystem.

QuasarModels has introduced Quasar Attention, the core component powering their upcoming models, designed for stable context lengths up to 5 million tokens (with internal tests extending to 50 million).

Traditional attention mechanisms plateau around 200k tokens due to quadratic scaling. Linear alternatives such as gated delta (Qwen 3.5) and Kimi delta extend context length, but often introduce instability, quality degradation, and lose true linearity at scale.

Quasar takes a different approach. It applies a continuous-time formulation within a fully matrix-based system, eliminating vector-state shortcuts. The result is improved stability, reduced computational cost, and performance that remains consistent or improves as context length increases.

The benchmark results are compelling:
🔹 RULER (1M tokens): Quasar-10B achieves 87%, outperforming larger Qwen3 80B baselines under identical conditions
🔹 BABILong (1M–10M): maintains strong performance, while gated delta models degrade to 10% at 10M

At 50M tokens, KDA-based approaches begin to lose stability, while Quasar remains robust.

This represents a shift where long-context performance is driven by the attention mechanism itself, not just model scale. It unlocks more reliable handling of full codebases, large document sets, and persistent agent memory within decentralized systems.

Additional advantages include optimized flash-linear kernels for faster execution, on-chain miner optimization, and a fully open-source release.

GitHub: github.com/SILX-LABS/quas…
Hugging Face: huggingface.co/silx-ai

Acknowledgments to @Farahatyoussef0, @TroyQuasar, @TargonCompute for compute support, and @tplr_ai for guidance.

SN24 continues to execute at a high level. Decentralized AI is pushing forward on some of the field’s most challenging problems.
Quasar@QuasarModels

This is Quasar Attention, the mechanism behind the upcoming Quasar models, designed to support context lengths of up to 5 million tokens.

Attention has long been a bottleneck for processing extended context. Standard attention mechanisms struggle to scale beyond ~200k tokens in training, creating a ceiling on how much information models can reliably use.

One approach to solving this has been linear attention methods, such as gated delta attention (used in Qwen 3.5) or Kimi delta attention. These improve efficiency and allow longer sequences, but introduce trade-offs: instability at extreme lengths, quality degradation, and in practice, they are not strictly linear.

Quasar Attention takes a different approach. It uses a continuous-time formulation, implemented as a fully matrix-based system rather than relying on vector-state approximations. In practice, this improves stability, reduces cost, and maintains performance as sequence length increases.

In internal stress tests at 50 million tokens, KDA-based approaches begin to lose stability, while Quasar Attention remains stable. This allows performance to hold as sequence length increases, rather than degrading beyond a fixed threshold.

On BABILong, a Quasar-based model pretrained on 20B tokens and fine-tuned on 16k sequences was evaluated on contexts ranging from 1 million to 10 million tokens, maintaining consistent performance across that range. By contrast, models using gated delta attention show significant degradation at longer lengths, in some cases dropping to ~10% performance at 10 million tokens. (Note: results are indicative; setups are not directly comparable.)

On RULER benchmarks, a Quasar-10B model (built on Qwen 3.5 with frozen base weights and Quasar Attention added), pretrained on 200B tokens, achieved 87% at 1 million tokens, outperforming significantly larger baselines, including Qwen3 80B, under the same evaluation conditions.

Taken together, this points to a shift in where long-context performance is won or lost: not in model size alone, but in the attention mechanism itself. Quasar Attention represents a step change in long-context modelling, setting a new standard for stability and performance at scale.

We thank @TargonCompute for the compute and for being our compute provider and long-term partner in training the upcoming Quasar models.

Here is the link to our paper 👇

English
0
11
49
2.3K
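
Editor's note: the Quasar thread above contrasts standard quadratic attention with matrix-state (linear-attention-style) mechanisms. As a rough illustration of the generic matrix-state idea being referenced, here is a minimal causal linear-attention sketch in Python/NumPy. This is not Quasar Attention; the function name, the elu+1 feature map, and the shapes are illustrative assumptions only, since the thread does not give enough detail to reproduce the actual continuous-time formulation.

```python
import numpy as np

# Minimal sketch of causal linear attention with a matrix-valued state.
# NOT Quasar Attention: it only illustrates the generic idea the thread
# contrasts with quadratic attention. Instead of materializing a T x T
# attention matrix, each step folds k_t v_t^T into a running
# (d_k x d_v) state, so cost grows linearly with sequence length.

def linear_attention(Q, K, V, eps=1e-6):
    def phi(x):
        # elu(x) + 1: a common positive feature map (an assumption here)
        return np.where(x > 0, x + 1.0, np.exp(x))

    T, d_k = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d_k, d_v))   # matrix state: accumulated key-value outer products
    z = np.zeros(d_k)          # normalizer state: accumulated feature-mapped keys
    out = np.zeros((T, d_v))
    for t in range(T):
        q, k, v = phi(Q[t]), phi(K[t]), V[t]
        S += np.outer(k, v)            # fold this token into the state
        z += k
        out[t] = (q @ S) / (q @ z + eps)
    return out

# Tiny usage example with random data
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 8, 4))
print(linear_attention(Q, K, V).shape)  # (8, 4)
```

The per-step cost here is fixed regardless of how many tokens came before, which is why such recurrences are attractive for million-token contexts; the trade-offs the thread mentions (instability and quality loss at extreme lengths) stem from compressing all history into that fixed-size state.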