Quasar

216 posts

@QuasarModels

Bittensor subnet built to crush the long-context barrier | SN24 | Backed by @const_reborn @bitstarterAI

SN24 · Joined November 2025
128 Following · 933 Followers
Pinned tweet
Quasar @QuasarModels
Introducing Subnet 24: Quasar by SILX AI

We are solving the context window limitation by introducing a new subnet for competitive long-text understanding.

What's coming in 2026:
- We are actively updating our subnet code
- Quasar models will be released as open source
- An RL framework for enhancing open-source models
- Quasar subnet leaderboard and ranking system

We are backed by @dsvfund @bitstarterAI
Founders: @TroyQuasar (Eyad Gomaa), @Farahatyoussef0 (Youssef Farahat)
15 · 13 · 60 · 21.2K
Quasar retweeted
100x Research @The100xresearch
128 subnets. Most of them are green today, especially after Templar's recent run. If you want to understand the Bittensor ecosystem better, here are the top 16 subnets with interesting use cases, high emissions, and relatively stable market caps.

COMPUTE & INFRASTRUCTURE

@chutes_ai (SN64): A simple, serverless-style compute subnet.
> It is designed to run AI workloads in a simple way so developers don't need to manage infra directly.
> It abstracts compute through containerized jobs, a cost-effective alternative to traditional AI APIs.
> It has also begun incorporating TEEs for privacy and expanding model support.

@TargonCompute (SN4): A secure compute subnet focused on verifiable inference. It uses trusted execution environments (TEEs) and hardware attestation to ensure models and data remain protected during execution.

@hippius_subnet (SN75): A decentralized storage subnet.
> It provides lower costs and secure data storage using a distributed network of miners.
> Rather than a single data centre, it uses the InterPlanetary File System (IPFS) to store data across a global network of decentralised servers.
> S3 compatibility: it offers an S3 API, allowing developers to integrate it easily with existing tools, applications, and workflows.

@blockmachine_io (SN19): Replacing the old "Nineteen" branding, Blockmachine focuses on production-grade infrastructure.
> SN19 is specialized in high-performance inference, particularly for image generation using models like Stable Diffusion derivatives.
> Users can access top-tier open-source models (such as LLaMA 3) effectively in terms of both cost and speed, compared to traditional model-as-a-service providers.
> Miners run GPU-intensive AI models locally. They receive requests from validators, generate images or text, and send them back instantly.

MODEL TRAINING & COORDINATION

@Tplr_ai (SN3): A subnet focused on LLM training and coordinating distributed compute to train open models across the network.
> The system is built to handle latency across nodes, enabling decentralised training at scale.
> It recently coordinated one of the largest decentralised pre-training runs (Covenant-72B).
> This showed that distributed participants can train competitive models without centralised infrastructure.
> And it holds the highest TAO emissions today (~6.5%).

@IOTA_SN9 (SN9): Built by @MacrocosmosAI, one of the largest teams in the Bittensor ecosystem.
> It is another subnet focused on decentralised pre-training of large models.
> Unlike Templar, IOTA is a framework designed for orchestrating training.
> IOTA uses an approach where a central coordinator manages async updates across nodes.
> It works in contrast to traditional AI training - it doesn't require a central cluster.
> IOTA enables miners on smaller GPUs to connect their compute power to form a "swarm," training massive models collaboratively.

MODELS, AGENTS & INFERENCE SYSTEMS

@Quasarmodels (SN24): Quasar (by SILX Labs) is a long-context foundation model subnet.
> It specialises in optimising flash-linear attention kernels, enabling AI models to process, remember, and reason over massive amounts of information.
> These models can handle millions of tokens with ~10x cost efficiency.
> Quasar is built to handle tasks such as document analysis (reasoning over entire books), codebase analysis (debugging entire software repositories at once), etc.

@Gradients_ai (SN56): An AutoML subnet focused on model fine-tuning and alignment.
> It is developed by @RayonLabsAI.
> Users train models on custom datasets (without managing heavy GPU infra) -> improved outputs -> rewards -> more reliable outputs.

@Basilic_ai (SN39): Basilica is a marketplace for high-performance GPU power.
> Developed by the @covenant_ai team, it is part of a three-part AI development pipeline (@tplr_ai for pre-training, @grail_ai for RL post-training, and @basilic_ai for compute infrastructure).
> It is essentially a compute cloud owned by the community.
> It connects miners with GPU resources to developers who need heavy compute for training and inference.

@Apex_SN1 (SN1): A subnet focused on improving LLM responses and evaluation.
> It uses a GAN-style framework: validators create complex tasks involving tools and function calls.
> Miners have to generate AI responses under time constraints.
> Validators use a high-quality reference response to verify the quality of the AI-generated response.
> The best-performing miners receive TAO emissions.

@Affine_io (SN120): Functions as a decentralised Reinforcement Learning (RL) coordinator.
> It is the "subnet bridge" that scales reasoning as a service across other specialised subnets.
> Connects different subnets, such as taking compute resources from Chutes (SN64) to solve complex inference tasks.
> This allows models to reinforce learning by interacting across multiple Bittensor data sources.
> Miners are rewarded for submitting models that show improvements in complex logical tasks.
> It is a competitive loop where an AI improvement of just 1% can result in rewards, encouraging constant, rapid optimisation.

@Ridges_ai (SN62): A subnet focused on developing autonomous AI coding agents.
> It functions as a decentralised research and development laboratory.
> Miners compete to write, debug, and improve code, aiming to build AI capable of accelerating or automating software development.
> It evaluates outputs based on functional correctness, rewarding code that successfully passes task-specific tests and benchmarks.

VISION, MEDIA

@Webuildscore (SN44): A computer vision subnet used for sports analytics, powered by @YumaGroup.
> It evaluates how accurately models track players and in-game events from live footage.
> It uses advanced tracking metrics (like GS-HOTA) to benchmark performance in real-time scenarios.

@BitMindAI (SN34): A deepfake detection subnet.
> It helps you decide what's real and what's fake, and the models are continuously evaluated against synthetic media.
> The goal is to improve detection across images, video, and audio.
> Miners compete to develop the best deepfake-detection models to prevent the spread of misinformation.

TRADING, PREDICTIONS

@VantaTrading (SN8): An AI market for financial predictions.
> It is developed by @taoshiio's team.
> Miners compete to generate high-quality trading signals for crypto and forex markets.
> Users can purchase these signals or contribute to them by participating in the 24/7 trading competition.
> Validators evaluate these signals not just on profit, but on risk-adjusted metrics like the Sortino Ratio and Omega Ratio.
0 · 4 · 20 · 738
Quasar retweeted
Ayles Flow @aylesflow
We put Deep Research inside an infinite canvas. Ask a question → get a full document. Edit it. Expand it. Export it. 📄⚡ No tabs. No copy-paste. Just one seamless flow. ♾️ Build ideas, essays, and research faster than ever. 🌐 aylesflow.com
0 · 3 · 12 · 863
Quasar retweeted
Punisher ττ @CryptoZPunisher
#Biττensor >> ∆ τ << #τₐcc > $TAO <

Subnet 24: Quasar @QuasarModels

The Quasar team just shared an interesting experiment regarding long-context AI models.

ELI5: What are they actually doing?
Most AI models today can only "remember" a limited amount of text at once. Think of it like a desk: if the desk is too small, you can only spread a few documents before everything starts falling off. This limit is called the context window.

What Quasar is experimenting with is pushing this limit to 2 million tokens, which is extremely large. To achieve this, they are modifying the core architecture of the model, including:
- replacing traditional attention mechanisms with linear attention
- removing the standard positional system (RoPE) and replacing it with a different approach (NoPE)

The goal is simple: build models that can reason over extremely long documents without forgetting information.

To test this idea, they used a smaller model (Qwen 9B) as a testbed before applying the same architecture to their upcoming Quasar 22B model. The interesting part: according to their experiment, the model remains stable at 2M context length.

Why this matters
If this approach works at scale, it could allow AI systems to:
- read entire research archives
- analyze massive codebases
- work across extremely large datasets
without losing track of information. In short, it addresses one of the biggest bottlenecks in modern AI systems: memory and context length.

Personal note
From what we can observe so far, the Quasar team appears to be working seriously on the architecture itself, not just building wrappers around existing models. This kind of experimentation - modifying attention mechanisms, testing positional systems, and running distillation stages - is exactly how meaningful progress in AI usually happens.

Of course, it is still early, and the real test will be performance, real-world usage, and adoption. But technically speaking, SN24 is clearly one of the most interesting subnets to watch right now on Bittensor. Execution will tell the rest.
τroy@TroyQuasar

One of the experiments we're currently running is extending the context length of a Qwen 3.5 9B base model to 2M tokens through architectural re-engineering.

The original architecture is built around:
- Gated Delta Attention
- Gate Attention

However, in our Quasar 22B architecture we use a different attention stack:
- "Quasar" Continuous-Time Attention
- Gated "Linear" Attention

So we ran the following experiment.

Step (1): Replace Gate Attention → Gated Linear Attention, then use layer distillation so the new architecture learns the behavior of the original model. We trained this stage for ~10B tokens (it would likely benefit from more training, of course).

Step (2): Take this new architecture and modify the positional system:
- Remove RoPE
- Replace it with NoPE
- Increase the sequence length to 2M tokens
Then train the model for 20B tokens at the full 2M sequence length.

And it works. The model stays stable and operates at 2M context without RoPE scaling tricks.

Why this matters
This experiment is important because the same architectural changes and training stages are also being used in the Quasar 22B model. This smaller model acts as a testbed before scaling the approach. Later this model will also be distilled into Quasar 22B.

huggingface.co/silx-ai/Quasar…
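A minimal PyTorch sketch of the two ingredients described in the experiment above: a linear-attention layer with an output gate and no positional encoding (NoPE), plus a per-layer distillation loss that pushes the new layer to mimic the original one. This is illustrative only; the module names, the elu+1 feature map, the output-only gating (the gated-delta / gated-linear variants mentioned in the thread also apply data-dependent decay to the recurrent state, omitted here), and the MSE target are assumptions, not the SILX/Quasar implementation.

```python
# Illustrative sketch only - not the Quasar/SILX codebase.
# (1) a single-head linear-attention layer with an output gate and no RoPE (NoPE),
# (2) a per-layer distillation loss so the new layer mimics the original layer.

import torch
import torch.nn as nn
import torch.nn.functional as F


class OutputGatedLinearAttention(nn.Module):
    """Causal linear attention; per-token cost does not grow with sequence length."""

    def __init__(self, d_model: int, d_head: int = 64):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_head, bias=False)
        self.k_proj = nn.Linear(d_model, d_head, bias=False)
        self.v_proj = nn.Linear(d_model, d_head, bias=False)
        self.gate = nn.Linear(d_model, d_head, bias=False)
        self.out = nn.Linear(d_head, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); note: no positional term anywhere (NoPE).
        q = F.elu(self.q_proj(x)) + 1.0          # positive feature map
        k = F.elu(self.k_proj(x)) + 1.0
        v = self.v_proj(x)
        # Causal linear attention via running sums of keys and key-value outer
        # products: the recurrent state is d_head x d_head, independent of seq_len.
        kv = torch.einsum("btd,bte->btde", k, v).cumsum(dim=1)  # (B, T, Dk, Dv)
        k_sum = k.cumsum(dim=1)                                  # (B, T, Dk)
        num = torch.einsum("btd,btde->bte", q, kv)
        den = torch.einsum("btd,btd->bt", q, k_sum).clamp(min=1e-6)
        attn = num / den.unsqueeze(-1)
        return self.out(attn * torch.sigmoid(self.gate(x)))     # gated output


def layer_distillation_loss(student_hidden, teacher_hidden):
    """Match the replacement layer's outputs to the original layer's outputs."""
    return F.mse_loss(student_hidden, teacher_hidden)


if __name__ == "__main__":
    x = torch.randn(2, 128, 256)                 # (batch, seq, d_model)
    student = OutputGatedLinearAttention(d_model=256)
    with torch.no_grad():
        teacher_out = torch.randn_like(x)        # stand-in for the original layer
    loss = layer_distillation_loss(student(x), teacher_out)
    loss.backward()
    print(loss.item())
```

The relevant property is that the key-value state has a fixed size per head, so memory and compute per token do not grow with how much context has already been seen, which is what makes training at a 2M sequence length plausible without RoPE scaling tricks.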

0 · 6 · 29 · 3.3K
Quasar @QuasarModels
@TargonCompute is used to perform such experiments and for the Quasar 22B cluster training (still in progress).
τroy@TroyQuasar

[Quoted tweet: the same experiment write-up shown above.]

0 · 1 · 21 · 522
Quasar retweeted
Punisher ττ @CryptoZPunisher
#Biττensor >> ∆ τ << #τₐcc > $TAO <

Subnet 24: Quasar @QuasarModels

For those who are not familiar with Quasar yet, this team is clearly not here to play a minor role. Their successful launch on Bitstarter is another strong signal that this subnet deserves to be on your radar.

I'm sharing two links here to my first articles dedicated to SN24.
➡️ x.com/CryptoZPunishe…
➡️ x.com/CryptoZPunishe…

Personally, I remain convinced that this project can go very far.
Quasar@QuasarModels

What's coming from us this month really won't let us sleep. It must be done right!

0 · 5 · 32 · 2.5K
Quasar retweeted
Andy ττ @bittingthembits
Real builders: @TroyQuasar, no sleep in 48 hours. Building on $TAO, you need to move at the speed of AI.

The network may produce intelligence… but it's powered by people.

Take @QuasarModels SN24: @TroyQuasar is chasing SOTA AI not to replace human effort, but to amplify it MORE. They're pushing models with millions of tokens of context while making inference 10x cheaper on everyday GPUs.

How? Instead of traditional quadratic attention (which explodes in cost as context grows), they're using Flash Linear Attention, scaling compute linearly with context length. More memory. Without the cost explosion.

Then miners compete to submit optimized CUDA kernels that squeeze maximum performance out of normal GPUs. No massive datacenter required. Just raw optimization + competition.

That's the beauty of Bittensor. People building like maniacs. Markets rewarding the best models. Infrastructure getting better every single day. You can't fake that, folks.

The future of AI isn't being built by machines, NO, NO, NO. It's being built by incredible people competing with each other to create them.

That is: Human curiosity. Human obsession. Human labor. That is $TAO. #SN24 #Quasar
τroy@TroyQuasar

I haven't slept for the past 48 hours, and I can't - building state of the art on Bittensor is really not easy, ha.
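A back-of-the-envelope Python sketch of the quadratic-vs-linear claim above: softmax attention builds a T x T score matrix, so its cost grows roughly with T squared, while a linear-attention layer maintains a fixed d x d state, so its cost grows roughly with T. The FLOP formulas and numbers below are simplified assumptions for illustration, not measurements of any Quasar kernel.

```python
# Rough cost comparison: quadratic softmax attention vs linear attention.
# The formulas are simplified estimates per attention head, for illustration only.

def softmax_attention_flops(seq_len: int, d_head: int) -> int:
    # QK^T scores plus the weighted sum over values: two T x T x d products.
    return 2 * seq_len * seq_len * d_head

def linear_attention_flops(seq_len: int, d_head: int) -> int:
    # Per token: update a d x d key-value state and read it out with the query.
    return 2 * seq_len * d_head * d_head

if __name__ == "__main__":
    d = 128  # assumed head dimension
    for t in (8_000, 128_000, 2_000_000):
        quad = softmax_attention_flops(t, d)
        lin = linear_attention_flops(t, d)
        print(f"context {t:>9,} tokens: quadratic ~ {quad:.2e} FLOPs, "
              f"linear ~ {lin:.2e} FLOPs, ratio ~ {quad / lin:,.0f}x")
```

Under these simplified assumptions, at a 2M-token context with 128-dimensional heads the ratio works out to roughly T/d ≈ 15,000x fewer score-related FLOPs, which is the intuition behind "more memory without the cost explosion."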

2 · 6 · 59 · 4.1K
Quasar retweeted
· τaoli · @taoleeh
The agentic future isn't coming. It's already here, and most people are still treating it like a prediction.

@cursor_ai is writing production code. @claudeai is browsing the web. Devin (@cognition) is opening PRs. Agents are booking flights, running pipelines, managing workflows.

The question is no longer "will AI act autonomously?" It's "what infrastructure does it need to do so reliably, at scale, without a single point of failure?"

That question is why I'm watching a specific cluster of Bittensor subnets very closely right now. Agents have 3 fundamental needs:

1. DATA - they need to query the state of the world, especially on-chain and cross-chain, in real time
2. CONTEXT - they need to hold entire codebases, documents, and conversation histories in memory without losing the thread
3. TRUST - their outputs need to be evaluated by something that can't be gamed or manipulated

Centralized infrastructure fails on all three over time. It gets rate-limited, censored, corrupted, or simply shut down. Decentralized infrastructure doesn't.

Here are the subnets I believe are most exposed to the coming wave of agentic demand:

DATA: @HermesSubnet (#SN82) - Decentralized GraphQL query infrastructure and blockchain data indexing across multiple chains. This is the data access layer that agents will depend on. When an agent needs to query live on-chain state across #Ethereum, #Polkadot, or any #SubQuery/Graph-indexed network, it needs something reliable, censorship-resistant, and fast. Hermes is building exactly that: miners compete to serve accurate, low-latency GraphQL responses while validators benchmark them against ground truth. As multi-chain dApps scale and agent-driven transactions become the norm, the demand for this infrastructure will compound quietly and then all at once.

CONTEXT: @QuasarModels (#SN24) - Long-context foundation models evaluated decentrally, with context windows scaling from 32k all the way to 2 million tokens. Think about what agents actually do: they read entire repositories, digest lengthy contracts, reason over hours of conversation history. The 8k context window era is already dead. Quasar isn't just benchmarking long-context models, it's building them (silx-ai/Quasar-2M-Base is their own 26B parameter model built specifically for this). As agents take on longer, more complex tasks, the demand for models that don't drop context mid-thought will become a hard requirement, not a nice-to-have. And the evaluation infra to validate which models actually perform? That's Quasar's moat.

TRUST: @platform_tao (#SN100) - A trustless, decentralized evaluation framework for AI challenges powered by Byzantine fault-tolerant consensus (PBFT). Miners submit code and models. Multiple validators independently run those submissions inside sandboxed Docker containers. Results are stake-weighted and outlier-filtered before weights are submitted to Bittensor. The design philosophy here is important: in an agentic world, you can't just ask a model if its own output is correct. You need a neutral, manipulation-resistant arbiter. Platform is building the trustless ground truth layer for AI evaluation, and it's written in Rust, which tells you something about the people building it.

The through-line connecting all three: agents are becoming the primary consumers of AI infrastructure. Not humans typing into chatboxes, but autonomous systems making thousands of API calls per hour, holding long context windows open, querying live data, and producing outputs that need to be verified.

The subnets positioned to serve that demand - data access, long context, trustless evaluation - are the ones I'd want exposure to before this cycle matures. We're at the infrastructure layer of the agentic stack. Early.

@opentensor #Bittensor $TAO
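A toy Python sketch of the "stake-weighted and outlier-filtered" aggregation described for SN100 above: several validators score the same submission, scores far from the consensus are dropped, and the remainder are averaged with validator stake as the weight. The z-score cutoff, the weighting scheme, and the function names are assumptions for illustration, not Platform's actual PBFT pipeline.

```python
# Toy illustration of stake-weighted, outlier-filtered score aggregation.
# Cutoff, weighting, and names are assumptions, not the subnet's implementation.

from statistics import mean, pstdev

def aggregate_scores(scores: list[float], stakes: list[float],
                     z_cutoff: float = 1.5) -> float:
    """Stake-weighted mean of validator scores after dropping z-score outliers."""
    mu, sigma = mean(scores), pstdev(scores)
    kept = [(s, w) for s, w in zip(scores, stakes)
            if sigma == 0 or abs(s - mu) / sigma <= z_cutoff]
    total = sum(w for _, w in kept)
    return sum(s * w for s, w in kept) / total

if __name__ == "__main__":
    # Four honest validators plus one low-stake validator reporting an inflated score.
    scores = [0.71, 0.69, 0.72, 0.70, 0.99]
    stakes = [120.0, 80.0, 200.0, 50.0, 30.0]
    print(round(aggregate_scores(scores, stakes), 3))  # -> 0.71; the 0.99 outlier is dropped
```

The toy example only shows why a single low-stake validator reporting an inflated score cannot move the aggregate much, which is the manipulation-resistance property the post is pointing at.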
2 · 3 · 9 · 688
Quasar retweeted
OBLock @oblock107
What Subnets to Invest In 2026? 📈

2026 is shaping up to be an extremely exciting year for $TAO subnets. In my view, now is one of the best times to enter subnets.

Why I think subnets offer a better asymmetric bet than root in the coming months:
‣ Subnets are shipping real, usable products (e.g., Score and Ridges beta launches in March/April)
‣ Subnets are generating tangible revenue (e.g., Chutes at ~$6M annualized run rate as of early 2026)
‣ General crypto accounts are increasingly noticing $TAO, and larger VC and TradFi players are recognizing the importance of decentralized AI (e.g., Stillcore Capital's recent thesis, @Jason having a dedicated space on $TAO in front of his 1.1M followers, institutional staking flows surging)
‣ As evidence of this shift, the % of TAO locked in subnets has steadily climbed and now sits at ~19%

The most important question is: which subnets to choose? The answer will vary depending on your risk appetite.

If you are looking for lower-risk, more conservative exposure (ideal for new entrants or larger positions where capital preservation matters), I would look into more established subnets in the top 10-15. Some of my favorites are:
‣ SN64 @chutes_ai: serverless AI inference cloud (cheapest, fastest LLM access)
‣ SN44 @webuildscore: computer vision for sports and real-world monitoring
‣ SN62 @ridges_ai: autonomous coding agents (SWE-Bench leader)
‣ SN120 Affine: decentralized network for advanced reasoning model development
‣ SN3 @tplr_ai: distributed open-source AI training
‣ SN8 @VantaSN8: decentralized leveraged trading on TAO/alphas
‣ SN9 Iota @MacrocosmosAI: large-scale LLM pre-training

If you're more experienced with subnets and comfortable with higher risk/reward, lower-cap plays can deliver much higher returns. Some of my favorites in this category (in numerical order):
‣ SN2 DSperse @inference_labs: Proof-of-Inference (zkML) for verifiable AI agent execution
‣ SN6 @numinous_ai: decentralized forecasting by aggregating AI agents into superhuman LLM forecasters
‣ SN24 @QuasarModels: long-context LLM subnet for massive memory & coherence
‣ SN46 @resilabsai: decentralized real-estate data oracle
‣ SN50 @SynthdataCo: predictive intelligence for financial markets
‣ SN58 @handshake_58: trustless micropayments for autonomous AI agents
‣ SN59 @babelbit: real-time multilingual translation
‣ SN60 @bitsecai: decentralized AI-powered security auditing
‣ SN63 @qBitTensorLabs: quantum-inspired ML and IP/patents for advanced compute
‣ SN75 @hippius_subnet: blockchain-backed cloud storage, VMs, and apps
‣ SN71 @LeadpoetAI: decentralized sales intelligence engine
‣ SN85 @vidaio_: high-efficiency video compression and upscaling
‣ SN82 @HermesSubnet: real-time on-chain data querying for blockchain-native AI
‣ SN88 @Investing88ai: AI-driven investing strategies and portfolio tools
‣ SN103 @djinn_gg: decentralized coordination for market insight and execution

These are some of the subnets where I expect future demand to rise consistently. But higher risk also means a higher chance of loss, so I highly recommend you only allocate what you can afford to lose, and always do your own research.

The timing feels ripe. Markets are quiet, crypto is dead, wars rage on; yet many subnets aren't even flinching against $TAO. In fact, many of them are thriving. When you see that kind of resilience amidst a sea of red, you know you're looking at something special.

Betting on $TAO isn't about hoping numbers go up. It's betting that decentralized AI will claim a meaningful share of the intelligence market reshaping every industry.

I come from crypto and this is my third cycle. I've made some, lost far more. Most were trading plays, some were investments. And rarely does an opportunity of a lifetime come along. For me, the first was Bitcoin. Bittensor is the second. I'm making sure I seize this opportunity. If you see the same opportunity in Bittensor, I hope you seize it too.
2 · 9 · 39 · 1.9K
Quasar @QuasarModels
March should be named "Month of the Quasar" 👀
3 · 7 · 33 · 3.6K
Quasar retweeted
Ugo Chiya τ_τ Al @ugo_chiya21
While much of the attention in the Bittensor ecosystem focuses on price action and hype cycles, some subnets are quietly building infrastructure that could become essential as decentralized AI scales. There are four in particular that I expect future demand for to rise significantly:

@HermesSubnet (SN82)
Hermes is building a critical intelligence layer for the agentic economy. It allows AI agents to reliably query and understand blockchain data using GraphQL. Right now, on-chain data is fragmented and difficult for AI systems to interpret. Hermes solves that problem, unlocking use cases like AI trading agents, governance automation, and compliance systems. As agent-driven crypto workflows become mainstream, Hermes could become quiet but indispensable infrastructure.

@Bitcast_network (SN93)
The crypto attention economy is dominated by a few massive creators, but the most engaged communities often exist on smaller channels. Bitcast flips the model. Instead of paying large upfront sponsorship fees, it activates a network of smaller creators and aligns incentives through performance-based distribution. Multiple channels. Highly engaged audiences. Aligned incentives. That model could become a powerful growth engine for the Bittensor ecosystem.

@QuasarModels (SN24)
One of the biggest bottlenecks in AI today is context length. Modern LLMs struggle when dealing with massive datasets. Quasar is solving this by building decentralized long-context models capable of handling 2M+ tokens. Miners optimize flash-linear attention kernels while validators verify performance on-chain. As coding agents, research agents, and enterprise AI systems grow, the ability to handle massive contexts becomes essential. Quasar is essentially turning AI memory into a decentralized commodity.

@vidaio_ (SN85)
Video now represents 80–85% of global internet traffic, yet video processing remains expensive and centralized. Vidaio introduces decentralized AI models that can upscale low-resolution video to HD/4K or compress files by up to 95% while maintaining quality. Validators score results using metrics like VMAF, PieAPP, and CLIP-IQA to ensure fair rewards in $TAO. With a $400B+ global video processing market, the opportunity here is enormous, especially as AI-generated video, streaming platforms, and autonomous systems continue to grow.

As Bittensor expands toward 100+ subnets, the real value will come from infrastructure that powers the ecosystem. These four stand out to me as quiet builders positioned for massive demand in the years ahead.

Final thought: @HermesSubnet
As more AI agents begin interacting with blockchains, the ability to structure and query on-chain data efficiently becomes critical. Hermes may not be the loudest subnet today, but the layer it's building could quietly become one of the most important pieces of infrastructure in the agentic crypto economy.
3 · 3 · 17 · 650