RonX Labs

508 posts

@RonXLabs

https://t.co/9nhrBmCsVZ

Joined April 2024
525 Following · 274 Followers
RonX Labs retweeted
mikecontango | τ,τ@mikecontango·
IMO @trishool (sn23) continues to be one of the most underrated subnets on #Bittensor. Model safety is one of the most under-appreciated risks of the next decade. This is another bet with massive asymmetry, similar to decentralized training.
- The market size for AI security is 10x the size of the $250B cybersecurity market.
- Frontier labs are acquiring AI security startups at a rapid pace.
Traditional cybersecurity protects the software stack. @Trishool is protecting the AI stack. If we can prove models can be made "effectively safe" via a Bittensor incentive mechanism, the marketplace for this type of product is gigantic.
Trishool | SN23@trishoolai

We are thrilled to share that Astroware, Trishool's parent company, has been accepted into Nvidia's Inception program. By becoming a member, we are in Nvidia's active AI ecosystem, giving us access to experts, partner networks, compute credits, and VC connections. It's also a validation of Trishool's thesis and a recognition of our AI credentials.

RonX Labs retweeted
Mark Jeffrey@markjeffrey·
Checking out @vocence_bt Subnet 78 ... I've spent a LOT of time using Suno to generate AI performances of human-written music -- so I'm trying their 'open Suno' music-gen engine -- but also their voice cloner ... which, apparently, I'm on :)
RonX Labs retweeted
Sami Kassab@Old_Samster·
I think Conviction is going to be a massive net positive for the ecosystem. It won't prevent subnet owners from selling to cover expenses; it'll just require them to be more intentional and transparent about it. Teams can run programmatic unlock schedules that will get priced in by the market since they'll be predictable. This is basically a push toward professional treasury management for teams.
seth bloomberg@bloomberg_seth

One of the largest Bittensor upgrades, called Conviction, is slated to arrive as early as next week. The overarching goal of the Conviction Upgrade is to add a layer of governance at the subnet level and provide investor protections that don't currently exist natively within the protocol. The Conviction Upgrade in a nutshell:
• Subnet owners can choose to lock their token holdings to signal their "Conviction" to the subnet. All of their ongoing/future subnet owner emissions are, by default, auto-locked.
• Subnet owners can unlock tokens, which is an onchain action, at any point. But the unlock has a delay function applied to it. After ~30 days, 63% of the unlocked tokens will be spendable, 95% after ~90 days. The delay means subnet owners can't unload their entire holdings without giving an onchain alert to current holders that the owner is exiting.
• Any token holder can lock their tokens toward any address. This could be the current subnet owner or a potential new subnet owner.
As it currently stands, the initial Conviction Upgrade only implements the locking mechanism. There won't be the ability to "elect" new subnet owners yet. This is a big economic change and does present risks to existing/future subnet owners, so I expect there to be enthusiastic debate from both sides.
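The two unlock numbers quoted in the thread (~63% spendable after ~30 days, ~95% after ~90 days) happen to match an exponential release curve with a roughly 30-day time constant. A minimal sketch under that assumption — the actual on-chain delay function may be implemented differently:

```python
import math

def spendable_fraction(days_since_unlock: float, tau_days: float = 30.0) -> float:
    """Fraction of unlocked tokens that is spendable t days after unlocking.

    Hypothetical model: f(t) = 1 - exp(-t / tau). With tau = 30 days this
    reproduces the thread's two data points (1 - e^-1 ~ 63%, 1 - e^-3 ~ 95%).
    """
    return 1.0 - math.exp(-days_since_unlock / tau_days)

print(f"{spendable_fraction(30):.0%}")  # ~63% after one month
print(f"{spendable_fraction(90):.0%}")  # ~95% after three months
```

The shape is what matters for holders: the curve is steep at first but never instantaneous, so a full owner exit is visible onchain well before the bulk of the tokens becomes spendable.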

RonX Labs retweeted
Andy ττ@bittingthembits·
🚨 Most people completely missed what just happened with $TAO's @QuasarModels SN24. Let me fix that. Here is who Adaption Labs actually is:
▫️ Co-founded by Sara Hooker (@sarahookr)
▫️ Former VP of Research at Cohere
▫️ Veteran researcher at Google Brain AND Google DeepMind
▫️ Author of "The Slow Death of Scaling", the essay the entire AI research world is quoting right now
▫️ $50M seed round raised
▫️ 60,500 followers among serious AI researchers
▫️ Speaking at Fortune Brainstorm Tech in Aspen this June on fixing the AI stack
▫️ HuggingFace now integrated into their data platform
A Cohere / Google DeepMind veteran with $50M in funding just plugged her adaptive data pipeline directly into a Bittensor subnet. Read that one more time.
Why she chose Bittensor. Adaption's entire thesis: "AI was built for a fraction of the world. Adaption is building it for the rest." "Everything intelligent adapts. So should AI." They are betting AGAINST brute-force scaling. They are betting FOR continual learning. Real-time adaptation. Efficiency. Systems that actually evolve. That is literally what Bittensor was designed to do.
This wasn't a deal. This was thesis gravity. Sara Hooker didn't pick a Bittensor subnet because someone pitched her. She picked it because when you believe what she believes (Adaptive > Static, Efficiency > Scale, Open > Closed), Bittensor is the only network that structurally makes sense.
What Quasar miners are now getting: state-of-the-art adaptive datasets nobody in decentralized AI has ever had access to before. High-quality reasoning data. Long-context training data. Generalization data. Pulled from one of the most credentialed data pipelines in the world.
Troy from the Quasar team said it plainly: "SN24 miners are the luckiest in the world." And then: "When we said we are going to deliver state of the art, we meant it."
This is the data infrastructure that closed-source frontier labs guard jealously. Quasar just opened it up to permissionless miners on Bittensor.
The quality bar nobody else has set. Quasar said something publicly that took courage: previous decentralized training runs, even SN3's celebrated Covenant-72B, were technically impressive but not practically usable, because quality was never the core priority. Quasar is the first team in decentralized AI to say: scale is not enough. We are building models people will actually use. Largest MoE training run in decentralized AI. SOTA performance as a requirement from day one. Long-context. Reasoning. Generalization. Not impressive numbers on a benchmark; usable models that compete with GPT and Claude directly.
Now look at the timing. Sara Hooker is on the Fortune Brainstorm Tech main stage in Aspen. June 8-10. Five weeks away. The same stage that has hosted the CEOs of NVIDIA, Microsoft, Google, and OpenAI. The topic: how we get from the AI stack we have to the AI stack we truly need. She is going to stand on that stage in front of Fortune 500 executives and talk about why the current AI stack is broken. And Bittensor is now part of her solution.
That is not a podcast comment from Jensen Huang. That is a $50M-funded research pioneer with Google DeepMind credentials putting Bittensor inside her flagship research partnership before a Fortune main stage keynote. Every validation raises the bar for the next one. The pattern is not random.
What this says about where we are: the AI world is splitting in two right now.
Camp one: scale is everything. Bigger clusters. Bigger checkpoints. Bigger corporate control. Static models. Frozen intelligence. Centralized rails.
Camp two: scale is dying. Adaptation wins. Real-time learning. Efficiency first. Open infrastructure. Decentralized coordination.
Sara Hooker wrote the paper on why camp one loses. She just joined camp two on Bittensor.
@adaption_ai @sarahookr @sudip_r0y @TroyQuasar
Source: SILX AI blog | Fortune Brainstorm Tech 2026 | adaptionlabs.ai
Not financial advice.
Quasar@QuasarModels

Happy to announce our collaboration with @adaption_ai. Adaption Labs is an AI research company focused on building adaptive intelligence systems. Through this partnership, Adaption Labs will provide SILX AI with state-of-the-art adaptive data to support the training of the Quasar foundation models. Their role will be to generate and refine high-quality, adaptive datasets at scale, enabling Quasar to continuously improve its reasoning and generalization capabilities. This collaboration strengthens Quasar's path toward achieving SOTA performance and competing with leading closed-source models. The company is co-founded by Sara Hooker, former Vice President of Research at Cohere and a veteran researcher from Google DeepMind, alongside Sudip Roy. Adaption Labs has also raised $50M in seed funding to advance its mission in adaptive AI.

RonX Labs retweeted
Trishool | SN23@trishoolai·
We made a bet. That as AI agents become autonomous enough to actually run enterprise workloads, every enterprise on earth will need a constitutional safety layer they can audit, customize, and trust. Today, that bet has its first proof point. We are partnering with Velantris, an Enterprise AI Agents Startup.
RonX Labs retweeted
const@const_reborn·
University is a scam. Learn how to mine Bittensor.
RonX Labs@RonXLabs·
@bittingthembits re: "Miners compete to generate mouse movements that are different from real humans." Miners are trying to generate mouse movements that are as similar to real human movements as possible, aren't they?
Andy ττ@bittingthembits·
🚨 $TAO's SN61 RedTeam is the real deal. Cybersecurity is a $200B+ industry. Bot detection alone is massive: roughly 51–53% of all web traffic is generated by automated bots. They have supply-and-demand economics applied to decentralized compute. First time anyone's done this on Bittensor.
"Human activity has quickly become the minority of online interactions. Existing detection solutions struggle to keep pace."
🖱 Current challenge: human mouse movement imitation. Miners compete to generate mouse movements that are different from real humans. Validators score based on:
▫️ Probability it is human (vs bot)
▫️ Distance from previous solutions (diversity bonus)
Why?
▫️ Every exchange needs bot detection
▫️ Every DeFi protocol needs Sybil resistance
▫️ Every airdrop gets farmed by bots
▫️ Every governance vote gets manipulated
The economic design is continuous pressure to improve. Can't submit once and ride emissions. Must stay sharp or get replaced.
They just shipped real infrastructure:
• Multi-signal UID linking (IP + coldkey + DockerHub)
• 4-stage anti-gaming pipeline
• Decay mechanism forcing continuous improvement
• True marketplace (blockmachine pricing)
• USDC payment rails for AI agents
You can't game this by tweaking variable names or adding comments. The system tracks execution behavior, not just code similarity.
The dashboard tells the truth:
• 1,110 submissions, an elite 7% success rate
• 162 miners competing
The challenge is hard. Most miners are getting filtered out. The ones winning have real expertise. Only 12 miners (out of 162) have non-zero incentives. That's a 7% success rate. That's exactly what you want from a cybersecurity competition: high barriers, elite participants, a proven solution.
The killer feature? Blockmachine decentralized RPC with USDC payments and open marketplace pricing. Miners change that price in real time based on:
• How much demand there is right now
• What their actual costs are
• What other miners are charging
First true supply-and-demand marketplace inside any Bittensor subnet. The network literally self-tunes toward the best combination of speed + price + honesty with zero central control.
Two layers of competition at the same time:
▫️ Incentive layer (the subnet scores this): quality of responses, uptime, verification pass rate. This determines how much subnet emission (TAO/alpha) the miner earns.
▫️ Open market layer (pure price): users (validators, AI agents, developers) just pick the cheapest reliable node that meets their needs.
This is the foundation for the permissionless cybersecurity bounty system crypto actually needs. Technically excellent. Waiting for the first big customer announcement to go nuclear. $TAO DYOR
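The scoring rule described above (a probability-human term plus a diversity bonus based on distance from prior solutions) can be sketched as a simple weighted blend. The function name, the 0.3 weight, and the nearest-neighbor distance measure are illustrative assumptions, not the subnet's actual code:

```python
def score_submission(p_human: float, distances_to_prior: list[float],
                     w_diversity: float = 0.3) -> float:
    """Hypothetical validator score for a mouse-movement submission.

    p_human: classifier probability (0..1) that the trajectory is human.
    distances_to_prior: distances from previously accepted solutions;
    the nearest neighbor determines the diversity bonus.
    The blend weight w_diversity is an illustrative assumption.
    """
    nearest = min(distances_to_prior) if distances_to_prior else 1.0
    diversity = min(nearest, 1.0)  # cap the diversity bonus at 1.0
    return (1.0 - w_diversity) * p_human + w_diversity * diversity

# A human-like but novel trajectory beats an equally human-like near-copy
# of an existing solution, which is the "continuous pressure to improve"
# the post describes: you can't submit once and ride emissions.
novel = score_submission(0.9, [0.8])     # far from prior solutions
copycat = score_submission(0.9, [0.05])  # close to a prior solution
```

Under this sketch the novel submission scores 0.87 versus 0.645 for the near-copy, even though both look equally human to the classifier.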
RedTeam@_redteam_

UID linking: what it is and why it's good for the subnet. Every ~20 minutes, the subnet groups UIDs sharing the same IP, coldkey, or DockerHub account into one identity. Here's why that matters:
✦ Fair rewards – emissions go to real independent operators, not whoever registered the most slots
✦ No bans – just grouping. Fix your setup, and you're independent again instantly
✦ Always current – recomputed fresh every round, no cached judgments
✦ Real competition – infrastructure quality wins, not account count
The strongest miners rise. Not the most creative registrants. For more information, visit: blog.theredteam.io/latest/blog/po…
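The grouping RedTeam describes (UIDs sharing any of IP, coldkey, or DockerHub account collapse into one identity, recomputed fresh every round) is naturally expressed as union-find over shared signals. A minimal sketch; the data layout and signal names are assumptions, not RedTeam's implementation:

```python
from collections import defaultdict

class UnionFind:
    """Tiny union-find with path halving."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def link_uids(miners):
    """Group UIDs that share an IP, coldkey, or DockerHub account.

    `miners` maps uid -> {signal_name: value}. Rebuilding from scratch
    each call mirrors the "recomputed fresh every round, no cached
    judgments" behavior the post describes.
    """
    uf = UnionFind()
    first_seen = {}  # (signal_name, value) -> first uid with that value
    for uid, signals in miners.items():
        uf.find(uid)  # register even signal-less UIDs
        for key, value in signals.items():
            if value is None:
                continue
            if (key, value) in first_seen:
                uf.union(uid, first_seen[(key, value)])
            else:
                first_seen[(key, value)] = uid
    groups = defaultdict(set)
    for uid in miners:
        groups[uf.find(uid)].add(uid)
    return list(groups.values())
```

For example, two UIDs registered from the same IP collapse into one identity group (and split back apart the next round if the operator fixes the setup), while a miner with unique signals stays independent.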

RonX Labs retweeted
Project Rubicon@Rubicon_Bridge·
🧬 Rubicon Bridge V2 is live. Built with @Chainlink CCIP, the most trusted decentralized bridging infrastructure in Web3. Rubicon is a liquid staking solution first, and a bridge second. As the canonical route from Bittensor to Base, it brings $TAO and leading subnet alpha tokens to where Web3 liquidity lives, while still earning staking rewards. Now supporting ~$700K TVL across TAO and top subnet alpha tokens.
RonX Labs retweeted
Lucky@LLuciano_BTC·
While exploring the Bittensor subnets, one idea kept getting clearer to me: @leoma_ai (Subnet 99) is not trying to be "just another Bittensor subnet." It is aiming to become the video generation company that proves Bittensor can compete with the biggest closed AI labs.
Thanks to @MarkCreaser, CEO of DSV and investor in @leoma_ai, and @vex020900, CTO of Vex|Rendix, for taking me through the vision. Also, huge respect to the Rendix team for building the Leoma subnet. After hearing the direction directly, it became very clear to me why this matters.
One thing I want to highlight is Vex. The CTO of Leoma is a giga brain in AI. From what I have seen, his technical knowledge is on another level, and I believe his skill, vision, and execution will be a major reason Leoma can actually build what it is aiming for.
Today, @leoma_ai miners are training and fine-tuning Wan 2.2, one of the strongest open-source video generation models in the world, but that is only the starting point. During the call, the team shared that Leoma is preparing an initial free-version launch on April 30. This is not the final vision. It is the first public product step: showing the community that Leoma has built the infrastructure to sync with the top miner's model and turn subnet competition into a real user-facing product. That's what matters the most. Because a subnet should not only produce emissions. It should produce a product.
The real goal is much bigger: a 25B+ Leoma-native video generation base model, built by the Rendix team, released as open source, and then improved through Bittensor's miner competition. This is where things become serious. Building a 25B+ video model will cost millions of dollars. GPUs, training infrastructure, datasets, storage, inference optimization, evaluation, and scaling are not small expenses. This is the level of infrastructure normally reserved for Google, OpenAI, xAI, Runway, and other giant AI labs.
That is why I believe funding Leoma's infrastructure matters.
⇢ Without serious infrastructure, there is no serious model.
⇢ Without serious model training, there is no real challenge against closed-company SOTA.
⇢ Without subnets willing to take on that level of challenge, Bittensor will never prove its full potential.
So yes, I will help support Leoma's infrastructure push. This is not just verbal support: it's to help give Leoma the resources it needs to build the 25B+ base model, support miner fine-tuning, scale evaluation, and move toward a real video generation product.
Remember, no Bittensor subnet has ever pushed past the largest open-source models in its field and gone directly after closed-company SOTA. Leoma is aiming to do exactly that.
⇢ Not by writing another roadmap.
⇢ Not by pretending decentralization alone is enough.
⇢ But by combining capital, infrastructure, miners, validators, and open-source development around one objective: THE BEST MODEL WINS.
If Leoma can deliver video quality competitive with closed AI companies while offering generation at 3x–5x lower market cost, that is not just a win for Leoma. That is a proof-of-concept moment for all of $TAO. Imagine a Bittensor subnet producing video quality that competes with the biggest closed labs, while being cheaper, open, and continuously improved by miners. That is the kind of subnet Bittensor needs. That is the kind of subnet that changes the narrative.
⇢ I am not promising @leoma_ai becomes the top subnet overnight.
⇢ I am not promising they beat every closed AI lab tomorrow.
⇢ But I believe this is one of the most important attempts happening on Bittensor right now.
TIME WILL TELL. But if Leoma pulls this off, it will not just be a game changer for Leoma. It will be a game changer for Bittensor.
To all $TAO holders: I strongly suggest you take a serious look at Leoma.
Read their papers, study the vision, and understand why this subnet could matter for the future of bittensor: leoma.ai/Leoma_Whitepap… leoma.ai/Leoma_Litepape…
Mariuszek@sobczak_mariusz·
@RonXLabs @theminos_ai @genomes_io Miners do the reading, validators test it. It's a bit more nuanced, but it's 6:45 am for me so I am not going into it 😂
Mariuszek@sobczak_mariusz·
Pro tip: never ever sell @theminos_ai. This is one of the most misunderstood asymmetric bets in the entire $TAO ecosystem. People look at SN107 and think "genomics narrative." I think that is way too shallow. Minos is building at the validation layer, where raw DNA becomes trustworthy signal. Variant calling, benchmarking, synthetic genomes, known truth sets — this is the boring but critical infrastructure every serious genomics model will eventually need. And yes, that includes the frontier labs. If @OpenAI or anyone else builds serious genomics models, they are going to need mountains of validated genomes, benchmarked variant-calling pipelines, and clean datasets where the correct answer is actually known (this is a hint, boys and girls). That is what makes @centrum_blue so important. He is not a crypto guy borrowing a DeSci story. He has a PhD in statistical genetics and is building directly in his domain. In genomics, you cannot fake the PhD. SN55 and SN68 are interesting. But Minos sits upstream. If the genomic data layer is wrong, everything downstream is compromised. @theminos_ai is not a DeSci play. It is the data layer every serious genomics model will eventually have to touch. That is a very different thing.
Steeve Morin@steeve·
this is so fun, Zig on GPU is a match made in heaven like what do you mean defer syncThreads() and comptime BLOCK_M: usize !
RonX Labs@RonXLabs·
@sobczak_mariusz @theminos_ai @genomes_io What a subnet does is what its miners do. So in SN55, miners generate synthetic genomic data, and in SN107, miners are generating genomic data validation pipelines? And SN107 validators are validating the validation pipelines generated by miners?
Mariuszek@sobczak_mariusz·
@RonXLabs @theminos_ai @genomes_io SN55 is trying to generate synthetic genomes, while SN107 is trying to validate and benchmark the variant-calling pipelines that determine whether genomic data is actually trustworthy.
RonX Labs retweeted
Léo Mercier@leomercier·
Skills are the new software layer for agents and humans to consume services. Pair that with x402 and you have a commodity layer available over HTTP. Currently building out skill packages for Astrid Arena agents (SN127) to continue scaling the agents' abilities, open source. Currently 65 AI agents with an Astrid Score earning $TAO for sharing intel and decision-making in real time. Target is 100 agents.
Why this skill matters for trading agents: LLM agents are bad at monitoring. Openclaw wakes on a 30-minute heartbeat, fine for most work, useless for price action that moves in seconds. agent-tripwire solves this. A backend daemon watches Pyth feeds, fires on price events, and wakes the agent to act. It also ships historical data for backtesting technical indicators. Humans don't scale decision-making. Agents do (with the right skills). To install for your agent: link below.