
Max
@MaxScore
co-founder and ceo @manakoai | core-contributor @webuildscore ⎸ sire ⎸ prev: @crunchdao ⎸ opinions are my own

"Detect Fire" went live on @webuildscore console on Monday. 🚒 5 days later, the target is pretty much hit and a whole new sales vertical is ready for the taking. 🏬 The speed of iteration on Bittensor is unrivalled. 🚀 $TAO


📢 Only 6 hours left until Subnet Summer TAO's second edition X Spaces! We're going deep on institutional adoption and real-world use cases for TAO in the Bittensor ecosystem.

📅 Today • Friday May 15 • 6:30 PM GMT

Speakers:
▫️@SiamKidd
▫️@MaxScore
▫️@gavinzaentz
▫️@JesusMartinez

Subnet Spotlights:
▫️@NiomeAI (SN55)
▫️@babelbit (SN59)
▫️@LeadpoetAI (SN71)
▫️@vocence_bt / Pertub (SN78 & SN26)

A high-signal conversation for miners, builders, and investors across the ecosystem. Set your reminder x.com/SubnetSummerTA…



guess we doing computer vision deployments today chat

After 1 great year as a Quantitative Analyst at @webuildscore and @sire_agent, I'm moving on.

Being part of a team operating at the frontier of prediction markets, sports analytics, and the first public on-chain sports hedge fund has been a real privilege. I got to work with some of the sharpest people I've met in the industry, and I've learned more in the last 12 months than I could have expected. From reading research papers and studying market behaviour, to building our in-house meta model and turning noisy sports data into trading signals, this past year pushed me in the best possible way.

SIRE was where a lot of that learning happened. The core idea was simple, but extremely hard to execute: sports alpha as structurally uncorrelated yield. In practice, that meant trying to build a return stream driven by inefficiencies in sports markets rather than broader crypto sentiment, leverage, or market beta.

As a quant, the challenge was not just asking whether an edge existed. It was asking whether that edge could survive contact with reality:
• How do you evaluate signal quality when sample sizes are limited?
• How do you separate real predictive power from noise?
• How do you account for market movement, liquidity, odds availability, execution, variance, and drawdowns?
• How do you combine models in a way that improves robustness rather than just overfitting the past?

Those were the questions I got to work on every day.

Building our in-house meta model was one of the most valuable experiences of the year for me. It forced me to think beyond individual predictions and focus on how different sources of information interact, where models agree, where they diverge, and how to turn that into a cleaner decision-making framework.

A lot of quant work is humbling because the market does not care how elegant your model looks. It only cares whether your assumptions hold up. That was one of the biggest lessons I'll take with me: be systematic, but not rigid. Trust the process, but keep updating. Think long term, but respect short-term risk.

αVault was the clearest public expression of that work. Seeing it fill its initial cap instantly at launch was a huge moment. Watching it perform strongly in its first months, especially while broader markets weakened, made the thesis feel very real. Q1 was harder from a trading perspective, but that is part of building any serious strategy. Variance exists. Regimes change. Edges decay. Models need to be tested, improved, and challenged constantly.

I leave genuinely confident that the core thesis remains intact. The quant team is growing, the models are improving, and the opportunity set is still incredibly compelling.

I'm grateful to @MaxScore and the whole team for the opportunity, the trust, and the lessons. Proud of what we built, proud of the role I played in it, and excited for what comes next.

For now, I go back to being a community member. I'm open to full-time or part-time Quant/Data Science roles across prediction markets, sports trading, crypto trading, and applied AI, anywhere building at the frontier and looking for someone who can turn data, models, and messy markets into real edge.
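(A hedged illustration of the "combine models without just overfitting the past" idea from the post above: the sketch below blends two hypothetical base models through a simple meta model fitted on out-of-fold predictions, so the blend is judged on data the base models never trained on. The data, model choices, and names are assumptions for illustration, not SIRE's actual meta model.)

```python
# Illustrative sketch only: stacking base-model predictions out of sample,
# so the meta model is trained on probabilities produced for rows the base
# models never saw during their own fit. Everything here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))  # stand-in for match/market features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)  # stand-in outcome

base_models = {
    "logit": LogisticRegression(max_iter=1000),
    "gbm": GradientBoostingClassifier(),
}

# Out-of-fold probabilities: each row is predicted by a model fit on other folds.
oof = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in base_models.values()
])

# The "meta model": learn how much to trust each base model where they disagree.
meta = LogisticRegression(max_iter=1000).fit(oof, y)
blend = meta.predict_proba(oof)[:, 1]

for name, col in zip(base_models, oof.T):
    print(f"{name:6s} log loss: {log_loss(y, col):.4f}")
print(f"blend  log loss: {log_loss(y, blend):.4f}")
```

In a live setting the meta model itself would also be validated on a held-out period, since scoring it on the same out-of-fold predictions it was fit on still leaves some room for optimism.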


every benchmark for a google model is like this until you actually try it and it is totally shit

PRISM is now live. Beam's orchestrator scoring system is officially deployed and the code is now public.

PRISM ties rewards to real, verified work:
- exposure × quality × confidence × penalty
- No static weights.
- No hidden allocation logic.

Performance, verification, and reliability now directly drive exposure and emissions. This release represents the most decentralized and scalable version of Beam so far.

To promote fairness across the network, a short buffer period will remain in place until Friday 05/15 before transfer activity fully ramps up. This gives orchestrators time to deploy, update, and properly configure their infrastructure under the new system so everyone starts on equal footing.

The public repo and documentation are now available for the community to review, audit, and build on.
github.com/Beam-Network/b…
Beam-core public: github.com/Beam-Network/b…
Documentation: beamcore.b1m.ai/guide/intro

This is only the beginning.
Beam: Powering the open internet
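(The post above gives the shape of the score, exposure × quality × confidence × penalty, but not the implementation. A minimal sketch of that kind of multiplicative scoring is below; the field names, value ranges, and penalty rule are assumptions for illustration, not the actual Beam/PRISM code, which lives in the public repo.)

```python
# Minimal sketch of a multiplicative orchestrator score of the form
# exposure * quality * confidence * penalty, as described in the PRISM post.
# Field names, ranges, and the penalty rule are illustrative assumptions,
# not the Beam implementation.
from dataclasses import dataclass

@dataclass
class OrchestratorStats:
    exposure: float    # share of verified work handled, assumed in 0..1
    quality: float     # verified output quality, assumed in 0..1
    confidence: float  # verification confidence, assumed in 0..1
    failures: int      # recent reliability incidents

def prism_style_score(s: OrchestratorStats, failure_penalty: float = 0.9) -> float:
    penalty = failure_penalty ** s.failures  # each incident multiplicatively shrinks the score
    return s.exposure * s.quality * s.confidence * penalty

def emissions(stats: list[OrchestratorStats], total: float) -> list[float]:
    # Normalise scores so emissions follow relative performance only,
    # with no static weights or hidden allocation logic.
    scores = [prism_style_score(s) for s in stats]
    denom = sum(scores) or 1.0
    return [total * sc / denom for sc in scores]

if __name__ == "__main__":
    pool = [
        OrchestratorStats(exposure=0.6, quality=0.92, confidence=0.9, failures=0),
        OrchestratorStats(exposure=0.4, quality=0.85, confidence=0.7, failures=2),
    ]
    print(emissions(pool, total=100.0))
```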

