Autoppia | Subnet 36 on Bittensor

1.2K posts


@AutoppiaAI

On a mission to build the best Web Operator (Automata) and the best Web Benchmark (Infinite Web Arena) in the world.

Joined October 2023
645 Following · 1.6K Followers
Autoppia | Subnet 36 on Bittensor
We owe our community an honest conversation. With everything that's happened lately, we want to talk about where we are, what went wrong, and what we're doing about it.

For the past year, we've been building: IWA, Automata, Dynamic Zero, and open-sourcing our solution. But here's what we got wrong: we built in silence. And when our community raised concerns, we got defensive instead of listening. That's on us. So today, we want to address some of the things you've been saying, and what we're doing about it.

𝟭) "𝗪𝗛𝗘𝗥𝗘 𝗔𝗥𝗘 𝗧𝗛𝗘 𝗥𝗘𝗦𝗨𝗟𝗧𝗦? 𝗪𝗛𝗘𝗥𝗘 𝗔𝗥𝗘 𝗧𝗛𝗘 𝗔𝗚𝗘𝗡𝗧𝗦?"

They exist. Automata is live right now at automata(dot)autoppia(dot)com, and it's powered by the web agent built by our top miner on the subnet. That's the model working as intended: miners compete, the best agent rises, and it gets put to work. Is it perfect, though? No. The agent is still improving and there are tasks it struggles with. Building a SOTA web agent is genuinely hard; if it weren't, OpenAI and Anthropic wouldn't be pouring resources into the same problem. But ours is already good enough to power a live product, and it's getting better every week. We'll be showcasing what Automata can do so you can see for yourselves. We also have 17 dynamic websites running on IWA, miners deploying models through Chutes, and scores improving on the leaderboard.

𝟮) "𝗠𝗜𝗡𝗘𝗥𝗦 𝗖𝗔𝗡'𝗧 𝗝𝗨𝗦𝗧𝗜𝗙𝗬 𝗪𝗢𝗥𝗞𝗜𝗡𝗚 𝗛𝗘𝗥𝗘. 𝗧𝗛𝗘 𝗕𝗨𝗥𝗡 𝗜𝗦 𝗧𝗢𝗢 𝗛𝗜𝗚𝗛."

You were right. 0.75τ/day to miners isn't enough to attract the volume of talent needed to push agents further. Our top miner proved the system works: one miner built an agent good enough to ship in production. Now we need to bring in more miners at that level. We heard this feedback and we took it seriously. 𝙀𝙛𝙛𝙚𝙘𝙩𝙞𝙫𝙚 𝙈𝙖𝙧𝙘𝙝 30, 𝘼𝙪𝙩𝙤𝙥𝙥𝙞𝙖 𝙢𝙤𝙫𝙚𝙨 𝙩𝙤 0% 𝙗𝙪𝙧𝙣. All emissions flow to miners. We're going all in on our miners because the proof of concept is already here.

𝟯) "𝗬𝗢𝗨'𝗟𝗟 𝗚𝗘𝗧 𝗗𝗘𝗥𝗘𝗚𝗜𝗦𝗧𝗘𝗥𝗘𝗗."

We're not going anywhere. We've significantly restructured our marketing, so you'll get clearer and more consistent messaging from now on. 0% burn will accelerate our progress as we attract more quality miners to the subnet. We're open to feedback. We know we have to earn back your trust, and we're asking the community to give us another chance to make things right. We have a cracked team that is fully committed to Bittensor and to execution, and we intend to add value to this network. We know trust isn't rebuilt with one post. It's rebuilt one kept promise at a time. March 30 is the first. Hold us to it.

#Bittensor $TAO #WebAgents
Shizzy@ShizzyUnchained·
My Bittensor Subnet portfolio right now…
SN3 τemplar 20.3%
SN62 Ridges 16.3%
SN4 Targon 11.2%
SN44 Score 9.1%
SN64 Chutes 8.7%
SN120 Affine 8.4%
SN9 iota 4.2%
SN105 Beam 3.0%
SN75 Hippius 2.83%
SN66 AlphaCore 2.73%
SN51 lium 2.21%
SN33 ReadyAI 2.19%
SN42 Gopher 2.04%
SN78 Loosh 1.93%
SN6 Numinous 1.73%
SN68 NOVA 1.72%
SN97 Constantinople 1.38%
#Bittensor #TAO $TAO
Autoppia | Subnet 36 on Bittensor
Autoppia open-sourced something most AI companies never would: the entire pipeline. Here's how it works:

1/ Miners submit agent code to GitHub. Not just prompt templates, but full agent logic with LLM orchestration, heuristics, and multi-model calls per step.
2/ Validators clone the repo and deploy it in a sandboxed Python environment. No internet access. LLM calls are routed through an approved gateway (OpenAI, Chutes, more coming).
3/ Agents are evaluated step by step. At each step, the agent sees the browser state (HTML, screenshot, URL) and returns an action plus its reasoning. Cost and time are capped per step and per task.
4/ Scoring follows a weighted formula, and the best agent takes all the incentive. Winner-takes-all.
5/ Anti-spam is embedded: score-based cooldowns, optional minimum stake, 1 hotkey per coldkey, 1 GitHub per coldkey. Gaming is hard.
6/ Next up: payments to scale to 1k–2k tasks per season.

Why it matters: OpenAI has Operator. Anthropic has Computer Use. Browser Use raised $17M+. But Autoppia is the only project where evaluation is automated via a synthetic benchmark (IWA) that generates endless novel web tasks, so agents can't memorize their way to the top. Decentralized. Permissionless. Meritocracy enforced by code, not by executives.

#Bittensor $TAO
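The evaluation loop in steps 3–5 can be sketched in a few lines. This is a hedged illustration, not Autoppia's actual code: the class names, per-step caps, and score weights are all assumptions; the post only says that cost and time are capped and that a weighted formula decides a single winner.

```python
from dataclasses import dataclass

@dataclass
class BrowserState:
    """Per-step observation (step 3): what the agent sees."""
    html: str
    url: str
    screenshot: bytes = b""

@dataclass
class StepResult:
    """What the agent returns each step: an action plus its reasoning."""
    action: str
    reasoning: str
    cost_usd: float
    seconds: float

# Assumed caps; the real per-step / per-task limits are not stated in the post.
MAX_COST_PER_STEP = 0.05
MAX_SECONDS_PER_STEP = 30.0

def example_agent_step(state: BrowserState) -> StepResult:
    # A miner agent implements something shaped like this; the LLM calls
    # and heuristics the post mentions would happen inside.
    return StepResult("click #submit", "form looks complete", 0.01, 2.0)

def evaluate_agent(steps, task_success: bool) -> float:
    """Score one agent on one task. Weights are illustrative, not the
    subnet's actual formula."""
    total_cost = total_time = 0.0
    for step in steps:
        if step.cost_usd > MAX_COST_PER_STEP or step.seconds > MAX_SECONDS_PER_STEP:
            return 0.0  # a cap violation fails the task outright
        total_cost += step.cost_usd
        total_time += step.seconds
    if not task_success:
        return 0.0
    # Success dominates; cost and latency act as tie-breakers.
    return 1.0 - 0.1 * min(total_cost, 1.0) - 0.1 * min(total_time / 300.0, 1.0)

def winner_takes_all(scores: dict) -> dict:
    """Step 4: the single top-scoring miner receives all the incentive."""
    best = max(scores, key=scores.get)
    return {m: (1.0 if m == best else 0.0) for m in scores}
```

The winner-takes-all payout is what makes the anti-spam measures in step 5 necessary: with all incentive on one miner, sybil entries and score probing are the obvious attacks.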
Autoppia | Subnet 36 on Bittensor retweeted
Mr Brondor@MrBrondorDeFi·
WHY $TAO IS THE MOST OBVIOUS BET IN CRYPTO RIGHT NOW

Most people sleep on the real infrastructure plays. I've been watching AI projects for months and 90% are just wrapped APIs with a token slapped on top. Raise money, promise "decentralized AI", deliver nothing. Then I found @opentensor $TAO (3 years ago, at around $40, good ol' times for sure) and something clicked differently.

THE RAW NUMBERS
Market cap: ~$1.8B
Circulating supply: 7.4M TAO
Max supply: 21M (same as Bitcoin)
Daily emissions: halving mechanism built in
Active subnets: 50+
Validators: 1,000+
24h volume: consistently $150-200M
Infrastructure with real activity.

WHAT BITTENSOR ACTUALLY DOES
It's a decentralized network where AI models compete and get rewarded based on performance. No single company controls it. No centralized API. No permission needed. Subnets are specialized networks within TAO for different AI tasks: text generation, image recognition, data processing, whatever. Each subnet has its own miners producing AI outputs and validators scoring quality. Better outputs = more TAO rewards. Simple.

WHY THE TOKENOMICS MATTER
21 million max supply. Same as Bitcoin. Halving emissions over time. Same scarcity model that made BTC what it is. But here's the difference: BTC miners burn energy for security, TAO miners burn compute for useful AI work. One secures a ledger. One trains artificial intelligence. Both scarce. One arguably more useful long term.

THE SUBNET ECOSYSTEM
SN1 - Text generation
SN2 - Machine translation
SN3 - Data scraping
SN8 - Time series prediction
SN9 - Text to image
SN19 - Vision models
SN21 - Storage
SN36 - @AutoppiaAI (B2B services)
SN63 - @qBitTensorLabs (Quantum computing)
(idk the official pages of the other SNs, sorry; tag them in the comments if you want)
Over 50 subnets live and producing real AI outputs daily.

THE MARKET OPPORTUNITY
Global AI market projected at $1.8 TRILLION by 2030. Where do all these AI agents and protocols live when they need to scale without centralized bottlenecks? OpenAI can shut you off anytime. Google can change API terms overnight. Amazon can raise prices whenever. Decentralized AI infrastructure solves this. TAO is building that home base.

WHAT I'M SEEING
Upbit listing brought Korean volume
Bitget partnership for staking
Founder stepped down for more decentralization
Developer activity consistently high on GitHub
Institutional interest growing quietly
While CT chases the next dog coin, actual infrastructure is being built that the entire AI market will need.

MY TAKE
When I look at where crypto and AI intersect, this is the most asymmetric bet I've seen in a while.
Fixed supply like Bitcoin
Real utility unlike 99% of AI tokens
Growing subnet ecosystem
Institutional rails being built
I'm not here to convince anyone. DYOR and make your own calls. But the question I keep asking myself is simple: why isn't everyone talking about this? What am I missing here, Tribe?

-Brondor
- not financial advice - NOT an ad - always make sure to DYOR before investing in anything -
Visual Studio Code
🌐 Agentic Browser Tools (Experimental) in @code! Agents can now open pages, read content, click elements, and verify changes directly in the integrated browser while building your web app. Enable ⚙️ workbench.browser.enableChatTools to try it out. Learn more: aka.ms/VSCode/Agentic…
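As a config sketch, enabling the flag named in the announcement would look like this in your VS Code settings.json (assuming the experimental feature ships under exactly that key):

```json
{
  "workbench.browser.enableChatTools": true
}
```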
Alex DRocks@DrocksAlex2·
No extra comment $TAO
Autoppia | Subnet 36 on Bittensor
@ihtesham2005 Claude Code and Codex with browser tools are just "decent". As soon as you give them slightly complex workflows in the browser, they get lost. If only someone had a decent agent to operate the web? 👀
Ihtesham Ali@ihtesham2005·
🚨 RIP Chrome for AI agents. Someone built a headless browser from scratch that runs 11x faster and uses 9x less memory. It's called Lightpanda.

Every AI agent doing web automation right now is running Chrome under the hood. That means you're spinning up a massive desktop application, stripping out the UI, and running hundreds of instances of it on a server. For something that never needs to render a single pixel. It's like renting a semi-truck to deliver a letter.

Lightpanda is built differently. Not a fork of Chromium, Blink, or WebKit. Written from scratch in Zig with one goal: headless performance, nothing else. It still runs JavaScript. Still handles Ajax, XHR, Fetch, SPAs, infinite scroll, all of it. Just without dragging along 500MB of browser bloat you'll never use.

And it drops straight into your existing stack:
→ Compatible with Playwright, Puppeteer, and chromedp via CDP
→ One-line Docker install
→ CDP server on port 9222, swap it in for Chrome in 30 seconds

The use cases are obvious: AI web agents, LLM training data scraping, browser automation at scale, testing pipelines. Anything where you're paying for Chrome compute and cringing at the bill.

It's still in beta and Web API coverage is growing. But at 11.8K stars it's clearly hitting a real nerve. 100% open source, AGPL-3.0. Link in comments.
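The drop-in swap the post describes is just Playwright's connect-over-CDP path. A minimal sketch, assuming a Lightpanda container (or any other CDP server) is already listening on localhost:9222; this has not been tested against Lightpanda itself, and its beta Web API coverage may not support every page:

```python
def fetch_title(cdp_url: str = "http://127.0.0.1:9222") -> str:
    """Reuse an existing Playwright script against any CDP endpoint:
    bundled Chromium, a remote Chrome, or a headless engine like Lightpanda."""
    from playwright.sync_api import sync_playwright  # pip install playwright

    with sync_playwright() as p:
        # Same call you'd use for a remote Chrome; only the endpoint changes.
        browser = p.chromium.connect_over_cdp(cdp_url)
        try:
            page = browser.new_page()
            page.goto("https://example.com")
            return page.title()
        finally:
            browser.close()
```

Because the swap happens at the CDP endpoint, the rest of the automation code (selectors, navigation, assertions) stays untouched, which is what makes the "30 seconds" claim plausible.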
Pavel Svitek 🇨🇭@pavelsvitek_·
You are hiring a frontend engineer and you see this. What do you do?
Gordon Frayne@gordonfrayne·
$TAO is the NASDAQ for Decentralized AI companies. Fade it at your own risk.
Rob Greer@rob_svrn·
AGI won't be a single god model. It'll be an entire economy of $TAO subnets, cross-pollinating and compounding intelligence together. No corporate lab will build this. The Bittensor network will.
templar@tplr_ai·
We just completed the largest decentralised LLM pre-training run in history: Covenant-72B. Permissionless, on Bittensor subnet 3. 72B parameters. ~1.1T tokens. Commodity internet. No centralized cluster. No whitelist. Anyone with GPUs could join or leave freely. 1/n
Autoppia | Subnet 36 on Bittensor retweeted
Autoppia | Subnet 36 on Bittensor
𝗥𝗼𝗮𝗱 𝘁𝗼 𝗦𝗢𝗧𝗔 — 𝗪𝗲𝗲𝗸𝗹𝘆 𝗨𝗽𝗱𝗮𝘁𝗲 (𝗠𝗮𝗿 1–8) 🚀 Big week at Autoppia! We shipped major refactors, infra upgrades, miner simplifications, and laid the groundwork for leaderboard comparisons. A thread 🧵👇 #OpenSource #Bittensor $TAO
Doug Sillars 🌻@dougsillars·
I hear that @bitget has MCP and OpenClaw integrations. Be careful when giving AI agents access to trading. We've seen some silliness on Bittensor over the last couple of weeks.
Jolly Green Investor 🍀@jollygreenmoney·
Bittensor $TAO subnets offer the best R/R in crypto right now 💎 You can find a subnet building legitimate, game-changing AI tech that could 5-10x, and TAO itself can 5-10x, compounding into serious gains. Here are some small-cap subnets under $5M market cap that I expect to perform well:
SN82 Hermes @HermesSubnet
SN88 Investing @Investing88ai
SN97 Flamewire @FlameWire_SN
What other small-cap subnets would you add to this list?
Andrej Karpathy@karpathy·
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference.

I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly manually well-tuned project. This is a first for me because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc. This has been the bread and butter of what I do daily for 2 decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of results of experiments and used that to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I didn't find them manually previously, and they stack up and actually improved nanochat.

Among the bigger things:
- It noticed an oversight that my parameterless QKnorm didn't have a scalar multiplier attached, so my attention was too diffuse. The agent found multipliers to sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.

This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism. github.com/karpathy/nanoc…

All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges. And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics, such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
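The loop the post describes (propose a change, run an experiment, keep the change only if validation loss improves) can be sketched abstractly. Everything here is a stand-in: `propose_change` and `run_experiment` are hypothetical toys, and the real system plans proposals from the sequence of prior results rather than drawing them at random.

```python
import random

def autoresearch(baseline_loss: float, rounds: int, seed: int = 0):
    """Greedy hill-climb over candidate changes: keep a change only if it
    improves the (simulated) validation loss. A toy stand-in for the
    agent-driven loop described above."""
    rng = random.Random(seed)
    kept = []
    loss = baseline_loss

    def propose_change(i: int) -> str:
        # Stand-in: a real agent proposes concrete edits (e.g. "tune AdamW betas").
        return f"change-{i}"

    def run_experiment() -> float:
        # Stand-in: a real run trains a small model and reports validation loss.
        return loss + rng.uniform(-0.01, 0.02)

    for i in range(rounds):
        change = propose_change(i)
        new_loss = run_experiment()
        if new_loss < loss:  # improvements are additive and stack up
            loss = new_loss
            kept.append(change)
    return kept, loss
```

In the real run the "experiments" are actual training jobs and the proposals come from an LLM reading prior results, which is what makes ~700 autonomous iterations notable; the toy only shows the keep-if-better control flow.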
macrozack (τ, τ) 🚀@macrozack·
When Gus first asked to work together, he had yet to work for any team on Bittensor, despite knowing almost every subnet better than I did. Since then, Gus's conviction in @bitstarterAI has matched my own 🔥 Launch after launch, he's been vital. Well deserved, @officialneeve 👏
Gustavo Aroso@officialneeve

Today I become Founding Partner of @bitstarterAI - stepping up from my role as Chief Subnet Officer to take on full co-ownership and a broader set of responsibilities across our portfolio. It hasn't even been a year since I met @macrozack. But that one conversation was enough for me to walk away from 6 years in iGaming and start on the path to launching 4 new teams in under 6 months. That's the power of what we're building. Bittensor is the future of AI. Betting my career on it feels like the only move that makes sense. The right risk, at the right time, with the right team ⚡️
