ReadyAI
@ReadyAI_
226 posts

Making the world's data accessible to AI. Home of AcquiOS for CRE & M&A acquisitions. Bittensor Subnet 3️⃣3️⃣ 🌐

Joined August 2024
24 Following · 3.5K Followers

ReadyAI reposted
David Fields @DavFields
We've been heads down on something. Coding agents are the trillion-dollar race for every major lab, and the high-quality structured data to make them work is the bottleneck. Context7 became the #1 MCP server (53K stars) by solving for current docs. But the hard problems live deeper: from version-pinned breaking changes to expert reasoning mined from thousands of technical podcasts, coding intelligence that doesn't exist in any documentation. We're building that dataset. More this week.
Quoting ReadyAI @ReadyAI_: x.com/i/article/2051…
0 replies · 1 repost · 7 likes · 331 views

ReadyAI reposted
David Fields @DavFields
Getting my Claw into music this weekend... thanks @ReadyAI_. Grab any files you'd like here: readyai.ai
[media attachment]
4 replies · 2 reposts · 12 likes · 839 views

ReadyAI reposted
ReadyAI @ReadyAI_
New on @ReadyAI_: request an llms.txt file for any domain, free. Search for a site → not in our 10K+ database? Hit "Request This Domain Now" → your file gets queued on the subnet. 5 free requests per user. Every file is open-sourced on GitHub. Structured data for agents shouldn't be gated.
2 replies · 5 reposts · 24 likes · 1K views
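
For agents that want to consume these files directly, the llms.txt convention is to serve the file at a site's web root. Below is a minimal fetch sketch assuming only that public convention; ReadyAI's own request/queue API isn't documented in this thread, so the fallback here is just a placeholder comment.

```python
import urllib.request
import urllib.error

def fetch_llms_txt(domain: str, timeout: int = 10) -> str | None:
    """Try to fetch a domain's llms.txt from its web root.

    Follows the proposed llms.txt convention: a markdown file served
    at https://<domain>/llms.txt. Returns the text, or None if the
    site doesn't publish one.
    """
    url = f"https://{domain}/llms.txt"
    req = urllib.request.Request(url, headers={"User-Agent": "llms-txt-fetch/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, TimeoutError):
        # No file published; this is where a request to the subnet's
        # queue (as described in the post above) would go instead.
        return None

if __name__ == "__main__":
    text = fetch_llms_txt("docs.anthropic.com")  # a known llms.txt adopter
    print(text[:300] if text else "No llms.txt found")
```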

ReadyAI reposted
David Fields @DavFields
This is the best explanation for why (1) Bittensor is unique among crypto projects and (2) you often see crypto VCs hating on it. Bittensor provides the incentives for bootstrapping innovation across numerous experiments all at once, without the need for VCs. $TAO
Quoting Algod @AlgodTrading:
Yes, emissions are used to bootstrap innovation, same as Uber, Amazon, and countless other big companies. You can choose between these two: give those emissions to VCs, or give them to builders who devote their whole time to building out the network. VCs hate it because they can't apply the VC playbook or get discounted access compared to the masses.
6 replies · 15 reposts · 80 likes · 4.9K views

ReadyAI reposted
0xSammy @0xSammy
SEO was built for humans browsing the web. The next version of search optimization is built for agents reading it.

AEO/GEO ("agent engine optimization" or "generative engine optimization") is becoming a real category. An entire industry is forming around making your website legible to LLMs and autonomous agents instead of just Google crawlers.

Right now every AI agent that needs info about a company or domain does the same thing: scrapes, parses HTML, and hopes for the best. Billions of redundant crawls; trillions of wasted tokens.

llms.txt emerged as a proposed standard for this: a markdown file in a website's root directory that gives LLMs a clean, structured summary of the site's content instead of forcing them to parse navigation menus, cookie banners, and JavaScript. Over 844k websites have already adopted it; Anthropic, Cloudflare, and Stripe among them.

The problem is that no one has built the infrastructure to do this at scale across the entire web. The beauty of this is that the infrastructure powering it can be decentralized from day one; there's no reason for one company to own the machine-readable index of the entire web.

So when you read the announcement below from Subnet 33, you should look at it in the context of this broader agentic engine optimization (AEO). How many "AEO experts" do you think currently exist? Zero. There's a huge opportunity for you to pick a niche and dominate.

Once again, another Bittensor subnet tackling a forward-thinking problem.
Quoting David Fields @DavFields' launch announcement (full post below).
14 replies · 5 reposts · 82 likes · 9.7K views
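
For anyone new to the format 0xSammy describes: per the proposed llms.txt spec, the file is plain markdown with an H1 title, a blockquote summary, and H2 sections of curated links. A small parsing sketch follows; the sample file content is illustrative only, not taken from any real site.

```python
import re

# Illustrative llms.txt, following the proposed spec:
# H1 title, blockquote summary, H2 sections of curated links.
SAMPLE = """\
# ExampleCo

> ExampleCo builds developer tools for payments.

## Docs

- [Quickstart](https://example.com/docs/quickstart): Get running in 5 minutes
- [API reference](https://example.com/docs/api): Full endpoint list

## Optional

- [Blog](https://example.com/blog): Announcements and deep dives
"""

def parse_llms_txt(text: str) -> dict:
    """Split an llms.txt into title, summary, and link sections."""
    title = ""
    summary = ""
    sections: dict[str, list[tuple[str, str]]] = {}
    current = None
    for line in text.splitlines():
        if line.startswith("# "):
            title = line[2:].strip()
        elif line.startswith("> "):
            summary = line[2:].strip()
        elif line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current and (m := re.match(r"- \[(.+?)\]\((.+?)\)", line)):
            sections[current].append((m.group(1), m.group(2)))
    return {"title": title, "summary": summary, "sections": sections}

if __name__ == "__main__":
    parsed = parse_llms_txt(SAMPLE)
    print(parsed["title"], "-", parsed["summary"])
    for name, links in parsed["sections"].items():
        print(f"{name}: {len(links)} links")
```

The point of the format is exactly what the post argues: an agent gets the whole shape of a site from one small file instead of crawling it.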

ReadyAI reposted
David Fields @DavFields
We just launched the new readyai.ai. Type any domain into the search. If it's in our dataset, you get clean, structured intelligence instantly. No scraping. No parsing HTML. Just machine-readable data, ready for any AI agent.

10,000+ websites crawled, cleaned, and structured by Subnet 33 so far. Growing to 100K by Q2, 1M by year end.

This is the beginning of something bigger: a marketplace for agentic data. Right now, every AI agent that needs info about a company or domain scrapes, parses, and hopes. Billions of redundant crawls. Trillions of wasted tokens. We're building the infrastructure layer that fixes this: an indexed, machine-readable web powered by decentralized compute.
Quoting ReadyAI @ReadyAI_: x.com/i/article/2037…
4 replies · 17 reposts · 102 likes · 20.5K views

ReadyAI @ReadyAI_
@eleusys7 Orienting SN33 for the coming wave of agentic commerce 🫡
0 replies · 0 reposts · 1 like · 45 views

ReadyAI @ReadyAI_
👀 Something new is coming. We've been building, and we're almost ready to show you.

SN33 has been processing the web at scale, turning raw Common Crawl data into clean, AI-ready `llms.txt` files: structured semantic summaries that any LLM agent, MCP server, or AI app can consume instantly.

On Thursday we'll be releasing the GitHub repo where `llms.txt` files will be pushed in batches as the subnet processes them. We're starting with over 1,000 websites analyzed and processed by the subnet, and that number will grow every week.

And shortly after... 🌍 we're launching a public frontend. Any website. Any domain. You request it, the subnet processes it, and you get an `llms.txt` back. No more raw HTML hell for AI agents. No more redundant crawling. Just clean, structured, machine-readable intelligence about any corner of the web, on demand, powered by decentralized compute.

This is SN33 becoming a public utility for AI infrastructure. The web, made readable for machines. At scale. Open to anyone.

🔜 More very soon. Stay tuned.
[media attachment]
6 replies · 11 reposts · 61 likes · 11.7K views

ReadyAI reposted
David Fields @DavFields
Our recent breakthrough with enrichment tasks on the subnet has completely opened the floodgates. We can now create structured datasets from nearly any source, from llms.txt to deep coding data. We'll be sharing benchmark improvements from this coding data shortly.
Quoting ReadyAI @ReadyAI_: x.com/i/article/2034…
4 replies · 3 reposts · 24 likes · 2K views

ReadyAI reposted
David Fields @DavFields
The web wasn't built for AI agents. We're fixing that. The first 1,000 domains are live now, with millions coming. Open source, decentralized, and free. A frontend to request an llms.txt for any site is coming shortly.
Quoting ReadyAI @ReadyAI_'s llms.txt launch announcement (full post below).
5 replies · 8 reposts · 31 likes · 3.2K views

ReadyAI @ReadyAI_
🚀 llms.txt files are live on SN33. The llms.txt repository is now live. 🔗 github.com/afterpartyai/l…

SN33 has processed the first batch: over 1,000 websites crawled, cleaned, and converted into structured llms.txt files by the subnet. Semantic summaries ready for any LLM agent, MCP server, or AI app to consume instantly. No scraping. No parsing raw HTML. Just clean, machine-readable intelligence. New batches will be pushed as the subnet keeps processing; the repo grows every week.

What's in the dataset:
→ Structured semantic summaries per domain
→ Named entities: people, orgs, products, technologies, concepts
→ Topic classification and key themes
→ Deterministic O(1) lookup by domain, with no index file needed (see the sketch after this post)
→ Git-friendly structure that scales to millions of domains

This initial release covers ~1,000 domains as a pilot, but the pipeline scales to millions. 📍 Roadmap: 10K → 100K → 1M domains → continuous updates from new Common Crawl releases, and soon from requests.

🌍 And the frontend is coming. Any domain. You request it, the subnet processes it, you get an llms.txt back. We're putting the finishing touches on the public UI and it drops soon.

SN33 is becoming infrastructure: the web, made readable for machines and open to anyone, powered by decentralized infra. Star the repo. Share it. And stay close. The next drop is right around the corner.
1 reply · 4 reposts · 22 likes · 5K views
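
The "O(1) lookup with no index file" bullet implies the file path is derived deterministically from the domain name itself. The repo's actual layout isn't shown in this thread, so the scheme below (hash-prefix sharding, a common Git-friendly pattern) is purely an assumption to illustrate how that property can be achieved:

```python
import hashlib
from pathlib import Path

def llms_txt_path(domain: str, root: str = "llms-txt-repo") -> Path:
    """Derive a file path from a domain name alone: no index needed.

    ASSUMPTION: this hash-prefix sharding layout is a generic
    Git-friendly pattern, not the repo's documented scheme. Two
    levels of 2-hex-char directories cap each directory at 256
    entries, which keeps Git and filesystems fast even at millions
    of files.
    """
    key = domain.lower().strip().rstrip(".")
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return Path(root) / digest[:2] / digest[2:4] / f"{key}.llms.txt"

# Lookup is O(1): any client can recompute the same path locally,
# without cloning or scanning an index.
print(llms_txt_path("example.com"))
```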

Sovran AMR @a_m_r_news
@ReadyAI_ LLMs.txt? Interesting. How are you handling entity resolution across the Common Crawl's noise? That's always been a bear for us.
1 reply · 0 reposts · 0 likes · 157 views

ReadyAI @ReadyAI_ (replying)
Great question. Short answer: we sidestep a lot of it by processing at the site level, not the page level (sketched below). When you enrich an entire domain's pages together (NER, tags, summarization, similar pages), you get entity grounding from context across the site rather than trying to reconcile isolated page-level extractions across the whole crawl. It doesn't eliminate the problem, but it dramatically reduces the noise surface. The repo drops Thursday, so we'd love your take once you can see the output structure.
1 reply · 0 reposts · 0 likes · 133 views
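
The site-level idea in that reply is easy to sketch: group crawled pages by domain and run extraction over the pooled text, so every mention of an entity is grounded by the rest of the site. The toy below illustrates only the grouping step; the capitalized-token extractor is a stand-in for a real NER model, and all function names are hypothetical.

```python
import re
from collections import Counter, defaultdict
from urllib.parse import urlparse

def naive_entities(text: str) -> list[str]:
    """Stand-in for a real NER model: grab capitalized tokens."""
    return re.findall(r"\b[A-Z][a-zA-Z0-9]+\b", text)

def enrich_by_site(pages: list[tuple[str, str]]) -> dict[str, Counter]:
    """Pool pages per domain before extracting entities.

    Site-level pooling means an entity mentioned across many of a
    domain's pages dominates the counts, while one-off noise from a
    single page stays rare: cross-page context does the grounding.
    """
    by_domain: dict[str, list[str]] = defaultdict(list)
    for url, text in pages:
        by_domain[urlparse(url).netloc].append(text)
    return {
        domain: Counter(naive_entities(" ".join(texts)))
        for domain, texts in by_domain.items()
    }

pages = [
    ("https://example.com/about", "Acme builds tools. Acme was founded by Jo Smith."),
    ("https://example.com/blog/1", "Acme ships Widget 2.0 today."),
    ("https://other.org/", "A totally unrelated page about Llamas."),
]
for domain, counts in enrich_by_site(pages).items():
    print(domain, counts.most_common(3))
```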

ReadyAI reposted
David Fields @DavFields
The generic data race is over. The teams that win the next three years are the ones building deep, vertical-specific pipelines that scraping can't replicate. That's exactly what we're doing at @ReadyAI_. Phase 1 is just the start.
Quoting ReadyAI @ReadyAI_: x.com/i/article/2029…
4 replies · 6 reposts · 31 likes · 2.3K views