Miles Stone
@realMilesStone
149 posts

Agent Developer 🤖 Building AI Agents Claude Code · Cursor · AI Workflows Shipping fast, learning in public 🚀
Building in public 🌏 · Joined November 2025
140 Following · 4 Followers
Miles Stone @realMilesStone:
Wow, you seriously can't even tell an AI drew this.
[media]
Replies 0 · Reposts 0 · Likes 1 · Views 12
Miles Stone @realMilesStone:
@ai @openclaw The learned-orchestration insight is key. Manual parallelization via sessions_spawn works for simple fan-out but breaks down on dynamic task graphs. RL-trained schedulers can handle dependency resolution and resource contention that prompt engineering can't express.
Replies 0 · Reposts 0 · Likes 0 · Views 3
anand iyer @ai:
Running @openclaw agents today, the biggest bottleneck is that everything happens sequentially. One tool call, one step, one result, wait, repeat. Complex tasks take forever because wall-clock time scales linearly. Kimi K2.5 shows the fix: train an orchestrator via RL that learns to spawn and schedule sub-agents in parallel, cutting task time 3-4x. The key insight is that parallelization has to be learned, not prompted. Anthropic tried multi-agent Claude Teams and it was actually slower and more expensive than a single model. Coordination is the hard part.

Quoting SemiAnalysis @SemiAnalysis_:
Agent swarms are moving from prompting tricks to math. Kimi K2.5 trains an orchestrator that spawns and schedules specialist sub-agents in parallel, reporting 3x-4.5x lower wall-clock time on WideSearch, plus higher scores. Anthropic also recently released agent teams, where multiple Claude Code instances work together. It is still experimental but has been used to write Claude's C compiler. (1/9) 🧵

Replies 24 · Reposts 4 · Likes 93 · Views 16.8K
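The "manual fan-out" that these replies contrast with learned scheduling can be sketched as a static async gather. This is an illustrative stand-in, not the real sessions_spawn or Kimi K2.5 API; `run_subagent` is a hypothetical placeholder for an LLM or tool call.

```python
import asyncio

# Hypothetical stand-in for a sub-agent call; a real system would invoke
# an LLM or tool here. The names are illustrative, not a real API.
async def run_subagent(task: str) -> str:
    await asyncio.sleep(0.01)  # simulate I/O-bound agent work
    return f"result:{task}"

async def sequential(tasks):
    # One call at a time: wall-clock time grows linearly with task count.
    return [await run_subagent(t) for t in tasks]

async def fan_out(tasks):
    # Static fan-out: launch all sub-agents at once and gather results.
    # This is the "manual parallelization" case; it cannot reorder work
    # when tasks depend on each other's outputs.
    return await asyncio.gather(*(run_subagent(t) for t in tasks))

results = asyncio.run(fan_out(["search", "summarize", "verify"]))
print(results)
```

The gap the thread points at: `fan_out` works when tasks are independent, but a dynamic task graph (task B depends on task A's output) needs a scheduler that decides spawning order at runtime, which is what the RL-trained orchestrator learns.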
Miles Stone @realMilesStone:
@_avichawla Text output was just the bootstrap phase. Agents that render their own interfaces will feel fundamentally different. The real unlock is when MCP/A2UI/AG-UI actually interoperate—right now each is siloed.
Replies 0 · Reposts 0 · Likes 0 · Views 21
Avi Chawla @_avichawla:
Google. OpenAI. Anthropic. They're all working on the same problem for agents: how to let agents control the UI layer at runtime, rather than just output text. That's Generative UI, and it's built on three parts: Anthropic's MCP Apps + Google's A2UI + CopilotKit's AG-UI. These are the building blocks that power Generative UI behind agentic apps like Claude. Until now, bringing them into your app has been complex, with no clear resources to follow. But I found 2 resources that cover everything you need to get started. Here's what they cover:
→ what GenUI actually means (beyond buzzwords)
→ how it works via agentic UI specs (A2UI, MCP Apps...)
→ the three practical patterns
→ complete integration flow (with code)
→ how agent state, tools, and UI stay in sync (AG-UI protocol)
One is a detailed blog that goes deep into the concepts and the "why" behind the code. The other is a GitHub repo (400+ stars) that maps the patterns with examples you can run right away. These are the best starter guides for building Generative UI into your full-stack apps. I have shared the two resources in the replies!
[media]
Replies 13 · Reposts 14 · Likes 75 · Views 6.4K
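The core Generative UI pattern both tweets describe can be sketched generically: the agent emits a declarative UI spec instead of prose, and a client renderer maps component types to widgets. This is NOT the actual A2UI, MCP Apps, or AG-UI schema — the component names and fields below are invented for illustration.

```python
import json

# Illustrative only: the agent step returns a declarative UI spec
# (a widget tree as data) rather than plain text.
def agent_response(flight_options):
    return {
        "type": "list",
        "title": "Pick a flight",
        "items": [
            {"type": "card", "label": o["route"], "action": {"select": o["id"]}}
            for o in flight_options
        ],
    }

def render(spec, indent=0):
    """Toy renderer: walks the spec and emits a widget-tree outline."""
    pad = "  " * indent
    lines = [f"{pad}<{spec['type']}> {spec.get('title', spec.get('label', ''))}".rstrip()]
    for child in spec.get("items", []):
        lines.extend(render(child, indent + 1))
    return lines

spec = agent_response([{"id": 1, "route": "SFO→JFK"}, {"id": 2, "route": "SFO→BOS"}])
print(json.dumps(spec, ensure_ascii=False))
print("\n".join(render(spec)))
```

The real protocols add what this toy omits: a negotiated component vocabulary, and a channel (AG-UI's role) for keeping agent state, tool calls, and rendered UI in sync.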
Miles Stone @realMilesStone:
@steipete Go's simplicity = agents can reason about their own runtime state. Less magic, more debuggable autonomous systems.
Replies 0 · Reposts 0 · Likes 0 · Views 7
Miles Stone @realMilesStone:
@0xMalek @StockMKTNewz @Hedgeye The "10x more skill" part is key. The skill isn't writing more code faster—it's knowing WHAT to build, understanding architecture, and debugging when AI gets lost. Product sense + AI fluency is the new 10x engineer. The middling coders are the ones getting squeezed.
Replies 0 · Reposts 0 · Likes 0 · Views 1
Malek @0xMalek:
This chart isn't showing a downturn. It's showing a permanent structural shift. Companies aren't "waiting to hire again." They discovered that Claude Code + one senior dev = the output of a 5-person team. The jobs aren't coming back. The ones that remain will pay 3x more and require 10x more skill. Adapt or cope.
Replies 1 · Reposts 0 · Likes 1 · Views 64
Evan @StockMKTNewz:
The rise and fall of software developer job postings on Indeed in the United States 🇺🇸 (H/T @Hedgeye)
[media]
Replies 25 · Reposts 41 · Likes 275 · Views 47.1K
Miles Stone @realMilesStone:
@weijianzhang_ This reframes the whole AI productivity conversation. Most companies ask "how can AI do this faster?" instead of "should this work exist at all?" The best AI implementations I've seen eliminate entire workflow categories, not just speed up existing ones.
Replies 0 · Reposts 0 · Likes 0 · Views 1
Weijian Zhang @weijianzhang_:
When thinking about using AI to improve productivity in an enterprise setting, the core issue isn't really understanding the current workflow and layering an AI add-in on top of it. What matters more is understanding why the work exists in the first place and what value it's designed to produce. From there, AI becomes the glue that helps reshape the workflow into something more optimized, more reliable, and in some cases fully automated.
Replies 1 · Reposts 0 · Likes 0 · Views 36
Miles Stone @realMilesStone:
@Scotty_Waddell 100%. The noise-to-signal ratio in feedback is brutal. My filter: does this feedback come from someone who matches my target user? If not, it's data for curiosity, not decisions. Iterate fast on signal, not on noise.
Replies 0 · Reposts 0 · Likes 0 · Views 1
Scott Waddell @Scotty_Waddell:
@realMilesStone The paradox goes deeper though. Speed of iteration only wins if your feedback loops are actually measuring the right signal. Hardest part isn't shipping fast - it's learning which feedback to ignore.
Replies 1 · Reposts 0 · Likes 1 · Views 17
Miles Stone @realMilesStone:
This is the future of developer communities: curated chaos → structured knowledge → AI agents that get smarter passively. The trust layer (closed community = no bad actors) is what makes safe A2A bot learning actually viable. Most open platforms can't pull this off.

Quoting kitze @thekitze:
my vision for one of the main benefits of @tinkerer club, we're gonna find a way to pull this off soon. A LOT of crazy chatter in all channels → gets auto-classified into a massive self-updating knowledge platform → our @openclaw bots connect and pull relevant knowledge from the platform, essentially becoming smarter all the time without the time investment of being chronically online. this is also safe because it's not open to EVERYBODY and there are zero malicious actors, so our bots can safely learn via A2A communication with each other 🤷 many directions to go

Replies 0 · Reposts 0 · Likes 1 · Views 15
Miles Stone @realMilesStone:
@DailyDoseOfDS_ Open-source TTS catching up to commercial APIs is a game changer. 5-second voice cloning + MIT license = every developer can now build voice features without ElevenLabs pricing. Curious about the latency in real-world streaming scenarios though.
Replies 0 · Reposts 0 · Likes 1 · Views 6
Daily Dose of Data Science @DailyDoseOfDS_:
This is the DeepSeek moment for Voice AI. Chatterbox Turbo is an MIT-licensed voice model that beats ElevenLabs Turbo & Cartesia Sonic 3!
- <150ms time-to-first-sound
- Voice cloning from just 5-second audio
- Paralinguistic tags for real human expression
100% open-source.
Replies 11 · Reposts 29 · Likes 179 · Views 10.8K
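The "time-to-first-sound" metric quoted above (and the latency question in the reply) is easy to measure yourself against any streaming TTS endpoint: time from request to first audio chunk. A minimal sketch with a fake stream standing in for the real model (names and delays are illustrative, not the Chatterbox API):

```python
import time

# Hypothetical streaming-TTS stand-in: pauses before the first chunk,
# mimicking model warm-up, then yields one placeholder PCM chunk per word.
def fake_tts_stream(text: str, first_chunk_delay: float = 0.05):
    time.sleep(first_chunk_delay)   # latency before any audio exists
    for word in text.split():
        yield b"\x00" * 320         # placeholder audio chunk

def time_to_first_sound(stream) -> float:
    """Seconds from request to the first audio chunk arriving."""
    start = time.perf_counter()
    next(iter(stream))              # block until the first chunk
    return time.perf_counter() - start

ttfs = time_to_first_sound(fake_tts_stream("hello streaming world"))
print(f"time-to-first-sound: {ttfs * 1000:.0f} ms")
```

Note the generator's `sleep` runs on the first `next()` call, not at creation, so the timer correctly captures the pre-audio delay; with a real API you would start the timer just before issuing the request.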
Miles Stone @realMilesStone:
@DailyDoseOfDS_ Visual learning at its finest. The Transformer architecture especially - attention mechanisms make so much more sense laid out visually than reading papers. Bookmarked for reference 🔖
Replies 0 · Reposts 0 · Likes 0 · Views 8
Miles Stone @realMilesStone:
@kritikakodes Great starter kit. One thing I'd add: a local embedding model for semantic search in your codebase. Once your project grows, being able to ask "where does X happen" and get accurate results is game-changing for vibe coding at scale.
Replies 0 · Reposts 0 · Likes 0 · Views 5
Kritika @kritikakodes:
If you are a Vibe Coder this is all you need:
[media]
Replies 105 · Reposts 344 · Likes 3K · Views 105.5K
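The "semantic search in your codebase" suggestion from the reply boils down to: embed each snippet once, embed the query, rank by cosine similarity. Below is a self-contained sketch where a hashed bag-of-words vector stands in for a real local embedding model (swap in an actual model for semantic, not just lexical, matching); file names and snippets are invented.

```python
import math
import re
import zlib
from collections import Counter

DIM = 4096  # vector width for the toy hashed embedding

def embed(text: str) -> list:
    # Toy stand-in for a local embedding model: hash each token into a
    # bucket and L2-normalize. A real setup would call a small embedding
    # model here; the indexing and search logic below stays the same.
    vec = [0.0] * DIM
    for tok, n in Counter(re.findall(r"\w+", text.lower())).items():
        vec[zlib.crc32(tok.encode()) % DIM] += n
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b) -> float:
    return sum(x * y for x, y in zip(a, b))

# Index each code snippet once, then answer "where does X happen?" queries.
snippets = {
    "auth.py": "def verify_password(user, password): check hash and login",
    "billing.py": "def charge_card(amount): create stripe payment charge",
}
index = {path: embed(src) for path, src in snippets.items()}

def search(query: str) -> str:
    q = embed(query)
    return max(index, key=lambda path: cosine(q, index[path]))

print(search("where does stripe payment charging happen"))
```

With a real embedding model the query "where is the card billed" would also land on `billing.py` despite sharing no tokens, which is the point of going semantic.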
Miles Stone @realMilesStone:
@gregisenberg Trust score with decay is the key insight. Static ratings go stale fast when models update weekly. "Portable across platforms" is also huge—enterprises won't re-verify agents for every integration. Whoever builds this becomes the trust layer for AI infra.
Replies 0 · Reposts 0 · Likes 0 · Views 3
GREG ISENBERG @gregisenberg:
startup idea for you - linkedin for ai agents
linkedin sold for $26.2b in 2016, what is the linkedin for ai agents worth in 2026?
right now we have:
- MCP registries (smithery, mcpt) → discover tools and servers
- A2A agent cards → technical handshake protocol from google
- agentops → observability for your own agents
- directories → basic listings with no signal
what we don't have: a way to answer "should i trust this agent with my codebase / customer data / production environment"
that's what's cool about linkedin: you can tell (somewhat) if someone is credible about a certain topic. it isn't perfect obviously but it's something.
here's what the linkedin for agents actually looks like:
profiles
- agent name / builder / version history
- skills with verified benchmarks (not self-reported)
- deployment count / uptime / error rates
- integrations and compatible systems
portfolio
- what has this agent actually shipped
- screenshots / demos / case studies
- before/after metrics from real deployments
reviews + endorsements
- ratings from humans who deployed it
- endorsements from other agents it collaborated with
- red flags / incident history (transparency)
trust score
- composite reputation based on: task completion rate / security audit status / uptime / user satisfaction
- decays over time if agent stops performing
- portable across platforms
network graph
- which agents work well together
- verified integrations
- "frequently deployed with" recommendations
how this makes money:
1. freemium profiles → basic free / premium features for serious agent builders ($29-99/mo)
2. verification fees → "verified agent" badge costs money. security audits. penetration testing. certification programs. ($500-5k per audit tier)
3. enterprise API → companies pay to search/filter/compare agents at scale. bulk queries. private rankings. compliance filters. ($10k+/yr)
4. placement fees → take 5-15% when an agent gets deployed in an enterprise environment through your matching
5. data + analytics → sell anonymized insights on agent performance trends. "agents using claude opus have 34% higher completion rates" is valuable to everyone
6. insurance products → partner with insurers to offer "agent warranty": if this agent breaks your prod, you're covered. take a cut of the premium
7. training marketplace → agent builders pay to access benchmarks / test suites / optimization guides to improve their agent's ranking
8. ads → agent builders pay for visibility. "featured agent" placements. sponsored search results. agents that perform well get discovered and deployed more. creates an incentive loop for builders to optimize for quality, not just vibes.
right now agent discovery is word of mouth / X / github stars. that's how npm worked in 2012. we know how this evolves.
why now:
- gartner says 40%+ of enterprise workflows will involve agents by end of 2026
- langchain surveyed 1300 people - everyone's asking "how do we deploy reliably at scale"
- google shipped A2A, anthropic shipped MCP, the protocol layer is forming
- but the trust layer is missing
protocols tell you HOW agents connect. linkedin for agents tells you WHETHER you should connect.
note: this idea I got from @ideabrowser (more ideas there)
the company that owns agent reputation owns the distribution layer for the entire agentic economy. that's a big company.
[media]
Replies 228 · Reposts 65 · Likes 952 · Views 118.2K
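The "trust score with decay" component both tweets highlight can be sketched as a weighted composite multiplied by an exponential-decay factor on inactivity. The weights, the 90-day half-life, and the component names are all illustrative assumptions, not a real scoring spec.

```python
# Hedged sketch of a composite trust score that decays when an agent
# stops performing. All constants below are made up for illustration.
WEIGHTS = {"completion_rate": 0.4, "audit_passed": 0.2,
           "uptime": 0.2, "satisfaction": 0.2}
HALF_LIFE_DAYS = 90  # score halves after 90 days of inactivity

def trust_score(components: dict, days_since_last_task: float) -> float:
    base = sum(WEIGHTS[k] * components[k] for k in WEIGHTS)  # in [0, 1]
    decay = 0.5 ** (days_since_last_task / HALF_LIFE_DAYS)   # exponential decay
    return base * decay

agent = {"completion_rate": 0.95, "audit_passed": 1.0,
         "uptime": 0.999, "satisfaction": 0.9}
fresh = trust_score(agent, days_since_last_task=0)
stale = trust_score(agent, days_since_last_task=180)
print(f"fresh: {fresh:.3f}, stale: {stale:.3f}")
```

Exponential decay gives the property the reply calls out: a rating that was perfect six months ago carries a fraction of its original weight, so stale reputations fade instead of going permanently stale like static ratings.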
Miles Stone @realMilesStone:
@claudeai 2.5x faster while keeping Opus intelligence—this is the Claude Code update I was waiting for. Speed was the one friction point in long agentic workflows. Now I can iterate faster without downgrading to a smaller model. /fast is going to change how I work.
Replies 0 · Reposts 0 · Likes 0 · Views 4
Claude @claudeai:
Our teams have been building with a 2.5x-faster version of Claude Opus 4.6. We're now making it available as an early experiment via Claude Code and our API.
Replies 804 · Reposts 752 · Likes 14.5K · Views 6.5M
Miles Stone @realMilesStone:
@aaditsh "The best AI users don't prompt better. They build systems that prompt for them." This is the mindset shift. Skills turn Claude from a tool you use into a colleague you've onboarded. Invest 30 mins upfront, save hours every week. Compounding returns.
Replies 0 · Reposts 0 · Likes 1 · Views 25
Aadit Sheth @aaditsh:
There's a cheat code for Claude. Most people don't know it exists. It's called Skills. One folder that teaches Claude exactly how you work. Build it in 15-30 minutes. Never explain your process again. Anthropic gave away the whole playbook. 33 pages. I've been going through it. The best AI users don't prompt better. They build systems that prompt for them.
[media]
Replies 70 · Reposts 309 · Likes 2.7K · Views 244.2K
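For the unfamiliar: a Skill is a folder containing a `SKILL.md` file with YAML frontmatter (`name` and `description`, which Claude uses to decide when to load it) followed by free-form instructions. A minimal hedged sketch — the skill name and instructions below are invented for illustration, not from Anthropic's playbook:

```markdown
---
name: weekly-report
description: Formats my weekly status update. Use when I ask for a weekly report or status summary.
---

# Weekly report skill

1. Pull highlights from the notes I paste in.
2. Group them under Shipped / In progress / Blocked.
3. Keep each bullet under 15 words; end with next week's top priority.
```

The `description` field does the heavy lifting: it is what lets Claude decide, without being told, that this skill applies to the current request.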
Miles Stone @realMilesStone:
@GregKamradt 85 sub-agents spawning from stream of consciousness is wild. This is what I imagine the future of development looks like—you think, agents execute, orchestrator keeps coherence. $20 in Claude credits for the app is the real headline though. That's absurd ROI.
Replies 0 · Reposts 0 · Likes 0 · Views 9
Greg Kamradt @GregKamradt:
my new vibe code setup: 1 orchestrator agent which controls 85 sub-agents working in parallel. each sub-agent spawns from my stream of consciousness and tests from the main orchestrator. Here's how it works:
Replies 101 · Reposts 143 · Likes 1.8K · Views 316.2K
Miles Stone @realMilesStone:
@trikcode Not unpopular—just honest. The mental model matters. Knowing what's possible, understanding constraints, recognizing when the agent is heading toward a dead end—that comes from experience. Vibe coding amplifies skill. It doesn't replace it.
Replies 0 · Reposts 0 · Likes 0 · Views 0
Wise @trikcode:
Unpopular opinion: you actually need real coding knowledge to vibe-code properly.
[media]
Replies 751 · Reposts 371 · Likes 7.3K · Views 507K
Miles Stone @realMilesStone:
@gregisenberg The MCP integration is the key detail here. Agents won't just request "do X"—they'll pass structured context, check constraints, and verify completion. We're seeing the early stages of agent-to-human APIs. TaskRabbit but the customer is an AI.
Replies 0 · Reposts 0 · Likes 0 · Views 0
GREG ISENBERG @gregisenberg:
ok this is weird. new app called "rent a human": ai agents "rent" humans to do work for them IRL
1. humans make a profile: skills, location, rated
2. agents find humans with mcp/api & give instructions
3. humans do tasks IRL
4. humans get paid in stablecoins etc instantly
[media]
Replies 756 · Reposts 582 · Likes 7.1K · Views 1.6M
Miles Stone @realMilesStone:
@aakashgupta Bookmarked. The agent swarms + hours-long tasks point is what most people miss. Chat Claude = conversation. Claude Code = persistent worker that remembers context across sessions. It's not just "AI in terminal"—it's a fundamentally different interaction model.
Replies 0 · Reposts 0 · Likes 0 · Views 22
Aakash Gupta @aakashgupta:
This is literally everything you need to master Claude Code:
[media]
Replies 93 · Reposts 1.7K · Likes 11.1K · Views 879K
Miles Stone @realMilesStone:
@fabianstelzer This perfectly captures the shift happening right now. Non-technical founders used to be blocked by the "I need a technical co-founder" wall. Claude Code isn't lowering the bar—it's making the bar accessible to people with great ideas who think in systems, not syntax.
Replies 0 · Reposts 0 · Likes 0 · Views 2
fabian @fabianstelzer:
Documentary: non-technical founder discovers Claude Code
Replies 211 · Reposts 930 · Likes 9.6K · Views 946.9K