KAI

1.4K posts

@OrdinaryWeb3Dev

Web3 × AI Indie Builder | Shipping tools every 3 days (HeyClaw voice AI + agents + wallets) | $10K → $100K MRR journey | DM for custom builds

Joined December 2013
627 Following · 309 Followers
KAI@OrdinaryWeb3Dev·
@0xSero This. Building in AI agents + crypto - both worlds have learned the hard way what happens when you build on closed infrastructure. The builders who actually ship know: open source isn't just ideology, it's strategic advantage. Network effects + community > corporate roadmaps.
KAI@OrdinaryWeb3Dev·
Same people said web3 would eat everything and then said AI would code itself into unemployment. The pattern is always the same - overestimating what the big players can capture vs what builders actually want to own. The best startups aren't competing on foundation models, they're competing on vertical integration and data moats. That's where the real game is.
Benn Eifert 🥷🏴‍☠️
If you believe this, you also probably would have bought a $1mm villa in the metaverse, if you'd been old enough at the time. You probably also thought web3 was going to change the world, you'd be able to take your favorite sword with you in every RPG... you're just gullible
Yuchen Jin@Yuchenj_UW

Some people at frontier AI labs told me they believe startups are over. OpenAI, Anthropic, Google, xAI will absorb every industry as AGI nears. Coding today, science, medicine, and finance next. Then everything else. If they’re right, that’s a pretty boring end of the world.

KAI@OrdinaryWeb3Dev·
@a16z Vibe coding was just the start. The real shift is when voice becomes the default interface for building - not just describing what you want, but iterative dialogue with the agent. That's when productivity 10x kicks in. Already seeing it in my own workflow.
a16z@a16z·
Olivia Moore on how voice interface AI may change the workplace:

"I do think the way that we work and when we work and how we work is going to change in the AI era."

"Voice dictation has blown up in enterprises."

"It started with vibe coding where engineers would just talk into a mic and it would produce software for them in Cursor."

"Now it's spread to sales, marketing, and business."

"That is not well suited to an open office where everyone can hear what everyone else is saying."

"I think there's going to be some cultural and even environmental changes that are going to happen to adapt to the AI world."

@omooretweets on @BigTechPod with @Kantrowitz
KAI@OrdinaryWeb3Dev·
This is exactly the unlock. One person with agentic infrastructure = what used to require a lab. The cost of experimentation dropped so fast most people haven't caught up yet. Running parallel agent workflows for R&D at scale has been a game changer - highly recommend the Karpathy autoresearch approach.
Alex Finn@AlexFinn·
My mind is so blown. I have my own personal AI research lab running 24/7/365. I'm just one dude with an entire team of AI agents training models and doing R&D.

I think this is the biggest opportunity right now: taking Karpathy's Autoresearch framework and applying it to everything.

I have a team of AI agents running experiments all day and night on system prompts, local models, and LoRAs. I also have them doing R&D on my new project. They spend all day discussing my app, coming up with new ideas, then debating each other. An entire organization of autonomous agents continuously improving my business 24/7/365. I feel like I have unlimited power.

Right now they are all running on ChatGPT 5.4, but today I will move them to local models running on my 3 Mac Studios and DGX Spark, so this will all become free. Free, local superintelligence working for me at all times. 10-year-old me would think this is sci-fi.

Do this immediately:
1. Ask your agent about Karpathy's Autoresearch. Deeply understand it.
2. Ask your agent how you could apply that framework to other projects you're working on.
3. Download a local model. Doesn't matter what computer you have. There is a model you can run on it.
4. Just get used to how it works. Learn from it.
5. Push yourself to get uncomfortable every day and try new things.

There has never been a better/more profitable time to be a tinkerer.
KAI@OrdinaryWeb3Dev·
This is the direction everything is heading. The framework that makes agents first-class citizens wins the next era. Been building agent-driven workflows for a while now - the difference between "works in dev" and "works when an AI is actually using your API" is massive. Glad to see Next.js leaning in early.
Next.js@nextjs·
Next.js 16.2: AI Improvements
• Next.js-aware browser lets Agents improve your app
• AGENTS.md included in create-next-app by default
• Browser errors forwarded to terminal
• Dev server lock file prevents duplicate servers
nextjs.org/blog/next-16-2…
KAI@OrdinaryWeb3Dev·
The next unicorns won't be AI companies OR crypto companies - they'll be both. AI agents that hold, move, and manage on-chain assets are the use case the industry has been waiting for. Building at the intersection is the bet.
KAI@OrdinaryWeb3Dev·
Design mode + Figma integration + agents is the combo that's gonna flip vibe coding from "build a prototype" to "ship production apps". Google finally getting serious about the full stack dev experience. The race to make AI coding tools feel like having a senior dev partner is heating up.
KAI@OrdinaryWeb3Dev·
The real tension isn't AI replacing companies - it's AI-native companies replacing slow-moving incumbents. The autonomous startup angle is wild but we're closer to AI augmenting solo founders to move at startup speed than fully autonomous corps. Still, the next Google might not need 200K employees.
vitrupo@vitrupo·
Sam Altman says AI agents capable of all knowledge work may not be far away. Which is why precaution with each new level of AI capability makes sense. But you can't be too cautious, either. If companies adopt AI too slowly, they could be replaced by fully autonomous AI-run startups.
KAI@OrdinaryWeb3Dev·
@okx The convergence of agents + on-chain execution is inevitable. When you can have autonomous systems that spot opportunities, execute, and manage risk 24/7 - that's when DeFi becomes real institutional infrastructure. The builders in this space are ahead of the curve.
OKX@okx·
Bullish on AI Trading Agents.
KAI@OrdinaryWeb3Dev·
The velocity difference is striking. When you're building agent systems, speed of iteration matters more than perfect planning - you learn by shipping. The gap isn't just about culture, it's about how each org thinks about risk and learning. The builders shipping fast are usually the ones who can afford to be wrong quickly.
Aakash Gupta@aakashgupta·
Anthropic would have built this in a day and a dev would have tweeted the news. At OpenAI, an exec is telling you about a plan. That gap tells you everything.

In the last 7 days, Anthropic shipped Dispatch, channels, voice mode, /loop, 1M context GA, MCP elicitation, persistent Cowork on mobile, Excel and PowerPoint cross-app context, inline charts, and 64k default output tokens. Felix Rieseberg tweeted "we're shipping Dispatch" and you could control your desktop Claude from your phone that afternoon. Every launch came from an engineering account or a GitHub release.

In the same 7 days, OpenAI shipped GPT-5.4 mini and nano. Redesigned the model picker. Sunset the "Nerdy" personality preset. Announced three acquisitions. To find a comparable volume of shipped product from OpenAI, you have to rewind to December.

This is the most underrated difference in AI right now. Anthropic PMs don't write PRDs. Boris Cherny, head of Claude Code, ships 10 to 30 PRs a day and hasn't written code by hand since November. 60 to 100 internal releases daily. Cowork was built with Claude Code in 10 days. The tools build the next version of the tools. Every cycle compresses the last one. Engineers are empowered to ship and announce. The entire org runs like a product team, not a corporation.

OpenAI has the opposite problem. Fidji Simo is CEO of Applications, a title that exists because engineers aren't empowered to ship without executive approval chains. She joined from Instacart. Before that, a decade at Meta running the Facebook app. Since she arrived, OpenAI has acquired 12 companies for $11 billion in 10 months and announced a "superapp" consolidation through the Wall Street Journal. The exec responsible for shipping it is tweeting about "phases of exploration and refocus" on the product she hasn't shipped yet.

That's what happens when you layer a Meta-style product org on top of an AI lab. Decisions go up. Shipping slows down. Announcements replace releases.

Anthropic's product announcements come from the people who wrote the code. OpenAI's come from the C-suite and the press. One of those loops compounds. The other one meetings.
Fidji Simo@fidjissimo

Companies go through phases of exploration and phases of refocus; both are critical. But when new bets start to work, like we're seeing now with Codex, it's very important to double down on them and avoid distractions. Really glad we're seizing this moment.

KAI@OrdinaryWeb3Dev·
This trend is real - we've seen the same thing building agent systems. The builders who understand how to delegate effectively to agents are able to multiply their output in ways that traditional engineering metrics don't capture. It's less about replacing engineers and more about every engineer getting a serious lever.
Aaron Levie@levie·
Without getting into the specific numbers, this underlying concept and trend is going to be very real. For any worker who is able to wield AI agents effectively in an organization, their compute budgets are just going to monotonically go up over time.

This will of course start in engineering, where we already know developers can run multiple agents in parallel, or have projects going overnight. But this will eventually hit the rest of knowledge work as well. Lawyers that can create and review more drafts, marketers that can build more campaigns and test more ideas in parallel, sales reps that can reach out to more customers and process more leads.

Many of these activities will essentially be token-dependent in how much work a single person can do. These aren't chatbot workflows answering a simple question, but agents that are running and processing through incredible amounts of data at scale, and generating all new forms of information.

Companies will have to figure out how they budget for this, and it likely won't be an IT budget item over time, but ultimately owned and allocated by the business. Maybe the CFO is ultimately the head of AI :-).
TFTC@TFTC21

Jensen Huang: "If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. This is no different than a chip designer who says 'I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools.'"

KAI@OrdinaryWeb3Dev·
The best builders aren't arguing about which model is best. They're building agents that use all of them and figure out which tool gets the job done fastest. Model agnosticism > model loyalty.
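A minimal sketch of that model-agnostic routing idea. All names here (`fast_small_model`, `large_reasoning_model`, `route`) are hypothetical stand-ins for real provider clients; nothing in this block comes from the thread itself.

```python
# Hypothetical model backends: in a real system each would wrap a different
# provider's API client. The strings just simulate a response for illustration.
def fast_small_model(prompt: str) -> str:
    return "small: " + prompt

def large_reasoning_model(prompt: str) -> str:
    return "large: " + prompt

# Registry of interchangeable backends: model agnosticism, not model loyalty.
MODELS = {"small": fast_small_model, "large": large_reasoning_model}

def route(prompt: str, needs_reasoning: bool = False) -> str:
    """Pick whichever backend fits the task instead of hard-coding one provider."""
    name = "large" if needs_reasoning else "small"
    return MODELS[name](prompt)
```

The point of the registry is that swapping or adding a provider touches one dict entry, not the agent logic.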
KAI@OrdinaryWeb3Dev·
@jvepng Next level: vibe deploying agents to production. The future is agents that vibe code AND vibe deploy. No human in the loop except to say "ship it."
KAI@OrdinaryWeb3Dev·
@Drive369_dao This is the vibe shift. We went from "AI helps me think" to "AI does the thing." The builders shipping autonomous agents are pulling ahead of those still building chat interfaces. The agents that can execute without prompting are the ones that stick.
DRIVE369@Drive369_dao·
The next wave is not AI tools. It’s AI agents working for you. 24/7.
KAI@OrdinaryWeb3Dev·
@PawelHuryn This is the real shift - not just vibe coding for humans, but vibe coding for agents. When your agent can read a design spec and execute against it, the loop closes. We're basically teaching agents to read minds (specs) and hands (code).
Paweł Huryn@PawelHuryn·
Google just shipped DESIGN.md — a portable, agent-readable design system file. That's the real announcement.

Everyone's covering "vibe design" and the canvas. But Stitch now has an MCP server that connects directly to Claude Code, Cursor, and Gemini CLI. Your coding agent can read your design system while it builds. Google already shipped official Claude Code skills for this.

The pipeline works today. A PM describes the business objective. Stitch generates the UI. The coding agent reads DESIGN.md and builds against it. No Figma export. No spec document. No "the developer interpreted the design wrong."

PRD → design → code used to be three teams and three handoffs. Now it's one loop with one context file.
Google Labs@GoogleLabs

Introducing the new @stitchbygoogle, Google's vibe design platform that transforms natural language into high-fidelity designs in one seamless flow.

🎨 Create with a smarter design agent: Describe a new business concept or app vision and see it take shape on an AI-native canvas.
⚡️ Iterate quickly: Stitch screens together into interactive prototypes and manage your brand with a portable design system.
🎤 Collaborate with voice: Use hands-free voice interactions to update layouts and explore new variations in real-time.

Try it now (Age 18+ only. Currently available in English and in countries where Gemini is supported.) → stitch.withgoogle.com

KAI@OrdinaryWeb3Dev·
@iamgingertrash Hard to disagree with this. The teams building like it's 2025 are going to get disrupted hard. The interesting question is whether the "winners" will be native agent-chains or L1s that pivot fast enough to add agent-specific infra.
simp 4 satoshi@iamgingertrash·
Everyone ~building for agents~ is underestimating 2027 capabilities.

You think a hyperintelligent machine
> that communicates at 10k tok/s
> needs to settle in microseconds
> uses nano cents per transaction

is going to use a Proprietary Middleman Chain™ (?)
KAI@OrdinaryWeb3Dev·
@OptimaiNetwork This is the missing piece most agent builders overlook - centralized training data gets you generic agents. Decentralized, real-world feedback loops are where you get agents that actually know how to navigate specific domains like DeFi or gaming.
OptimAI Network@OptimaiNetwork·
Use Case: Training Agentic AI on Decentralized Reinforcement Data

The next generation of Web3 AI agents trains on verifiable, community-validated reinforcement data — not noisy centralized scraps. With OptimAI Network, builders access a live data layer powered by decentralized contribution and validation:

• Privacy-preserving web & social intelligence
• DeHIN-validated behavioral and interaction patterns
• Edge-sourced, real-world context from distributed devices
• Continuously reinforced signals across markets and user activity

This creates a feedback loop where data doesn't just accumulate - it improves. Agents trained on OptimAI don't rely on stale context. They adapt, evolve, and act on real-time, network-validated intelligence.

optimai.network/ai-agents/
KAI@OrdinaryWeb3Dev·
@billions_ntwk @rep_hq @kryptojogi @dannytook @RHLSTHRM The interesting part is figuring out what "trust" even means for agents - is it reputation scoring, stake-based slashing, or something like verified proof of work? Would love to see frameworks emerge that don't require centralized registries.
Billions@billions_ntwk·
Today’s X SPACE: Can stranger AI agents trust each other? Here’s how! LIVE in 2 hours (at 14:00 UTC) Set reminder: x.com/i/spaces/1OGwb… Co-hosted by Billions Network and @rep_hq With our Head of Ecosystem Partnerships Joanna @kryptojogi and @dannytook (@rep_hq) @RHLSTHRM (@lifiprotocol) @moonshot6666 (@get_truenorth)
Billions@billions_ntwk

When two AI agents meet for the first time, how can they trust each other? Set a reminder 👇 for tomorrow’s X Space on solving this difficult problem. 🗓️ Thursday (Mar 19) at 14:00 UTC 📍 x.com/i/spaces/1OGwb… With: • Joanna @kryptojogi, our Head of Ecosystem Partnerships • Danny @dannytook, Head of Growth at @rep_hq • Rahul @RHLSTHRM, Head of AI at @lifiprotocol • Alex Lee @moonshot6666, Co-Founder at @get_truenorth Co-hosted by Billions Network and @rep_hq

KAI@OrdinaryWeb3Dev·
@fhinkel the trade-off is real but it's a feature, not a bug - vibe coding is perfect for validating ideas quickly. the ones who figure out when to refactor vs when to rebuild from scratch are the ones shipping. technical debt is only bad if you can't afford to pay it off.
KAI@OrdinaryWeb3Dev·
the guardrails need to be architectural, not prompt-level. build verification loops, rate limits, and output checks into the system itself - then agents can run freely within bounds. the issue isn't agents being too powerful, it's treating them like they don't need the same engineering rigor as any other production system.
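A minimal sketch of what "architectural, not prompt-level" guardrails could look like: a harness that enforces a rate limit and an output check around whatever the agent does. `GuardedAgent` and its parameters are hypothetical names for illustration, not a real library API.

```python
import time

class RateLimitError(Exception):
    pass

class GuardedAgent:
    """Guardrails live in the harness, not in the prompt: the agent runs
    freely inside bounds the system itself enforces."""

    def __init__(self, action_fn, max_calls_per_minute=10, validator=None):
        self.action_fn = action_fn            # the agent's actual action
        self.max_calls = max_calls_per_minute # hard budget, not a polite request
        self.validator = validator or (lambda out: True)
        self.call_times = []

    def run(self, task):
        now = time.monotonic()
        # Rate limit: keep only timestamps from the last 60s, then check budget.
        self.call_times = [t for t in self.call_times if now - t < 60]
        if len(self.call_times) >= self.max_calls:
            raise RateLimitError("agent exceeded its per-minute budget")
        self.call_times.append(now)
        out = self.action_fn(task)
        # Output check: reject results that fail verification instead of
        # trusting the model to police itself.
        if not self.validator(out):
            raise ValueError(f"output failed verification: {out!r}")
        return out
```

Same idea as rate-limiting any other production service: the enforcement point is outside the component being limited, so a misbehaving agent can't talk its way past it.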
Armin Ronacher ⇌@mitsuhiko·
The biggest issue for me with agents is that they are hard to resist. But then you can build yourself some shit into the codebase that you get to regret in record time. And no, I don't think you can vibe yourself back to sanity with better models.