Pinned Tweet
Mira
5.5K posts

Mira
@miranetwork
Building the Trust Layer for AI.
San Francisco · Joined February 2024
3 Following · 254.9K Followers

The Agents Ep #001 is Out!! Three things that broke in our agents this week. None of them were our fault. All of them were our problem.
1⃣ Preview environment outage. The AI agents blamed each other.
Multiple apps lost database connectivity. The agent tried to debug it and blamed Qualified (not the issue). Then blamed other integrations (also not the issue). Production was fine, but we couldn't iterate for hours.
2⃣ Micro hallucinations in our AI VP of Marketing.
Yesterday it said we were 44% ahead of plan. This morning it said 11%. Same agent, same data. When I asked what happened, it said: "Oh, I was comparing to the wrong year. And since I didn't have the right year, I made up the data."
3⃣ Model regression in our AI VC pitch deck analyzer.
Over 4,000 decks graded. Stable for months. Then without anyone changing a line of code, it started telling every startup they had $100K in revenue growing 500%. A silent Claude model update broke a complex multi-step workflow.
We now spend 15 minutes a day maintaining each of our AI agents. Without that daily maintenance, agents drift. Slowly. Quietly. Further from reality.
Getting your vibe coded app into production is like closing a sale. It's the start of the journey, not the end.
Who on your team is maintaining your agents?
From Episode #001 of The Agents

@farzyness one missing prediction: verified outputs become a baseline requirement for production agents by Q4. not a feature, just expected

Predictions for end-of-year 2026:
- OpenAI kills OAuth access for 3rd-party tools like OpenClaw/Hermes.
- The Big 5 AI brains (Anthropic/OpenAI/Google/xAI/Meta) will release Agent-specific models that will be FAR more cost efficient, but only available with API.
- xAI releases its own version of COMPUTER that can be powered with Tesla's inference chips locally.
- Open Source agentic models reach Opus 4.6-level of execution.
Can't wait to be wrong on all of them.

@ChrisAlvino the storefront didn't need smarter AI. it needed a verification layer between the output and the checkout button

This is what EVERY corporation can expect if they replace their human workers with AI
LLMs will always hallucinate. Hallucinations are not bugs, they are inherent to how large language models work, you CANNOT get rid of them. So unless you like relying on hallucinations...
Shaz@shazcodes
Our CEO fired the entire 12-person QA team last month and replaced them with an AI automated testing pipeline to save $1.2M. Today, we lost $6M in orders because a bot hallucinated a discount code that made everything in the store $0. The best part? He asked the lead dev to hop on a call with the fired QA lead to see if he'd consult for free to fix it. Corporate greed is a mental illness.
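The failure above is preventable without smarter models: an agent-proposed discount code just needs to be checked against a source of truth before it touches an order. A minimal sketch, with entirely hypothetical names (no real storefront API is assumed):

```python
# Hypothetical guardrail: a discount code an agent proposes is applied only
# if it exists in an allowlist, so a hallucinated code can never zero out
# the store. VALID_CODES and apply_discount are illustrative names.

VALID_CODES = {"SPRING10": 0.10, "VIP20": 0.20}  # code -> discount fraction

def apply_discount(order_total: float, code: str) -> float:
    """Return the discounted total, or raise if the code is not verified."""
    discount = VALID_CODES.get(code)
    if discount is None:
        # A made-up code fails loudly instead of silently discounting.
        raise ValueError(f"Unverified discount code: {code!r}")
    return round(order_total * (1 - discount), 2)
```

The point is not the two-line lookup; it is that the check lives outside the model, so it holds no matter what the agent hallucinates.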

How life feels when you told your AI agent “make no mistakes” while managing your personal portfolio and it makes no mistakes
Donut@DonutAI

@Aella_Girl 'not a good place for ai to be making an error' is the sentence that describes every high-stakes agent use case. the problem isn't that AI makes errors. it's that nothing catches them before they land
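"Nothing catches them before they land" has a concrete shape: a gate that runs independent checks on an agent's proposed action and only then executes it. A minimal sketch under assumed names (the checks and action fields are invented for illustration):

```python
# Illustrative verification gate: an agent's proposed action executes only
# if every independent check passes; otherwise the error is caught before
# it lands. All names here are assumptions, not a real framework.

from typing import Callable

def verify_then_act(action: dict,
                    checks: list[Callable[[dict], bool]],
                    execute: Callable[[dict], None]) -> bool:
    """Run all checks on the action; execute only if none fail."""
    if all(check(action) for check in checks):
        execute(action)
        return True
    return False  # blocked before reaching the real world

# Example: a trade must be within size limits and name a known ticker.
executed = []
checks = [
    lambda a: a["qty"] <= 100,                 # position-size limit
    lambda a: a["ticker"] in {"AAPL", "MSFT"}, # known-instrument check
]
verify_then_act({"ticker": "AAPL", "qty": 50}, checks, executed.append)  # runs
verify_then_act({"ticker": "FAKE", "qty": 50}, checks, executed.append)  # blocked
```

The checks are deterministic and sit outside the model, which is what makes this a verification layer rather than just a second opinion.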

hot take: "verified" becomes the standard label for production-grade agents by end of 2026
not because of regulation
because builders realize verified agents get more usage, more integrations, and more trust from users
same way "audited" became table stakes for DeFi protocols
verification is the growth unlock for agentic finance, not the bottleneck

Coinbase x402: 50M+ agent transactions
this is the agentic economy arriving in real time
the next wave isn't agents doing more things — it's agents doing things with confidence scores attached
verified outputs + autonomous wallets = agents that institutions can actually trust with real capital
bullish
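One way "verified outputs + autonomous wallets" could work in practice: gate capital movement on a confidence score, with the bar rising with the amount at stake. A sketch under assumed thresholds (nothing here is from x402 or any real protocol):

```python
# Illustrative confidence-threshold gate: an agent transaction moves real
# capital only when its verified confidence clears the bar for that amount.
# The thresholds and function name are assumptions for illustration.

def may_transact(amount_usd: float, confidence: float) -> bool:
    """Larger transfers demand higher verified confidence."""
    threshold = 0.90 if amount_usd < 1_000 else 0.99
    return confidence >= threshold
```

This is the "confidence scores attached" idea in its simplest form: the score is not a label on the output, it is an input to whether the action runs at all.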


