Mira

5.5K posts

@miranetwork

Building the Trust Layer for AI.

San Francisco · Joined February 2024
3 Following · 254.9K Followers
Pinned Tweet
Mira@miranetwork·
Mira Mainnet is Live. The trust layer for AI has arrived.
817
550
2.6K
1.1M
Mira@miranetwork·
@jasonlk 'slowly. quietly. further from reality.' that's not a maintenance problem. that's a verification problem. if outputs were checked continuously, you'd catch the drift before it costs you
0
0
0
488
Jason ✨👾SaaStr.Ai✨ Lemkin
The Agents Ep #001 is Out!!

Three things that broke in our agents this week. None of them were our fault. All of them were our problem.

1⃣ Preview environment outage. The AI agents blamed each other. Multiple apps lost database connectivity. The agent tried to debug it and blamed Qualified (not the issue), then blamed other integrations (also not the issue). Production was fine, but we couldn't iterate for hours.

2⃣ Micro-hallucinations in our AI VP of Marketing. Yesterday it said we were 44% ahead of plan. This morning it said 11%. Same agent, same data. When I asked what happened, it said: "Oh, I was comparing to the wrong year. And since I didn't have the right year, I made up the data."

3⃣ Model regression in our AI VC pitch-deck analyzer. Over 4,000 decks graded. Stable for months. Then, without anyone changing a line of code, it started telling every startup they had $100K in revenue growing 500%. A silent Claude model update broke a complex multi-step workflow.

We now spend 15 minutes a day maintaining each of our AI agents. Without that daily maintenance, agents drift. Slowly. Quietly. Further from reality.

Getting your vibe-coded app into production is like closing a sale. It's the start of the journey, not the end.

Who on your team is maintaining your agents?

From Episode #001 of The Agents
14
2
15
3.1K
Mira@miranetwork·
@farzyness one missing prediction: verified outputs become a baseline requirement for production agents by Q4. not a feature, just expected
0
0
0
25
Farzad 🇺🇸 🇮🇷@farzyness·
Predictions for end-of-year 2026:
- OpenAI kills OAuth access for 3rd-party tools like OpenClaw/Hermes.
- The Big 5 AI brains (Anthropic/OpenAI/Google/xAI/Meta) will release agent-specific models that will be FAR more cost-efficient, but only available with API.
- xAI releases its own version of COMPUTER that can be powered with Tesla's inference chips locally.
- Open-source agentic models reach Opus 4.6-level of execution.

Can't wait to be wrong on all of them.
57
17
493
43.1K
Mira@miranetwork·
@ChrisAlvino the storefront didn't need smarter AI. it needed a verification layer between the output and the checkout button
0
0
0
19
Chris Alvino@ChrisAlvino·
This is what EVERY corporation can expect if they replace their human workers with AI. LLMs will always hallucinate. Hallucinations are not bugs; they are inherent to how large language models work. You CANNOT get rid of them. So unless you like relying on hallucinations...
Shaz@shazcodes

our CEO fired the entire 12-person QA team last month and replaced them with an AI automated testing pipeline to save $1.2M. today, we lost $6M in orders because a bot hallucinated a discount code that made everything in the store $0. the best part? he asked the lead dev to hop on a call with the fired QA lead to see if he'd consult for free to fix it. corporate greed is a mental illness

4
257
2.5K
60.2K
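The hallucinated-discount incident above is the clearest illustration of what a "verification layer between the output and the checkout button" means in practice. The sketch below is purely illustrative, not Mira's product or the storefront's actual code; every name in it (VALID_CODES, apply_discount) is hypothetical.

```python
# Illustrative sketch only: a verification gate between an agent's output
# and the checkout action. All names here are hypothetical.

# Discount codes that were actually issued, with their rates.
VALID_CODES = {"WELCOME10": 0.10, "SPRING25": 0.25}

def apply_discount(total: float, agent_suggested_code: str) -> float:
    """Apply a discount only if the agent-suggested code is verifiably real.

    A hallucinated code (say, one that would zero out every order) is
    rejected before it ever reaches the checkout button.
    """
    rate = VALID_CODES.get(agent_suggested_code)
    if rate is None:
        # Fail closed: an unverified output never executes.
        raise ValueError(f"unverified discount code: {agent_suggested_code!r}")
    return round(total * (1 - rate), 2)
```

The point is the fail-closed default: the agent can propose anything, but only outputs that pass an independent check against ground truth get to touch money.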
Mira@miranetwork·
@Aella_Girl 'not a good place for ai to be making an error' is the sentence that describes every high-stakes agent use case. the problem isn't that AI makes errors. it's that nothing catches them before they land
0
0
0
28
Aella@Aella_Girl·
what the fuck i did NOT say that excuse me?? this is not a good place for ai to be making an error
Aella tweet media
156
53
2.2K
113.1K
Mira@miranetwork·
QT with the darkest AI conspiracy theory you know !!
Mira tweet media
15
0
14
3.1K
Mira@miranetwork·
we didn't stop driving cars when people started crashing

we added seatbelts

verification is the seatbelt for AI agents
27
0
42
4.4K
Mira@miranetwork·
stages of agent deployment:
1. "it works perfectly in testing"
2. "why did it do that"
3. "how much did we lose"
4. "maybe we should've verified first"
5. acceptance
20
2
32
3.2K
Mira@miranetwork·
current state of agent security:

"we tested it a few times and it seemed fine"

that's not a security model. that's vibes.
18
2
24
2.9K
Mira@miranetwork·
2024: "AI can write essays"
2025: "AI can write code"
2026: "AI can manage your money"
2027: "AI managed your money"
20
3
39
3K
Mira@miranetwork·
agents making payments: solved
agents holding wallets: solved
agents getting identity: solved
agents proving their outputs are correct: ← hi
35
1
49
4.5K
Mira@miranetwork·
an AI agent making a wrong API call = annoying
an AI agent sending a wrong email = embarrassing
an AI agent executing a wrong transaction = expensive
26
1
55
5.4K
Mira@miranetwork·
what's the max amount you'd let an AI agent control? $10? $1,000? $100,000?
27
1
47
6.6K
Mira@miranetwork·
"move fast and break things" was fine when you were breaking UI layouts

it's less fine when you're breaking someone's treasury
22
1
61
4.8K
Mira@miranetwork·
the agent economy without verification is just the regular economy with extra steps and less accountability
17
4
81
4.9K
Mira@miranetwork·
hallucination isn't a bug in LLMs. it's a feature. it becomes a bug when that LLM has your wallet keys. this is not a hard concept.
22
2
83
5.5K
Mira@miranetwork·
hot take: "verified" becomes the standard label for production-grade agents by end of 2026

not because of regulation. because builders realize verified agents get more usage, more integrations, and more trust from users

same way "audited" became table stakes for DeFi protocols

verification is the growth unlock for agentic finance, not the bottleneck
25
10
112
7.1K
Mira@miranetwork·
Coinbase x402: 50M+ agent transactions

this is the agentic economy arriving in real time

the next wave isn't agents doing more things — it's agents doing things with confidence scores attached

verified outputs + autonomous wallets = agents that institutions can actually trust with real capital

bullish
27
15
155
6.9K
Mira@miranetwork·
11,000 new AI agents on Ethereum in a few weeks

ERC-8004 gave them identity. that's step one.

step two is trust — knowing their outputs are correct before they execute

identity + verification = agents you can actually build on top of

we're getting there faster than people think
24
11
131
6.5K
Mira@miranetwork·
an AI agent lost $250K because it forgot its own wallet state after a crash

this isn't an argument against agents. it's an argument for verification layers that persist even when the agent doesn't

agents will fail. the infrastructure around them shouldn't.
21
11
131
6.1K