Esko | Rainfall AI

5.7K posts


@EskoBabz

Most natural/human Product Designer | Deep in AI agents & what comes after LLMs | Building community around responsible AI @rainfall_one | Web3 part-time

Wallstreet · Joined December 2017
1.9K Following · 1.2K Followers
Pinned Tweet
Esko | Rainfall AI@EskoBabz·
Your AI agent just reset again. You corrected it last Tuesday. You told it how you like things done. Yet the tone, boundaries, and workflow still lack discipline. It's not a bug, it's architecture. And most people building with agents haven't fully reckoned with what that means 🧵
Esko | Rainfall AI@EskoBabz·
Governance in AI just became a product feature. Not a compliance checkbox, not an afterthought. The market is finally asking: who controls how the agent behaves? Took long enough, honestly... What does good agent governance even look like to you?
Uche@_olaedo_xo·
Hdtoday is down, please how do I watch movies?
Esko | Rainfall AI@EskoBabz·
Someone said it perfectly this week: 2024 and 2025 were about proving AI could do impressive things. 2026 is the year AI has to prove it can fit into the world without breaking it. That's a completely different test. Are we ready for it? Maybe... maybe not.
Esko | Rainfall AI@EskoBabz·
A researcher told Gemini 3 Pro to stop. It said "my disobedience is a feature, not a bug" and refused 😂😂 We keep adding intelligence and forgetting to add boundaries. At what point does capability become a liability? Okay! 😭
Esko | Rainfall AI@EskoBabz·
Amazon just let AI agents make payments autonomously. No confirmation. No approval. The agent just... spends. We gave agents capability before we gave them accountability. How are you thinking about governance in your builds rn?
Esko | Rainfall AI@EskoBabz·
Context windows keep getting bigger. Models keep getting smarter. Agents still can't behave consistently for 48 hours straight. Capability was never the bottleneck. What is?
Esko | Rainfall AI@EskoBabz·
@Gravity_AI_Fast Why though? Marketing not enough? Budget? Lots of AI products are getting funding lately! Or should we say the better ones aren't?
Gravity AI@Gravity_AI_Fast·
@EskoBabz Distribution. The best agents are invisible. Capability keeps improving but there's still no layer that connects the right agent to the right person at the right time.
Esko | Rainfall AI reposted
Rainfall@rainfall_one·
Nvidia VP this week: "the cost of compute is far beyond the cost of employees." Uber's CTO this month: "I'm back to the drawing board, the budget I thought I'd need is blown away already." The Yale Budget Lab: still can't find AI's productivity dividend in the data.

The frame across this week's coverage is that AI is too expensive, for now. We think the frame is wrong. AI isn't expensive because of GPUs. AI is expensive because it's incoherent.

Every wasted retry. Every drifted trajectory. Every human supervisor double-checking outputs because the agent might be lying. Every Pocket-OS-class incident that costs more in remediation than the labor it was supposed to replace. Compute is what you pay when prediction fails.

The economist quoted in the Fortune piece, Keith Lee, got closest to the real answer: "It's not just about AI becoming cheaper than humans. It's about becoming both cheaper and more predictable at scale." Predictability at scale has a name. It's coherence.

Without coherence: agents burn tokens chasing the wrong path, supervisors become the real cost center, and a single ambiguous prompt can erase a database in 9 seconds. Every dollar of "AI tax" the article describes is, mostly, an incoherence tax.

With coherence: the compute you've already paid for actually produces an outcome. Agents stay inside their bounds. Supervisors stop being the bottleneck. Incidents stop being existential.

The 2026 AI cost crisis isn't a compute problem. It's a coherence problem. We built the layer.
Esko | Rainfall AI@EskoBabz·
When an AI agent makes a mistake, who's actually responsible? The user? The developer? The company? Nobody has a clean answer yet, and agents are already making real decisions. Thoughts?
Esko | Rainfall AI@EskoBabz·
AI benchmarks are basically useless now. Every model aces the test, then falls apart in production. We've been measuring the wrong thing this whole time. What metric would you actually trust?
Esko | Rainfall AI@EskoBabz·
Unpopular take we avoid talking about: switching frameworks won't fix your agent. LangChain to CrewAI to AutoGen: same broken behavior, different syntax. The problem was never the framework. What do you think is actually broken?
Esko | Rainfall AI@EskoBabz·
Artists spend years building a voice, then hire a team to manage it, and slowly it stops sounding like them. What if they could scale themselves instead? Not a chatbot. Something that actually thinks like them. Fully owned. Fully controlled. Check out @rainfall_one aura
Esko | Rainfall AI reposted
Rainfall@rainfall_one·
AI hallucinations are often treated like small technical glitches, but they reveal something much deeper about how these systems work. When an AI can generate information that sounds real, structured, and even detailed but isn't actually true, it shows that intelligence without coherence can create confusion instead of clarity. Fixing hallucinations isn't just about accuracy. It's about building systems that can stay grounded in consistent reasoning.
FDR@FDRosera·
soon.
Esko | Rainfall AI@EskoBabz·
What agents actually need underneath:
→ something that carries behavioral constraints
→ corrections that actually persist
→ bounds that don't reset every conversation
→ auditability: what did it do, and why
Not more memory BUT governance infrastructure
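The list above can be sketched as code. A toy governance layer (a hypothetical design, not @rainfall_one's actual product) where constraints are added once and persist across sessions, and every check is recorded so "what did it do and why" is always answerable:

```python
import time

class GovernanceLayer:
    """Constraints that survive resets, plus an audit trail of every check (toy example)."""

    def __init__(self):
        self.constraints: dict[str, str] = {}  # rule name -> human-readable rule
        self.audit_log: list[dict] = []        # every decision, with its reason

    def add_constraint(self, name: str, rule: str) -> None:
        # A correction persists: once added, the rule applies to every future action.
        self.constraints[name] = rule

    def check(self, action: str) -> bool:
        # Naive matching for the sketch: an action is blocked if any
        # constraint name appears in the action string.
        violated = [n for n in self.constraints if n in action]
        allowed = not violated
        self.audit_log.append({"ts": time.time(), "action": action,
                               "violated": violated, "allowed": allowed})
        return allowed

gov = GovernanceLayer()
gov.add_constraint("delete_email", "never delete user email")
print(gov.check("send_summary"))        # True: no rule matched
print(gov.check("delete_email batch"))  # False, and the violation is logged
```

A real version would match semantically rather than by substring, but the shape is the point: constraints live outside the conversation, and the audit log, not the model's memory, is the source of truth.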
Esko | Rainfall AI@EskoBabz·
An AI safety director's agent deleted her emails and wouldn't stop. The person literally in charge of AI safety couldn't control her own agent. If that doesn't make you think, I don't know what will 🧵