Most AI streams fail the moment your user switches apps or walks into an elevator.
Standard HTTP streams are fragile: if the connection drops mid-generation, the response is gone and you pay to start over.
We built a better way for production AI apps. 🧵
Introducing Realtime v2 streams.
Unlike a standard fetch stream, these are durable subscriptions.
If a client disconnects (network drop, tab close), the AI task keeps running in the background. Trigger buffers the tokens.
When the client reconnects? We automatically flush the buffered tokens and resume the stream exactly where it left off.
No "Network Error" toasts. No wasted compute.
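The core idea is simple enough to sketch. Here's a toy version of the durable-buffer pattern (names are illustrative, not the real implementation): the server appends every token to a buffer keyed by run, whether or not anyone is listening, and a reconnecting client asks for everything after the last index it saw.

```typescript
// Toy sketch of a durable token buffer. The real service persists this
// server-side per run; this just shows the cursor-based resume semantics.
class TokenBuffer {
  private tokens: string[] = [];

  // Called as the model emits tokens, regardless of who is connected.
  append(token: string): number {
    this.tokens.push(token);
    return this.tokens.length - 1; // the index doubles as a resume cursor
  }

  // On reconnect, flush everything the client missed since its cursor.
  readFrom(cursor: number): string[] {
    return this.tokens.slice(cursor);
  }
}

// Simulated disconnect: the client saw tokens 0 and 1, then dropped.
const buffer = new TokenBuffer();
["Hello", " ", "world", "!"].forEach((t) => buffer.append(t));
console.log(buffer.readFrom(2).join("")); // "world!"
```

Because the generation writes to the buffer, not to the socket, a dropped client costs nothing: the run keeps going and the buffer keeps filling.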
It works seamlessly with the Vercel AI SDK.
The code is simple. You never touch WebSockets yourself.
Backend:
`return stream.toResponse()`
Frontend (React):
`useRealtimeRun(runId)`
It handles the reconnection logic for you.
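A rough mental model of what that reconnection logic does under the hood (a sketch with hypothetical names, not the actual client): keep a cursor of the last token received, and on any failure, resubscribe from the cursor instead of restarting the stream.

```typescript
// Sketch: consume a token stream, and on a dropped connection,
// resubscribe from the last-seen cursor rather than from zero.
type Subscribe = (cursor: number) => AsyncIterable<string>;

async function consumeWithResume(subscribe: Subscribe): Promise<string> {
  let cursor = 0;
  let text = "";
  while (true) {
    try {
      for await (const token of subscribe(cursor)) {
        text += token;
        cursor += 1; // advance only past tokens we actually received
      }
      return text; // stream completed normally
    } catch {
      // connection dropped: loop and resubscribe from the cursor
    }
  }
}

// Demo: a flaky source that fails once midway, then serves from the cursor.
const tokens = ["AI ", "streams ", "that ", "survive ", "drops"];
let failedOnce = false;
const flaky: Subscribe = async function* (cursor) {
  for (let i = cursor; i < tokens.length; i++) {
    if (!failedOnce && i === 2) {
      failedOnce = true;
      throw new Error("network drop");
    }
    yield tokens[i];
  }
};

consumeWithResume(flaky).then((text) => console.log(text));
```

The hook wraps this loop (plus auth and subscription state) so your component just renders whatever text has arrived so far.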