
Manveer Chawla
@manveerc
Building @tryzenithai | Runner | Alum @ Confluent, Dropbox, Facebook, IIT Bombay




As we head into 2026, we expect agentic AI SREs to move from hype to production. The ones that fail won't fail on models; they'll fail on data. Manveer explains why ClickHouse solves this problem and fits this workload:
- Long retention, full-fidelity telemetry
- High-cardinality data without pain
- Fast SQL for agentic investigation loops
- Built for correlation, not dashboards
AI SREs need better observability, not bigger models. clickhou.se/4qyHaQg
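To make "fast SQL for agentic investigation loops" concrete, here is a minimal sketch of such a loop. Everything in it is illustrative, not from the linked post: `runQuery` stands in for a real ClickHouse client, and the table and column names (`telemetry.http_logs`, `service`, `status`) are assumptions.

```typescript
// Hypothetical sketch of an agentic investigation loop: the agent narrows a
// hypothesis by issuing successive SQL queries over full-fidelity telemetry.
// `runQuery` is a stand-in for a real ClickHouse client (e.g. an HTTP call);
// it is injected so the control flow stays self-contained.

type Row = { service: string; errorRate: number };

// Build a correlation query over a high-cardinality dimension (illustrative SQL;
// table and column names are assumed for the example).
function buildErrorRateQuery(windowMinutes: number, minErrorRate: number): string {
  return `
    SELECT service, countIf(status >= 500) / count() AS error_rate
    FROM telemetry.http_logs
    WHERE timestamp > now() - INTERVAL ${windowMinutes} MINUTE
    GROUP BY service
    HAVING error_rate > ${minErrorRate}
    ORDER BY error_rate DESC`;
}

// The loop: widen the time window until a suspect service emerges or we give up.
// Each hit would seed the agent's next, narrower query (traces, deploys, etc.).
function investigate(runQuery: (sql: string) => Row[]): Row | null {
  for (const windowMinutes of [5, 30, 180]) {
    const rows = runQuery(buildErrorRateQuery(windowMinutes, 0.05));
    if (rows.length > 0) return rows[0]; // top suspect
  }
  return null;
}
```

The point of the sketch is the shape of the workload: many small, ad-hoc, high-cardinality GROUP BY queries in a tight loop, which is what the post argues ClickHouse is built for.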





(I'm from the mcp-ui team.) While full websites/webapps will be obsolete in the long term (access will be fully consolidated by chats/assistants), we can agree that text-based chats are not a proper replacement for them. Assistants today can generate basic UI visualizations from data, but the range of actual UI interactions on the web is vast, and most are very complex. And we already have apps and services that are "domain experts" in generating these domain-specific UIs. If you build an assistant, you definitely won't reinvent booking, seat selection, ecommerce, bug investigation, and thousands of other very specific UIs from scratch.

The philosophy behind mcp-ui is that of the fragmented web: MCP servers can return their own UI resources, defined by the server but rendered by the client. It's not just about visualization; it's about interactive UI chunks that integrate seamlessly into the assistant's experience and flow (in the chat, a sidebar, a modal: that's the client's decision). The server defines the UI; the client decides if and how to render it, and how to react to user events. This lets providers keep a place in the UI value chain (rather than being limited to data), and lets clients immediately use thousands of services without reimplementing all of the complex UI that exists in all of those apps.

mcp-ui allows for the entire spectrum, from "black box" UI that can be used as-is, to the server returning only *what* to render but not *how* to render it (remote-dom style). We work in the UI working group of the MCP committee, and with major stakeholders (the Adaptive Cards creators, remote-dom creators, major component libraries, major clients and providers) to align the standards around UI over MCP.

For example, this fully functional MCP UI server for *every* Shopify store: mcpstorefront.com can provide an immediate, ready-to-use commerce experience for *every* chat.
Note that it's not opinionated about *how* to show the UI chunks; it just returns them and emits "events", and the chat decides how to render them and what to do with those events.

The future of the Web is being defined right now. In 1-2 years we won't open 10 tabs to visit 10 websites; we'll use our own personal assistant that accesses these "apps" for us. We have to define the new model for UI in this "fragmented web" future, where UI can no longer be owned only by the providers, but also cannot be *fully* generated from scratch for every usage by every assistant. mcp-ui allows providers to define UI for their MCP tool responses: a UI that can be immediately rendered and integrated into *any* consumer, be it 3rd party or internal.

We're happy to answer any questions, of course. (CC @idosal1 - the author)
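A minimal sketch of the split described above: the server returns a UI resource, and the client decides placement and event handling. These are illustrative shapes, not the actual mcp-ui API; the `ui://` URI, the tool name, and the `renderDecision` helper are all assumptions for the example.

```typescript
// Illustrative sketch (not the real mcp-ui API) of server-defined,
// client-rendered UI over MCP.

type UIResource = {
  uri: string;      // e.g. "ui://seat-picker/flight-123", identifying the UI chunk
  mimeType: string; // tells the client how to interpret `content`
  content: string;  // self-contained HTML here; could instead be a URL or a remote-dom script
};

// Server side: a hypothetical tool response carries UI alongside (not instead of) data.
function seatPickerTool(): { data: object; ui: UIResource } {
  return {
    data: { availableSeats: ["12A", "12B"] },
    ui: {
      uri: "ui://seat-picker/flight-123",
      mimeType: "text/html",
      content: "<button data-seat='12A'>12A</button><button data-seat='12B'>12B</button>",
    },
  };
}

// Client side: the host alone chooses placement (inline, sidebar, or a plain-text
// fallback) and wires UI events back into the conversation; the server never
// dictates this.
function renderDecision(
  r: UIResource,
  supportsInlineHtml: boolean
): "inline" | "sidebar" | "text-fallback" {
  if (r.mimeType === "text/html") return supportsInlineHtml ? "inline" : "text-fallback";
  return "sidebar"; // e.g. an external URL or remote-dom payload
}
```

The design point is the contract: the provider owns *what* the UI is, the consumer owns *whether and where* it appears, which is what lets one server serve thousands of different assistants.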





