
Anton Lavrenov
@lavrton
Making Design Editor SDK https://t.co/fqol0vuiqa. Maintaining https://t.co/ZReblIxvVC for many years.




Great question (and I'm also very pro-HTML). It's not just build-step complexity, though that matters. The real reasons are agent ergonomics and renderability.

LLMs already think in HTML. They've trained on massive amounts of web code: DOM, CSS, animations, CodePen patterns. React + Remotion is a tiny slice of that training data. HTML lets agents produce better visuals, faster.

Less framework tax. With React/Remotion, the agent burns tokens fighting hooks, lifecycle rules, forbidden patterns, project structure. With raw HTML + GSAP, you just describe the scene and go.

One file in, video out. No package.json, no bundler, no composition setup. Fewer moving parts = fewer random failures in agentic workflows.

Anything the browser renders, we render. If Chrome can handle it, we capture it. Vanilla Three.js, shader canvases, random DOM libraries, weird web tricks: all stuff that feels awkward in a React-first world.

HTML is both the render layer and the editable source. The same DOM you see is the DOM you edit. That makes building a real visual editor (selection, drag & drop, property panels, timeline) natural, exactly like Paper.design does it. With React, the source of truth is code + build tooling, and round-tripping through a visual editor gets painful fast.

TL;DR: HTML fits agents and real editing workflows way better.
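The one-file workflow can be sketched as a single hypothetical HTML page: everything below (the markup, the timeline, the CDN build of GSAP) is an illustration of the idea, not the SDK's actual input format. Open it in Chrome, capture frames, done.

```html
<!-- Hypothetical one-file scene: no package.json, no bundler. -->
<!DOCTYPE html>
<html>
<head>
  <script src="https://cdn.jsdelivr.net/npm/gsap@3/dist/gsap.min.js"></script>
  <style>
    body { margin: 0; height: 100vh; display: grid; place-items: center; background: #111; }
    .title { font: bold 64px sans-serif; color: #fff; opacity: 0; }
  </style>
</head>
<body>
  <div class="title">Hello, agents</div>
  <script>
    // The whole "composition" is one GSAP timeline: the agent just
    // describes the scene, no hooks or lifecycle rules to fight.
    gsap.timeline()
      .to('.title', { opacity: 1, y: -20, duration: 1 })
      .to('.title', { scale: 1.2, duration: 0.5 });
  </script>
</body>
</html>
```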

Three.js doesn't own the layout; CSS does. Opted-in [data-layout] elements are batch-read (at init and on resize) and mapped into world space. Children are inferred from parent boxes when possible. That same pass pulls computed styles and detects line breaks, so the WebGL text wraps exactly like the DOM text. SDF rendering keeps smaller text sharp at any scale. Headlines go further: opentype.js extracts the glyph outlines, and each letter becomes its own extruded mesh with independent depth. WebGL syncs to native scroll. DOM vs WebGL ↓







