Vlad Kamelskii (@kusnizza)
I’m a designer, and I built this feature end to end in 1.5 weeks, from design to development, using AI the whole way. I didn’t even touch Figma.
A few takeaways from this process:
AI is incredible. For the first time in my career, I don’t have a middleman between the idea in my head and the final result. I can design things the way I want and deliver the quality I need. No more endless back and forth with developers over small visual fixes, missing states, or tiny UI details.
I’m honestly so excited about this technology that I want to stop people on the street and talk about Codex. Yes, I use Codex. Its UI output is not great, but that doesn’t really matter to me. I design the UI myself from scratch anyway. Whether AI produces complete garbage or slightly better garbage is not the point, because my goal is still to make it perfect. I expect to adjust it heavily either way.
When I work on a feature, I usually start in Plan mode. Since I’m not an engineer, I try to go deeper into the architecture, ask AI a lot of basic questions, and sometimes verify things with our developers. After a few rounds of planning, I move into implementation.
To avoid silly mistakes and code style issues, our developers maintain an Agents.md file and a set of skills that help a lot. The first result usually has weak UI, but it can already be a very useful prototype. At that stage, I focus less on polish and more on UX: the flow, the behavior, and the overall logic of the feature.
During that process, I try to give AI more “vision” so it can work more independently. I let it run tests, build the app, read dev server logs, and even check behavior in the browser using agent-browser CLI. It can literally open the app, click through it, and reproduce scenarios on its own to verify that things work.
Once the architecture is in place, I move into UI polish step by step. That usually means very specific requests like: increase the margin to 4px, change the font size to 16px, add opacity to this container, and so on. At this stage, I rely heavily on React Grab, which lets me select an element in the browser and get its file path, so AI spends less time searching and more time fixing.
One more really useful AI workflow: while working on our Style Guide feature, I needed to adjust design tokens across more than 200 components. Checking how those tokens behave across so many components would be painful in any tool, even in Figma. So I asked AI to build a temporary page inside our app with a canvas that displayed all components, grouped by category, with different prop variations. That gave me a fast way to see how the style guide applied everywhere at once and quickly spot problems.
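The core of that gallery page is just expanding each component’s props into every combination so one instance per combination can be rendered on the canvas. A minimal sketch of that expansion in TypeScript (the `PropSpec` shape and `propVariations` helper are my own illustration, not code from the actual app):

```typescript
// Hypothetical prop spec: each key maps to the values we want to preview.
type PropSpec = Record<string, readonly unknown[]>;

// Expand a spec into every combination (cartesian product), so the
// gallery page can render one component instance per entry.
function propVariations(spec: PropSpec): Record<string, unknown>[] {
  return Object.entries(spec).reduce<Record<string, unknown>[]>(
    (combos, [key, values]) =>
      combos.flatMap((combo) => values.map((v) => ({ ...combo, [key]: v }))),
    [{}], // start from a single empty combination
  );
}

// Example: a Button with 2 sizes x 2 states -> 4 preview instances.
const variants = propVariations({
  size: ["sm", "lg"],
  disabled: [false, true],
});
console.log(variants.length); // 4
```

The gallery page would then map each entry to a rendered component, grouped by category, so a token change is visible across every variant at once.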
After every major iteration, I ask AI to review the code and check for edge cases and security issues. We also have a large test suite, which helps prevent breaking parts of the app outside the feature itself. If you want designers to ship code directly into production, the environment around them matters a lot.
This feature took 1.5 weeks and touched hundreds of files. At the end, I asked AI to generate a big PDF report comparing the branch against main, summarizing what changed and explaining architecture decisions, so it would be easier for both AI and developers to review before merging.
I’m still learning every day about AI and coding while designing features directly in the product. And honestly, I find it fascinating. I don’t really want to go back to the days when I had to design everything in an intermediate tool that doesn’t ship code.