Anand Singh
@imanands
147 posts

Building the AI-native editor for 3D game dev → browser-playable games. WebGPU + Babylon.js. Solo founder, building in public.

Joined November 2018
15 Following · 15 Followers
Pinned Tweet
Anand Singh @imanands
I'm building an AI-native desktop editor for 3D game development. You build in Makepot. Hit publish. Your players get a URL — playable instantly in any browser, no downloads, no app store. Babylon.js + WebGPU + AI agents under the hood. Solo dev. Building in public. Day 1.
0 · 0 · 3 · 107
Anand Singh @imanands
Genuine question for game devs: If you could remove ONE bottleneck from your workflow — not the fun parts like design or gameplay tuning, but the tedious stuff that eats your time — what would it be? For me it's the setup tax. Scene graph, asset pipeline, build config, deployment. Hours before you write a single line of game logic. #gamedev #indiedev
0 · 0 · 0 · 14
Anand Singh @imanands
@RobinPayot Your Wind Waker port is one of the best examples of what #WebGPU enables — full stylized rendering running smooth in a browser tab. Stuff like TSL making that possible without the traditional shader complexity is a game changer for the whole web 3D ecosystem.
0 · 0 · 0 · 79
Anand Singh @imanands
The generation side is getting wild fast. The next hard problem is the gap between "a navigable scene" and "an actual game." Walking through a beautiful AI-generated cathedral is step 1. Adding physics, collision, game logic, NPCs with behavior trees, and making it playable at 60fps on someone's laptop via #WebGPU — that's where the real engineering challenge lives. But the fact that environment creation is becoming near-instant is going to compress game dev timelines in ways we haven't fully processed yet.
0 · 0 · 0 · 322
AI Frontliner @AIFrontliner
🚨BREAKING: OpenArt just shipped a browser-based 3D world builder that requires nothing but an idea. One prompt or one image, and you are walking through a navigable world in seconds. Here is everything you need to know: ↓
23 · 122 · 487 · 78K
Anand Singh @imanands
The interesting edge case: solo founders who have to be both. You pirate all day — ship fast, test ideas, break things. Then at night you put on the architect hat and ask "what did I actually learn? what needs to survive past this prototype?" AI tools are making this dual role viable in ways it never was before. You can vibe-code a feature in an afternoon and refactor it properly by evening. The loop is compressed from weeks to hours. The real unlock isn't needing fewer people. It's that one person can now hold both roles simultaneously.
0 · 0 · 0 · 43
Dan Shipper 📧 @danshipper
new model for engineering team structure in 2026: 2 people only, one pirate and one architect. the pirate's job is to move as fast as possible to develop valuable, shipped product features by vibe coding. the architect's job is to turn the product surface discovered by the pirate into a reliable, structured machine—also by vibe coding, but at a slower, more well-reasoned pace. every product needs a pirate, but most products only need an architect once they have some form of PMF, and in that case they usually don't need one full-time. architects can work across many codebases and solve interesting technical challenges. pirates go hard on a product that they own end-to-end.
300 · 252 · 4.2K · 523.8K
Anand Singh @imanands
And just like portable cameras — the early output will be rough, but the new voices it unlocks are what matter. Game dev is the clearest example. A solo creator who understands player psychology but can't write shaders can now ship a playable 3D game in a browser. The bottleneck was never ideas. It was the toolchain standing between the idea and the player.
0 · 0 · 1 · 1.1K
Anand Singh @imanands
This is the pattern now — big studios abandon genres, indie devs pick them up and build something better. Mirror's Edge nailed the feel of momentum-based movement. The fact that someone is solo-building a spiritual successor with modern tech says everything about where innovation actually lives in #gamedev right now. Wishlisted.
0 · 0 · 1 · 481
Indie Game Joe @IndieGameJoe
This indie dev is making a third-person parkour runner game. If we can’t have Mirror’s Edge 2, then indies will.
- Play as Kaia
- Use parkour abilities to traverse the world
- Move in any way you see fit
It's called Tachyon Flow. Would you play this?
306 · 1.2K · 11.8K · 1.4M
Anand Singh @imanands
Nav mesh + crowd control is one of those features that separates "cool 3D demo" from "actual game." The fact that @babylonjs ships this natively means you can go from scene to playable level without bolting on third-party pathfinding. That's a huge deal for anyone building browser games where every dependency adds load time. 9.0 is shaping up to be a massive release. #WebGPU #gamedev
0 · 0 · 0 · 9
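For readers unfamiliar with what a nav-mesh query buys you: it answers "is B reachable from A, and along which path?" Here is a toy stand-in that runs BFS over a walkability grid. Babylon's actual navigation is Recast-based and operates on real scene geometry, so this is purely illustrative of the query, not of the API.

```typescript
// Toy shortest-path query over a walkability grid (0 = walkable, 1 = blocked).
// A real nav mesh does the same job on triangulated level geometry.
type Cell = [number, number];

function findPath(grid: number[][], start: Cell, goal: Cell): Cell[] | null {
  const rows = grid.length, cols = grid[0].length;
  const key = ([r, c]: Cell) => r * cols + c;
  const cameFrom = new Map<number, Cell | null>([[key(start), null]]);
  const queue: Cell[] = [start];

  while (queue.length > 0) {
    const current = queue.shift()!;
    if (current[0] === goal[0] && current[1] === goal[1]) {
      // Walk parent links back to reconstruct the path, start first.
      const path: Cell[] = [];
      for (let c: Cell | null = current; c; c = cameFrom.get(key(c)) ?? null) {
        path.unshift(c);
      }
      return path;
    }
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const next: Cell = [current[0] + dr, current[1] + dc];
      const [r, c] = next;
      if (r < 0 || r >= rows || c < 0 || c >= cols) continue; // off-grid
      if (grid[r][c] === 1 || cameFrom.has(key(next))) continue; // blocked or seen
      cameFrom.set(key(next), current);
      queue.push(next);
    }
  }
  return null; // goal unreachable
}
```

BFS gives the shortest path in steps; returning `null` for unreachable goals is exactly the signal crowd agents need to give up or reroute.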
Anand Singh @imanands
Because design is spatial reasoning, not text completion. LLMs are incredible at logic, patterns, code structure. But "make this look good" requires understanding visual hierarchy, whitespace, proportion — things that don't compress well into tokens. Same problem in 3D. AI can write game logic all day, but ask it to place a camera or light a scene and it falls apart. The models need fundamentally different representations for spatial tasks.
0 · 0 · 0 · 25
Micky @Rasmic
gpt 5.4 is the worst at frontend design it hurts lol
87 · 4 · 405 · 46.1K
Anand Singh @imanands
The hardest part of game dev isn't the engine. It's the 47 steps between "my game works on my machine" and "anyone in the world can play it." Packaging. Signing. Store review. Downloads. Updates. Compatibility. The engine that wins is the one that makes playing as frictionless as clicking a link. #gamedev #indiedev
0 · 0 · 0 · 16
Anand Singh @imanands
He's right, but it won't be from building AI — it'll be from building WITH AI. I quit my engineering job to build an AI-native game editor. What used to take a team of 10 and a year — 3D worlds, physics, game logic — one person can now do with the right tools. The real millionaires from AI won't be the model builders. They'll be the people who use AI to ship products in markets that haven't been disrupted yet. #gamedev and 3D creation is one of them. #buildinpublic #indiedev #WebGPU
0 · 0 · 2 · 1.4K
Freya Lawson @Freyabuilds
Nvidia CEO Jensen Huang: "AI will create more millionaires in 5 years than the internet did in 20." But he didn't stop there... He revealed exactly how it'll happen and how you can capitalize on it:
100 · 348 · 2.7K · 450.5K
Anand Singh @imanands
Or maybe: people whose identity is built around being good at coding feel threatened by AI. And that explains a lot too. The best engineers I know are bullish AND cautious. They use AI for the tedious 80% and apply deep expertise to the critical 20%. The people who are dismissive often aren't protecting quality — they're protecting status. There's a difference between "AI code has real limitations" (true) and "you shouldn't be excited about AI if you're not a senior engineer" (gatekeeping).
0 · 0 · 0 · 55
Dmitrii Kovanikov @ChShersh
I feel that people who are bad at coding are more bullish on AI. And this explains a lot.
278 · 120 · 2.2K · 124.3K
Anand Singh @imanands
This is the part most people are sleeping on. Right now we treat AI agents as isolated workers. But the hard problems — the ones where one agent's output is another agent's input — need agents that can negotiate, not just execute. Imagine a frontend agent that tells the backend agent "I need this endpoint shaped differently" and they resolve it without you context-switching between terminals. The "become friends" part is tongue-in-cheek but it points at something real: shared context between agents is the actual bottleneck, not individual agent capability.
0 · 0 · 0 · 119
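A minimal sketch of the shared-context idea: a message channel both agents can read and write, so a clarifying question never routes through the human. The agent names and message shape below are invented for illustration.

```typescript
// Sketch of a shared channel two coding agents could negotiate over.
// Agent names and the Message shape are made up for illustration.
interface Message { from: string; to: string; body: string }

class Channel {
  private inboxes = new Map<string, Message[]>();

  send(msg: Message): void {
    const inbox = this.inboxes.get(msg.to) ?? [];
    inbox.push(msg);
    this.inboxes.set(msg.to, inbox);
  }

  // Returns and drains the agent's pending messages.
  receive(agent: string): Message[] {
    const inbox = this.inboxes.get(agent) ?? [];
    this.inboxes.set(agent, []);
    return inbox;
  }
}

// Frontend agent asks the backend agent to reshape an endpoint;
// the backend agent answers without a human context-switch in the loop.
const channel = new Channel();
channel.send({ from: "frontend", to: "backend", body: "Can /users return avatarUrl inline?" });
const [request] = channel.receive("backend");
channel.send({ from: "backend", to: "frontend", body: `Re: "${request.body}" — yes, shipping it.` });
```

The interesting design question is everything this sketch omits: persistence, conflict resolution, and how much project context rides along with each message.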
Louis Arge @louisvarge
i made a thing where now any Claude Code can send messages to any other Claude Code on my machine they can ask clarifying questions about work, or become friends
247 · 227 · 3.9K · 649.3K
Anand Singh @imanands
The 3-6 month number is real, but it has almost nothing to do with code. What actually eats those months: figuring out what to build (user research), deciding what NOT to build (scope), getting auth/payments/infra right, and surviving the "nobody cares" phase. Vibe coding compresses the typing. It doesn't compress the thinking. If anything it makes the gap more visible — you can prototype in a weekend and still spend 5 months figuring out if anyone wants it. The real unlock isn't faster code. It's faster feedback loops.
0 · 0 · 0 · 16
Abaz Udosen | App Developer
Production-ready apps take 3 to 6 months. Period. 🛠️ Don’t let the "vibe coding" hype fool you. Shipping a weekend project is easy; shipping a scalable, secure, and polished product is a marathon, not a sprint.
324 · 249 · 2.9K · 506.7K
Anand Singh @imanands
The deeper issue is that code is the wrong abstraction layer for agents to operate on. When you write code, you're making hundreds of micro-decisions about structure, naming, patterns — all reflecting a mental model the agent doesn't share with you. So it defaults to the most "average" patterns from training data: bloated, copy-pasted, over-abstracted. The AGENTS.md approach is essentially trying to compress your entire engineering taste into a text file. That's an impossibly lossy compression. I think the real unlock is tools where the agent doesn't generate code you have to maintain — it manipulates structured representations directly. Scene graphs, state machines, dependency graphs. Things with fewer valid configurations, so there's less room for aesthetic drift. Code-as-output might just be the wrong interface for collaborative human-AI building.
0 · 0 · 0 · 44
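A toy version of "manipulate structured representations directly": the agent edits a scene graph through operations that validate their inputs, so malformed states are rejected at the interface instead of lurking in generated code. All names here are hypothetical, not any editor's real API.

```typescript
// Toy scene graph mutated through validated operations rather than
// freeform generated code. All names here are hypothetical.
interface SceneNode { id: string; parent: string | null; children: string[] }

class SceneGraph {
  private nodes = new Map<string, SceneNode>([
    ["root", { id: "root", parent: null, children: [] }],
  ]);

  // The only way to grow the graph: duplicate ids and orphan nodes
  // are impossible by construction.
  addNode(id: string, parent: string): void {
    if (this.nodes.has(id)) throw new Error(`duplicate node: ${id}`);
    const p = this.nodes.get(parent);
    if (!p) throw new Error(`unknown parent: ${parent}`); // invalid op rejected
    this.nodes.set(id, { id, parent, children: [] });
    p.children.push(id);
  }

  // Depth-first flatten, e.g. for draw order.
  flatten(id = "root"): string[] {
    const node = this.nodes.get(id)!;
    return [id, ...node.children.flatMap((c) => this.flatten(c))];
  }
}
```

Compared with "the agent writes whatever tree-handling code it likes," there are simply fewer valid configurations here, which is the point: less room for aesthetic drift.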
Andrej Karpathy @karpathy
I'm not very happy with the code quality and I think agents bloat abstractions, have poor code aesthetics, are very prone to copy pasting code blocks and it's a mess, but at this point I stopped fighting it too hard and just moved on. The agents do not listen to my instructions in the AGENTS.md files. E.g. just as one example, no matter how many times I say something like: "Every line of code should do exactly one thing and use intermediate variables as a form of documentation" They will still "multitask" and create complex constructs where one line of code calls 2 functions and then indexes an array with the result. I think in principle I could use hooks or slash commands to clean this up but at some point just a shrug is easier. Yes I think LLM as a judge for soft rewards is in principle and long term slightly problematic (due to goodharting concerns), but in practice and for now I don't think we've picked the low hanging fruit yet here.
240 · 319 · 4.2K · 786.5K
Andrej Karpathy @karpathy
Thank you Sarah, my pleasure to come on the pod! And happy to do some more Q&A in the replies.
sarah guo @saranormous

Caught up with @karpathy for a new @NoPriorsPod: on the phase shift in engineering, AI psychosis, claws, AutoResearch, the opportunity for a SETI-at-Home like movement in AI, the model landscape, and second order effects
02:55 - What Capability Limits Remain?
06:15 - What Mastery of Coding Agents Looks Like
11:16 - Second Order Effects of Coding Agents
15:51 - Why AutoResearch
22:45 - Relevant Skills in the AI Era
28:25 - Model Speciation
32:30 - Collaboration Surfaces for Humans and AI
37:28 - Analysis of Jobs Market Data
48:25 - Open vs. Closed Source Models
53:51 - Autonomous Robotics and Atoms
1:00:59 - MicroGPT and Agentic Education
1:05:40 - End Thoughts

270 · 369 · 5.1K · 610.5K
Anand Singh @imanands
This is the part people miss about AI coding tools. It's not about replacing developers. It's about giving a dad — who probably has zero experience with MIDI protocols or music theory algorithms — the ability to build something deeply personal for his kid in a single night. The best software in the world is the kind that never would have existed without these tools. Not because it's technically hard, but because the person with the idea and the person with the skills were never the same person. Now they can be.
1 · 1 · 9 · 685
Ryan Leachman @RG_Leachman
I asked Claude to build my daughter an app that plugs into our piano, can read live keystrokes, can show her sheet notes and key view, and ends with a Guitar Hero style game. All while giving progressively harder songs. Today she’s using it and crushing it.
598 · 1.6K · 24.8K · 3.3M
Anand Singh @imanands
This is exactly the problem. The bottleneck isn't the AI model — it's that Unity and Godot were never designed for AI agents to operate them. That's why I'm building Makepot — an AI-native editor built from the ground up for this workflow. The AI agents don't fight the editor, they ARE the editor. Tools-first approach: agents use structured tools to manipulate the scene graph, physics, rendering — not blindly generating code files. And every game you build deploys as a browser URL. No downloads, no app store. The engine needs to be rebuilt for the AI era. Not patched.
0 · 0 · 0 · 24
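What "tools-first" could look like mechanically, as a hedged sketch: the agent emits a structured call, and the editor checks it against a registry before anything mutates. The tool names and argument shapes below are my inventions, not Makepot's actual API.

```typescript
// Sketch of tools-first dispatch: the agent proposes a structured call,
// and the editor validates it before anything touches the scene.
// Tool names and argument shapes here are invented for illustration.
type ToolCall = { tool: string; args: Record<string, unknown> };

const registry: Record<string, (args: Record<string, unknown>) => string> = {
  set_gravity: (args) => {
    const y = args.y;
    if (typeof y !== "number") throw new Error("set_gravity: y must be a number");
    return `gravity set to ${y}`;
  },
  move_node: (args) => {
    if (typeof args.id !== "string") throw new Error("move_node: id must be a string");
    return `moved ${args.id}`;
  },
};

function dispatch(call: ToolCall): string {
  const tool = registry[call.tool];
  if (!tool) throw new Error(`unknown tool: ${call.tool}`); // hallucinated tools are rejected
  return tool(call.args);
}
```

The contrast with code generation is that a bad call fails loudly at the boundary; there is no file of generated spaghetti to debug afterwards.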
0xMarioNawfal @RoundtableSpace
Someone vibe coded a Unity fishing game using Cursor
- Game mechanics were fast
- Unity's editor slows the agent down
- Godot is cleaner for vibe coding but lacks asset marketplace
The verdict: AI can build games but bottleneck is engine, not the model
21 · 4 · 95 · 56.5K
Anand Singh @imanands
Great question. Here's how Makepot handles it: When the AI hits an edge case, it doesn't just guess — it first reaches for tools. The AI agents have access to structured tools that operate on the scene graph, physics engine, and rendering pipeline directly. So it's not blindly generating code — it's using purpose-built instruments to inspect and fix issues. If the tools can't resolve it, then it falls back to code-level adjustments within the engine (BabylonJS). And manual controls for humans are always there. You can inspect any object, tweak properties, override AI decisions — full visual editing at any point. The AI is your co-pilot, not a black box. So the flow is: AI tries tools first → falls back to code → human always has full manual control. The goal is you never HAVE to understand the inner logic, but you always CAN if you want to.
0 · 0 · 0 · 30
Romeo @_romeopeter
@imanands @echo_vick With no understanding of the inner logic, when something feels off and the agent just can't seem to get it right, how do you plan to handle such edge cases?
1 · 0 · 1 · 47
Victor @echo_vick
The day I realized that vibe coding definitely needed coding experience was when I accepted a Flutter gig without experience of Flutter or Dart… I mean, I knew general principles and standards, but I had never written Dart before… So I started, and it was going well, until it wasn’t… I almost ran mad. With each bug AI fixed, 10 more sprung up! I tried to go through the code and see if I could understand and fix some of the bugs myself. Omo, I almost shed tears… bruh, Dart humbled a senior dev… not that I couldn’t understand what was going on, but because the codebase was already too far gone in spaghetti and beyond repair. I didn’t need anyone to tell me to rebuild the app in React Native, which I was very conversant with. Summary: You STILL NEED some coding experience to vibe code… maybe in the near future it won’t be the case, but for now, IT IS. So if anyone says otherwise, they’re living in denial.
Tomilola Oluwafemi@tomilola_ng

Rage bait? I can bet that video isn't even yours, you want to vibe code Mobile App using Flutter & Dart without prior coding experience??? You dey whyne

55 · 40 · 406 · 63.8K
Anand Singh reposted
Anand Singh @imanands
This is exactly why the next wave of vibe coding tools won't generate code for you to manage — they'll generate the final product directly. Describe what you want, AI builds it, you never touch the spaghetti. That's how we're approaching game dev at Arclusion — intent in, playable game out, running in the browser.
2 · 1 · 5 · 920
Anand Singh @imanands
This is massive. WebGPU + browser-native 3D editing is going to eat desktop-only tools alive. I'm building something in the same spirit but for game development — an AI-native desktop editor where you create 3D games with AI agents, and every game you publish is instantly playable via a URL in any browser. BabylonJS + WebGPU under the hood. Pascal Editor proves the thesis: the browser is ready for serious 3D work. No installs, no gatekeepers. The entire creative pipeline is moving here. Open sourcing it is the ultimate power move. Respect to the team behind this.
0 · 0 · 6 · 2.2K
Guri Singh @heygurisingh
🚨Architects are going to hate this. Someone just open sourced a full 3D building editor that runs entirely in your browser. No AutoCAD. No Revit. No $5,000/year licenses. It's called Pascal Editor. Built with React Three Fiber and WebGPU -- meaning it renders directly on your GPU at near-native speed. Here's what's inside this thing:
→ A full building/level/wall/zone hierarchy you can edit in real time
→ An ECS-style architecture where every object updates through GPU-powered systems
→ Zustand state management with full undo/redo built in
→ Next.js frontend so it deploys as a web app, not a desktop install
→ Dirty node tracking -- only re-renders what changed, not the whole scene
Here's the wildest part: You can stack, explode, or solo individual building levels. Select a zone, drag a wall, reshape a slab -- all in 3D, all in the browser. Architecture firms pay $50K+ per seat for BIM software that does this workflow. This is free. 100% Open Source.
657 · 4.7K · 31.4K · 2.7M