

Khiem Vuong
@kvuongdev
Doing PhD @CMU_Robotics | Prev @Apple | Vision, Robotics

We’re excited to share LuxRemix: interactive light editing for indoor scenes! 🏠💡 Capture a room once, then turn individual lights on/off, change colors, and adjust intensity – all in real-time 3D from any viewpoint. 💡 luxremix.github.io 📄 arxiv.org/abs/2601.15283

Introducing CRISP, a real-to-sim pipeline that recovers human motion and simulatable scene geometry from monocular video! CRISP builds contact-faithful 3D scenes for simulation: 8× fewer sim failures, 43% faster simulation, and improved human motion estimates! Interactive demos 👉 crisp-real2sim.github.io/CRISP-Real2Sim/ Exciting collaboration w/ @JiashunWang @jefftan969 @_Tsukasane Jessica Hodgins @shubhtuls @RamananDeva
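As a rough illustration of what "contact-faithful" can mean here (my own sketch, with hypothetical names; not the CRISP code): one simple way to make recovered scene geometry consistent with the recovered human is to fit the support surface through the 3D points where the reconstructed body actually makes contact, e.g. a least-squares ground plane through foot-contact points.

```python
import numpy as np

def fit_support_plane(contacts):
    """Least-squares plane z = a*x + b*y + c through contact points (N, 3),
    so the sim floor passes through where the feet actually touch."""
    A = np.c_[contacts[:, 0], contacts[:, 1], np.ones(len(contacts))]
    coeffs, *_ = np.linalg.lstsq(A, contacts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Toy example: four foot contacts on a level floor at height 0.05 m.
contacts = np.array([
    [0.0, 0.0, 0.05],
    [1.0, 0.0, 0.05],
    [0.0, 1.0, 0.05],
    [1.0, 1.0, 0.05],
])
a, b, c = fit_support_plane(contacts)
# recovers a level plane: a ≈ 0, b ≈ 0, c ≈ 0.05
```

A plane is of course the simplest case; the same idea (penalize geometry that disagrees with detected contacts) extends to general meshes.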

Introducing Any4D, a unified transformer for fully feed-forward, dense, metric-scale 4D reconstruction from flexible inputs! Any4D regresses per-pixel motion + geometry across frames in one pass — 15× faster, 2–3× more accurate reconstructions ⚡📈 Details + code below 👇 Exciting collab with @Nik__V__ @YuchenZhan54250 Tanisha Gupta @akashshrm02 @smash0190 @RamananDeva
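To make "per-pixel motion + geometry in one pass" concrete, here is a minimal sketch (hypothetical shapes and names, not the released Any4D code): if a feed-forward head predicts, per pixel, a metric 3D point and a 3D motion vector toward the next frame, the dynamic reconstruction at a later time is just the pointmap advected by that motion.

```python
import numpy as np

def roll_forward(points, motion, steps=1):
    """Advect a (H, W, 3) pointmap by a per-pixel 3D motion field.

    Assumes constant per-step motion, purely for illustration."""
    out = points.copy()
    for _ in range(steps):
        out = out + motion
    return out

# Toy example: a 2x2 pointmap at the origin, every pixel moving 0.1 m along +z.
pts = np.zeros((2, 2, 3))
mot = np.tile([0.0, 0.0, 0.1], (2, 2, 1))
pts_next = roll_forward(pts, mot, steps=3)
# after 3 steps each point sits at z = 0.3
```

The point of the feed-forward formulation is that both `pts` and `mot` come out of a single network pass per frame pair, with no per-scene optimization loop.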

Meet MapAnything – a transformer that directly regresses factored metric 3D scene geometry (from images, calibration, poses, or depth) in an end-to-end way. No pipelines, no extra stages. Just 3D geometry & cameras, straight from any type of input, delivering new state-of-the-art results 🚀

One universal model enables SoTA for:
🔥 Mono Depth Estimation
🔥 Multi-View SfM
🔥 Multi-View Stereo
🔥 Depth Completion
🔥 Registration
… and many more possibilities! – plus everything is metric 🎯

We release code for data processing, training, benchmarking & ablations – everything Apache 2.0! Details & Links 👇
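A minimal sketch of what a "factored" metric representation can look like (hypothetical shapes and names, not the released MapAnything API): per view, predict unit ray directions, depth along each ray, a camera-to-world pose, and one global metric scale, then compose them into a metric world-space pointmap.

```python
import numpy as np

def compose_pointmap(rays, depth, pose, scale):
    """Combine factored outputs into a metric world-space pointmap.

    rays:  (H, W, 3) unit ray directions in the camera frame
    depth: (H, W)    depth along each ray (up-to-scale)
    pose:  (4, 4)    camera-to-world transform
    scale: float     global metric scale factor
    """
    pts_cam = rays * depth[..., None] * scale  # (H, W, 3) in camera frame
    R, t = pose[:3, :3], pose[:3, 3]
    return pts_cam @ R.T + t                   # rotate + translate to world

# Toy example: a 2x2 view with all rays along +z, identity pose, scale 2.
H = W = 2
rays = np.zeros((H, W, 3))
rays[..., 2] = 1.0
depth = np.full((H, W), 3.0)
pose = np.eye(4)
pts = compose_pointmap(rays, depth, pose, scale=2.0)
# every point lands at z = 3 * 2 = 6 in world space
```

The appeal of such a factorization is that any subset of the factors (calibration → rays, known poses, sensor depth) can be supplied as input instead of predicted, which is how one model can cover depth estimation, SfM, stereo, completion, and registration.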



My PhD is over, but I still had a few big and small research projects I thought were very promising. I don't work on computer vision anymore, so I'll post these detailed ideas here over the next weeks: feel free to work on them and claim them as yours. (1/2)


[1/6] Recent models like DUSt3R generalize well across viewpoints, but performance drops on aerial-ground pairs. At #CVPR2025, we propose AerialMegaDepth (aerial-megadepth.github.io), a hybrid dataset combining mesh renderings with real ground images (MegaDepth) to bridge this gap.

Discover the right 3D Geometric Foundation Model for your task, whether it’s stereo matching, multi-view depth estimation, video depth, pose estimation, semantic understanding, or novel view synthesis. Explore more insights in our benchmark. Project webpage: e3dbench.github.io #E3DBench #FoundationModel #3D #GaussianSplatting
