
Changil Kim

Excited to share our new paper on improving the novel-view synthesis quality of Gaussian splatting by augmenting Gaussians with RGBA texture maps! (1/n)

Introducing VR-NeRF (@SIGGRAPHAsia 2023): 🤩 multi-camera rig for multi-view HDR capture 🤓 perceptual HDR optimization + level of detail 😎 real-time multi-GPU VR rendering Project: vr-nerf.github.io Paper: arxiv.org/abs/2311.02542 Dataset: github.com/facebookresear…

Introducing OmnimatteRF! Given a coarse mask as input, our method produces detailed video mattes (RGB + alpha) of the subject *and* their associated effects, e.g., soft cast shadows. This enables a variety of cool video editing applications.

Turn your casual videos into immersive 3D rendering! How? 👉 Modeling scenes with multiple LOCAL radiance fields 👉 Optimizing poses PROGRESSIVELY See you at AM poster session today! 👋 Video 📼: youtube.com/watch?v=VizL7q… Web 🔗: localrf.github.io #CVPR2023

🔥 New video 🔥 Check out HyperReel: High-Fidelity 6-DoF Video! (#CVPR2023 highlight) HyperReel achieves ✅ Fast rendering ✅ High-fidelity free-view synthesis ✅ Memory efficiency youtube.com/watch?v=8vi5K8…

SUPER excited to present ROBUST Dynamic Radiance Fields! Existing methods cannot handle casual videos where SfM does not work. Our method improves robustness and can create 3D videos from ANY video. Come find us in the AM session! 👋 🔗 robust-dynrf.github.io #CVPR2023

🎉 New video 🎉 Check out the video to learn how to create immersive 3D rendering from casual videos! 🤩 youtube.com/watch?v=VizL7q…
