Ruofan Liang
52 posts

Ruofan Liang
@RfLiang
Ruofan Liang, PhD Student @UofT (#OpentoWork)
Joined August 2013
207 Following · 221 Followers

Immediately makes the HOI-Blender result much more photorealistic lol

AK@_akhaliq
DLSS-5 anything for free app: huggingface.co/spaces/victor/…

I also took a picture of this classic place for the teaser demo of our #CVPR26 paper LuxRemix (luxremix.github.io) 😄
I used to love to sit there on Saturday afternoon, hoping Einstein could give me some research ideas 🤣
Credits also to @c_richardt for composing the vid.
Chris Offner@chrisoffner3d
You can view Einstein’s locker in 3D here: x.com/chrisoffner3d/…
Ruofan Liang retweeted

Announcing NVIDIA DLSS 5, an AI-powered breakthrough in visual fidelity for games, coming this fall.
DLSS 5 infuses pixels with photorealistic lighting and materials, bridging the gap between rendering and reality.
Learn More → nvidia.com/en-us/geforce/…
Ruofan Liang retweeted

📢Introducing 360Anything, our method for lifting any perspective image or video to gravity-aligned 360° panoramas without using any camera or 3D information. This enables consistent novel view synthesis and 3D scene reconstruction.
Project page: 360anything.github.io
🧵

Excited to share our new work: LuxRemix ✨
We leverage diffusion models to decompose complex light transport into individual sources. You can interactively remix room lighting in 2D images or 3D GSplats 💡🎛️
Try the interactive light controls here: 🔗 luxremix.github.io
Christian Richardt@c_richardt
We’re excited to share LuxRemix: interactive light editing for indoor scenes! 🏠💡 Capture a room once, then turn individual lights on/off, change colors, and adjust intensity – all in real-time 3D from any viewpoint. 💡 luxremix.github.io 📄 arxiv.org/abs/2601.15283

The generated data can be used for training generative rendering models such as our Diffusion Renderer, UniRelight, and LuxDiT.
🌐 Github Repo:
github.com/nexuslrf/compo…

#InfiniHuman: Infinite 3D Human Generation with Precise Control
How do you want to generate a 3D avatar?
From text description? With clothing images? Or some desired body shape? All can be done at once with InfiniHuman!
🔗Page: yuxuan-xue.com/infini-human/
#SIGGRAPHAsia2025 #AI
Ruofan Liang retweeted

🕹️We are excited to introduce "ChronoEdit: Towards Temporal Reasoning for Image Editing and World Simulation"
ChronoEdit reframes image editing as a video generation task to encourage temporal consistency. It leverages a temporal reasoning stage that denoises with "video reasoning tokens" to reason about physically plausible edits.
See the attached video for results.
Project Page: research.nvidia.com/labs/toronto-a…
Arxiv: arxiv.org/abs/2510.04290
Code and model are coming soon.
Ruofan Liang retweeted

📢 SceneComp @ ICCV 2025 🏝️
🌎 Generative Scene Completion for Immersive Worlds
🛠️ Reconstruct what you know AND 🪄 Generate what you don’t!
🙌 Meet our speakers
@angelaqdai, @holynski_, @jampani_varun, @ZGojcic, @taiyasaki, Peter Kontschieder
scenecomp.github.io
#ICCV2025

This work was done during my internship at @NVIDIAAI, with other amazing collaborators @Kai__He @ZGojcic @igilitschenski @FidlerSanja @nanditav17† @zianwang97†
🌐 Project page: research.nvidia.com/labs/toronto-a…

LuxDiT, like our earlier works #DiffusionRenderer and #UniRelight, is another exploration into using generative models for inverse rendering, enabling high-quality lighting estimation from casually captured footage.

💡 Introducing LuxDiT: a diffusion transformer (DiT) that estimates realistic scene lighting from a single image or video.
It produces accurate HDR environment maps, addressing a long-standing challenge in computer vision.
🔗Paper: arxiv.org/abs/2509.03680

