Junxuan Li
23 posts

Junxuan Li
@JunxuanL
Research Scientist in Meta Reality Labs.
Pittsburgh, PA, US · Joined August 2016
226 Following · 262 Followers

LCA is accepted at CVPR 2026! 🚀
We introduce a pre/post-training paradigm for 3D avatars (1M in-the-wild videos ➡️ studio data).
The result? High-fidelity full-body avatars with emergent relightability and zero-shot stylization.
Project: junxuan-li.github.io/lca/
#CVPR2026

Joint work with an incredible team at the Codec Avatars Lab, Meta! @rawal_khirodkar, @psyth91, et al
Junxuan Li reposted

✨ Excited to share that I’ll be giving an 🎤 oral presentation of our work "𝐇𝐚𝐢𝐫𝐂𝐔𝐏: 𝐇𝐚𝐢𝐫 𝐂𝐨𝐦𝐩𝐨𝐬𝐢𝐭𝐢𝐨𝐧𝐚𝐥 𝐔𝐧𝐢𝐯𝐞𝐫𝐬𝐚𝐥 𝐏𝐫𝐢𝐨𝐫 𝐟𝐨𝐫 𝟑𝐃 𝐆𝐚𝐮𝐬𝐬𝐢𝐚𝐧 𝐀𝐯𝐚𝐭𝐚𝐫𝐬" at #ICCV2025
🎤 Oral 3B @ Kalakaua Ballroom
🗓️ Wed, Oct 22 | 8:30–8:45 a.m.
Junxuan Li reposted

💡Check out our #SIGGRAPHASIA 2024 paper, URAvatar: Universal Relightable Gaussian Codec Avatars. Now high-quality relightable avatars can be created by 𝐞𝐯𝐞𝐫𝐲𝐛𝐨𝐝𝐲!
We learn a universal relightable prior and show how to use it for quick adaptation from a phone scan.

Try our interactive viewer! Visit our website now to play with Relightable Gaussian Codec Avatars!
Shunsuke Saito@psyth91
📢 Check out 𝗥𝗲𝗹𝗶𝗴𝗵𝘁𝗮𝗯𝗹𝗲 𝗚𝗮𝘂𝘀𝘀𝗶𝗮𝗻 𝗖𝗼𝗱𝗲𝗰 𝗔𝘃𝗮𝘁𝗮𝗿𝘀! Our latest codec avatars using 3D Gaussians generalize to novel lighting (OLAT, envmap) with *all-frequency* reflection (see video for hair and eye reflection) in real-time! shunsukesaito.github.io/rgca/
Junxuan Li reposted

Here's my conversation with Mark Zuckerberg, his 3rd time on the podcast, but this time we talked in the Metaverse as photorealistic avatars. This was one of the most incredible experiences of my life. It really felt like we were talking in-person, but we were miles apart 🤯 It's hard to put into words how awesome this was for someone like me who values the intimacy of in-person conversation. It gave me a glimpse of an exciting future with many new possibilities and fascinating questions about the nature of reality and human connection ❤
Timestamps:
0:00 - Introduction
0:52 - Metaverse
15:27 - Quest 3
30:16 - Nature of reality
34:54 - AI in the Metaverse
51:51 - Large language models
57:49 - Future of humanity
Junxuan Li reposted

📢 Come to see our #CVPR23 poster (#40) of MEGANE this afternoon if you are interested in interaction-aware generative modeling, relighting, and digital humans!!
Shunsuke Saito@psyth91
🚨 Excited to introduce our #CVPR2023 paper, MEGANE🤓!! MEGANE is an interaction-aware compositional 3D morphable eyeglasses and head model, supporting photorealistic rendering & relighting. (1/7) 👉Project: junxuan-li.github.io/megane/
Junxuan Li reposted

🚨 Excited to introduce our #CVPR2023 paper, MEGANE🤓!!
MEGANE is an interaction-aware compositional 3D morphable eyeglasses and head model, supporting photorealistic rendering & relighting. (1/7)
👉Project: junxuan-li.github.io/megane/

I'm thrilled to announce that two of my papers have been accepted at #CVPR23! Huge thanks to my amazing intern manager @psyth91 and collaborators for their hard work and dedication. See project website for more details: junxuan-li.github.io/megane and stay tuned for more information!
Junxuan Li reposted

Happy to announce DreamFusion, our new method for Text-to-3D!
dreamfusion3d.github.io
We optimize a NeRF from scratch using a pretrained text-to-image diffusion model. No 3D data needed!
Joint work w/ the incredible team of @BenMildenhall @ajayj_ @jon_barron
#dreamfusion
Junxuan Li reposted

Neural fields are emerging as useful signal representations in computer vision & beyond. Our full-day introductory @CVPR tutorial on the topic is now public.
Video: youtu.be/PeRRp1cFuH4
Slides: drive.google.com/drive/folders/…
Web: neuralfields.cs.brown.edu/cvpr22

Junxuan Li reposted

Multiface: A Dataset for Neural Face Rendering
abs: arxiv.org/abs/2207.11243
github: github.com/facebookresear…
We present Multiface, a new multi-view, high-resolution human face dataset collected from 13 identities at Reality Labs Research for neural face rendering.

Our #ECCV2022 camera-ready is available now: arxiv.org/abs/2207.07815.
In this work, we jointly estimate the light sources directions, light intensities, object surface shape, and reflectance by neural inverse rendering. Code and Project page: github.com/junxuan-li/SCP…
