Wesley Chang

17 posts

Wesley Chang

@_WesChang

PhD student @ucsd_cse. Computer graphics, rendering, and differentiable rendering.

San Diego, CA · Joined July 2022
148 Following · 181 Followers
Pinned Tweet
Wesley Chang@_WesChang·
In our #SIGGRAPH2025 work, we enable artist-level editing of hair reconstructed from video for the first time! We convert 3D strands into a procedural model controllable via guide strands and intuitive operators like curl and bend. 1/4 weschang.com/publications/i… (Code available!)
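The guide-strand idea can be sketched in a few lines: dense strands are blended from a small set of guide strands, and operators like "curl" act on control points. Everything below is an illustrative sketch under assumed conventions (function names, the blending scheme, and the helix-based curl parameterization are hypothetical, not from the paper):

```python
import numpy as np

def interpolate_strand(guides, weights):
    """Blend nearby guide strands: here, a dense strand is assumed to be a
    convex combination of guide-strand control points.
    guides: (k, n, 3) control points of k guide strands; weights: (k,)."""
    return np.tensordot(weights, guides, axes=1)  # -> (n, 3)

def curl(strand, amplitude=0.1, frequency=8.0):
    """Toy 'curl' operator: offset each control point along a helix around
    the strand's root-to-tip axis, with the offset growing toward the tip
    so the root stays fixed."""
    n = len(strand)
    t = np.linspace(0.0, 1.0, n)
    axis = strand[-1] - strand[0]
    axis = axis / np.linalg.norm(axis)
    # Build two unit vectors orthogonal to the axis to span the helix plane.
    u = np.cross(axis, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    phase = 2.0 * np.pi * frequency * t
    offset = amplitude * t[:, None] * (np.cos(phase)[:, None] * u
                                       + np.sin(phase)[:, None] * v)
    return strand + offset
```

Because the curl amplitude scales with arc-length parameter `t`, the strand root is left untouched, which keeps the edited strand attached to the scalp.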
Wesley Chang retweeted
Kehan Xu@kehan0730·
I’m excited to announce our #SIGGRAPHAsia2025 paper on making the rendering of Gaussian process implicit surfaces (GPIS) practical: cs.dartmouth.edu/~wjarosz/publi…. We achieve this with a novel procedural noise formulation and by enabling next-event estimation for specular BRDFs. [1/7]
Wesley Chang retweeted
Peter Yichen Chen@peterchencyc·
🌍 We are hiring! Would you please help us spread the word? We welcome talents from every corner of the world — postdocs, PhD students, and paid, full-time research interns — to join us in exploring AI and physics simulation. #3D #Robotics #Graphics #Hiring #UBC
Wesley Chang retweeted
Bing Xu@_bingxu·
🌲Introducing our paper A generalizable light transport 3D embedding for Global Illumination arxiv.org/pdf/2510.18189. Just as Transformers learn long-range relationships between words or pixels, our new paper shows they can also learn how light interacts and bounces around a 3D scene. Building on the analogy of light transport operator and Attention, our fully scalable model learns a generalizable light transport 3D embedding that captures global illumination. It takes as input the 3D scene assets—geometry, materials, and lighting represented as a point cloud—and encodes the complex light interaction into a 3D latent space, with codes anchored at the sampled scene points. Remarkably, the model is independent of both viewpoint and resolution, enabling view-consistent rendering across diverse scenes. (See teaser and attention figure.) (Our team work: with my mentor Marco @marcosalvi, labmates Mukund @mukundvermar, Cheng, NV folks Bart @BartWronsk, Lifan @winmad4869, and Tzu-Mao @tzumaoli & Ravi.) 🧵
Wesley Chang retweeted
Nithin Raghavan@nithin_raghavan·
If you’re at SIGGRAPH 2025 in Vancouver, join us Thu 2 PM for our talk “Generative Neural Materials”! We introduce a universal neural material model for bidirectional texture functions and a complementary generative pipeline. 1/2
Wesley Chang@_WesChang·
I'll be presenting this at the conference on Wed Aug 13 in the Avatars session at 10:45am in West Building, Rooms 220-222. See you there! 4/4
Wesley Chang@_WesChang·
Thanks to my amazing collaborators at UCSD and Meta Reality Labs: Andrew Russell, Stephane Grabli, Matt Chiang, Christophe Hery, Doug Roble, Ravi Ramamoorthi, @tzumaoli, and Olivier Maury. 3/4
Wesley Chang retweeted
Tzu-Mao Li@tzumaoli·
weschang.com/publications/s… Check out Wesley, Xuanda, and Yash's latest work on combining cross bilateral filtering and Adam to make optimization in graphics more robust and fast, through better preconditioning the gradient.
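The preconditioning idea, as described in the tweet, can be sketched in one dimension: smooth the raw gradient with a cross bilateral filter, whose weights combine spatial closeness with similarity in a guide signal, before handing the result to Adam. This is an illustrative sketch only, not the paper's method:

```python
import numpy as np

def cross_bilateral_filter(grad, guide, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Edge-aware smoothing of a 1-D gradient field. Each output value is a
    normalized weighted average of nearby gradients, where the weight is a
    spatial Gaussian times a range Gaussian on the guide signal, so the
    filter does not blur across discontinuities in the guide."""
    n = len(grad)
    out = np.empty_like(grad)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        d = np.arange(lo, hi) - i
        w = (np.exp(-0.5 * (d / sigma_s) ** 2)
             * np.exp(-0.5 * ((guide[lo:hi] - guide[i]) / sigma_r) ** 2))
        out[i] = np.sum(w * grad[lo:hi]) / np.sum(w)
    return out
```

The filtered gradient would then replace the raw gradient inside an otherwise standard Adam update, averaging away noise within smooth regions while preserving sharp features indicated by the guide.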
Wesley Chang retweeted
Bing Xu@_bingxu·
What a pleasant big surprise. Happy to share a $1M secret to winning a best paper award: have Iliyan present the paper. @EGSympRendering @iliyang. Congrats to @tzumaoli, Trevor and Ravi.
Wesley Chang retweeted
Ryusuke Sugimoto@RyusukeSugimoto·
Better than never! I finally recorded and uploaded my short presentation of our SIGGRAPH "2023" paper, A Practical Walk-on-Boundary Method for Boundary Value Problems, on YouTube. youtu.be/5SDhXJ3Sjmo?si…
Wesley Chang retweeted
Hang "Hesper" Yin@hyin2147483647·
Check out our #SIGGRAPH2023 paper “Fluid Cohomology” 🌊🐇🍩 We show that, despite its wide use, the current formulation of vorticity-streamfunction is insufficient for simulating fluids on non-simply-connected domains. [1/6] youtu.be/eY8RUi5mrhc
Wesley Chang retweeted
Kai-En Lin@kaien_lin·
The code for our EGSR 2023 paper, Personalized Video Prior for Editable Dynamic Portraits using StyleGAN, has been released! It allows you to create a personalized dynamic portrait from a single video! Project page: cseweb.ucsd.edu//~viscomp/proj… Github: github.com/ken2576/pvp
Wesley Chang@_WesChang·
Thanks to my amazing advisors and collaborators: @tzumaoli, Ravi Ramamoorthi, Toshiya Hachisuka, @DerekRenderling, and Venkataram Sivaram! 3/3
Wesley Chang@_WesChang·
This allows us to more efficiently recover texture parameters of the complex Disney BRDF. We also propose an extension to resampled importance sampling, which allows it to sample arbitrary real-valued functions, broadening its applicability outside of rendering. 2/3
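The generalization mentioned above can be sketched with textbook resampled importance sampling (RIS): draw candidates from an easy proposal, weight them by integrand over proposal density, and resample one candidate. To handle signed, real-valued integrands, resample in proportion to the absolute weight. This is a minimal illustration under assumed names and interfaces, not the paper's implementation:

```python
import random

def resampled_importance_sampling(f, sample_p, pdf_p, m=32):
    """One RIS round: draw m candidates from proposal p, weight each by
    f(x)/p(x), and resample one candidate with probability proportional
    to |weight|, so signed real-valued f is handled. Returns the chosen
    sample and the unbiased estimate (1/m) * sum of weights of the
    integral of f."""
    xs = [sample_p() for _ in range(m)]
    ws = [f(x) / pdf_p(x) for x in xs]
    total_abs = sum(abs(w) for w in ws)
    if total_abs == 0.0:
        return xs[0], 0.0
    # Resample one candidate with probability proportional to |w_i|.
    r = random.uniform(0.0, total_abs)
    acc = 0.0
    chosen = xs[-1]
    for x, w in zip(xs, ws):
        acc += abs(w)
        if r <= acc:
            chosen = x
            break
    estimate = sum(ws) / m
    return chosen, estimate
```

For example, with f(x) = 2x on [0, 1] and a uniform proposal, repeated calls yield samples concentrated near 1 and estimates that average to the true integral, 1.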
Wesley Chang@_WesChang·
Excited to share our #SIGGRAPH2023 work on accelerating inverse rendering with ReSTIR. Since we render a sequence of frames during optimization, we can reuse samples from previous frames, just like in real-time rendering. 1/3 weschang.com/publications/r…
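The frame-to-frame reuse described here can be sketched with the single-sample weighted reservoir that ReSTIR-style methods build on. This is a toy sketch of the mechanism only; the class and function names are illustrative, and real ReSTIR additionally re-evaluates weights under the new frame's integrand:

```python
import random

class Reservoir:
    """Single-sample weighted reservoir: stream candidates through
    `update`; the kept sample ends up distributed in proportion to its
    weight among everything seen so far."""
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0
        self.count = 0

    def update(self, sample, weight):
        self.w_sum += weight
        self.count += 1
        if self.w_sum > 0 and random.random() < weight / self.w_sum:
            self.sample = sample

def temporal_reuse(prev, current):
    """Merge last frame's reservoir into this frame's by feeding the
    previous sample back in with its accumulated weight, so good samples
    from earlier optimization frames can survive (a simplification for
    illustration)."""
    merged = Reservoir()
    merged.update(current.sample, current.w_sum)
    if prev.sample is not None:
        merged.update(prev.sample, prev.w_sum)
    merged.count = current.count + prev.count
    return merged
```

In an inverse-rendering loop, each optimization iteration renders a frame, so carrying reservoirs across iterations amortizes sampling work much like temporal reuse in real-time rendering.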