Selena Ling 凌子涵

152 posts

@seleniumlzh

U of Toronto CS PhD at DGP | Prev. @AdobeResearch @NVIDIA : )

Toronto, Canada · Joined August 2012
925 Following · 1.1K Followers
Pinned Tweet
Selena Ling 凌子涵 @seleniumlzh ·
Our #Siggraph25 work found a simple, nearly one-line change that greatly eases neural field optimization for a wide variety of existing representations. “Stochastic Preconditioning for Neural Field Optimization” w/ @merlin_ND @_AlecJacobson @nmwsharp
[image]
8 replies · 45 reposts · 313 likes · 56.7K views
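The pinned tweet describes the method only as a "simple, nearly one-line change"; the paper itself is not part of this thread. As a hedged illustration of what stochastically preconditioning a field query could look like, here is a toy NumPy sketch: instead of evaluating a field at exact query points, evaluate at Gaussian-perturbed copies and average, annealing the noise scale toward zero over optimization. Everything here is an assumption for illustration: the analytic `field` stands in for a neural network, and the function names and schedule are hypothetical.

```python
import numpy as np

def field(points, freq=8.0):
    # Stand-in for a neural field: a simple analytic scalar field.
    return np.sin(freq * points).sum(axis=-1)

def stochastically_preconditioned_eval(points, sigma, n_samples=8, rng=None):
    """Evaluate the field at Gaussian-perturbed copies of each query point.

    Averaging over the perturbations approximates querying a Gaussian-blurred
    version of the field; annealing sigma -> 0 recovers the original field.
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(scale=sigma, size=(n_samples,) + points.shape)
    return field(points[None] + noise).mean(axis=0)

# Hypothetical annealing schedule: strong smoothing early, none at the end.
pts = np.linspace(-1.0, 1.0, 5)[:, None]  # 5 query points in 1D
for step, sigma in enumerate([0.5, 0.1, 0.0]):
    vals = stochastically_preconditioned_eval(pts, sigma)
    print(step, np.round(vals, 3))
```

With `sigma = 0` the perturbations vanish and the evaluation reduces to the plain field query, which is what makes the change so cheap to drop into an existing training loop.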
Selena Ling 凌子涵 reposted
Google DeepMind @GoogleDeepMind ·
Our short film Dear Upstairs Neighbors is previewing at @sundancefest. 🎬 It’s a story about noisy neighbors, but behind the scenes, it’s about solving a huge challenge in generative AI: control. Developed by Pixar alumni, an Academy Award winner, researchers, and engineers, here’s how it came together. 🎨
366 replies · 411 reposts · 3.4K likes · 2.2M views
Selena Ling 凌子涵 reposted
Jeff Dean @JeffDean ·
This is absolutely shameful. Agents of a federal agency unnecessarily escalating, and then executing a defenseless citizen whose offense appears to be using his cell phone camera. Every person regardless of political affiliation should be denouncing this.
Ryan Grim@ryangrim

Drop Site obtained harrowing footage of the latest killing which appears to be from the perspective of the woman in pink filming from the sidewalk

249 replies · 956 reposts · 8.4K likes · 976.6K views
Selena Ling 凌子涵 reposted
Blender 🔶 @Blender ·
Did Blender help you this year? Help back! If every active user contributed $5 this month, Blender would be funded for the entire year 2026. Professional 3D software. No subscriptions. No limits. Just your support. Do your part. Donate today. blender.org/news/give-back… #b3d
[image]
325 replies · 2.7K reposts · 9.5K likes · 1.8M views
Selena Ling 凌子涵 reposted
Sherwin Bahmani @sherwinbahmani ·
📢 Lyra: Generative 3D Scene Reconstruction via Video Diffusion Model Self-Distillation

Got only one or a few images and wondering whether recovering the 3D environment is a reconstruction or a generation problem? Why not do it with a generative reconstruction model! We show that a camera-conditioned video diffusion model can be transformed into a generative reconstruction model that directly outputs a high-quality 3D Gaussian Splatting representation through self-distillation, without requiring real-world training data. Check out our results in the video (wait for the dynamic scenes in the second half!):

Project Page: research.nvidia.com/labs/toronto-a…
Code and Models: github.com/nv-tlabs/lyra
Paper: arxiv.org/abs/2509.19296
20 replies · 68 reposts · 258 likes · 66K views
Selena Ling 凌子涵 reposted
Sid @sid_srk ·
Season 1 of Toronto School of Foundation Modelling kicks off this Thursday at New Stadium! 60 people will attend weekly sessions for 3 months, learning to build foundation models from scratch. Around 10 guest speakers (more to come) will be flying to Toronto to talk about what they do best.

I'm grateful for the support of Cohere, New, Vengeance, and all the donors. I will make this a series worth your time. And apologies to the people I didn't get back to: spots are all taken, but I will reach out if that changes.
New@newsystems_

Continuing to press forward with the range and depth of learning opportunities. This week we have several workshops, meet-ups, a deep-dive seminar, the beginning of a new lecture series, as well as an exhibit happening at New Stadium. Links below.

4 replies · 13 reposts · 55 likes · 7.2K views
Selena Ling 凌子涵 reposted
Keenan Crane @keenanisalive ·
“Everyone knows” what an autoencoder is… but there's an important complementary picture missing from most introductory material. In short: we emphasize how autoencoders are implemented—but not always what they represent (and some of the implications of that representation).🧵
[image]
46 replies · 427 reposts · 3.1K likes · 555.7K views
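The rest of Keenan's thread is not captured in this scrape, but the "what autoencoders represent" view can be illustrated in its simplest case: a linear autoencoder, fit in closed form via SVD. The decoder's image is a k-dimensional manifold (here, a plane) in data space, and the encoder maps each point to coordinates on it. This is a generic textbook sketch, not content from the thread; all names are made up.

```python
import numpy as np

# Data roughly on a flat ellipsoid: large spread in two directions, tiny in the third.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.1])
Xc = X - X.mean(axis=0)

k = 2
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
D = Vt[:k].T                  # (3, k): orthonormal basis of the learned plane
encode = lambda x: x @ D      # encoder: coordinates on the plane
decode = lambda z: z @ D.T    # decoder: embed latent coords back in data space

Z = encode(Xc)
X_hat = decode(Z)             # orthogonal projection of the data onto the plane
err = np.linalg.norm(Xc - X_hat) / np.linalg.norm(Xc)
print(f"relative reconstruction error with k={k}: {err:.3f}")
```

The "representation" reading: the decoder parameterizes a low-dimensional surface, and points already on that surface reconstruct exactly. Nonlinear autoencoders generalize the picture from a plane to a curved manifold.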
Selena Ling 凌子涵 reposted
Lily Goli @lily_goli ·
Check out our new paper on robust motion segmentation! Wanna run your SfM pipeline on dynamic scenes? Consider using our RoMo masks to get improvements!! 🚀
Andrea Tagliasacchi 🇨🇦@taiyasaki

📢📢📢 RoMo: Robust Motion Segmentation Improves Structure from Motion romosfm.github.io arxiv.org/pdf/2411.18650 TL;DR: boost your SfM pipeline on dynamic scenes. We use epipolar cues + SAMv2 features to find robust masks for moving objects in a zero-shot manner. 🧵👇

3 replies · 3 reposts · 45 likes · 8.2K views
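The quoted RoMo summary mentions "epipolar cues" for finding moving objects. Here is a hedged toy sketch of just that cue, not the actual RoMo pipeline (which also uses SAMv2 features and zero-shot robust estimation): correspondences on static geometry satisfy the epipolar constraint for the camera motion, so points with high Sampson distance with respect to the fundamental matrix are candidates for independent motion. The fundamental matrix `F` is assumed known here (in practice it would be estimated robustly), and the threshold is hypothetical.

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """First-order geometric error of correspondences (x1 <-> x2) w.r.t. F.

    x1, x2: (N, 2) pixel coordinates. Static points score near zero; points
    moving independently of the camera violate the epipolar constraint.
    """
    h1 = np.hstack([x1, np.ones((len(x1), 1))])  # homogeneous coordinates
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    Fx1 = h1 @ F.T            # epipolar lines in image 2
    Ftx2 = h2 @ F             # epipolar lines in image 1
    num = np.einsum("ij,ij->i", h2, Fx1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / den

# Toy setup: pure horizontal camera translation, so F is the skew matrix of
# the epipole e = (1, 0, 0) and static points keep their y-coordinate.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
x1 = np.array([[100.0, 50.0], [200.0, 80.0], [150.0, 120.0]])
x2 = x1 + np.array([[5.0, 0.0], [7.0, 0.0], [4.0, 30.0]])  # third point drifts vertically

err = sampson_distance(F, x1, x2)
motion_mask = err > 1.0   # hypothetical threshold, in squared pixels
print(motion_mask)        # only the third correspondence is flagged as moving
```

The first two correspondences slide along their (horizontal) epipolar lines and score zero; the third moves off its line and is flagged.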
Selena Ling 凌子涵 @seleniumlzh ·
We show many more experiments across different implicit surface representations in our paper. Please check out our #SGP25 paper here arxiv.org/pdf/2506.05268 and reach out if you have any questions! Code coming soon! (9/9)
0 replies · 0 reposts · 6 likes · 577 views
Selena Ling 凌子涵 @seleniumlzh ·
With uniformly sampled points, one can also easily perform importance sampling using curvature or other quantities like losses, and construct geometry-aware regularization terms to improve neural implicit optimization. (8/9)
[image]
1 reply · 0 reposts · 5 likes · 627 views
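As a hedged sketch of the importance-sampling idea in this tweet (the function name and the particular weighting are mine, not from the paper): start from uniformly drawn points, then resample them with probability proportional to a per-point quantity such as a loss or curvature estimate, concentrating samples where the field needs the most work.

```python
import numpy as np

def importance_resample(points, weights, n, rng=None):
    """Resample n of the uniformly drawn points with probability
    proportional to weights (e.g. per-point loss or curvature)."""
    rng = rng or np.random.default_rng(0)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    idx = rng.choice(len(points), size=n, replace=True, p=p)
    return points[idx]

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(10_000, 3))  # uniform samples in a cube
# Hypothetical per-point loss, concentrated near the x = 0 plane:
loss = np.exp(-8.0 * np.abs(pts[:, 0]))
focused = importance_resample(pts, loss, n=2_000, rng=rng)
print(np.abs(focused[:, 0]).mean(), np.abs(pts[:, 0]).mean())
```

The resampled set clusters near the high-loss region (mean |x| drops well below the uniform value of about 0.5), which is the behavior the tweet describes for steering optimization effort.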