Felix Heide retweeted

Felix Heide
@_FelixHeide_
Princeton Computational Imaging Lab: https://t.co/n8gRRpdvr4 Head of AI at Torc Robotics: https://t.co/7RonQDi1MJ
Joined August 2020
59 Following · 1.7K Followers

Excited to host ICCP 2026 in Princeton this summer!
#ICCP2026 @ICCP_conference
ICCP 2026 is coming to @Princeton, July 13-15! Paper submissions are open, deadline April 10. Accepted papers published in ICCP Proceedings or IEEE PAMI Special Issue. Take a sneak peek at already confirmed speakers on our website: iccp2026.iccp-conference.org! #ICCP2026

Are we really done with autonomous driving 🚚? Remember the massive winter storm in the US last week❄️!
We’re excited to share a large adverse weather driving dataset which includes small, distant road hazards, pushing perception beyond clear-weather and in-domain assumptions!
light.princeton.edu/datasets/autom…
The data spans multiple imaging modalities, including LiDAR, RGB, gated imaging, stereo, polarization, and depth, collected across diverse weather, lighting, and range conditions, including rare adverse events like heavy rain (~5×/year) and dense fog (~12×/year in North America & Europe) that are typically underrepresented in standard driving benchmarks.
What’s included:
• Seeing Through Fog – labeled adverse weather dataset captured in over 10,000km of driving.
• Gated2Depth / Gated2Gated – gated imaging for dense depth estimation.
• Pixel-Accurate Depth Benchmark – ultra-high-resolution depth ground truth.
• Long-Range Stereo (Gated Stereo) – large-scale sequential dataset with LiDAR and stereo (RGB, RCCB, gated).
• Fogchamber Benchmark – long-range fog/rain depth benchmark.
• Too Tiny To See – lost cargo benchmark with captures on snowy Lapland roads.
• ScatterNeRF – scene reconstruction under atmospheric scattering.
• Polarization Wavefront LiDAR – polarimetric LiDAR data.
Exciting work coming out of a collaboration in the AI-SEE Project with @torc_robotics, @MercedesBenz, and @Princeton.

@AgroXCloud @torc_robotics Yes, we also generate camera, lidar, and radar data under heavy weather and occlusions. Come to our demo ;)

@_FelixHeide_ @torc_robotics Excellent demo! I'd like to know how you simulate camera/LiDAR/radar data under extreme conditions. Do you have any further information or resources?

Physically-grounded video generation at #CES2026 without hallucinations! This week, we demo our end-to-end neural rendering developed at @torc_robotics, which allows us to simulate camera/lidar/radar data for edge cases, such as crash scenarios, road debris, or unforeseen crash sites, without hallucinations!
We show our stack together with our friends from @ForetellixHQ for end-to-end testing of driving models with reconstructed, generated, or conventional mesh assets.
Join us at the Foretellix Booth at LVCC West Hall Booth 4767 from Tuesday to Friday in Las Vegas to see the demo!

Starting the new year without human labeling 🎉!! Multimodal lidar-camera data is a gold mine of dense 3D geometry hiding in plain sight. For supervised pretraining and validation at scale at @torc_robotics, we rely on fully automated pseudo-labeling pipelines. Exploiting geometric priors from temporally accumulated LiDAR maps, an iterative update rule enforces joint geometric–semantic consistency and detects moving objects via their inconsistencies with the static map.
We achieve 3D semantic labels and 3D bounding boxes with human-like quality at the 200 m+ range required for highway driving.
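The iterative update described above, in a heavily simplified form: a minimal numpy sketch (not the actual UniLiPS pipeline; `pseudo_label`, the distance threshold, and the static/moving rule are hypothetical stand-ins) that accumulates past LiDAR frames into a static map and flags points geometrically inconsistent with it as moving.

```python
import numpy as np

def pseudo_label(frames, n_iters=3, motion_thresh=0.5):
    """Label the last frame's points as static/moving (toy version).

    frames: list of (N_i, 3) LiDAR point clouds in a shared world frame.
    """
    accumulated = np.concatenate(frames[:-1], axis=0)  # temporal LiDAR map
    current = frames[-1]
    labels = np.full(len(current), "static", dtype=object)
    for _ in range(n_iters):
        # Geometric consistency: distance to the nearest accumulated point.
        d = np.linalg.norm(
            current[:, None, :] - accumulated[None, :, :], axis=-1
        ).min(axis=1)
        moving = d > motion_thresh  # inconsistent with the static map
        labels[moving] = "moving"
        # Update rule: grow the static map with confirmed-static points.
        accumulated = np.concatenate([accumulated, current[~moving]], axis=0)
    return labels

# Two static frames plus a current frame with one displaced (moving) point.
frames = [np.array([[0., 0., 0.], [1., 0., 0.]]),
          np.array([[0., 0., 0.], [1., 0., 0.]]),
          np.array([[0., 0., 0.], [5., 0., 0.]])]
labels = pseudo_label(frames)
```

A real pipeline would use a KD-tree instead of the quadratic distance matrix and couple this geometric step with semantic predictions; the sketch only shows the consistency loop.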
Paper: light.princeton.edu/unilips/
Exciting work with @torc_robotics and Filippo Ghilotti, Samuel Brucker, Nahku Saidy, Matteo Matteucci, and Mario Bijelic.
Felix Heide retweeted

Felix Heide's goal was big: help computers see.
He did not know he was starting on a path to develop a new way of thinking about optics.
“The question for me was always how can we use algorithms to sense and understand the world?” said @_FelixHeide_.
bit.ly/4j035NQ


Excited to share our #NeurIPS2025 work on learning motion hierarchies! We introduce a general hierarchical graph learning method that learns structured, interpretable motion directly from data, with no prior structure or assumptions needed!
Project and Paper: light.princeton.edu/HEIR
Amazing work led by William Koch, @ChengZh1005, and @MotaLee5! See us in San Diego at #NeurIPS2025!

@Meta Here’s the key idea:
❌ Smooth Phase = Brittle. High image quality, but cannot adapt to obstructions not seen during optimization.
✅ Random Phase = Robust. Creates more speckle, but the light inherently diffracts around obstructions, self-healing the image.
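The smooth-vs-random intuition can be checked with a toy simulation (this is not the Meta/Princeton method; grid size, wavelength, pixel pitch, and the stripe "eyelash" are made-up parameters): angular-spectrum propagation of a flat-phase and a random-phase aperture past an opaque stripe. The random-phase field fills the shadow back in with speckle.

```python
import numpy as np

def propagate(field, dist, wavelength=633e-9, dx=8e-6):
    """Angular-spectrum propagation of a complex field on a square grid."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Evanescent components are clamped (none occur for these parameters).
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

rng = np.random.default_rng(0)
n = 128
smooth = np.ones((n, n), dtype=complex)                # flat phase
random_ = np.exp(1j * 2 * np.pi * rng.random((n, n)))  # random phase

# An "eyelash": an opaque stripe blocking part of the aperture.
mask = np.ones((n, n))
mask[60:68, :] = 0.0

img_smooth = np.abs(propagate(smooth * mask, 5e-3)) ** 2
img_random = np.abs(propagate(random_ * mask, 5e-3)) ** 2
# In the shadow rows, the random-phase field "self-heals": light diffracted
# from neighboring rows fills in, while the flat-phase field keeps a dark
# stripe (at the cost of speckle everywhere).
```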

Holography with Eyelashes in the Way!! Future holographic glasses struggle with eyelashes that can cast image-destroying shadows. In a collaboration with @Meta, we train a model to generate "eyelash-proof" holograms at 80 FPS to fix this.
Project: light.princeton.edu/artifact-resil…
Collaboration with @bkv2chu, @opueyociutad, @_EthanTseng_, @FlorianSchiffe4, Grace Kuo, @nathanmatsuda, Albert Redo-Sanchez, @douglaslanman, and Oliver Cossairt.
Felix Heide retweeted

Imagine AR as immersive as Vision Pro and as light as Meta Ray Bans.
That's the promise of holographic displays.
The flaw? Eyelashes can cast image-destroying shadows.
In our #SIGGRAPHAsia2025 paper, we train an AI to generate "eyelash-proof" holograms at 80 FPS to fix this.

Excited to present editable Neural Atlas Graphs at #NeurIPS 2025 (Spotlight)! We introduce a learned atlas representation in which each dynamic object is a 2D planar layer (the atlas). All time-dependent appearance and fine motion are captured directly within this 2D layer using a learned planar flow field and a view-dependent field. Neural Atlas Graphs allow for texture-editable neural representations at high resolution!
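The core lookup behind such an atlas layer can be sketched in a few lines (this is not the NAG implementation; `sample_atlas`, `render_layer`, nearest-neighbor sampling, and the tiny atlas are hypothetical simplifications): appearance at time t is the static 2D atlas sampled at coordinates displaced by a per-time flow field.

```python
import numpy as np

def sample_atlas(atlas, uv):
    """Nearest-neighbor lookup into a 2D texture atlas; uv in [0, 1)^2."""
    h, w = atlas.shape[:2]
    y = np.clip((uv[..., 1] * h).astype(int), 0, h - 1)
    x = np.clip((uv[..., 0] * w).astype(int), 0, w - 1)
    return atlas[y, x]

def render_layer(atlas, uv, flow_t):
    """Appearance at time t: the static atlas sampled at coordinates
    displaced by the (learned, per-time) planar flow field."""
    return sample_atlas(atlas, uv + flow_t)

# A 2x2 RGB atlas; a flow of +0.5 in u changes which texel a point sees,
# so fine motion lives entirely inside the 2D layer.
atlas = np.array([[[1., 0., 0.], [0., 1., 0.]],
                  [[0., 0., 1.], [1., 1., 1.]]])
uv = np.array([[0.1, 0.1]])
base = render_layer(atlas, uv, np.zeros_like(uv))
moved = render_layer(atlas, uv, np.array([[0.5, 0.0]]))
```

Texture editability follows directly from this design: painting into `atlas` changes the object's appearance at every time step, since all frames sample the same 2D layer.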
Project page: …ceton-computational-imaging.github.io/nag/
Amazing project led by @jaypschneider, with Pratik Bisht, @_ilya_c, Andreas Kolb, and Michael Moeller, together with Universität Siegen, @Princeton, the Lamarr Institute, and @torc_robotics.
Felix Heide retweeted

At the event, @_FelixHeide_, an expert in imaging and computer vision, won the Dean for Research Distinguished Innovation Award, which recognizes a faculty member and their team.
Read the story: bit.ly/4hxZNAt

@peter_godman Exactly, the lights are learned as baked-in radiance for the nighttime prompts!

@_FelixHeide_ Love it.
I notice that the parked cars have their lights on.
So it knows about parked cars, and cars having lights on at sunset, and then just connects the dots? Or is it something else?


Large-scale 3D Scene Generation (all scenes are real-time rendered)!!
Physically-grounded generative data without hallucinations is the missing link for robot learning and testing at scale. We introduce a method that directly generates large-scale 3D driving scenes with accurate geometry, allowing for causal view synthesis and generation with object permanence and explicit 3D geometry. This also allows for extreme trajectory extrapolation without failure! We also show that we can build fully data-driven simulators for end-to-end learning with this approach.
Project: light.princeton.edu/lsd-3d/
with the amazing team of Julian Ost, @amogh7joshi, Andrea Ramazzina, Maximilian Bömer, and Mario Bijelic.

Learning Transient Lidar Sensing! Transient imaging has been around for a while, but we finally made it work for lidar sensing (#ICCV2025 highlight)! We find that a transformer-based DSP allows us to learn directly from the spatio-temporal histograms of a SPAD array! The method can finally see in fog, rain, low-reflectance, and tiny lost-cargo scenarios!
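For intuition about what such a DSP operates on: the sketch below (not the paper's architecture; `soft_argmax_depth`, the bin width, and the temperature are hypothetical) shows only a differentiable peak-extraction readout over a synthetic SPAD histogram with fog-like background counts, the kind of output head a learned model can end with.

```python
import numpy as np

def soft_argmax_depth(hist, bin_width_m=0.15, temperature=5.0):
    """Differentiable soft peak extraction over a SPAD time histogram:
    a softmax-weighted average of the depth bin centers."""
    scores = hist / temperature
    w = np.exp(scores - scores.max())
    w /= w.sum()
    bin_centers = (np.arange(len(hist)) + 0.5) * bin_width_m
    return float((w * bin_centers).sum())

# Synthetic histogram: fog/ambient backscatter plus one target return.
rng = np.random.default_rng(1)
hist = rng.poisson(2.0, size=128).astype(float)  # background photons
hist[40] += 60.0                                 # object return near 6 m
depth = soft_argmax_depth(hist)
```

A learned transformer would additionally attend across neighboring pixels and time bins to suppress scattering returns before this readout; the sketch covers only the histogram-to-depth step.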
Project and Paper: light.princeton.edu/lidar-transfor…
Wonderful collaboration on next-gen software-defined lidars with Dominik Scheuble, Hanno Holzhüter, Steven Peters, Mario Bijelic.

3D Object Tracking without Training Data? In our @Nature Machine Intelligence paper (nature.com/articles/s4225…), we recast 3D tracking as an inverse neural rendering task: we fit a scene graph whose rendering best explains the observed image. The method generalizes to completely unseen datasets and is explainable.
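The analysis-by-synthesis idea can be illustrated with a toy 2D version (not the paper's method: a single square stands in for the scene graph, and greedy coordinate search replaces gradient-based optimization of a differentiable renderer; all names are hypothetical): fit the object pose by minimizing the photometric error between the rendered and observed image.

```python
import numpy as np

SIZE, H, W = 8, 64, 64

def render(pos):
    """Toy 'renderer': a white square at integer position (row, col)."""
    img = np.zeros((H, W))
    y, x = pos
    img[y:y + SIZE, x:x + SIZE] = 1.0
    return img

def fit_pose(target, init, n_steps=200):
    """Analysis by synthesis: greedily move the square so its rendering
    best explains the observed image (photometric L2 error)."""
    pos = list(init)
    best = ((render(pos) - target) ** 2).sum()
    for _ in range(n_steps):
        improved = False
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            cand = [pos[0] + dy, pos[1] + dx]
            if not (0 <= cand[0] <= H - SIZE and 0 <= cand[1] <= W - SIZE):
                continue
            err = ((render(cand) - target) ** 2).sum()
            if err < best:
                best, pos, improved = err, cand, True
        if not improved:
            break
    return tuple(pos), best

target = render((30, 20))                    # "observed" image
pose, err = fit_pose(target, init=(26, 16))  # init must overlap the target
```

Because tracking only requires the fitted pose to explain the image, no labeled training data enters the loop, which is the property the post highlights; the fitted parameters are also directly inspectable, which is where the explainability comes from.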
Project and Code: light.princeton.edu/publication/in…
Fun collaboration between @PrincetonCS and Torc Robotics, with Julian Ost and Tanushree Banerjee leading this project.
