

David Nordström

@davnords
PhD student @ Chalmers. Computer Vision and Deep Learning. Code: https://t.co/R54G3HJNlo

@anand_bhattad I'd rephrase it to "We _think_ we know what the algorithm should be doing." because, if we fully knew what it should be doing, we wouldn't need ML. I love this interpretability work but it runs the risk of seducing people into imposing classical methods onto learned models.

This simple PyTorch trick will cut your GPU memory use in half / double your batch size (for real). Instead of summing losses and then calling backward once, call backward on each loss as you go (which frees that loss's computational graph). Gradients accumulate in `.grad`, so the results are exactly identical.
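A minimal sketch of the trick (toy model and losses are my own, not from the post; one caveat worth noting: each loss needs its own forward pass, since the first backward frees any shared graph):

```python
# Sketch: summed backward vs. per-loss backward give identical gradients.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 1)
x, y = torch.randn(8, 10), torch.randn(8, 1)

# Memory-hungry: every loss's graph stays alive until the single backward call.
model.zero_grad()
loss = sum(((model(xc) - yc) ** 2).mean() for xc, yc in zip(x.chunk(2), y.chunk(2)))
loss.backward()
grads_summed = [p.grad.clone() for p in model.parameters()]

# Memory-friendly: backward per loss frees each graph immediately;
# gradients simply accumulate in .grad, so the result is identical.
model.zero_grad()
for xc, yc in zip(x.chunk(2), y.chunk(2)):
    ((model(xc) - yc) ** 2).mean().backward()
grads_accum = [p.grad.clone() for p in model.parameters()]

print(all(torch.allclose(a, b) for a, b in zip(grads_summed, grads_accum)))  # True
```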

We're releasing DiffusionHarmonizer, an online diffusion enhancer bridging neural reconstruction and photorealistic simulation by correcting artifacts and harmonizing inserted objects so they truly belong in the scene: matching shadows, lighting & color. research.nvidia.com/labs/sil/proje…


A SINGLE encoder + decoder for all 4D tasks! We release 🎯 D4RT (Dynamic 4D Reconstruction and Tracking). 📍 A simple, unified interface for 3D tracking, depth, and pose 🌟 SOTA results on 4D reconstruction & tracking 🚀 Up to 100x faster pose estimation than prior work

Deleting all my config dataclasses today. Each hparam is now just a default argument value and an `# hparam` comment so it’s greppable. Sweeps are just an agent running a sed script on the repo, launching, and reverting. Model variant configs are all bash scripts of sed calls.
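A hedged sketch of what such a sweep could look like (the file name, hparam, and values are hypothetical; the post's actual agent and scripts aren't shown):

```shell
#!/bin/sh
set -e

# Hypothetical training script: the hparam is a plain default with a greppable tag.
cat > train.py <<'EOF'
lr = 3e-4  # hparam
print("training with lr =", lr)
EOF

# Sweep = sed in a value, launch, revert -- no config dataclass anywhere.
for lr in 1e-4 3e-4 1e-3; do
    sed -i.bak "s/^lr = .*# hparam$/lr = ${lr}  # hparam/" train.py
    python3 train.py
    mv train.py.bak train.py   # revert to the checked-in default
done
rm train.py
```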
