Basile Van Hoorick

22 posts

@basilevanh

Joined August 2018
295 Following · 188 Followers
Pinned Tweet
Basile Van Hoorick @basilevanh
Excited to share our new paper on large-angle monocular dynamic novel view synthesis! Given a single RGB video, we propose a method that can imagine what that scene would look like from any other viewpoint. Website: gcd.cs.columbia.edu Paper: arxiv.org/abs/2405.14868 🧵(1/5)
5 replies · 31 reposts · 136 likes · 33.5K views
Basile Van Hoorick reposted
Junjie Ye @JunjieYe9
🚀 AnchorDream accepted to #ICRA2026! Video diffusion often "hallucinates" robot motion. We ground diffusion in kinematics to synthesize high-fidelity, embodiment-consistent training data. 📖 Paper: arxiv.org/pdf/2512.11797 🌐 Project: junjieye.com/AnchorDream/
1 reply · 3 reposts · 18 likes · 3.8K views
Basile Van Hoorick @basilevanh
We are grateful to be awarded an oral presentation -- please come by Wed 10/2 at 1:30pm (I believe we are the first talk in the oral session) as well as the poster session afterward (number 156) at 4:30pm! #ECCV2024 🎉
[Quoting the pinned tweet above]
3 replies · 3 reposts · 26 likes · 2.4K views
Basile Van Hoorick @basilevanh
Apart from robotics and related scenes, it also works quite well on driving scenarios! In general, we believe our framework can help unlock powerful applications in rich dynamic scene understanding, perception for embodied AI, and interactive 3D video viewing. 🧵(4/5)
2 replies · 0 reposts · 7 likes · 467 views
Basile Van Hoorick @basilevanh
Specifically, we finetune Stable Diffusion, which already has useful 2D image priors thanks to being trained on billion-scale data. This pipeline allows us to successfully achieve strong zero-shot performance on objects with complex geometry and artistic styles. 🧵(3/n)
1 reply · 0 reposts · 3 likes · 448 views
Basile Van Hoorick @basilevanh
P.S. Also check out our earlier related work on Revealing Occlusions with 4D Neural Fields (arxiv.org/abs/2204.10916)! This paper is essentially about video-to-4D generation, but requires depth input. On the other hand, we demonstrate that TCOW works in the wild too. 🧵 (7/7)
0 replies · 1 repost · 0 likes · 177 views