KeBingxin
@KBingxin
46 posts
CV+ML+3D/RS PhD student @ Photogrammetry and Remote Sensing, ETHZurich.

Zurich, Switzerland · Joined March 2020
113 Following · 208 Followers
Pinned Tweet
KeBingxin @KBingxin
Super excited to introduce our new work Marigold 🌼 — a universal affine-invariant depth estimator. Try the demo and see how amazing it is! Project page: marigoldmonodepth.github.io
Anton Obukhov @AntonObukhov1

Introducing Marigold 🌼 - a universal monocular depth estimator, delivering incredibly sharp predictions in the wild! Based on Stable Diffusion, it is trained with synthetic depth data only and excels in zero-shot adaptation to real-world imagery. Check it out:
🌐 Website: marigoldmonodepth.github.io
🤗 Hugging Face Space: huggingface.co/spaces/toshas/…
📄 Paper: arxiv.org/abs/2312.02145
👾 Code: github.com/prs-eth/marigo…
The team: Bingxin Ke (@KBingxin), yours truly (@AntonObukhov1), Shengyu Huang (@ShengyHuang), Nando Metzger (@NandoMetzger), Rodrigo Caye Daudt (@rcdaudt), and Konrad Schindler. #ComputerVision #PRS #ETHZurich
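"Affine-invariant" here means the predicted depth is only defined up to an unknown scale and shift. A minimal NumPy sketch of the standard least-squares alignment used to compare such predictions against metric ground truth (the function name is illustrative, not from the Marigold codebase):

```python
import numpy as np

def align_affine_invariant_depth(pred, gt):
    """Fit scale s and shift t so that gt ≈ s * pred + t, then
    return the aligned prediction. This is the usual evaluation
    protocol for affine-invariant depth estimators."""
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, gt.ravel(), rcond=None)
    return s * pred + t

# Toy check: a prediction that equals gt up to scale/shift aligns exactly.
gt = np.array([[1.0, 2.0], [3.0, 4.0]])
pred = 0.5 * gt - 0.1          # affine-distorted "prediction"
aligned = align_affine_invariant_depth(pred, gt)
print(np.allclose(aligned, gt))  # True
```

Metrics such as AbsRel are then computed on the aligned prediction, which is why a model trained only on synthetic depth can still be scored against real metric ground truth.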

KeBingxin retweeted
Kwang Moo Yi @kwangmoo_yi
Ke et al., "CAPA: Depth Completion as Parameter-Efficient Test-Time Adaptation" — fine-tune your foundation model at test time with sparse measurements. Makes a lot of sense if you have, e.g., LiDAR measurements available.
KeBingxin retweeted
Sven Elflein @s_elflein
🚀 Exciting news! We’re introducing VGG-T³: a scalable model for offline feed-forward 3D reconstruction that finally tackles the "quadratic bottleneck." Ever wanted to have VGGT reconstruct a 1,000-image scene in seconds instead of 10 minutes and use it for visual localization?
KeBingxin retweeted
Anton Obukhov @AntonObukhov1
Introducing StereoSpace — our new end-to-end method for turning photos into stereo images without explicit geometry or depth maps. This makes it especially robust with thin structures and transparencies. Try the demo below.
KeBingxin retweeted
Jiahui Huang @huangjh_hjh
[1/N] 🎥 We've made available a powerful spatial AI tool named ViPE: Video Pose Engine, to recover camera motion, intrinsics, and dense metric depth from casual videos! Running at 3–5 FPS, ViPE handles cinematic shots, dashcams, and even 360° panoramas. 🔗 research.nvidia.com/labs/toronto-a…
KeBingxin retweeted
Anton Obukhov @AntonObukhov1
Introducing ⇆ Marigold-DC — our training-free zero-shot approach to monocular Depth Completion with guided diffusion! If you have ever wondered how else a long denoising diffusion schedule can be useful, we have an answer for you! Details 🧵
KeBingxin retweeted
Gradio @Gradio
🔥 RollingDepth — a new state-of-the-art depth estimator for videos in the wild! Accurately estimating depth from videos using AI is now possible. No flickering, no temporal inconsistency 💪
KeBingxin retweeted
Anton Obukhov @AntonObukhov1
Introducing 🛹 RollingDepth 🛹 — a universal monocular depth estimator for arbitrarily long videos! Our paper, “Video Depth without Video Models,” delivers exactly that, setting new standards in temporal consistency. Check out more details in the thread 🧵
KeBingxin retweeted
Anton Obukhov @AntonObukhov1
BetterDepth is a NeurIPS accept! Congrats to the team and thanks to everyone involved! x.com/AntonObukhov1/…
Anton Obukhov @AntonObukhov1

Unveiling BetterDepth — a plug-and-play diffusion-based refiner for zero-shot monocular depth estimation, compatible with many established depth prediction models.
📕 Paper: huggingface.co/papers/2407.17…
🧩 Other: TBA
Fantastic collaboration between ETH Zurich and Disney Research|Studios, by Xiang Zhang (xiangz-0.github.io), @KBingxin, @chrysmun, @NandoMetzger, @AntonObukhov1, @MarkusGross63, Konrad Schindler, and Christopher Schroers.

KeBingxin retweeted
Karim Knaebel @karimknaebel
Check out our work on fine-tuning image-conditional diffusion models for depth and normal estimation. Widely used diffusion models can be improved with single-step inference and task-specific fine-tuning, letting us gain accuracy while being 200x faster! ⚡ 🧵(1/6)
KeBingxin @KBingxin
@ducha_aiki @Gonzalo_MartinG @kacodes @thecschmidt4 @dcdegeus @Pandoro_o This is indeed a very good finding: by switching to another scheduler setting (literally just one setting in the config), the 1-step result becomes very good. However, Marigold used the Diffusers implementation that the community uses, so it's not "a bug in Marigold".
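For readers following along: the setting at play is the timestep spacing in Diffusers' DDIM scheduler (`timestep_spacing="leading"` vs `"trailing"`). A self-contained sketch of the two rules (steps_offset and other scheduler details omitted), showing why single-step inference behaves so differently under each:

```python
import numpy as np

T = 1000  # training timesteps, as in Stable Diffusion

def leading_timesteps(steps):
    # Historical Diffusers default ("leading"): for steps=1 this yields
    # [0], so single-step inference denoises pure noise at the timestep
    # meant for nearly-clean inputs.
    ratio = T // steps
    return (np.arange(0, steps) * ratio)[::-1].astype(int)

def trailing_timesteps(steps):
    # "trailing": for steps=1 this yields [999], the correct final
    # timestep -- the one-line config change discussed above.
    return np.round(np.arange(T, 0, -T / steps)).astype(int) - 1

print(leading_timesteps(1), trailing_timesteps(1))   # [0] [999]
print(leading_timesteps(4), trailing_timesteps(4))   # [750 500 250 0] [999 749 499 249]
```

With "leading" spacing the final training timestep is never visited, which barely matters over many denoising steps but is fatal for single-step prediction; "trailing" starts exactly at timestep T-1.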
KeBingxin retweeted
Anton Obukhov @AntonObukhov1
Unveiling BetterDepth — a plug-and-play diffusion-based refiner for zero-shot monocular depth estimation, compatible with many established depth prediction models.
📕 Paper: huggingface.co/papers/2407.17…
🧩 Other: TBA
Fantastic collaboration between ETH Zurich and Disney Research|Studios, by Xiang Zhang (xiangz-0.github.io), @KBingxin, @chrysmun, @NandoMetzger, @AntonObukhov1, @MarkusGross63, Konrad Schindler, and Christopher Schroers.
KeBingxin retweeted
Nando Metzger @NandoMetzger
Spice up your favorite SOTA monodepth network with a diffusion model! We introduce *BetterDepth*, a plug-and-play refiner for zero-shot monodepth estimation. Paper: huggingface.co/papers/2407.17…
KeBingxin @KBingxin
and (partial) PRS team
KeBingxin @KBingxin
Special mention for our hero behind the scenes, @AntonObukhov1, who couldn’t come to Seattle for an obvious reason.
KeBingxin @KBingxin
(and Albert Einstein?)