Daniel Gehrig

26 posts

@DanielGehrig6

#ComputerVision and #Robotics PostDoc @GRASPlab and @Penn. Interested in the intersection of #DeepLearning, #ComputerVision and #Robotics.

Philadelphia, USA · Joined October 2018
141 Following · 267 Followers
Daniel Gehrig reposted
Litu Rout@litu_rout_·
Continuous diffusion had a good run — now it's time for discrete diffusion! Introducing Anchored Posterior Sampling (APS). APS outperforms discrete and continuous baselines in performance and scaling on inverse problems, stylization, and text-guided editing.
2 replies · 69 reposts · 427 likes · 39.6K views
Daniel Gehrig@DanielGehrig6·
Happy to attend #CVPR2024 this year and meet new and old friends. Want to learn more about minimal solvers for event camera-based motion estimation? Check out poster 183 Thu at 17:15 - 18:45 in Arch 4A-E, and Oral #2 in Orals 4C Thu 13:00 - 14:30!
Davide Scaramuzza@davsca1

Meet us at #CVPR2024 this week! We will present several papers on #Nerf deblurring, #StateSpaceModels, and Motion Estimation on Manifolds for #eventcameras at the main conference and workshops! Full list with times, rooms, and links to PDFs, Code, and Videos: docs.google.com/document/d/1eq… @marcocannici @DanielGehrig6 @NikolaZubic5

0 replies · 0 reposts · 1 like · 334 views
Daniel Gehrig reposted
Davide Scaramuzza@davsca1·
We are thrilled to share our groundbreaking paper published today in @Nature: "Low Latency Automotive Vision with Event Cameras."
Paper: nature.com/articles/s4158…
Video: youtu.be/dwzGhMQCc4Y
Code & Dataset: github.com/uzh-rpg/dagr

Frame-based sensors such as the RGB cameras used in the automotive industry face a bandwidth–latency trade-off: higher frame rates reduce perceptual latency but increase bandwidth demands, whereas lower frame rates save bandwidth at the cost of missing vital scene dynamics due to increased perceptual latency (see Fig. 1a of the paper).

Event cameras have emerged as alternative vision sensors to address this trade-off. Event cameras measure changes in intensity asynchronously, offering high temporal resolution and sparsity, markedly reducing bandwidth and latency requirements. Despite these advantages, event-camera-based algorithms are either highly efficient but lag behind image-based ones in accuracy, or sacrifice the sparsity and efficiency of events to achieve comparable results.

To overcome this, we propose a hybrid event- and frame-based object detector based on Deep Asynchronous GNNs, which preserves the advantages of each modality and thus does not suffer from this trade-off. Our method exploits the high temporal resolution and sparsity of events and the rich but low-temporal-resolution information in standard images to generate efficient, high-rate object detections, reducing perceptual and computational latency. In doing so, it emulates the slow-fast pathways in biological neural networks and uses them to its advantage.

We show that using a 20-Hz RGB camera plus an event camera achieves the same latency as a 5,000-Hz camera with the bandwidth of a 50-Hz camera, i.e., an over 100-fold bandwidth reduction, without compromising accuracy. Our approach paves the way for efficient and robust perception in edge-case scenarios by uncovering the potential of event cameras.
We release the code and the dataset (DSEC-Detection) to the public. Kudos to @DanielGehrig6, who, with this work, also received the UZH Annual Award for the Best PhD Thesis!

Reference:
Daniel Gehrig, Davide Scaramuzza
Low Latency Automotive Vision with Event Cameras
Nature, May 29, 2024. DOI: 10.1038/s41586-024-07409-w
PDF (Open Access): nature.com/articles/s4158…
Video (Narrated): youtu.be/dwzGhMQCc4Y
Code & Datasets: github.com/uzh-rpg/dagr
@UZH_en @UZH_Science @UZHspacehub @ERC_Research @nccrrobotics
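The headline numbers in this thread (5,000-Hz latency at 50-Hz bandwidth, i.e. a 100-fold reduction) can be sanity-checked with simple arithmetic. A minimal sketch, assuming a hypothetical VGA RGB frame size purely for illustration — real event-camera bandwidth is scene-dependent and not modeled here:

```python
# Sanity check of the bandwidth/latency claim in the thread.
# BYTES_PER_FRAME is a hypothetical VGA RGB frame; the ratio is
# independent of the actual frame size.

def frame_camera_bandwidth(fps: int, bytes_per_frame: int) -> int:
    """Bandwidth in bytes/s of a fixed-rate frame camera."""
    return fps * bytes_per_frame

BYTES_PER_FRAME = 640 * 480 * 3  # assumed VGA RGB, 3 bytes/pixel

# A 5,000-Hz frame camera would match the hybrid system's latency...
high_rate = frame_camera_bandwidth(5_000, BYTES_PER_FRAME)
# ...but the hybrid system only needs the bandwidth of a 50-Hz camera.
low_rate = frame_camera_bandwidth(50, BYTES_PER_FRAME)

reduction = high_rate / low_rate
print(f"Bandwidth reduction: {reduction:.0f}x")  # 100x, as stated above
```

The 100-fold figure falls directly out of the 5,000/50 frame-rate ratio; the paper's contribution is achieving the low-rate bandwidth without giving up the high-rate latency.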
13 replies · 71 reposts · 466 likes · 35.2K views
Daniel Gehrig reposted
Davide Scaramuzza@davsca1·
We are excited to unveil our #ICCV2023 paper "From Chaos Comes Order: Ordering Event Representations for Object Recognition and Detection", where we tackle the problem of selecting the optimal event representation in event-based machine learning tasks. Code released!

Today, state-of-the-art deep neural networks that process #eventcamera data first convert it into dense, grid-like input representations before using an off-the-shelf network. However, selecting the appropriate representation for the task traditionally requires training a neural network for each representation and selecting the best one based on the validation score, which is very time-consuming.

Our work eliminates this bottleneck by selecting representations based on the Gromov-Wasserstein Discrepancy between raw events and their representation. It is about 200 times faster to compute than training a neural network and preserves the task-performance ranking of event representations across multiple representations, network backbones, datasets, and tasks. We open up a brand-new avenue in explicit representation optimization: our results indicate we've unlocked a faster, more efficient way to choose event representations for neural networks.

We outperform the state of the art by 2.1 mAP on Gen1 and state-of-the-art feed-forward methods by 6.0 mAP on 1 Mpx, both well-regarded object detection benchmarks. Moreover, we reach a 3.8% higher classification score on the mini N-ImageNet benchmark.

Paper: rpg.ifi.uzh.ch/docs/ICCV2023_…
Code: github.com/uzh-rpg/event_…
Kudos to Nikola Zubic, Daniel Gehrig, Mathias Gehrig! @ICCVConference
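The core idea above — rank candidate representations by how well they preserve the structure of the raw events, instead of training a network per candidate — can be sketched in a few lines. This is a toy structural proxy, not the paper's Gromov-Wasserstein Discrepancy; the representation names and data are invented for illustration:

```python
import numpy as np

def pairwise_dists(x: np.ndarray) -> np.ndarray:
    """Euclidean distance matrix between the rows of x."""
    return np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

def structure_discrepancy(events: np.ndarray, rep: np.ndarray) -> float:
    """Toy proxy for the paper's Gromov-Wasserstein Discrepancy:
    compare the pairwise-distance structure of the raw events with
    that of their representation (both given as point sets here)."""
    de = pairwise_dists(events)
    dr = pairwise_dists(rep)
    # Normalize scales so only relative structure is compared.
    de = de / (de.max() + 1e-9)
    dr = dr / (dr.max() + 1e-9)
    return float(np.abs(de - dr).mean())

rng = np.random.default_rng(0)
events = rng.random((64, 4))  # toy (x, y, t, polarity) tuples

# Two hypothetical candidate representations: one nearly
# structure-preserving, one that scrambles the event structure.
candidates = {
    "voxel_grid": events + 0.01 * rng.random((64, 4)),
    "scrambled": rng.random((64, 4)),
}

# Rank candidates by discrepancy -- no network training involved.
ranking = sorted(candidates, key=lambda k: structure_discrepancy(events, candidates[k]))
print(ranking)  # structure-preserving candidate ranks first
```

The point of the proxy is the workflow: a cheap, training-free score whose ordering tracks downstream task performance, which is what makes the reported ~200x speedup over per-representation training possible.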
0 replies · 8 reposts · 37 likes · 6.6K views
Daniel Gehrig@DanielGehrig6·
@xiaojidan2011 @davsca1 @mapo1 @KostasPenn Hi Charles, sorry for seeing this so late. Sure, I am happy to share. Can you send me your email address, or should I send it via your Stanford email address (not sure if it's up to date)?
1 reply · 0 reposts · 0 likes · 29 views
Daniel Gehrig reposted
Davide Scaramuzza@davsca1·
Congratulations to my student @DanielGehrig6 for successfully defending his Ph.D. in "Efficient, Data-Driven Perception with Event Cameras"! Many thanks to the external reviewers Marc Pollefeys @mapo1, Kostas Daniilidis @KostasPenn, and Andreas Geiger!

Daniel has contributed deep-learning methods that combine #eventcamera data with standard images to achieve efficient, low-latency perception. His applications span feature tracking, object detection, and video frame interpolation. In particular, he proposed techniques for adapting deep geometric learning methods based on convolutional and graph neural networks to perform efficient and asynchronous event-by-event computation without sacrificing accuracy, reaching unprecedented low latency in object detection tasks!

Congratulations, Daniel; it has been an honor to work with you!
- Video recording of the PhD defense: youtu.be/ncNFqI44BnA
- Daniel's webpage (publications, source code, slides): danielgehrig18.github.io
- Google Scholar: scholar.google.com/citations?user…
@uzh_en @ERC_Research @nccrrobotics
3 replies · 7 reposts · 99 likes · 12.4K views
Daniel Gehrig@DanielGehrig6·
Thanks @davsca1 and to all the team members and collaborators who supported me during my PhD journey. Special thanks also to my reviewers, who asked tough and thoughtful questions. I learned a lot and made many friends along the way. Looking forward to what the future brings!
Davide Scaramuzza@davsca1

[Quoted post: the Ph.D. defense announcement above]

0 replies · 0 reposts · 14 likes · 548 views
Daniel Gehrig@DanielGehrig6·
Check out our paper "TimeLens: Event-based Video Frame Interpolation" at #CVPR2021 in Paper Session 12 at 12:00-14:30 CEST. I will be happy to answer any questions about our work! @StepanTulyakov @MathiasGehrig @davsca1
Davide Scaramuzza@davsca1

We release our newest work on event cameras: "Time Lens". We use events to upsample low-framerate RGB HD video by over 50 times with only 1/40th of the memory footprint! #CVPR2021 Paper, code, datasets: rpg.ifi.uzh.ch/timelens @DanielGehrig6 @MathiasGehrig

1 reply · 0 reposts · 1 like · 0 views