Duygu Ceylan
@guerrera_desesp
43 posts
Joined May 2014
44 Following · 444 Followers
Duygu Ceylan @guerrera_desesp
@kablotv [3] Every convenience you offer when starting a subscription somehow disappears completely when the subscription is being cancelled.
Duygu Ceylan @guerrera_desesp
@kablotv [2] Every time we call your customer service we get contradictory information. And when we finally track down the device you expect us to return and bring it in, you file a "defective" report just by looking at it.
Duygu Ceylan @guerrera_desesp
@kablotv [1] My father passed away about a month ago. Since his death, my mother has been struggling to cancel the cable TV subscriptions in his name. What you are putting a 73-year-old woman through is incompatible with any notion of customer service.
Aykut Erdem @aykuterdemml
Honored to receive the Outstanding Faculty Award at Koç University, recognizing research excellence. Many thanks to our President Prof. @metin_sitti & my Dean Prof. @attilagursoy for their nomination and support. Deepest thanks also to my students for their passion and hard work.
[image]
Duygu Ceylan @guerrera_desesp
🧵We present #Track4Gen, where we augment video generators with additional point-tracking supervision. This results in better spatial awareness and reduces appearance drift. Work led by our talented intern Hyeonho @Hyeonho_Jeong99 together with @paulchhuang and Niloy Mitra.
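As a rough illustration of the idea in this tweet (not Track4Gen's actual training code), one can imagine adding a point-tracking loss on top of the usual diffusion denoising objective. The function name, loss choices, and weighting `lam` below are all assumptions:

```python
import torch.nn.functional as F

def joint_loss(noise_pred, noise_target, track_pred, track_target, lam=0.5):
    # Standard diffusion (epsilon-prediction) denoising objective.
    diffusion_loss = F.mse_loss(noise_pred, noise_target)
    # Hypothetical auxiliary term: regress predicted 2D point tracks
    # against ground-truth tracks to encourage spatial awareness.
    tracking_loss = F.l1_loss(track_pred, track_target)
    # lam is an assumed weighting; Track4Gen's actual losses may differ.
    return diffusion_loss + lam * tracking_loss
```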
Duygu Ceylan retweeted
Paul Huang @paulchhuang
OpenSora is great, but no viewpoint control? Check out our method, which moves the camera however you want for video diffusion transformers. Key features: 1. It also controls camera speed. 2. It works even when only one input frame has a camera pose. Congrats @soon_yau for the amazing result🚀
[GIF]
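For readers curious how viewpoint control of a video diffusion transformer might look mechanically, here is a minimal hedged sketch: encode each frame's camera extrinsics into a token added to that frame's latent tokens. The module name, dimensions, and conditioning scheme are assumptions, not the method in the tweet:

```python
import torch
import torch.nn as nn

class CameraConditioner(nn.Module):
    """Hypothetical sketch: per-frame camera tokens for a video DiT."""
    def __init__(self, dim=1024):
        super().__init__()
        self.proj = nn.Linear(12, dim)  # flattened 3x4 world-to-camera matrix

    def forward(self, tokens, extrinsics):
        # tokens: (T, N, dim) latent patch tokens per frame
        # extrinsics: (T, 3, 4) per-frame camera matrices
        cam = self.proj(extrinsics.reshape(extrinsics.shape[0], -1))  # (T, dim)
        return tokens + cam.unsqueeze(1)  # broadcast one camera token per frame
```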
Omid Taheri @omidtaherii
What a day! Just defended my PhD with summa cum laude🎉 Huge thanks to my amazing advisors, @Michael_J_Black @dimtzionas, and committee @GerardPonsMoll1 @angelaqdai @AutoVisionGroup. Best feeling ever, esp. when they said it was the best presentation they’ve seen! #PhDone @MPI_IS
[4 images]
Quoted: Michael Black @Michael_J_Black

Congratulations to @omidtaherii on defending his PhD summa cum laude on the topic of “Modeling Dynamic 3D Human-Object Interactions: From Capture to Synthesis”. Omid’s work on the GRAB dataset, GRIP, GOAL, and more has changed the field of human-object interaction.

MrNeRF @janusch_patas
This is completely nuts. Can't wait until the paper is released! "SuperGaussian: Repurposing Video Models for 3D Super Resolution" Project: supergaussian.github.io Paper video ⬇️ (1/2)
Justus Thies @JustusThies
I am very honored that I have received the prestigious #Eurographics2024 Young Researcher Award for my groundbreaking work on digital humans and neural rendering today. Thanks to all my collaborators, mentors and institutes @TUDarmstadt @MPI_IS for supporting me!
[image]
Duygu Ceylan @guerrera_desesp
Combining the power of 3D tools and workflows with generative models opens up many exciting opportunities. A great first step with our intern Shengqu @prime_cai!
Quoted: AK @_akhaliq

Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models
paper page: huggingface.co/papers/2312.01…

Traditional 3D content creation tools empower users to bring their imagination to life by giving them direct control over a scene's geometry, appearance, motion, and camera path. Creating computer-generated videos, however, is a tedious manual process, which can be automated by emerging text-to-video diffusion models. Despite great promise, video diffusion models are difficult to control, hindering a user to apply their own creativity rather than amplifying it. To address this challenge, we present a novel approach that combines the controllability of dynamic 3D meshes with the expressivity and editability of emerging diffusion models. For this purpose, our approach takes an animated, low-fidelity rendered mesh as input and injects the ground truth correspondence information obtained from the dynamic mesh into various stages of a pre-trained text-to-image generation model to output high-quality and temporally consistent frames. We demonstrate our approach on various examples where motion can be obtained by animating rigged assets or changing the camera path.

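The abstract's key mechanism is injecting ground-truth correspondences from the dynamic mesh into a pre-trained text-to-image model. Below is a minimal sketch of the underlying correspondence idea, assuming the renderer provides per-pixel mesh-point IDs; the feature-averaging scheme is only an illustration, not the paper's actual attention-level injection:

```python
import torch

def share_features_by_correspondence(feats, corr_ids, num_ids):
    # feats: (T, N, C) per-frame pixel features; corr_ids: (T, N) long tensor
    # of mesh-point ids rendered per pixel; num_ids: total mesh points.
    T, N, C = feats.shape
    flat_ids = corr_ids.reshape(-1)              # (T*N,)
    flat_feats = feats.reshape(-1, C)            # (T*N, C)
    sums = torch.zeros(num_ids, C).index_add_(0, flat_ids, flat_feats)
    counts = torch.zeros(num_ids).index_add_(0, flat_ids, torch.ones(T * N))
    means = sums / counts.clamp(min=1).unsqueeze(1)
    # Each pixel takes the mean feature of all pixels that came from the
    # same mesh point, enforcing cross-frame appearance consistency.
    return means[flat_ids].reshape(T, N, C)
```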
Duygu Ceylan retweeted
Prime (Shengqu) Cai @prime_cai
Thanks @_akhaliq! I have been thinking about how to bridge the gap between traditional CG pipelines and generative models, and this is a first attempt. We can get some interesting results using only a 2D model, without any video training! Project page: primecai.github.io/generative_ren….
Quoted: AK @_akhaliq, "Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models" (same quoted tweet as above).