Radek Daněček

69 posts

@DanecekRadek

A PhD candidate at Max Planck Institute for Intelligent Systems in Tübingen. Research in vision, graphics and AI. A general purpose outdoor adventurer.

Joined August 2019
77 Following · 216 Followers
Pinned Tweet
Radek Daněček@DanecekRadek·
1/n Proud to present our latest attempt to give 3D faces a voice: “THUNDER: Supervising 3D Talking Head Avatars with Analysis-by-Audio-Synthesis” We synthesize audio from facial motion! Website: thunder.is.tue.mpg.de With @Michael_J_Black, Senya Polikovsky, Carolin Schmitt
Radek Daněček retweeted
Michael Black@Michael_J_Black·
NeuralFur wins Best Paper Runner Up at @3DVconf. From multi-view images, we create a strand-based hair groom for animals. Unlike human hair, fur varies in length across the body parts of animals. NeuralFur leverages a VQA approach to infer fur lengths and directions across the body and to create a furless mesh. We then reconstruct strand-based fur geometry from multi-view images, resulting in a realistic animal model that is ready for physics-based animation in game engines like Unreal. Code is online. Check out the project page link below. Congratulations to @ness_pirs @bernakabadayi @AYiannakidis @gfgbec and @JustusThies! neuralfur.is.tue.mpg.de
Radek Daněček@DanecekRadek·
I will be presenting our paper THUNDER tomorrow (March 21st) at @3DVconf, Session 3, poster 26. We synthesize audio from facial motion and use it to improve speech-driven avatars! Project page: thunder.is.tue.mpg.de With @Michael_J_Black, Senya Polikovsky, Carolin Schmitt
Radek Daněček@DanecekRadek·
I'm at 3DV in Vancouver this weekend. Feel free to connect!
#ICCV2025@ICCVConference·
Following #CVPR2025, #ICCV2025 implemented a new policy targeting accountability and integrity. PCs identified 25 highly irresponsible reviewers, resulting in the desk rejection of 29 associated papers, including 12 submissions that otherwise would have been accepted.
Radek Daněček@DanecekRadek·
@jack_r_saunders @Michael_J_Black I did try to just optimize to produce a certain lip animation, i.e., start with a still animation, synthesize the audio, and optimize until you get the audio right. This worked, but you need a motion prior or else you end up adversarially hacking mesh-to-speech.
Radek Daněček retweeted
Michael Black@Michael_J_Black·
This is such a cool idea and it's amazing that it works. To train a generative talking 3D head model to have good lip sync, perform mesh to speech generation on the output. Compare this with the input speech and use the difference in your training loss. It's a neat trick and, even though the mesh to speech generation isn't perfect, it still works. You can apply this idea to any of the current methods for training speech to 3D animation and get improved results.
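The training trick described above can be sketched in a few lines. This is a minimal, hypothetical NumPy illustration, not THUNDER's actual implementation: `mesh_to_speech` here is a toy stand-in for the paper's (frozen, pretrained) mesh-to-speech network, and the vertex/feature shapes are invented for the example.

```python
import numpy as np

def mesh_to_speech(vertices):
    """Toy stand-in for a frozen mesh-to-speech model: maps a [T, V, 3]
    vertex sequence to per-frame audio features [T, 8]. The real model
    in the paper is a neural network; this is just a fixed linear readout
    of (pretend) lip-region motion for illustration."""
    T = vertices.shape[0]
    lips = vertices[:, :10, :].reshape(T, -1)  # pretend first 10 verts are lips
    return lips @ np.ones((lips.shape[1], 8)) / lips.shape[1]

def analysis_by_audio_synthesis_loss(pred_vertices, gt_vertices,
                                     gt_audio_feats, w_audio=1.0):
    """Standard vertex reconstruction loss plus the audio-synthesis term:
    re-synthesize audio features from the predicted mesh sequence and
    penalize their distance to the input speech's features."""
    vertex_loss = np.mean((pred_vertices - gt_vertices) ** 2)
    audio_loss = np.mean((mesh_to_speech(pred_vertices) - gt_audio_feats) ** 2)
    return vertex_loss + w_audio * audio_loss
```

The key design point from the thread: the mesh-to-speech model stays frozen during avatar training, and (per the reply above) a motion prior is needed so the optimization cannot adversarially hack it.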
Radek Daněček@DanecekRadek·
13/n In summary, we first design a novel mesh-to-speech model that can turn facial animation into speech. We then show that the mesh-to-speech model can be used for “analysis-by-audio-synthesis”, producing high-quality lip animations. Check out thunder.is.tue.mpg.de for more
Radek Daněček@DanecekRadek·
12/n In addition to THUNDER, we also demonstrate the effectiveness of mesh-to-speech and analysis-by-audio-synthesis in other systems, such as the deterministic FaceFormer (FF). From left to right: FF-frozen, FF-frozen with m2s, FF-trainable, FF-trainable with m2s