Anpei Chen @AnpeiC
Group head @Inception3D Lab, Assistant Professor @Westlake_Uni https://t.co/ZIIpOtFKvd
Hangzhou · Joined April 2021
381 Following · 784 Followers
56 posts
AK@_akhaliq·
Motion 3-to-4: 3D Motion Reconstruction for 4D Synthesis
Anpei Chen@AnpeiC·
#Motion324: 3D Motion Reconstruction for 4D Synthesis We offer a feed-forward framework that synthesizes high-quality 4D assets from just a single monocular video.  ✅ Mesh  ✅ 3D Motion  ✅ Feed-Forward  ✅ Motion Retargeting Check out 👇 motion3-to-4.github.io
Anton Obukhov@AntonObukhov1·
Loving the presentation!
Quoting Anpei Chen@AnpeiC's #Motion324 announcement above.
Anpei Chen reposted
Anpei Chen@AnpeiC·
Being and Time: "Being-in-the-world is the basic state of human existence." (Martin Heidegger) Human3R: inference via one model, one stage; training in one day on one GPU. fanegg.github.io/Human3R/ by Yue Chen @faneggchen
Gerard Pons-Moll@GerardPonsMoll1

Real-time online 3D reconstruction of scenes and humans, represented with SMPL. fanegg.github.io/Human3R/ I don't get tired of looking at these results.

Anpei Chen@AnpeiC·
#TTT3R: 3D Reconstruction as Test-Time Training. We offer a simple state-update rule that enhances length generalization for #CUT3R, no fine-tuning required! 🔗Page: rover-xingyu.github.io/TTT3R 1/4 We rebuilt @taylorswift13's "22" live at the 2013 Billboard Music Awards, in 3D.
Anpei Chen@AnpeiC·
3/4 Instead of updating all states uniformly, we incorporate image attention as per-token learning rates. High-confidence matches get larger updates, while low-quality updates are suppressed.
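The confidence-gated update described in the tweet can be sketched roughly as follows. This is a minimal illustration of the idea (per-token learning rates derived from attention confidence), not the actual TTT3R rule; the function name, the gating form, and the normalization are all assumptions.

```python
import numpy as np

def confidence_gated_update(state, new_tokens, confidence):
    """Sketch of a per-token state update.

    state, new_tokens: (n_tokens, dim) arrays.
    confidence: (n_tokens,) values in [0, 1], e.g. from attention scores.
    """
    lr = confidence[:, None]  # per-token learning rate
    # High-confidence tokens move strongly toward the new observation;
    # low-confidence tokens mostly keep the old state.
    return (1.0 - lr) * state + lr * new_tokens

state = np.zeros((2, 3))
new_tokens = np.ones((2, 3))
confidence = np.array([0.9, 0.1])
updated = confidence_gated_update(state, new_tokens, confidence)
# token 0 (high confidence) moves to 0.9; token 1 (low confidence) only to 0.1
```

The key contrast with a uniform update is that the interpolation weight varies per token, so unreliable matches barely perturb the accumulated state.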
lotfullb@lotfullb·
@AnpeiC When can we expect the code for Neural Shell Texture Splatting?
Anpei Chen@AnpeiC·
The field is moving extremely fast; we tried to summarize it based on 3D representations. Please let us know if we missed anything :)
Zhenjun Zhao@zhenjun_zhao

Advances in Feed-Forward 3D Reconstruction and View Synthesis: A Survey Jiahui Zhang, Yuelei Li, @AnpeiC, Muyu Xu, Kunhao Liu, @jianyuan_wang, @xxlong0, @hx_liang95, @zexiangxu, @haosu_twitr, Christian Theobalt, Christian Rupprecht, Andrea Vedaldi, @hpfister, Shijian Lu, @fnzhan0507 tl;dr: in title arxiv.org/abs/2507.14501

Anpei Chen@AnpeiC·
@LvZhaoyang Thank you, Zhaoyang, we will include it in a later version 👍
Zhaoyang Lv@LvZhaoyang·
That's a great survey, and thanks for citing our DGS-LRM work as pioneering work in predicting feed-forward deformable 3D Gaussians. We have another parallel work, 4DGT (4dgt.github.io), which I'm really excited about; its potential for real-world data scaling will be huge. I'd appreciate it if you could help the team also cover this work. Thanks!
Adarsh Baghel@adarsh_baghel_1·
@AnpeiC @youzn99 @stam_g @SiyuTang3 No, but that's not the only thing; does it solve for motion blur, rolling-shutter "jello," depth-driven warping, sudden bumps, amplified noise, lost resolution, and field of view?
Jia-Bin Huang@jbhuang0604·
@AnpeiC @youzn99 @stam_g @SiyuTang3 Great results! Congrats! I would appreciate it if you could consider citing our prior work localrf.github.io as you used our video dataset, and it has the same core idea of local reconstruction and rendering for 3D stabilization.