VINECS: Video-based Neural Character Skinning. In CVPR, 2024

Published on Apr 25, 2024

Paper Abstract:
Rigging and skinning clothed human avatars is a challenging task and traditionally requires a lot of manual work and expertise. Recent methods addressing it either generalize across different characters or focus on capturing the dynamics of a single character observed under different pose configurations. However, the former methods typically predict solely static skinning weights, which perform poorly for highly articulated poses, and the latter ones either require dense 3D character scans in different poses or cannot generate an explicit mesh with vertex correspondence over time. To address these challenges, we propose a fully automated approach for creating a fully rigged character with pose-dependent skinning weights, which can be solely learned from multi-view video. Therefore, we first acquire a rigged template, which is then statically skinned. Next, a coordinate-based MLP learns a skinning weights field parameterized over the position in a canonical pose space and the respective pose. Moreover, we introduce our pose- and view-dependent appearance field, allowing us to differentiably render and supervise the posed mesh using multi-view imagery. We show that our approach outperforms the state of the art while not relying on dense 4D scans.
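For context, the skinning model underlying this line of work is linear blend skinning (LBS): each posed vertex is a weighted sum of the vertex transformed by each bone. Below is a minimal, self-contained sketch of LBS with pose-dependent weights. It is illustrative only: the paper learns the weight field w(x, pose) with a coordinate-based MLP, whereas here `pose_dependent_weights` is a hypothetical hand-written stand-in that merely shows how such weights enter the blend.

```python
# Toy linear blend skinning (LBS) with pose-dependent weights.
# Not the paper's implementation; a hand-written stand-in replaces
# the learned coordinate-based MLP weight field w(x, pose).

def transform(T, v):
    """Apply a 3x4 affine transform (nested lists) to a 3D point."""
    x, y, z = v
    return [T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3] for i in range(3)]

def skin_vertex(v, bone_transforms, weights):
    """LBS: v' = sum_j w_j * (T_j v), with weights summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-6, "weights must sum to 1"
    out = [0.0, 0.0, 0.0]
    for T, w in zip(bone_transforms, weights):
        tv = transform(T, v)
        for i in range(3):
            out[i] += w * tv[i]
    return out

def pose_dependent_weights(v, pose_angle):
    """Hypothetical stand-in for the learned weight field w(x, pose):
    as the joint bends, influence shifts toward the second bone."""
    base = 0.5                           # static weight of bone 2 at this vertex
    shift = 0.3 * pose_angle / 3.14159   # more bend -> more bone-2 influence
    w2 = min(1.0, base + shift)
    return [1.0 - w2, w2]

# Example: blend an identity bone and a bone translated by +1 in x.
T_id = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
T_shift = [[1, 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 0]]
v = [1.0, 0.0, 0.0]
w = pose_dependent_weights(v, 0.0)       # rest pose: equal weights
posed = skin_vertex(v, [T_id, T_shift], w)
```

In the rest pose the weights are [0.5, 0.5], so the vertex lands halfway between the two bone transforms (x = 1.5 here); as `pose_angle` grows, the blend skews toward the second bone, which is exactly the effect static skinning weights cannot capture.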

Reference Publication: Zhouyingcheng Liao, Vladislav Golyanik, Marc Habermann, Christian Theobalt. VINECS: Video-based Neural Character Skinning. In CVPR, 2024.

Project Page: https://people.mpi-inf.mpg.de/~mhaber...
