diff --git a/animatable_sdf/index.html b/animatable_sdf/index.html
index b8cd23f..62ef698 100644
--- a/animatable_sdf/index.html
+++ b/animatable_sdf/index.html
@@ -4,7 +4,7 @@
- Animatable Neural Implicit Surfaces for Creating Avatars from Videos
+ Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos
@@ -16,18 +16,22 @@
-

Animatable Neural Implicit Surfaces for Creating Avatars from Videos

+

Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos

+

TPAMI 2024, ICCV 2021


-
Sida Peng1,
- Shangzhan Zhang1,
- Zhen Xu1,
- Chen Geng1,
- Boyi Jiang2,
- Hujun Bao1,
- Xiaowei Zhou1
-

1State Key Lab of CAD & CG, Zhejiang University   
- 2Image Derivative Inc
+

+ Sida Peng1,
+ Zhen Xu1,
+ Junting Dong1,
+ Qianqian Wang2,
+ Shangzhan Zhang1,
+ Qing Shuai1,
+ Hujun Bao1,
+ Xiaowei Zhou1
+

+ 1State Key Lab of CAD & CG, Zhejiang University   
+ 2Cornell University

@@ -68,7 +72,9 @@
This paper is an extension of Animatable NeRF, which
-

This paper aims to reconstruct an animatable human model from a video of very sparse camera views. Some recent works represent human geometry and appearance with neural radiance fields and utilize parametric human models to produce deformation fields for animation, which enables them to recover detailed 3D human models from videos. However, their reconstruction results tend to be noisy due to the lack of surface constraints on radiance fields. Moreover, as they generate the human appearance in 3D space, their rendering quality heavily depends on the accuracy of deformation fields. To solve these problems, we propose Animatable Neural Implicit Surface (AniSDF), which models the human geometry with a signed distance field and defers the appearance generation to the 2D image space with a 2D neural renderer. The signed distance field naturally regularizes the learned geometry, enabling the high-quality reconstruction of human bodies, which can be further used to improve the rendering speed. Moreover, the 2D neural renderer can be learned to compensate for geometric errors, making the rendering more robust to inaccurate deformations. Experiments on several datasets show that the proposed approach outperforms recent human reconstruction and synthesis methods by a large margin.
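The abstract above describes modeling human geometry with a signed distance field that is rendered volumetrically. As an illustrative sketch (not code from this page), here is a VolSDF-style conversion of signed distances to volume-rendering densities; the Laplace-CDF mapping and the `beta` parameter are assumptions, and the paper's exact formulation may differ:

```python
import numpy as np

def sdf_to_density(sdf, beta=0.1):
    """Map signed distances to volume-rendering densities via a Laplace CDF.

    VolSDF-style conversion (an assumption; the paper's method may use a
    different mapping). Density is high inside the surface (sdf < 0) and
    falls off smoothly outside, so the zero level set acts as the surface.
    """
    alpha = 1.0 / beta
    return np.where(
        sdf > 0,
        alpha * 0.5 * np.exp(-sdf / beta),          # outside the surface
        alpha * (1.0 - 0.5 * np.exp(sdf / beta)),   # inside the surface
    )
```

Because density is tied to the signed distance, the learned field is regularized toward a well-defined surface, which is what enables the clean reconstructions the abstract refers to.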

+

+ This paper addresses the challenge of reconstructing an animatable human model from a multi-view video. Some recent works have proposed to decompose a non-rigidly deforming scene into a canonical neural radiance field and a set of deformation fields that map observation-space points to the canonical space, thereby enabling them to learn the dynamic scene from images. However, they represent the deformation field as a translational vector field or an SE(3) field, which makes the optimization highly under-constrained. Moreover, these representations cannot be explicitly controlled by input motions. Instead, we introduce a pose-driven deformation field based on the linear blend skinning algorithm, which combines the blend weight field and the 3D human skeleton to produce observation-to-canonical correspondences. Since 3D human skeletons are more observable, they can regularize the learning of the deformation field. Moreover, the pose-driven deformation field can be controlled by input skeletal motions to generate new deformation fields to animate the canonical human model. Experiments show that our approach significantly outperforms recent human modeling methods.
+
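The pose-driven deformation in the new abstract rests on linear blend skinning: blend a point's per-bone weights with the skeleton's bone transforms, then invert the blended transform to map an observation-space point back to canonical space. A minimal numpy sketch under the assumption that blend weights and per-bone transforms are already given (the function name is hypothetical, not from the paper's code):

```python
import numpy as np

def lbs_observation_to_canonical(x_obs, blend_weights, bone_transforms):
    """Map an observation-space point to canonical space via inverse LBS.

    x_obs:           (3,) point in observation space
    blend_weights:   (K,) skinning weights at x_obs (should sum to 1)
    bone_transforms: (K, 4, 4) canonical-to-observation bone transforms G_k
    """
    # Blend the bone transforms with the skinning weights, then invert the
    # blended transform to go from observation space back to canonical space.
    G = np.einsum('k,kij->ij', blend_weights, bone_transforms)
    x_h = np.append(x_obs, 1.0)          # homogeneous coordinates
    x_can = np.linalg.inv(G) @ x_h
    return x_can[:3]
```

Because the bone transforms come directly from the estimated 3D skeleton, feeding in a new skeletal pose produces a new deformation field, which is how the canonical model is animated.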

@@ -177,12 +183,21 @@
Ablation study on neural feature fie

Citation


-@article{peng2022animatable,
-  title={Animatable Neural Implicit Surfaces for Creating Avatars from Videos},
-  author={Peng, Sida and Zhang, Shangzhan and Xu, Zhen and Geng, Chen and Jiang, Boyi and Bao, Hujun and Zhou, Xiaowei},
-  journal={arXiv preprint arXiv:2203.08133},
-  year={2022}
-}
+
+@article{peng2024animatable,
+  title={Animatable Implicit Neural Representations for Creating Realistic Avatars from Videos},
+  author={Peng, Sida and Xu, Zhen and Dong, Junting and Wang, Qianqian and Zhang, Shangzhan and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
+  journal={TPAMI},
+  year={2024},
+  publisher={IEEE}
+}
+@inproceedings{peng2021animatable,
+  title={Animatable Neural Radiance Fields for Modeling Dynamic Human Bodies},
+  author={Peng, Sida and Dong, Junting and Wang, Qianqian and Zhang, Shangzhan and Shuai, Qing and Zhou, Xiaowei and Bao, Hujun},
+  booktitle={ICCV},
+  year={2021}
+}
+