This is the official PyTorch implementation of the paper: "ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos" (ACM MM 2024).
- Install dependencies. This project is developed on Ubuntu 18.04 with NVIDIA RTX 3090 GPUs. We recommend using an Anaconda virtual environment.
```bash
# Create a conda environment.
conda create -n arts python=3.8
conda activate arts
# Install PyTorch (the command below installs 1.10.0) according to your GPU driver.
conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=11.3 -c pytorch -c conda-forge
# Install other dependencies.
sh requirements.sh
```
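After setup, a quick stdlib-only sanity check can confirm that the core packages resolve in the new environment. This is a hypothetical helper, not part of the repository; the package list is an assumption based on the install command above.

```python
import importlib.util

# Core packages installed by the conda command above (assumed list).
DEPS = ["torch", "torchvision", "torchaudio"]

def check_deps(deps=DEPS):
    """Return a dict mapping each package name to whether it is importable."""
    return {name: importlib.util.find_spec(name) is not None for name in deps}

if __name__ == "__main__":
    for name, ok in check_deps().items():
        print(f"{name}: {'found' if ok else 'MISSING'}")
```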
- Prepare the SMPL layer.
- For the SMPL layer, we use smplpytorch. The repo is already included in the `./smplpytorch` folder.
- Download `basicModel_f_lbs_10_207_0_v1.0.0.pkl`, `basicModel_m_lbs_10_207_0_v1.0.0.pkl`, and `basicModel_neutral_lbs_10_207_0_v1.0.0.pkl` from SMPL (female & male) and SMPL (neutral) to `./smplpytorch/smplpytorch/native/models`.
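A missing or misnamed model file is a common setup error, so a small stdlib-only sketch (not part of the repository; filenames and path taken from the step above) can verify the downloads landed in the right place:

```python
from pathlib import Path

# Target folder and filenames from the download step above.
MODEL_DIR = "./smplpytorch/smplpytorch/native/models"
MODEL_FILES = [
    "basicModel_f_lbs_10_207_0_v1.0.0.pkl",
    "basicModel_m_lbs_10_207_0_v1.0.0.pkl",
    "basicModel_neutral_lbs_10_207_0_v1.0.0.pkl",
]

def missing_smpl_models(model_dir=MODEL_DIR, files=MODEL_FILES):
    """Return the SMPL model files that are not present in model_dir."""
    root = Path(model_dir)
    return [name for name in files if not (root / name).is_file()]

if __name__ == "__main__":
    missing = missing_smpl_models()
    print("All SMPL models found." if not missing else f"Missing: {missing}")
```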
Download all the processed annotation files from OneDrive. Rename `./data_final` to `./data`. The `./data` directory structure should follow the hierarchy below.
```
${Project}
|-- data
|   |-- base_data
|   |   |-- J_regressor_extra.npy
|   |   |-- mesh_downsampling.npz
|   |   |-- smpl_mean_params.npz
|   |   |-- smpl_mean_vertices.npy
|   |   |-- SMPL_NEUTRAL.pkl
|   |   |-- spin_model_checkpoint.pth.tar
|   |-- COCO
|   |   |-- coco_data
|   |   |-- __init__.py
|   |   |-- dataset.py
|   |   |-- J_regressor_coco.npy
|   |-- Human36M
|   |   |-- h36m_data
|   |   |-- __init__.py
|   |   |-- dataset.py
|   |   |-- J_regressor_h36m_correct.npy
|   |   |-- noise_stats.py
|   |-- MPII
|   |   |-- mpii_data
|   |   |-- __init__.py
|   |   |-- dataset.py
|   |-- MPII3D
|   |   |-- mpii3d_data
|   |   |-- __init__.py
|   |   |-- dataset.py
|   |-- PW3D
|   |   |-- pw3d_data
|   |   |-- __init__.py
|   |   |-- dataset.py
|   |-- multiple_datasets.py
```
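A hypothetical stdlib-only helper (not part of the original codebase; the path list is a representative subset of the tree above) can check that the key files exist after unpacking:

```python
from pathlib import Path

# A representative subset of the required files from the tree above.
REQUIRED = [
    "data/base_data/J_regressor_extra.npy",
    "data/base_data/smpl_mean_params.npz",
    "data/COCO/dataset.py",
    "data/Human36M/dataset.py",
    "data/PW3D/dataset.py",
    "data/multiple_datasets.py",
]

def missing_paths(project_root=".", required=REQUIRED):
    """Return the required paths that do not exist under project_root."""
    root = Path(project_root)
    return [p for p in required if not (root / p).exists()]

if __name__ == "__main__":
    missing = missing_paths()
    print("Data layout OK." if not missing else f"Missing: {missing}")
```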
Stage 1: Train the 3D pose estimation network.
```bash
# Human3.6M
bash train_pose_h36m.sh
# 3DPW
bash train_pose_3dpw.sh
```
Stage 2: Train the whole network for the final mesh. Configs of the experiments can be found and edited in the `./config` folder. Change `posenet_path` in `./config/train_mesh_*.yml` to the path of the pre-trained pose model.
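Editing the config by hand works fine; as a sketch, assuming `posenet_path` appears as a top-level `key: value` line in the YAML file (an assumption about the config format), a stdlib one-liner can rewrite it. The checkpoint path in the example is hypothetical.

```python
import re

def set_yaml_key(text, key, value):
    """Replace the value of a top-level `key: value` line in YAML text."""
    pattern = rf"(?m)^({re.escape(key)}\s*:\s*).*$"
    return re.sub(pattern, rf"\g<1>{value}", text)

# Example: point posenet_path at a pre-trained pose checkpoint
# (the path below is a placeholder, not a real file in the repo).
cfg = "posenet_path: ''\nlr: 0.001\n"
print(set_yaml_key(cfg, "posenet_path", "./experiment/pose_h36m/best.pth.tar"))
```

The same approach applies to `weight_path` in the `./config/test_*.yml` files used for evaluation.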
```bash
# Human3.6M
bash train_mesh_h36m.sh
# 3DPW & MPII3D
bash train_mesh_3dpw.sh
```
To evaluate a pre-trained pose estimation model (Stage 1):
```bash
# Human3.6M
bash test_pose_h36m.sh
# 3DPW
bash test_pose_3dpw.sh
```
To evaluate a pre-trained mesh model (Stage 2):
```bash
# Human3.6M
bash test_mesh_h36m.sh
# 3DPW
bash test_mesh_3dpw.sh
# MPII3D
bash test_mesh_mpii3d.sh
```
Change `weight_path` in the corresponding `./config/test_*.yml` to your model path.
Cite as below if you find this repository helpful to your project:
```bibtex
@inproceedings{tang2024arts,
  title={ARTS: Semi-Analytical Regressor using Disentangled Skeletal Representations for Human Mesh Recovery from Videos},
  author={Tang, Tao and Liu, Hong and You, Yingxuan and Wang, Ti and Li, Wenhao},
  booktitle={Proceedings of the 32nd ACM International Conference on Multimedia},
  pages={1514--1523},
  year={2024}
}
```
This repo is extended from the excellent works PMCE, Pose2Mesh, and TCMR. We thank the authors for releasing their code.