- [05/07/2023] We plan to release a re-implementation of the training code through lab4d.
- [01/22/2023] This repo is under development. It will contain pre-trained category models for cats, dogs, and humans.
We recommend using mamba for installation; it is much faster than conda at resolving dependency conflicts. To install mamba:
```shell
conda install -c conda-forge mamba -y
```
You may then replace `conda install` with `mamba install` in the commands below.
```shell
git clone [email protected]:gengshan-y/rac.git --recursive
cd rac
# base dependencies
mamba env create -f misc/rac-env.yml -y
# other dependencies
conda activate rac
pip install git+https://github.com/pytorch/functorch.git@a6e0e61
pip install git+https://github.com/facebookresearch/pytorch3d.git
cd quaternion; python setup.py install; cd -
```
```shell
# download model weights
wget https://www.dropbox.com/sh/h1w82lb4rg48jui/AACD8q-DCFjyDhRx0-j7EjWLa -O tmp.zip
mkdir -p logdir
unzip tmp.zip -d ./logdir
rm tmp.zip
```
```shell
python explore.py --flagfile logdir/dog80-v0/opts.log --nolineload --seqname dog80 --full_mesh --noce_color --svid 69 --tvid 45 --interp_beta
```
This interpolates the shape between source video 69 and target video 45. Results are saved to `logdir/dog80-v0/explore-interp-69.mp4`.
```shell
python explore.py --flagfile logdir/dog80-v0/opts.log --nolineload --seqname dog80 --full_mesh --noce_color --svid 69 --tvid 45
```
This retargets the motion of source video 69 to target video 45. Results are saved to `logdir/dog80-v0/explore-motion-69.mp4`.
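The two commands above differ only in the `--interp_beta` flag, and the output file name follows the mode. A minimal sketch of that naming pattern (the `output_path` helper is hypothetical, for illustration only; it is not part of the repo):

```python
# Hypothetical helper illustrating the output naming used above:
# with --interp_beta the result is explore-interp-{svid}.mp4;
# without it, explore-motion-{svid}.mp4 (svid = source video id).
def output_path(logdir: str, svid: int, interp_beta: bool) -> str:
    mode = "interp" if interp_beta else "motion"
    return f"{logdir}/explore-{mode}-{svid}.mp4"

print(output_path("logdir/dog80-v0", 69, True))   # logdir/dog80-v0/explore-interp-69.mp4
print(output_path("logdir/dog80-v0", 69, False))  # logdir/dog80-v0/explore-motion-69.mp4
```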
See `demo.ipynb` for an interactive demo visualizing the learned morphology and articulations.