Paper | Project Page | arXiv | Pretrained Models | Minimal Datasets | EasyVolcap
Repository for our paper 4K4D: Real-Time 4D View Synthesis at 4K Resolution.
News:
- 24.02.27: 4K4D has been accepted to CVPR 2024.
- 23.12.18: The backbone of 4K4D, our volumetric video framework EasyVolcap has been open-sourced!
- 23.12.18: The inference code for 4K4D has also been open-sourced along with documentation.
Demo videos: 4k4d_dance3_demo.mp4, 4k4d_0008_01_demo.mp4 and 4k4d_0013_01_demo.mp4.
For more high-resolution results and real-time demos, please visit our project page.
Please refer to the installation guide of EasyVolcap for basic environment setup.
After setting up the environment, you should execute the installation command in this repository's root directory to register the modules:
# Run this inside the 4K4D repo
pip install -e . --no-build-isolation --no-deps
Note that not all requirements listed in environment.yml and requirements.txt need to be installed on your system, as they also cover other parts of EasyVolcap. Thanks to the modular design of EasyVolcap, these missing packages will not hinder the rendering and training of 4K4D.
After installation, PyTorch, PyTorch3D and tiny-cuda-nn should be present on your system for the rendering of 4K4D to work properly.
For the training of 4K4D, you should also make sure that Open3D is properly installed.
Other packages can be easily installed using pip if import errors are encountered.
Check that this is the case with:
python -c "from easyvolcap.utils.console_utils import *" # Check for easyvolcap installation. 4K4D is a fork of EasyVolcap
python -c "import torch; print(torch.rand(3,3,device='cuda'))" # Check for pytorch installation
python -c "from pytorch3d.io import load_ply" # Check for pytorch3d installation
python -c "import tinycudann" # Check for tinycudann installation
python -c "import open3d" # Check for open3d installation. open3d is only required for training (extracting visual hulls)
In this section, we provide instructions on downloading the full datasets for DNA-Rendering, ZJU-MoCap, NHR, ENeRF-Outdoor and Mobile-Stage. If you only want to preview the pretrained models in the interactive GUI without any need for training, we recommend checking out the minimal dataset section, because the full datasets are quite large. Note that for full-quality rendering, you still need to download the full dataset as per the instructions below.
4K4D follows the typical dataset setup of EasyVolcap, where we group similar sequences into sub-directories of a particular dataset. Inside those sequences, the directory structure should generally remain the same. For example, after downloading and preparing the 0013_01 sequence of the DNA-Rendering dataset, the directory structure should look like this:
# data/renbody/0013_01:
# Required:
images # raw images, cameras inside: images/00, images/01 ...
masks # foreground masks, cameras inside: masks/00, masks/01 ...
extri.yml # extrinsic camera parameters, not required if the optimized folder is present
intri.yml # intrinsic camera parameters, not required if the optimized folder is present
# Optional:
optimized # OPTIONAL: optimized camera parameters: optimized/extri.yml, optimized/intri.yml
vhulls # OPTIONAL: extracted visual hull: vhulls/000000.ply, vhulls/000001.ply ... not required if the optimized folder and surfs folder are present
surfs # OPTIONAL: processed visual hull: surfs/000000.ply, surfs/000001.ply ...
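As a quick sanity check (a minimal sketch assuming the 0013_01 layout above), you can verify that the required files and folders are in place:
# Check the required entries of data/renbody/0013_01 described above
for d in images masks; do
    test -d data/renbody/0013_01/$d && echo "found $d/" || echo "MISSING $d/"
done
for f in intri.yml extri.yml; do
    test -f data/renbody/0013_01/$f && echo "found $f" || echo "missing $f (OK if optimized/ is present)"
done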
Please refer to Im4D's guide to download the ZJU-MoCap, NHR and DNA-Rendering datasets.
After downloading, the extracted files should be placed into data/my_zjumocap, data/NHR and data/renbody respectively.
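For example (a hedged sketch; the actual archive names depend on what the download provides), placing a DNA-Rendering sequence could look like:
# Hypothetical archive name; substitute the file you actually downloaded
mkdir -p data/renbody
tar -xf 0013_01.tar.gz -C data/renbody
ls data/renbody/0013_01 # should now contain images/, masks/, intri.yml and extri.yml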
If you are interested in the processed data, please email me at [email protected] and CC [email protected] and [email protected] to request the processing guide.
For ZJU-MoCap, you can fill in this Google form to request the download link.
Note that you should cite the corresponding papers if you use these datasets.
If you are interested in downloading the ENeRF-Outdoor dataset, please fill in this Google form to request the download link. Note that this dataset is for non-commercial use only.
After downloading, the extracted files should be placed in data/enerf_outdoor.
If you are interested in downloading the Mobile-Stage dataset, please fill in this Google form to request the download link. Note that this dataset is for non-commercial use only.
After downloading, the extracted files should be placed in data/mobile_stage.
First, download the pretrained models.
After downloading, place them into data/trained_model (e.g. data/trained_model/4k4d_0013_01/1599.npz, data/trained_model/4k4d_0013_01_r4/latest.pt and data/trained_model/4k4d_0013_01_mb/-1.npz).
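For instance (a sketch only; match the source path and file names to your actual downloads):
# Hypothetical download location; adjust the source path as needed
mkdir -p data/trained_model/4k4d_0013_01
cp ~/Downloads/1599.npz data/trained_model/4k4d_0013_01/1599.npz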
Note: The pre-trained models were created with the release codebase. This codebase has been cleaned up and includes bug fixes, hence the metrics you get from evaluating them will differ from those in the paper. If you're interested in reproducing the error metrics reported in the paper, please consider downloading the reference images.
Here we provide their naming conventions, which correspond to their respective config files:
- 4k4d_0013_01 (without any postfix) is the real-time 4K4D model, corresponding to configs/projects/realtime4dv/rendering/4k4d_0013_01.yaml. This model can only be used for rendering. When combined with the full dataset mentioned above, this is the full official 4K4D implementation.
- 4k4d_0013_01_r4 (with the _r4 postfix) is the full pretrained model used during training, corresponding to configs/projects/realtime4dv/training/4k4d_0013_01_r4.yaml. This model can only be used for training. r4 is short for realtime4dv.
- 4k4d_0013_01_mb (with the _mb postfix) is an extension to 4K4D (note: to be open-sourced) where we distill the IBR + SH appearance model into a set of low-degree SH parameters. This model can only be used for rendering and does not require precomputation. mb is short for mobile.
After placing the models and datasets in their respective places, you can run EasyVolcap with configs located in configs/projects/realtime4dv/rendering to perform rendering operations with 4K4D.
For example, to render the 0013_01 sequence of the DNA-Rendering dataset, you can run:
# GUI Rendering
evc -t gui -c configs/projects/realtime4dv/rendering/4k4d_0013_01.yaml,configs/specs/vf0.yaml # Only load, precompute and render the first frame
evc -t gui -c configs/projects/realtime4dv/rendering/4k4d_0013_01.yaml # Precompute and render all 150 frames, this could take a minute or two
# Testing with input views
evc -t test -c configs/projects/realtime4dv/rendering/4k4d_0013_01.yaml,configs/specs/eval.yaml,configs/specs/vf0.yaml # Only render some of the views of the first frame
evc -t test -c configs/projects/realtime4dv/rendering/4k4d_0013_01.yaml,configs/specs/eval.yaml # Only render some selected testing views and frames
# Rendering rotating novel views
evc -t test -c configs/projects/realtime4dv/rendering/4k4d_0013_01.yaml,configs/specs/eval.yaml,configs/specs/spiral.yaml,configs/specs/ibr.yaml,configs/specs/vf0.yaml # Render a static rotating novel view
evc -t test -c configs/projects/realtime4dv/rendering/4k4d_0013_01.yaml,configs/specs/eval.yaml,configs/specs/spiral.yaml,configs/specs/ibr.yaml # Render a dynamic rotating novel view
We provide a minimal dataset for 4K4D to render with its full pipeline by encoding the input images and masks into videos (typically less than 100MiB each).
This leads to almost no visual quality loss, but if you have access to the full dataset, it's recommended to run the model on the full dataset instead (Sec. Rendering).
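If you'd like to inspect one of the encoded videos before extraction (the path below is hypothetical; see the videos_libx265 folder described in the steps that follow), ffprobe can report the codec and resolution:
# Hypothetical video path; adjust to an actual file inside videos_libx265
ffprobe -v error -select_streams v:0 \
    -show_entries stream=codec_name,width,height \
    -of default=noprint_wrappers=1 \
    data/renbody/0013_01/videos_libx265/00.mp4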
Here we provide instructions on setting up the minimal dataset and rendering with it:
- Download the pretrained models. If you've already done so for the Rendering section, this step can be skipped.
  - Pretrained models should be placed directly into the data/trained_model directory (e.g. data/trained_model/4k4d_0013_01/1599.npz).
- Download the minimal datasets.
  - Place the compressed files inside their respective data_root (e.g. 0013_01_libx265.tar.gz should be placed into data/renbody/0013_01) and uncompress them (see the example commands below).
  - Note that if you've already downloaded the full dataset with raw images as per the Rendering section, there's no need to redownload the minimal dataset with encoded videos.
  - However, if you do continue, no files should be replaced, and you can safely run both kinds of rendering.
  - After uncompression, you should see two folders: videos_libx265 and optimized. The former contains the encoded videos, and the latter contains the optionally optimized camera parameters. For some datasets, you'll see intri.yml and extri.yml instead of the optimized folder. For some others, you'll see a videos_masks_libx265 folder storing the masks separately.
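Example uncompression commands (a minimal sketch using the 0013_01 archive named above):
# Place and uncompress the minimal dataset archive
mkdir -p data/renbody/0013_01
tar -xzf 0013_01_libx265.tar.gz -C data/renbody/0013_01
ls data/renbody/0013_01 # should now contain videos_libx265/ and optimized/ (or intri.yml and extri.yml)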
- Process the minimal datasets using these two scripts:
  - scripts/realtime4dv/extract_images.py: Extract images from the encoded videos. Use --data_root to control which dataset to extract.
  - scripts/realtime4dv/extract_masks.py: Extract masks from the encoded videos. Use --data_root to control which dataset to extract.
  - After the extraction (preprocessing), you should see an images_libx265 and a masks_libx265 folder inside your data_root. Example processing scripts:
# For foreground datasets with masks and masked images (DNA-Rendering, NHR, ZJU-Mocap)
python scripts/realtime4dv/extract_images.py --data_root data/renbody/0013_01 --vcodec none --hwaccel none
python scripts/realtime4dv/extract_masks.py --data_root data/renbody/0013_01 --vcodec none --hwaccel none
# For datasets with masks and full images (ENeRF-Outdoor and dance3 of MobileStage)
python scripts/realtime4dv/extract_images.py --data_root data/mobile_stage/dance3 --vcodec none --hwaccel none
python scripts/realtime4dv/extract_images.py --data_root data/mobile_stage/dance3 --vcodec none --hwaccel none --videos_dir videos_masks_libx265 --images_dir masks_libx265 --single_channel
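If you've downloaded several minimal datasets, the extraction can be wrapped in a simple loop. This is a sketch only; the data_root values below are examples, so keep just the sequences you actually downloaded:
# Hypothetical selection of data_roots; edit to match your downloads
for root in data/renbody/0013_01 data/renbody/0008_01; do
    python scripts/realtime4dv/extract_images.py --data_root $root --vcodec none --hwaccel none
    python scripts/realtime4dv/extract_masks.py --data_root $root --vcodec none --hwaccel none
done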
- Now the minimal dataset has been prepared, and you can render a model with it. The only change is to append a new config onto the command: configs/specs/video.yaml. Example rendering scripts:
# See configs/projects/realtime4dv/rendering for more
evc -t gui -c configs/projects/realtime4dv/rendering/4k4d_0013_01.yaml,configs/specs/video.yaml
evc -t gui -c configs/projects/realtime4dv/rendering/4k4d_sport1.yaml,configs/specs/video.yaml
evc -t gui -c configs/projects/realtime4dv/rendering/4k4d_my_313.yaml,configs/specs/video.yaml
evc -t gui -c configs/projects/realtime4dv/rendering/4k4d_dance3.yaml,configs/specs/video.yaml
evc -t gui -c configs/projects/realtime4dv/rendering/4k4d_actor1_4.yaml,configs/specs/video.yaml
- TODO: Finish up the web viewer for the mobile 4k4d.
- TODO: Add trainable models & training examples
- TODO: Add trainable models & examples on custom datasets
- TODO: Add trainable models & examples on custom full-scene datasets
We would like to acknowledge the following inspiring prior work:
- EasyVolcap: Accelerating Neural Volumetric Video Research (Xu et al.)
- IBRNet: Learning Multi-View Image-Based Rendering (Wang et al.)
- ENeRF: Efficient Neural Radiance Fields for Interactive Free-viewpoint Video (Lin et al.)
- K-Planes: Explicit Radiance Fields in Space, Time, and Appearance (Fridovich-Keil et al.)
If you find this code useful for your research, please cite us using the following BibTeX entries.
@article{xu20234k4d,
  title={4K4D: Real-Time 4D View Synthesis at 4K Resolution},
  author={Xu, Zhen and Peng, Sida and Lin, Haotong and He, Guangzhao and Sun, Jiaming and Shen, Yujun and Bao, Hujun and Zhou, Xiaowei},
  journal={arXiv preprint arXiv:2310.11448},
  year={2023}
}
@inproceedings{xu2023easyvolcap,
  title={EasyVolcap: Accelerating Neural Volumetric Video Research},
  author={Xu, Zhen and Xie, Tao and Peng, Sida and Lin, Haotong and Shuai, Qing and Yu, Zhiyuan and He, Guangzhao and Sun, Jiaming and Bao, Hujun and Zhou, Xiaowei},
  booktitle={SIGGRAPH Asia 2023 Technical Communications},
  year={2023}
}