
[CVPR'24] CNC

Official PyTorch implementation of How Far Can We Compress Instant-NGP-Based NeRF?.

Yihang Chen, Qianyi Wu, Mehrtash Harandi, Jianfei Cai

[Paper] [Project Page] [Github]

Links

🎉 HAC [ARXIV'24] is now released for efficient 3DGS compression! [Arxiv] [Project Page] [Github]

Overview

In this paper, we introduce the Context-based NeRF Compression (CNC) framework, which leverages highly efficient context models to provide a storage-friendly NeRF representation. Specifically, we excavate both level-wise and dimension-wise context dependencies to enable probability prediction for information entropy reduction. Additionally, we exploit hash collision and occupancy grids as strong prior knowledge for better context modeling.
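Below is a minimal PyTorch sketch of the idea above: a small network is conditioned on already-decoded coarser-level hash features and predicts a probability distribution over the quantized symbols of the next level, so that the expected code length (in bits) can be penalized during training. This is an illustrative sketch only, not the actual CNC implementation; all names here (LevelContextModel, bit_estimate, n_bins) are hypothetical.

# Illustrative level-wise context model; NOT the actual CNC code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LevelContextModel(nn.Module):
    def __init__(self, n_features=8, n_bins=17):
        super().__init__()
        self.n_features, self.n_bins = n_features, n_bins
        # Maps level l-1 features to per-feature logits over the quantization bins of level l.
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_features * n_bins),
        )

    def bit_estimate(self, coarser_feat, current_symbols):
        # coarser_feat:    (N, n_features) float context from the coarser level
        # current_symbols: (N, n_features) integer bin indices (long tensor) at the current level
        logits = self.net(coarser_feat).view(-1, self.n_features, self.n_bins)
        log_p = F.log_softmax(logits, dim=-1)
        log_p_sym = log_p.gather(-1, current_symbols.unsqueeze(-1)).squeeze(-1)
        return -log_p_sym.sum() / torch.log(torch.tensor(2.0))  # total estimated bits

A rate-distortion training loss would then look like loss = render_loss + lmbda * bits, which is, conceptually, the tradeoff controlled by the --lmbda flag used below.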

Performance

Installation

We tested our code on a server with Ubuntu 20.04.1, CUDA 11.8, and gcc 9.4.0.

  1. Create a new environment to run our code:
conda create -n CNC_env python==3.7.11
conda activate CNC_env
  2. Install the necessary dependencies:
pip install -r requirements.txt
pip install ninja

You might need to run the following command before continuing:

pip uninstall nvidia-cublas-cu11
  3. Install tinycudann (see the note after this list).
  4. Install our CUDA backends:
pip install gridencoder
pip install my_cuda_backen
  5. Manually replace the nerfacc package in your environment (PATH/TO/YOUR/nerfacc) with ours (./nerfacc).
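For step 3, this repo does not pin a specific command; the PyTorch bindings of tinycudann are typically installed with the command documented in the upstream tiny-cuda-nn repository:

pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch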

Code Execution

  1. Put the dataset in the ./data folder, e.g., ./data/nerf_synthetic/chair or ./data/TanksAndTemple/Barn.
  2. To train a scene in the nerf_synthetic or tanks_and_temple dataset, run one of the commands below. We use a learning rate of 1e-2 in our paper for both MLPs, but 6e-3 in this repo, as we find it more stable.
CUDA_VISIBLE_DEVICES=0 python examples/train_CNC_nerf_synthetic.py --lmbda 0.7e-3 --scene chair --sample_num 150000 --n_features 8
CUDA_VISIBLE_DEVICES=0 python examples/train_CNC_tank_temples.py --lmbda 0.7e-3 --scene Barn --sample_num 150000 --n_features 8

Optionally, you can vary --lmbda in [0.7e-3, 1e-3, 2e-3, 4e-3] to control the rate, and vary --sample_num in [150000, 200000] and --n_features in [1, 2, 4, 8] to trade off training time against performance (see the sweep sketch at the end of this section).

Please use --sample_num 150000 for --n_features 8, and --sample_num 200000 otherwise.

The code will automatically run the entire pipeline: training, encoding, decoding, and testing.

  3. Output data includes:
     1. Recorded results in ./results (fidelity, size, training time, and encoding/decoding time).
     2. Encoded bitstreams of the hash grid in ./bitstreams.
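To sweep several rate points on one scene, a small driver script along the following lines can be used. It is illustrative only and not part of this repo; adjust the script path, scene, and arguments as needed.

# Illustrative sweep over --lmbda for a single scene; not part of this repo.
import os
import subprocess

env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
for lmbda in ["0.7e-3", "1e-3", "2e-3", "4e-3"]:
    subprocess.run(
        ["python", "examples/train_CNC_nerf_synthetic.py",
         "--lmbda", lmbda, "--scene", "chair",
         "--sample_num", "150000", "--n_features", "8"],
        env=env, check=True,
    )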

Attention: Common Issues You Might Encounter

  1. Our gcc version is 9.4.0. If you encounter RuntimeError, please check your gcc version.
  2. In some cases, it may be necessary to uninstall nvidia-cublas-cu11 before installing tinycudann and our CUDA backends.
  3. If you install nerfacc using pip, the code will need to build the CUDA code on the first run (JIT). See nerfacc for more details.

Contact

Citation

If you find our work helpful, please consider citing:

@inproceedings{cnc2024,
  title={How Far Can We Compress Instant-NGP-Based NeRF?},
  author={Chen, Yihang and Wu, Qianyi and Harandi, Mehrtash and Cai, Jianfei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
