
K-Planes: Explicit Radiance Fields in Space, Time, and Appearance

Where we develop an extensible (to arbitrary-dimensional scenes) and explicit radiance field model that can be used for static, dynamic, and variable-appearance datasets.

Code release for:

K-Planes: Explicit Radiance Fields in Space, Time, and Appearance

Sara Fridovich-Keil*, Giacomo Meanti*, Frederik Rahbæk Warburg, Benjamin Recht, Angjoo Kanazawa

🚀 Project page

📰 Paper

📁 Raw output videos and pretrained models

Integration with the NerfAcc library for even faster training

Integration with NerfStudio for easier visualization and development

Setup

We recommend setup with a conda environment using PyTorch for GPU (a high-memory GPU is not required). Training and evaluation data can be downloaded from the respective websites (NeRF, LLFF, DyNeRF, D-NeRF, Phototourism).
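A minimal environment sketch, assuming a recent PyTorch build with CUDA support; the environment name, Python version, and the presence of a requirements.txt are assumptions, so adjust to your system:

    # create and activate a conda environment (name and Python version are assumptions)
    conda create -n kplanes python=3.9 -y
    conda activate kplanes
    # install PyTorch with CUDA support; pick the wheel matching your driver
    pip install torch torchvision
    # install the remaining dependencies, assuming the repo ships a requirements.txt
    pip install -r requirements.txt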

Training

Our config files are provided in the configs directory, organized by dataset and explicit vs. hybrid model version. These config files may be updated with the location of the downloaded data and your desired scene name and experiment name. To train a model, run

PYTHONPATH='.' python plenoxels/main.py --config-path path/to/config.py

Note that for DyNeRF scenes it is recommended to first run for a single iteration at 4x downsampling to pre-compute and store the ray importance weights, and then run as usual at 2x downsampling. This is not required for other datasets.
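A sketch of that two-stage DyNeRF recipe, expressed with the command-line overrides described in the CUSTOMIZED section below; the config path and the data_downsample / num_steps key names are assumptions, so check your config file for the actual names:

    # stage 1: a single iteration at 4x downsampling to pre-compute and cache
    # the ray importance weights (key names are assumptions)
    PYTHONPATH='.' python plenoxels/main.py --config-path path/to/dynerf_config.py \
        data_downsample=4 num_steps=1
    # stage 2: the usual training run at 2x downsampling
    PYTHONPATH='.' python plenoxels/main.py --config-path path/to/dynerf_config.py \
        data_downsample=2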

Visualization/Evaluation

The main.py script also supports rendering a novel camera trajectory, evaluating quality metrics, and rendering a space-time decomposition video from a saved model. These options are accessed via flags --render-only, --validate-only, and --spacetime-only, and a saved model can be specified via --log-dir.
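For example, to compute quality metrics from a saved model (the log directory below is a placeholder):

    PYTHONPATH='.' python plenoxels/main.py --config-path path/to/config.py \
        --log-dir logs/my_experiment --validate-only

Swapping --validate-only for --render-only or --spacetime-only renders a novel camera trajectory or the space-time decomposition video instead.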

License and Citation

@inproceedings{kplanes_2023,
      title={K-Planes: Explicit Radiance Fields in Space, Time, and Appearance},
      author={{Sara Fridovich-Keil and Giacomo Meanti} and Frederik Rahbæk Warburg and Benjamin Recht and Angjoo Kanazawa},
      year={2023},
      booktitle={CVPR}
}

Note: Joint first-authorship is not fully supported in BibTeX; you may need to modify the above depending on your format.

This work is made available under the BSD 3-clause license. See the LICENSE file in the repository for a copy.


CUSTOMIZED

Setup

  1. Pull the Docker image:

    docker pull ciplab/minsu_torch:kplanes

    Make sure the GPU is a 2080Ti / 3090 / A5000 / A6000 / A100 (Titan RTX has not been tested with TinyCUDANN).

  2. Clone the repository:

    git clone https://github.com/minsu1206/K-Planes.git

  3. Prepare the dataset:

    cp -r /media/NAS2/CIPLAB/nerf_team/room1_processed dataset/samsung2024

  4. Directory overview:

    dataset
    - room1
       - frames{downsample_factor}
          - cam00
          - cam01
          ...
          - cam15
          - poses_bounds_${tag1}.npy
          - intrinsic_${tag1}.npy
    - room2
       - frames{downsample_factor}
          - cam00
          - cam01
          ...
    K-Planes
    - plenoxels
    - logs
    - full_example.sh
    - full1.sh
    - full2.sh
    ...

Train

See full_example.sh. This script covers:

  • training
  • evaluation
  • rendering (spiral / arc)
  • space-time visualization

A condensed sketch of this sequence follows.
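The sketch below condenses that sequence using the flags documented earlier in this README; full_example.sh itself is authoritative, and the config path and log directory are placeholders:

    CONFIG=path/to/config.py
    LOGDIR=logs/my_experiment
    # train
    PYTHONPATH='.' python plenoxels/main.py --config-path "$CONFIG"
    # evaluate quality metrics
    PYTHONPATH='.' python plenoxels/main.py --config-path "$CONFIG" --log-dir "$LOGDIR" --validate-only
    # render a novel camera trajectory (spiral / arc)
    PYTHONPATH='.' python plenoxels/main.py --config-path "$CONFIG" --log-dir "$LOGDIR" --render-only
    # render the space-time decomposition video
    PYTHONPATH='.' python plenoxels/main.py --config-path "$CONFIG" --log-dir "$LOGDIR" --spacetime-only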

How to run an ablation study

The K-Planes code supports configuration overriding: append $hyperparam=$value at the end of the training command. Note that override values are passed in as strings. Additional notes:

  1. To use a customized poses_bounds.npy, name the file "poses_bounds_{$tag}.npy" and then override pose_npy_suffix=$tag when you run the training code. If pose_npy_suffix='', "poses_bounds.npy" is used automatically.

  2. To use the intrinsics file, set use_intrinsic=True. Note that if pose_npy_suffix != '', "intrinsic_{$pose_npy_suffix}.npy" is used for training.

  3. To visualize every N steps (for example, N=300), add vis_every=300 to your training command.

An example command combining these overrides follows.
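In this sketch the config path and tag value are placeholders:

    # use poses_bounds_room1v2.npy and intrinsic_room1v2.npy, and visualize every 300 steps
    PYTHONPATH='.' python plenoxels/main.py --config-path path/to/config.py \
        pose_npy_suffix=room1v2 use_intrinsic=True vis_every=300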

Render_arc

See render_arc.sh.

This script is an example of how to render images along an arc path. Make sure the path to the trained model is correct: the checkpoint is loaded from os.path.join(args.log_dir, "model.pth").
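A quick sanity check before launching the script; the log directory below is a placeholder:

    # render_arc.sh expects a trained model at $LOGDIR/model.pth
    LOGDIR=logs/my_experiment
    if [ -f "$LOGDIR/model.pth" ]; then
        bash render_arc.sh
    else
        echo "model.pth not found in $LOGDIR"
    fi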

Some utilities

  1. Crop a result video into 3 videos (see the sketch after this list):

    python cropvideo.py $PATH or bash cropvideo.sh $PATH

  2. ...
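For item 1, a hedged sketch of what the crop might look like with plain ffmpeg, assuming the result video is three equal-width panels laid out side by side; cropvideo.py itself is authoritative:

    # split a three-panel video into three separate videos
    IN=$1
    for i in 0 1 2; do
        # crop=w:h:x:y; take the i-th third of the frame width
        ffmpeg -i "$IN" -filter:v "crop=iw/3:ih:${i}*iw/3:0" "${IN%.*}_part${i}.mp4"
    done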
