
PNeRFLoc [AAAI 24]

This is the official PyTorch implementation of PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields (AAAI 2024).

Update

  • Code for the indoor datasets (7Scenes, Replica)

  • Code for the outdoor dataset (Cambridge)

Installation

We have tested the code with Python 3.8/3.9 and PyTorch 1.8.1/2.0.1 under CUDA 11.3 and 11.8; newer versions of PyTorch should also work. The installation steps are as follows:

  • Create a virtual environment: conda create -n PNeRFLoc python=3.9

  • Activate the virtual environment: conda activate PNeRFLoc

  • Install dependencies: bash requirements.sh. The default installs PyTorch 2.0.1 with CUDA 11.8; select the PyTorch and CUDA versions that are compatible with your GPU (a combined command sketch follows this list).
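
Putting the steps together, a minimal sketch of the whole installation (assuming your GPU works with the default CUDA 11.8 build):

```bash
# Create and activate the conda environment (Python 3.9 shown; 3.8 also works)
conda create -n PNeRFLoc python=3.9
conda activate PNeRFLoc

# Install dependencies. requirements.sh defaults to PyTorch 2.0.1 + CUDA 11.8;
# edit it first if your GPU needs a different PyTorch/CUDA combination.
bash requirements.sh
```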

Data Preparation

We use the 7Scenes and Replica datasets.

After preparing the data, extract R2D2 keypoints from the query images by running:

bash dev_scripts/utils/generate_r2d2.sh

The layout should look like this:

PNeRFLoc
├── data_src
│   ├── Replica
│   │   ├── room0
│   │   │   └── exported
│   │   │       ├── color
│   │   │       ├── depth
│   │   │       ├── depth_render
│   │   │       ├── pose
│   │   │       ├── intrinsic
│   │   │       └── r2d2_query
│   │   ├── room1
│   │   └── ...
│   ├── 7Scenes
│   │   ├── chess
│   │   │   └── exported
│   │   │       ├── color
│   │   │       ├── depth
│   │   │       ├── depth_render
│   │   │       ├── pose
│   │   │       ├── intrinsic
│   │   │       ├── TrainSplit.txt
│   │   │       ├── TestSplit.txt
│   │   │       └── r2d2_query
│   │   ├── pumpkin
│   │   ├── ...
│   │   └── 7scenes_sfm_triangulated

Train

Simply run

bash ./dev_scripts/train/${Dataset}/${Scene}.sh

Command Line Arguments for train
  • scan: Scene name.
  • train_end: Reference-sequence cut-off ID.
  • skip: Take one image out of every skip images of the reference sequence as a training view.
  • vox_res: Voxel downsampling resolution.
  • gpu_ids: GPU ID.
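
As a concrete example (assuming the dev_scripts folders are named after the datasets and scenes shown in the data layout above), training the 7Scenes chess scene would be:

```bash
# Illustrative only: dataset/scene directory names are assumed to mirror the
# data layout. The per-scene script sets scan, train_end, skip, vox_res and
# gpu_ids, so edit the script to change them rather than passing flags here.
bash ./dev_scripts/train/7Scenes/chess.sh
```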

Optimize

Simply run

bash ./dev_scripts/loc/${Dataset}/${Scene}.sh

Command Line Arguments for optimize
  • format: 0 optimizes the pose with quaternions, 1 with SE(3), and 2 with a 6D rotation representation.
  • save_path: Path to the optimized pose results. If you change this path, pass the same path through the args when evaluating.
  • per_epoch: Number of optimization steps per image (250 by default).
  • render_times: Total number of renderings during optimization (1 by default). The total number of optimization steps therefore equals per_epoch * render_times.
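
With the defaults (per_epoch = 250, render_times = 1) each query image is optimized 250 times; render_times = 2 would double that to 500. An illustrative run, under the same naming assumption as in the Train section:

```bash
# Optimize query poses for the "chess" scene (directory names assumed).
# Total optimization steps per image = per_epoch * render_times,
# i.e. 250 * 1 = 250 with the defaults.
bash ./dev_scripts/loc/7Scenes/chess.sh
```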

Evaluation

Once you have optimized all scenes in a dataset, you can evaluate them like this:

python evaluate_${Dataset}.py

You can also evaluate specific scenes by passing their names explicitly:

python evaluate_${Dataset}.py --scene ${scene_name1} ${scene_name2} ... ${scene_nameN}
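
For instance, assuming the 7Scenes evaluation script is named evaluate_7Scenes.py (the ${Dataset} placeholder above) and the scene names match the data layout, evaluating only chess and pumpkin would look like:

```bash
# Hypothetical example: both the script name and the scene names are taken
# from the patterns above. If you changed save_path during optimization,
# pass the same path here via the corresponding argument.
python evaluate_7Scenes.py --scene chess pumpkin
```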

Citing

@inproceedings{zhao2024pnerfloc,
  title={PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields},
  author={Zhao, Boming and Yang, Luwei and Mao, Mao and Bao, Hujun and Cui, Zhaopeng},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={7},
  pages={7450--7459},
  year={2024}
}
