
This is the project page for our ICCVW 2017 paper 'Deep photometric stereo network' by Hiroaki Santo, Masaki Samejima, Yusuke Sugano, Boxin Shi, and Yasuyuki Matsushita.

Deep photometric stereo network

This repository is an implementation of Deep Photometric Stereo Network (https://openaccess.thecvf.com/content_ICCV_2017_workshops/w9/html/Santo_Deep_Photometric_Stereo_ICCV_2017_paper.html).

How to Train

We use the deep learning framework TensorFlow with the following libraries:

We use Python 2.7 on Ubuntu 14.04. You can use our Dockerfile (nvidia-docker is required).
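For example, the image described by the Dockerfile can be built and started as follows (the image tag dpsn is just an example):

$ docker build -t dpsn .
$ nvidia-docker run -it dpsn /bin/bash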

Download datasets

We use the following datasets for training and evaluation.

You can download each dataset with the download_*.sh scripts. The DiLiGenT dataset is used only for evaluation.
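For example, all of the provided download scripts can be run in one pass:

$ for f in download_*.sh; do bash "$f"; done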

params.py

This file defines the path to each dataset and the light source directions. Currently, the light source directions match the DiLiGenT dataset; you can modify these values for your own setup.

The path where the rendered training images are saved is also defined here.
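A minimal sketch of the kind of settings defined there (the variable names and paths below are illustrative, not necessarily those used in params.py):

# Illustrative sketch only; see params.py for the actual variable names and values.
import numpy as np

# Paths to the downloaded datasets and to the directory where the rendered
# training/test images are written.
MERL_BRDF_PATH = "/path/to/merl_brdf_database"
DILIGENT_PATH = "/path/to/DiLiGenT/pmsData"
TRAINING_DATA_PATH = "/path/to/rendered_training_data"

# Light source directions as an (N, 3) array of unit vectors. By default they
# match the DiLiGenT capture setup; replace them with your own directions if needed.
LIGHT_DIRECTIONS = np.loadtxt("/path/to/light_directions.txt")  # one "x y z" row per light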

Rendering training data

First, you need to build:

$ cd ./merl_brdf_database
$ cmake .
$ make

This is needed because we use BRDFRead.cpp, the sample code distributed with the MERL BRDF Database, to read the measured BRDF files.

You can render synthetic training and test data by:

$ python renderin_with_merl.py

The training and test data are written to the path specified in params.py.
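Conceptually, each training sample is a per-pixel observation vector: the intensities of one surface point rendered under all light directions. A simplified sketch with a Lambertian reflectance (renderin_with_merl.py uses measured MERL BRDFs instead) looks like this:

# Simplified illustration of the rendered per-pixel data; the actual script
# uses measured MERL BRDFs rather than the Lambertian model shown here.
import numpy as np

def render_observations(normal, light_dirs, albedo=1.0):
    # normal: (3,) unit surface normal, light_dirs: (N, 3) unit light directions.
    # Returns an (N,) observation vector, the per-pixel input to the network.
    shading = light_dirs.dot(normal)          # cosine term n . l
    return albedo * np.maximum(shading, 0.0)  # clamp attached shadows to zero

# Example: a surface point observed under two light directions.
lights = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.866]])
obs = render_observations(np.array([0.3, 0.0, 0.954]), lights)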

Preparing training data

We use the TFRecord format for training data. You can convert the rendered images to TFRecord files by:

$ python dataset.py
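A minimal sketch of the conversion (the feature names here are illustrative; dataset.py defines the actual serialization):

# Illustrative TFRecord serialization; dataset.py defines the actual feature
# names and layout used for training.
import tensorflow as tf

def write_tfrecord(observations, normals, path):
    # observations: (M, N) per-pixel intensities under N lights,
    # normals: (M, 3) ground-truth unit normals.
    writer = tf.python_io.TFRecordWriter(path)
    for obs, n in zip(observations, normals):
        example = tf.train.Example(features=tf.train.Features(feature={
            "measurement": tf.train.Feature(
                float_list=tf.train.FloatList(value=obs.astype(float).tolist())),
            "normal": tf.train.Feature(
                float_list=tf.train.FloatList(value=n.astype(float).tolist())),
        }))
        writer.write(example.SerializeToString())
    writer.close()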

Training

$ python train.py --output_path PATH_TO_SAVE_MODEL --gpu GPU_ID

Other arguments can be listed with the --help option.
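For example:

$ python train.py --help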

Directory tree of Model

PATH_TO_SAVE_MODEL contains the following directories:

summary

Summaries for TensorBoard

  • {train|test}/cost : output of the loss function
  • {train|test}/RMSE : root mean squared error between the ground-truth and predicted normal vectors
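These summaries can be visualized with TensorBoard:

$ tensorboard --logdir PATH_TO_SAVE_MODEL/summary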

checkpoint

Checkpoint files

best_checkpoint

The best checkpoint file. "Best" means the checkpoint that minimizes the L2 loss on the synthetic test data.

eval

Estimated images for synthetic test data.
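For reference, a checkpoint from the checkpoint or best_checkpoint directory can be restored with the standard TensorFlow 1.x API (a sketch only; it assumes the directory contains a checkpoint state file, and the tensor names needed for inference are defined by the training code):

# Sketch of restoring the most recent checkpoint in a directory.
import tensorflow as tf

ckpt = tf.train.latest_checkpoint("PATH_TO_SAVE_MODEL/best_checkpoint")
saver = tf.train.import_meta_graph(ckpt + ".meta")
with tf.Session() as sess:
    saver.restore(sess, ckpt)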

Result

Our estimated normal maps for the DiLiGenT dataset are available in .npy format. If you want to use them for comparison, please contact the first author of the paper.
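The .npy files can be loaded with NumPy. A common way to compare an estimated normal map against ground truth is the per-pixel angular error; a sketch, assuming both maps are stored as (H, W, 3) arrays of unit normals (the file names below are hypothetical):

# Sketch: per-pixel angular error between an estimated and a ground-truth normal map.
import numpy as np

estimated = np.load("estimated_normal_map.npy")        # hypothetical file name
ground_truth = np.load("ground_truth_normal_map.npy")  # hypothetical file name

cosine = np.clip(np.sum(estimated * ground_truth, axis=-1), -1.0, 1.0)
angular_error_deg = np.degrees(np.arccos(cosine))
print("mean angular error: %.2f deg" % angular_error_deg.mean())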
