Our project is based on the Deep Shading paper by Nalbach et al. [1]. In this project, a set of screen-space buffers is provided to a deep convolutional network in order to synthesize different shading effects (such as Ambient Occlusion, Depth of Field, Global Illumination and Sub-surface Scattering). The set of input buffers depends on the shading effect we want to synthesize.
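The sketch below illustrates the general idea, assuming a small U-Net-style fully convolutional network built with `tf.keras` (available in TensorFlow 1.10). The buffer count, layer widths, and output activation are placeholders for illustration, not the architecture actually used in this repository or in the paper.

```python
# Minimal sketch: stacked screen-space buffers in, one shading channel out.
# Channel counts and layer sizes are assumptions, not the repository's config.
from tensorflow import keras
from tensorflow.keras import layers

def build_deep_shading_net(height=256, width=256, in_channels=9, out_channels=1):
    inputs = layers.Input(shape=(height, width, in_channels))  # stacked G-buffers

    # Encoder: downsample while increasing the number of features
    x1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x2 = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x1)
    x3 = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x2)

    # Decoder: upsample back to full resolution with skip connections
    u2 = layers.Concatenate()([layers.UpSampling2D()(x3), x2])
    u2 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)
    u1 = layers.Concatenate()([layers.UpSampling2D()(u2), x1])
    u1 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)

    # One output channel for e.g. Ambient Occlusion; more for RGB effects
    outputs = layers.Conv2D(out_channels, 1, activation="sigmoid")(u1)
    return keras.Model(inputs=inputs, outputs=outputs)

model = build_deep_shading_net()
model.compile(optimizer="adam", loss="mse")
```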
Requirements:
- Python 3.x
- Tensorflow 1.10
- Keras
- OpenCV 3.4 (for loading and resizing images; see the loading sketch after this list)
- h5py (for saving the trained model)
- pyexr
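As a quick illustration of what OpenCV and pyexr are used for, the snippet below loads a single EXR buffer and resizes it. The file name and target resolution are placeholders; it assumes `pyexr.read` returns the buffer as a float32 NumPy array.

```python
# Sketch: load one EXR G-buffer and resize it for the network input.
import cv2
import pyexr

buf = pyexr.read("normals.exr")  # placeholder file; H x W x C float32 array
buf = cv2.resize(buf, (256, 256), interpolation=cv2.INTER_AREA)
print(buf.shape, buf.dtype)
```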
Usage:
- Clone the repository
- Download the dataset (https://deep-shading-datasets.mpi-inf.mpg.de/)
- Install the requirements
- Generate the .tfrecord files for training and validation using `DataReader.py` (see the sketch after this list)
- Run `Shading.py`
[1] Oliver Nalbach, Elena Arabadzhiyska, Dushyant Mehta, Hans-Peter Seidel, and Tobias Ritschel. Deep Shading: Convolutional Neural Networks for Screen-Space Shading. Proc. EGSR 2017.