
Accelerating Hair Rendering by Learning High-Order Scattered Radiance

Meta Reality Labs Research
CVIT, International Institute of Information Technology, Hyderabad (IIIT-H)

Eurographics Symposium on Rendering (EGSR) 2023
Computer Graphics Forum (CGF) Vol. 42, No. 4


This repository is the official implementation of our EGSR 2023 paper. The renderer path traces hair using learnt approximations of high-order scattered radiance, and it provides control over the trade-off between bias (with respect to ground-truth path tracing) and speedup: more speedup results in more bias, and vice versa.

Depending on the target application, this bias can be acceptable. In the best case, we achieve a speedup of approximately 70% over path tracing.

Cloning & Building

This code has primarily been tested on Windows 10 with Visual Studio 2022.

To clone this repo, run the following (note the --recursive flag):

git clone --recursive https://github.com/facebookresearch/HairMSNN

Next, create a build directory and run CMake:
cd PATH_TO_CLONED_REPO
mkdir build
cd build
cmake ..
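CMake will normally pick a generator on its own; if it does not select Visual Studio 2022, you can request the generator explicitly. The generator name and x64 platform below are assumptions about a typical setup, so adjust them to match your installation:

# Assumes Visual Studio 2022; adjust the generator name to your installation
cmake .. -G "Visual Studio 17 2022" -A x64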

Open the resulting solution in Visual Studio and build the Release configuration. This should build the following targets:

render_path_tracing -> Path tracing
render_nrc -> Neural Radiance Caching
render_hair_msnn -> Our renderer
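If you prefer to build from the command line instead of opening the solution, the standard CMake build driver should also work from the build directory (this sketch assumes the Visual Studio generator configured above):

# Build all targets in the Release configuration from the build directory
cmake --build . --config Release

The usage commands below assume the resulting executables end up in a Release folder inside the build directory.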

Example Scenes

This repo includes two example scenes with two different hair styles: Straight and Curly. Both hair styles and the head model are taken from Cem Yuksel's webpage.
We use a custom scene description format; example files can be found at scenes/straight/config.json and scenes/curly/config.json.
Please note that you will have to edit the paths in a scene file before using it. For example, to use scenes/curly/config.json, modify every path that starts with YOUR_REPOSITORY_LOCATION:

{

    "hair": {
        "alpha": 2.0,
        "beta_m": 0.3,
        "beta_n": 0.3,
        "type": 1,
        "sigma_a": [
            0.06,
            0.1,
            0.2
        ],
        "geometry": *"YOUR_REPOSITORY_LOCATION/scenes/curly/wCurly.hair"
    },
    "integrator": {
        "ENV_PDF": true,
        "MIS": true,
        "height": 1024,
        "image_output": *"YOUR_REPOSITORY_LOCATION/scenes/curly/render.png",
        "path_v1": 1,
        "path_v2": 40,
        "spp": 500,
        "stats_output": *"YOUR_REPOSITORY_LOCATION/scenes/curly/stats.json",
        "width": 1024
    },
    "lights": {
        "environment": {
            "exr": *"YOUR_REPOSITORY_LOCATION/scenes/envmaps/christmas_photo_studio_07_4k.exr",
            "scale": 1.0
        },
        "directional": [
            {
                "from": [3, 3, 3],
                "emit": [1, 1, 1]
            }
        ]
    },
    "tcnn": {
        "config": *"YOUR_REPOSITORY_LOCATION/scenes/curly/tcnn_hairmsnn.json",
        "init_train": true
    },
    "surface": {
        "geometry": *"YOUR_REPOSITORY_LOCATION/scenes/curly/head.obj"
    }
}
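Since the same YOUR_REPOSITORY_LOCATION placeholder appears in every path, one way to fill it in is a one-off text substitution. The PowerShell one-liner below is only a convenience sketch; the path C:/path/to/HairMSNN is a stand-in for your actual clone location:

# Convenience sketch: replace the placeholder with your actual clone location
(Get-Content scenes/curly/config.json).Replace("YOUR_REPOSITORY_LOCATION", "C:/path/to/HairMSNN") | Set-Content scenes/curly/config.json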

Usage

The general syntax for interactive rendering is:

./Release/render_[RENDERER].exe [SCENE_FILE_PATH] [BETA]

where [RENDERER] can be one of path_tracing, nrc or hair_msnn.
[SCENE_FILE_PATH] is the path to the scene file.
The integer argument [BETA] is processed only by our renderer (hair_msnn) and controls the bias/speedup trade-off. For small values (0, 1, 2, ...), the speedup is considerable but so is the bias. For higher values (10, 11, ...), both the run time and the bias approach those of path tracing. Refer to the paper for details.

Run path tracing on curly hair:

./Release/render_path_tracing.exe YOUR_REPOSITORY_LOCATION/scenes/curly/config.json

Run NRC on curly hair:

./Release/render_nrc.exe YOUR_REPOSITORY_LOCATION/scenes/curly/config.json

Run our renderer on curly hair, with maximum speedup & bias (BETA=1):

./Release/render_hair_msnn.exe YOUR_REPOSITORY_LOCATION/scenes/curly/config.json 1
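To compare several bias/speedup settings, you can simply repeat the previous command with different BETA values. The PowerShell loop below is only an illustrative sketch; the chosen values are arbitrary, and each run still opens the interactive viewer:

# BETA values below are arbitrary examples; see the paper for guidance
foreach ($beta in 1, 5, 10) { ./Release/render_hair_msnn.exe YOUR_REPOSITORY_LOCATION/scenes/curly/config.json $beta }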

The Save EXR and Save PNG buttons in the UI save the current frame to the image_output path defined above.
The UI exposes several other parameters that you can experiment with; they should all be intuitive.

Requirements

  • Tested on a Windows 10 machine with an RTX 3090 and Visual Studio 2022
  • A recent NVIDIA GPU with RT cores
  • CMake v3.18
  • A C++14-capable compiler
  • NVIDIA CUDA 12.1
  • NVIDIA OptiX 7.4
  • NVIDIA driver 531

License

HairMSNN is MIT licensed, as found in the LICENSE file.

Citation

If you find this repository useful in your own work, please consider citing the paper.

@article{10.1111:cgf.14895,
  journal = {Computer Graphics Forum},
  title = {{Accelerating Hair Rendering by Learning High-Order Scattered Radiance}},
  author = {KT, Aakash and Jarabo, Adrian and Aliaga, Carlos and Chiang, Matt Jen-Yuan and Maury, Olivier and Hery, Christophe and Narayanan, P. J. and Nam, Giljoo},
  year = {2023},
  publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
  issn = {1467-8659},
  doi = {10.1111/cgf.14895}
}
