This began as a hobby project, so the codebase is unstructured; it had been left untouched since around 2017. I only organized the files and added this document to make it more presentable.
Sample scene and render configuration files are missing, so you cannot feed them to the built program to render an image; I have lost the accompanying work that prepared them.
This work is an implementation of the light transport algorithm proposed in the paper Light transport simulation with vertex connection and merging.
You can read the paper here.
To build, run the following.
./build.sh
(Currently not possible: the necessary sample files are missing.)
To render, feed scene and render configuration files like so:
./vcm &lt;output image file path&gt; &lt;render configuration file path&gt; &lt;scene file path #1&gt; ... &lt;scene file path #N&gt;
and it should produce the rendered image file in PFM format.
Here are some images rendered with this program, along with some rendered by similar programs of mine.
For those who are unfamiliar with rendering, light transport theory, or Monte Carlo ray tracing, see below.
In computer graphics, rendering is the process by which a computer program is fed geometric and material parameters designed by an artist to produce images. Images rendered using Monte Carlo ray tracing techniques in particular are photorealistic, owing to underlying physically based mathematical models and sophisticated algorithms.
You can watch this video for an easier understanding of rendering and ray tracing.
Light transport theory is the study of light transport problems, solved under mathematical frameworks and through computer simulations in order to render photorealistic images.
Monte Carlo ray tracing is a technique that applies Monte Carlo integration to light transport problems to solve them.
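As a minimal illustration of Monte Carlo integration with importance sampling (a toy one-dimensional example, not code from this repository), the sketch below estimates $\int_0^1 3x^2\,dx = 1$ by drawing samples from the density $p(x)=2x$ via the inversion method and averaging $f/p$:

```cpp
#include <cstdint>
#include <cmath>

// Tiny linear congruential generator, purely to keep the example
// self-contained (not the PRNG used by this project).
struct Lcg {
    uint64_t s;
    double uniform() {
        s = s * 6364136223846793005ULL + 1442695040888963407ULL;
        return (s >> 11) * 0x1.0p-53;  // uniform in [0, 1)
    }
};

// Monte Carlo estimate of I = ∫_0^1 3x^2 dx = 1.
// We importance-sample from p(x) = 2x by inverting its CDF: x = sqrt(u).
double estimate(uint64_t seed, int n) {
    Lcg rng{seed};
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double u = rng.uniform();
        double x = std::sqrt(u);     // sample distributed as p(x) = 2x
        double f = 3.0 * x * x;      // integrand
        double p = 2.0 * x;          // pdf of the sample
        if (p > 0.0) sum += f / p;   // guard against the u = 0 corner case
    }
    return sum / n;  // unbiased estimate of I
}
```

The estimate converges to 1 as the sample count grows; choosing $p$ roughly proportional to $f$ is exactly the variance-reduction idea discussed later in this document.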
The fundamentals of light transport theory are described by Eric Veach in his thesis Robust Monte Carlo Methods for Light Transport Simulation.
You can read his thesis here.
For a more thorough and friendly reading material that also covers implementation concerns, you can read the book Physically Based Rendering: From Theory To Implementation.
You can read the book here.
The fundamentals of light transport theory and Monte Carlo ray tracing are briefly described here. Built on top of that, the Vertex Connection and Merging algorithm is also described briefly.
The central physical quantity we deal with in light transport theory is a radiometric quantity, borrowed from the framework of radiative transfer in thermodynamics, that represents positional and directional power density: radiance, represented as $L(r,\omega)=\frac{d^2\Phi}{d\sigma^\perp(\omega)\,dA(r)}$.
And the quantity that represents the response of the hypothetical camera sensor: importance, represented as $W(r,\omega)$.
Light transport, in which light is emitted from light sources, scatters on surfaces, and finally arrives at the hypothetical camera sensor, is mathematically formulated as below.
$$L_o(r,\omega_o)=L_e(r,\omega_o)+\int_\Omega f_s(r,\omega_i,\omega_o)\,L_i(r,\omega_i)\,d\sigma^\perp(\omega_i)$$
The quantities are:

- $L_e$, $L_i$ and $L_o$; the emitted, incident and exitant radiance, respectively.
- $W_e$; the emitted importance.
- $f_s$; the bidirectional reflectance distribution function (BRDF), represented as $\frac{dL_o(r,\omega_o)}{L_i(r,\omega_i)d\sigma^\perp(\omega_i)}$, that models scattering of light.

And the integration domains are:

- $\Omega$; the unit sphere of incident directions.
- $\mathcal{M}$; the surfaces in the scene.
This is our initial formulation, and our interest is to solve for the sensor measurement
$$I=\int_{\mathcal{M}}\int_\Omega W_e(r,\omega)\,L_i(r,\omega)\,d\sigma^\perp(\omega)\,dA(r)$$
To apply sophisticated techniques, it is necessary to express the equations in an alternative form.
We first introduce a change of variables through the relation between the projected solid angle measure and the area measure of two mutually visible vertices,
$$d\sigma^\perp(\omega)=G(r\leftrightarrow r')\,dA(r'),\qquad G(r\leftrightarrow r')=V(r\leftrightarrow r')\,\frac{|\cos\theta_r|\,|\cos\theta_{r'}|}{\lVert r-r'\rVert^2}$$
where $V$ is the binary visibility function. We then express the rendering equation in the three-vertex form.
$$L(r'\to r)=L_e(r'\to r)+\int_{\mathcal{M}}f_s(r''\to r'\to r)\,L(r''\to r')\,G(r''\leftrightarrow r')\,dA(r'')$$
It follows that our quantity of interest becomes
$$I=\int_{\mathcal{M}}\int_{\mathcal{M}}W_e(r_0\to r_1)\,L(r_1\to r_0)\,G(r_0\leftrightarrow r_1)\,dA(r_0)\,dA(r_1)$$
The motivation for this form is that it lets us apply multiple importance sampling (MIS), described below.
We can further simplify it by introducing the path space $\overline{\Omega}$, the set of paths $\overline{r}=r_0r_1\dots r_k$ of all lengths $k\geq 1$. Then we have a single integral over the path space domain:
$$I=\int_{\overline{\Omega}}f(\overline{r})\,d\mu(\overline{r})$$
where $f$ is the measurement contribution function and $\mu$ is the product area measure.
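For reference, the integrand in the path integral is Veach's measurement contribution function; in his standard notation (a sketch, not the exact expression used in the code), it expands for a path $\overline{r}=r_0r_1\dots r_k$ as

$$f(\overline{r})=L_e(r_0\to r_1)\,G(r_0\leftrightarrow r_1)\left[\prod_{i=1}^{k-1}f_s(r_{i-1}\to r_i\to r_{i+1})\,G(r_i\leftrightarrow r_{i+1})\right]W_e(r_{k-1}\to r_k)$$

that is, the emitted radiance, the product of BRDF and geometry terms along the path, and the emitted importance at the sensor.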
In practical scene configurations, analytically solving the light transport equation is impossible, and quadrature methods are also impractical because of the high dimensionality of the integration domain.
We instead solve it using a Monte Carlo estimator, which approximates our quantity of interest as
$$\hat{I}=\frac{1}{N}\sum_{i=1}^{N}\frac{f(\overline{r}_i)}{p(\overline{r}_i)}$$
where $\overline{r}_i$ are random paths sampled from a probability density $p$ over the path space.
The ultimate goal of Monte Carlo integration is variance reduction of the estimator, which is achieved when the distribution of samples is proportional to the integrand (proving properties of a given estimator can be difficult). A naive sampling strategy is to random-walk over the surfaces starting from the hypothetical camera sensor, sampling an incident direction at each vertex by the inversion method (importance sampling), or to do the same starting from the light sources (justifying the latter requires a formulation of importance transport, which we will skip).
Either approach becomes impractical when the scene is dominated by indirect illumination.
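The per-vertex direction sampling described above can be sketched as cosine-weighted hemisphere sampling via the inversion method (a common importance-sampling choice for diffuse BRDFs; the names here are illustrative, not taken from the codebase):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

const double kPi = 3.14159265358979323846;

// Given uniform u1, u2 in [0,1), invert the CDF of p(omega) = cos(theta)/pi:
//   cos(theta) = sqrt(1 - u1),  phi = 2*pi*u2
Vec3 sample_cosine_hemisphere(double u1, double u2) {
    double cos_theta = std::sqrt(1.0 - u1);
    double sin_theta = std::sqrt(u1);  // sin^2 + cos^2 = 1
    double phi = 2.0 * kPi * u2;
    // Local shading frame: the z axis is the surface normal.
    return Vec3{sin_theta * std::cos(phi),
                sin_theta * std::sin(phi),
                cos_theta};
}

// The pdf of the sampled direction, needed to form the estimator f/p.
double cosine_hemisphere_pdf(double cos_theta) {
    return cos_theta / kPi;
}
```

Because the pdf is proportional to the cosine factor in the integrand, the cosine term cancels in $f/p$, which is the variance-reduction effect importance sampling is after.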
We can combine them to design a good estimator by connecting vertices between camera and light paths and taking the weighted average of the estimators from the connected paths (multiple importance sampling).
$$\hat{I}=\sum_{i\leq n}\frac{1}{N_i}\sum_{j\leq N_i}w_i(\overline{r}_{i,j})\frac{f(\overline{r}_{i,j})}{p(\overline{r}_{i,j})}$$
The remaining mathematical expressions are omitted because GitHub's equation renderer started glitching and won't display them.
A typical design of the weights is the balance or power heuristic.
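The two heuristics are simple to state in code. The sketch below implements the standard formulas from Veach (not code from this repository): given the pdfs $p_k$ of each sampling technique evaluated at the same sample, the balance heuristic weights by $p_i/\sum_k p_k$ and the power heuristic (with exponent 2) by $p_i^2/\sum_k p_k^2$:

```cpp
// Balance heuristic: w_i = p_i / sum_k p_k.
double balance_heuristic(double pdf_i, const double* pdfs, int n) {
    double denom = 0.0;
    for (int k = 0; k < n; ++k) denom += pdfs[k];
    return pdf_i / denom;
}

// Power heuristic with beta = 2: w_i = p_i^2 / sum_k p_k^2.
// Sharpens the weights toward the technique with the higher pdf.
double power_heuristic(double pdf_i, const double* pdfs, int n) {
    double denom = 0.0;
    for (int k = 0; k < n; ++k) denom += pdfs[k] * pdfs[k];
    return pdf_i * pdf_i / denom;
}
```

Either way the weights over all techniques sum to one for a given sample, which keeps the combined estimator unbiased.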
This method is unbiased and can handle complex indirect lighting in most scenarios, but it still struggles with one type of specular path: the notorious specular-diffuse-specular (SDS) path, which is practically, and sometimes theoretically, impossible to integrate because of delta functions appearing in the integrand (e.g. caustics formed underwater seen through the water surface, or caustics seen through a mirror, neither of which is an uncommon scenario).
Vertex connection and merging combines bidirectional path tracing (vertex connection) and photon mapping (vertex merging) via multiple importance sampling. The method of photon mapping is a convolution with a kernel, with an extra vertex at the end of a path; the kernel is asymptotically a delta distribution in its radius parameter, so the convolution approximates the true integral.
It can handle SDS paths by, loosely speaking, blurring energy contributions over the integration domain. The resulting image therefore appears blurry at low sample counts and is characterized by low-frequency noise, which is perceptually less noticeable than the high-frequency noise seen with vertex connection methods.
The theoretical generality of this method is great, but it leaves a practical concern: for small kernel radii, one needs to cache and gather millions of vertex points, which is memory-intensive, in contrast with vertex connection methods, which have constant space complexity with respect to the simulation size.
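As a sketch of the merging step (standard photon-mapping density estimation with a constant kernel; the exact VCM weighting is omitted here), the exitant radiance at a gather point $r$ is estimated from the cached light vertices (photons) within radius $\rho$:

$$L_o(r,\omega_o)\approx\frac{1}{\pi\rho^2}\sum_{j:\,\lVert r_j-r\rVert<\rho}f_s(r,\omega_j,\omega_o)\,\Phi_j$$

where $\Phi_j$ is the throughput carried by photon $j$, normalized by the number of emitted light paths. As $\rho\to 0$ the kernel approaches a delta distribution and the estimate approaches the true integral, which is the asymptotic argument made above.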
The implementation first reads the scene and render configuration files, then constructs the BVH and performs the precalculations for the light sampler. Next it samples light paths by traversing the scene and stores them in the vertex cache; this process is parallelized over multiple threads. It then builds a kd-tree over the vertex cache. Finally it connects and merges the vertices in the cache, calculating the weights and accumulating the weighted estimates into image pixels, while testing geometric occlusion for vertex connection and gathering nearby vertices for vertex merging; this process is also parallelized.
I briefly describe program components worthy of note here.
- For the PRNG, I employed one from the xorshift family, although the PCG family seems to be the standard choice for Monte Carlo ray tracing.
- For accelerating scene traversal, I employed a binary bounding volume hierarchy (BVH), built with the surface area heuristic (SAH) and a full sweep.
- For accelerating the gathering process, I employed a kd-tree, but a uniform grid with a locality-sensitive hash (hash grid) would be a better choice, as its construction and query time complexity with respect to the number of vertices in the cache is much lower than the kd-tree's. The vertex cache is shared across all threads instead of one cache per thread, so both construction and gathering are slow.
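For context on the PRNG choice, a sketch of the xoroshiro128+ step function is below (constants from the original 2016 version by Blackman and Vigna; the PRNG at L#236-281 of the codebase may differ in details):

```cpp
#include <cstdint>

// 64-bit left rotation, the core operation of the xoroshiro family.
static inline uint64_t rotl64(uint64_t x, int k) {
    return (x << k) | (x >> (64 - k));
}

// xoroshiro128+ (original 2016 constants a=55, b=14, c=36).
struct Xoroshiro128Plus {
    uint64_t s0, s1;  // 128-bit state; must not be all zero

    uint64_t next() {
        uint64_t a = s0, b = s1;
        uint64_t result = a + b;   // the "+" output function
        b ^= a;
        s0 = rotl64(a, 55) ^ b ^ (b << 14);
        s1 = rotl64(b, 36);
        return result;
    }

    // Map the top 53 bits to a double in [0, 1).
    double next_double() { return (next() >> 11) * 0x1.0p-53; }
};
```

The generator is tiny and fast, which is why it suits inner-loop sampling; its known weakness in the low-order output bits is avoided above by taking doubles from the high bits.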
To help navigate the codebase, I briefly describe notable code sections here.
- L#236-281: PRNG (xoroshiro128+).
- L#283-376: Basic vector maths and reflection/refraction vector transforms.
- L#382-484: Intersection tests and statistical routines of geometric primitives.
- L#517-531: Dielectric and conductive fresnel reflection coefficient calculations.
- L#533-575: Utilities for sampling BRDF by inversion method.
- L#577-783: Emission distribution functions (I made it up) and BRDF evaluation and sampling.
- L#797-834: Precalculation of discrete distribution of lights in the scene.
- L#835-1017: BVH construction by SAH with full sweep and traversal.
- L#1158-1185: Initial ray sampling for camera and light.
- L#1227-1294: Scene traversal, path generation and saving vertex information.
- L#1295-1337: Naive path tracing.
- L#1338-1374: kd-tree construction.
- L#1375-1565: Vertex merging (gathering) process; weight calculations and estimate accumulations.
- L#1566-1940: Vertex connection process; weight calculations and estimate accumulations.
- L#1941-2081: Vertex connection and merging.
- L#2082-2376: Vertex connection process (a second implementation, since bidirectional path tracing requires fewer weight calculations); weight calculations and estimate accumulations.
- L#2377-2426: Bidirectional path tracing.
- Light transport simulation with vertex connection and merging
- Disney's Practical Guide to Path Tracing
- Robust Monte Carlo Methods for Light Transport Simulation
- Physically Based Rendering:From Theory To Implementation
- xoshiro / xoroshiro generators and the PRNG shootout
- On fast Construction of SAH-based Bounding Volume Hierarchies
- Fast Minimum Storage Ray Triangle Intersection
- The Stanford 3D Scanning Repository