US9245377B1 - Image processing using progressive generation of intermediate images using photon beams of varying parameters - Google Patents

Image processing using progressive generation of intermediate images using photon beams of varying parameters

Info

Publication number
US9245377B1
Authority
US
United States
Prior art keywords
photon
photon beam
beams
computer
beam simulation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US14/164,358
Inventor
Wojciech Jarosz
Derek Nowrouzezahrai
Robert Thomas
Peter-Pike Sloan
Matthias Zwicker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Disney Enterprises Inc
Pixar
Original Assignee
Pixar
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pixar filed Critical Pixar
Priority to US14/164,358
Assigned to THE WALT DISNEY COMPANY (SWITZERLAND) GMBH reassignment THE WALT DISNEY COMPANY (SWITZERLAND) GMBH ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ZWICKER, MATTHIAS, THOMAS, ROBERT, NOWROUZEZAHRAI, DEREK, JAROSZ, WOJCIECH, SLOAN, PETER-PIKE
Assigned to DISNEY ENTERPRISES, INC. reassignment DISNEY ENTERPRISES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: THE WALT DISNEY COMPANY (SWITZERLAND) GMBH
Application granted
Publication of US9245377B1
Status: Expired - Fee Related
Adjusted expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/06: Ray-tracing
    • G06T15/50: Lighting effects
    • G06T15/506: Illumination models
    • G06T15/55: Radiosity
    • G06T15/60: Shadow generation

Definitions

  • Computer-generated imagery typically involves using software and/or hardware to generate one or more images from a geometric model.
  • a geometric model defines objects, light sources, and other elements of a virtual scene and a rendering system or other computer/software/hardware system will read in the geometric model and determine what colors are needed in what portions of the image.
  • a renderer will generate a two-dimensional (“2D”) or three-dimensional (“3D”) array of pixel color values that collectively result in the desired image or images.
  • Examples of such interaction include the scattering of light, which results in visual complexity, such as when the light interacts with participating media such as clouds, fog, and even air.
  • Rendering this complex light transport typically involves solving a radiative transport equation [Chandrasekar 1960] combined with a rendering equation [Kajiya 1986] as a boundary condition.
  • Other light interactions might include light passing through objects, such as refractive objects.
  • Rendering preferably involves computing unbiased, noise-free images.
  • the typical options to achieve this are variants of brute force path tracing [Kajiya 1986; Lafortune and Willems 1993; Veach and Guibas 1994; Lafortune and Willems 1996] and Metropolis light transport [Veach and Guibas 1997; Pauly et al. 2000], which are notoriously slow to converge to noise-free images despite recent advances [Raab et al. 2008; Yue et al. 2010].
  • SDS: specular-diffuse-specular
  • SMS: specular-media-specular
  • Volumetric photon mapping is an approach to dealing with light in a participating medium. It is described in [Jensen and Christensen 1998] and subsequently improved by [Jarosz et al. 2008] to avoid costly and redundant density queries due to ray marching. In those approaches, a renderer formulates a “beam radiance estimate” that considers all photons along the length of a ray in one query. [Jarosz et al. 2011] showed how to apply the beam concept not just to the query operation but also to the photon data representation. In that approach, the entire photon path is used instead of just photon points, to obtain significant quality and performance improvement. This is similar in spirit to the concept of ray maps for surface illumination [Lastra et al. 2002; Havran et al. 2005; Herzog et al. 2007] as well as the recent line-space gathering technique [Sun et al. 2010].
  • a method and system for progressively rendering radiance for a volumetric medium is provided, such that computing images can be done in software and/or hardware efficiently and still represent the desired image effects.
  • Given a geometric model of a virtual space, or another description of a scene, a process or apparatus generates an image or multiple images, in part using a photon simulation process to produce a representation of photon beams in a scene.
  • the photon beams are rendered with respect to a camera viewpoint, iteratively, by computing an estimated radiance associated with the photon beams. Over multiple iterations, the global radius scaling factor is progressively decreased, thereby reducing overall error by facilitating convergence.
  • a representation of the computed average estimated radiance at each pixel in the scene is stored.
  • A global radius parameter is set for each iteration so that all beams share the same radius within one intermediate image, with the radius differing from image to image.
  • In some embodiments, the process is implemented using a graphics processing unit (“GPU”), formulating the process as a splatting operation, for use in interactive and real-time applications.
  • GPU: graphics processing unit
  • heterogeneous participating media is handled.
  • One method of handling it is to use piecewise handling of the beams, including setting a termination point for the beam, as in shadow mapping. Such techniques can be expanded beyond photon beams.
  • FIG. 1 illustrates example scenes rendered using an embodiment of the invention
  • FIG. 1(a) relates to a disco ball scene, the results for independent render passes, and averages of the multiple render passes
  • FIG. 1(b) relates to a flashlights scene and corresponding results.
  • FIG. 2 illustrates a flowchart of a disclosed embodiment.
  • FIG. 3 illustrates the geometric configuration of estimating radiance in some embodiments.
  • FIG. 4 illustrates estimation of volumetric scattering in some embodiments.
  • FIG. 5 illustrates three versions of an example scene rendered using an embodiment of the invention.
  • FIG. 6 shows two graphs plotting sample variance of error, sample variance of average error, and expected value of the average error with three different α settings for the highlighted point in the example scene in FIG. 5.
  • FIG. 7 shows a graph plotting the global radius scaling factor with variations in scale factor, M.
  • FIG. 8 illustrates media radiance in an example scene rendered using an embodiment of the invention.
  • FIG. 9 illustrates two versions of an example scene, in this case rendered entirely on a GPU, using an embodiment of the invention.
  • FIG. 10 illustrates an example of a hardware system that might be used for rendering.
  • the present invention is described herein, in many places, as a set of computations. It should be understood that these computations are not performable manually, but are performed by an appropriately programmed computer, computing device, electronic device, or the like, that might be a general purpose computer, a graphical processing unit, and/or other hardware. As with any physical system, there are constraints as to the memory available and the number of calculations that can be done in a given amount of time. Embodiments of the present invention might be described in mathematical terms, but one of ordinary skill in the art, such as one familiar with computer graphics, would understand that the mathematical steps are to be implemented for execution in some sort of hardware environment. Therefore, it will be assumed that such hardware and/or associated software or instructions are present and the description below will not be burdened with constant mention of same.
  • Embodiments of the present invention might be implemented entirely in software stored on tangible, non-transitory or transitory media or systems, such that it is electronically readable. While in places, process steps might be described by language such as “we calculate” or “we evaluate” or “we determine”, it should be apparent in some contexts herein that such steps are performed by computer hardware and/or defined by computer hardware instructions and not persons.
  • the essential task of such computer software and/or hardware is to generate images from a geometric model or other description of a virtual scene.
  • the model/description includes description of a space, a virtual camera location and virtual camera details, objects and light sources.
  • Some participating media are assumed, i.e., virtual media in the space that light from the virtual light sources passes through and interacts with.
  • Such effects would cause some light from the light source to be deflected from the original path from the light source in some direction, deflected off of some part of the media and toward the camera viewpoint.
  • the output image is in the form of a two-dimensional (“2D”) or three-dimensional (“3D”) array of pixel values and in such cases, the software and/or hardware used to generate those images is referred to as a renderer.
  • a renderer outputs images in a suitable form.
  • the renderer renders an image or images in part, leaving some other module, component or system to perform additional steps on the image or images to form a completed image or images.
  • a renderer takes as its input a geometric model or some representation of the objects, lighting, effects, etc. present in a virtual scene and derives one or more images of that virtual scene from a camera viewpoint.
  • the renderer is expected to have some mechanism for reading in that model or representation in an electronic form, store those inputs in some accessible memory and have computing power to make computations.
  • the renderer will also have memory for storing intermediate variables, program variables, as well as storage for data structures related to lighting, photon beams and the like, as well as storage for intermediate images.
  • When the renderer is described, for example, as having generated multiple intermediate images and averaging them, it should be understood that the corresponding computational operations are performed, e.g., a processor reads values of pixels of the intermediate images, averages pixels and stores a result as another intermediate image, an accumulation image, or the final image, etc.
  • the renderer might also include or have access to a random number generator.
  • a progressive photon beam process is used.
  • photon beams are used in multiple rendering passes, where the rendering passes use different photon beam radii and the results are combined. It might be such that two or more rendering passes use the same photon beam radius, and while most of the examples described herein assume a different photon beam radius for each pass, the invention is not so limited.
  • These processes are efficient, robust to complex light paths, and handle heterogeneous media and anisotropic scattering while provably converging to the correct solution using a bounded memory footprint.
  • a progressive mapping with multiple iterations starts with a pass with a given beam radius, followed by a pass with a smaller radius, and so on, until sufficient convergence is obtained.
  • Each pass can be independent of other passes, so the order can be changed among the intermediate images that are averaged together without affecting the final result.
  • The radii might vary within an iteration, with the radius for a given beam varying from iteration to iteration, yielding a final result similar to the case where all beams of one radius are in the same iterative pass. Of course, nothing requires that, from iteration to iteration, the same beams be used. In a typical embodiment, suppose a light source is to be represented by 100 beams.
  • 100 beams are randomly selected for one iterative pass, and for the next iterative pass, 100 beams are randomly selected, so they are likely to be different from the beams in the first pass.
  • their effects average out and converge to the correct solution.
  • Progressive photon beam methods can robustly handle situations that are difficult for most other algorithms, such as scenes containing participating media and specular interfaces, with realistic light sources completely enclosed by refractive and reflective materials. Our technique described herein handles heterogeneous media and also trivially supports stochastic effects, such as depth-of-field and glossy materials. As explained herein, progressive photon beams can be implemented efficiently on a GPU as a splatting operation, making it applicable to interactive and real-time applications. These features can provide scalability, provide the same physically-based algorithm for interactive feedback and reference-quality, and unbiased solutions.
  • Convergence is achieved with less computational effort than, say, path tracing, and is robust to SDS or SMS subpaths, and has a bounded memory footprint.
  • GPU acceleration allows for interactive lighting design in the presence of complex light sources and participating media. This makes it possible to produce interactive previews with the same technique used for a high-quality final render—providing visual consistency, an essential property for interactive lighting design tools.
  • Photon beam handling is a generalization of volumetric photon mapping, which accelerates participating media rendering by considering the full path of photons (beams), instead of just photon scattering locations.
  • photon beams are blurred with a finite width, leading to bias. Reducing this width reduces bias, but unfortunately increases noise.
  • Progressive photon mapping (“PPM”) provides a way to eliminate bias and noise simultaneously in photon mapping.
  • PPM: progressive photon mapping
  • Unfortunately, naively applying PPM to photon beams is not possible due to the fundamental differences between density estimation using points and beams, so convergence guarantees need to be re-derived for this more complicated case.
  • previous PPM derivations only apply to fixed-radius or k-nearest neighbor density estimation, which are commonly used for surface illumination.
  • Photon beams are formulated using variable kernel density estimation, where each beam has an associated kernel.
  • The challenges of rendering beams progressively can be overcome. Described herein is an efficient density estimation framework for participating media that is robust to SDS and SMS subpaths, and which converges to ground truth with bounded memory usage. Additionally, it is described how photon beams can be applied to efficiently handle heterogeneous media.
  • FIG. 1 illustrates example scenes rendered using iterative photon beam rendering.
  • FIG. 1(a) relates to a disco ball scene, the results for independent render passes, and averages of the multiple render passes;
  • FIG. 1(b) relates to a flashlights scene and corresponding results.
  • the left half of the top image shows results for the case where homogeneous media is assumed and the right half of the top image shows results for the case where heterogeneous media is assumed.
  • The middle sequence of images shows the intermediate images, each rendered with different beam radii, while the bottom sequence shows the running averages of the intermediate images.
  • The photon beam radii are progressive, in that the first pass uses wide radii and each successive pass uses smaller radii.
  • the renderer renders each pass using a collection of stochastically generated photon beams.
  • The radii of the photon beams are reduced using a global scaling factor after each pass. Therefore each subsequent image has less bias, but slightly more noise.
  • the average of these intermediate images converges to the correct solution, generating an unbiased solution with finite memory.
  • a theoretical error analysis of density estimation using photon beams derives the necessary conditions for convergence, and a numerical validation of the theory is provided herein.
  • a progressive generalization of deep shadow maps handles heterogeneous media efficiently.
  • the photon beam radiance estimate formulated as a splatting operation exploits GPU rasterization, allowing simple scenes to be rendered with multiple specular reflections in real-time.
  • FIG. 2 illustrates a flowchart of an example embodiment comprising multiple iterations of steps 210 through 240 .
  • a photon simulation is performed, resulting in a plurality of photon beams in a scene.
  • the photon simulation may be performed in any conventional manner, such as by performing, for example, a shadow-mapping operation, a rasterization operation, a ray-marching operation, or a ray-tracing operation.
  • the performing photon simulation includes calculating an interaction of the plurality of photon beams with geometry and participating media in the scene.
  • the photon beams are rendered with respect to a camera viewpoint.
  • Rendering the photon beams includes computing estimated radiance associated with the photon beams. Rendering may be performed in any conventional manner, such as by performing, for example, a splatting operation, a ray-tracing operation, a ray-marching operation, or a rasterization operation.
  • rendering the photon beams comprises determining a contribution of the plurality of photon beams to illumination of one or more pixels in the scene based on a progressive deep shadow map.
  • a global radius scaling factor is applied to the radius used for photon beams, relative to the radius used for the prior iteration.
  • the global radius scaling factor is progressively decreased over iterations of steps 210 through 240 .
  • determining a progressive decrease in the global radius scaling factor comprises decreasing a kernel width of the photon beam.
  • determining a progressive decrease in the global radius scaling factor comprises decreasing a kernel width of a query ray.
  • determining a progressive decrease in the global radius scaling factor comprises enforcing a ratio of variance between successive iterations.
  • a representation of the computed average estimated radiance at each pixel in the scene is stored in a computer-readable storage media, in effect averaging the intermediate images that are rendered in each pass.
  • the rendered photon beams are discarded after each iteration of steps 210 through 240 in order to reduce memory usage.
  • the radii are varied in another manner.
  • the radii are varied, but are varied so that the same radius is not used repeatedly for the same beam. Beams may be randomly selected among possible beams.
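  • To make the flow of FIG. 2 concrete, the following is a minimal sketch of the progressive outer loop (steps 210 through 240). It is an illustration only, not the patent's code: the types Image and Beam and the functions tracePhotonBeams and renderBeams are hypothetical stand-ins for the photon simulation, the beam rendering, and the accumulation storage described above.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-ins for the operations described above.
struct Beam  { /* origin, direction, power, radius, ... */ };
struct Image { std::vector<float> pixels; };

// Placeholder bodies so the sketch is self-contained.
std::vector<Beam> tracePhotonBeams(int n) { return std::vector<Beam>(n); }
Image renderBeams(const std::vector<Beam>&, float /*radiusScale*/) {
    return Image{std::vector<float>(1280 * 720, 0.0f)};
}

// Progressive outer loop: each pass traces a fresh stochastic set of photon
// beams, renders them with a globally scaled radius, folds the result into a
// running average, and then discards the beams (bounding memory usage).
Image progressiveRender(int numPasses, int beamsPerPass, double alpha) {
    Image average;
    float radiusScale = 1.0f;
    for (int i = 1; i <= numPasses; ++i) {
        std::vector<Beam> beams = tracePhotonBeams(beamsPerPass);   // step 210
        Image pass = renderBeams(beams, radiusScale);               // step 220
        if (average.pixels.empty())
            average.pixels.assign(pass.pixels.size(), 0.0f);
        for (std::size_t p = 0; p < pass.pixels.size(); ++p)        // step 240
            average.pixels[p] += (pass.pixels[p] - average.pixels[p]) / float(i);
        // Step 230: decrease the global radius scaling factor for the next
        // pass; this ratio follows the variance condition discussed below.
        radiusScale *= float((i + alpha) / (i + 1.0));
        // 'beams' goes out of scope here and is discarded.
    }
    return average;
}
```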
  • The light incident at any point, x, in a scene (e.g., the camera viewpoint) from a direction $\vec{\omega}$ (the overarrow signals a ray or vector), such as the direction through a pixel, can be expressed using a radiative transport equation [Chandrasekar 1960] as the sum of two terms, as in Equation 1.
  • $L(x, \vec{\omega}) = T_r(s)\,L_s(x_s, \vec{\omega}) + L_m(x, \vec{\omega})$ (Eqn. 1)
  • $T_r(s) = e^{-s\sigma_t}$, where $\sigma_t$ is the extinction coefficient.
  • Strictly, transmittance accounts for the extinction coefficient along the entire segment between the two points, but we use this simple one-parameter notation here for brevity.
  • The second term is medium radiance, shown in Equation 2, where f is the normalized phase function, $\sigma_s$ is the scattering coefficient, and w is a scalar distance along the camera direction $\vec{\omega}$.
  • $L_m(x, \vec{\omega}) = \int_0^s \sigma_s(x_w)\,T_r(w) \int_{\Omega_{4\pi}} f(\theta)\,L(x_w, \vec{\omega}')\,d\vec{\omega}'\,dw$ (Eqn. 2)
  • The inner integral corresponds to the in-scattered radiance, which recursively depends on radiance arriving at $x_w$ from directions $\vec{\omega}'$ on the sphere $\Omega_{4\pi}$.
  • Photon mapping methods approximate the medium radiance (see, Eqn. 2) using a collection of photons, each with a power, position, and direction. Instead of performing density estimation on just the positions of the photons, the recent photon beams approach [Jarosz et al. 2011] treats each photon as a beam of light starting at the photon position and shooting in the photon's outgoing direction. [Jarosz et al. 2011] derived a “Beam ⁇ Beam 1D” estimate that directly estimates medium radiance due to photon beams along a query ray.
  • FIG. 3 illustrates a use of this coordinate system and radiance estimation with one photon beam as viewed from the side (left) and in the plane perpendicular to the query ray.
  • The direction $\vec{\omega}$ extends out of the page (left).
  • To estimate radiance due to a photon beam, treat the beam as an infinite number of imaginary photon points along its length (as shown in the right-hand side of FIG. 3).
  • The power of the photons is blurred in 1D, along the axis $\vec{u}$ mutually perpendicular to the query ray and the beam.
  • An estimate of the incident radiance along the direction $\vec{\omega}$ using one photon beam can be expressed as shown in Equation 3, where Φ is the power of the photon, and the scalars (u, v, w) are signed distances along the three axes to the imaginary photon point closest to the query ray (the point on the beam closest to the ray $\vec{\omega}$). Following [Jarosz et al. 2011], this estimate takes the form $L(x, \vec{\omega}) \approx \Phi\,k_r(u)\,\sigma_s\,f(\theta)\,T_r(w)\,T_r(v)/\sin\theta$ (Eqn. 3), where θ is the angle between the query ray and the beam.
  • The first transmittance term accounts for attenuation through a distance w to x, and the second computes the transmittance through a distance v to the position of the photon.
  • The photon beam is blurred using a 1D kernel $k_r$ centered on the beam with a support width of r along the direction $\vec{u}$.
  • Equation 3 is evaluated for many beams to obtain a high quality image and is a consistent estimator like standard photon mapping. In other words, it produces an unbiased solution when using an infinite number of beams with an infinitesimally small blur kernel. This is an important property which we will use later on.
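  • As an illustration, the per-beam contribution of Equation 3 (in the [Jarosz et al. 2011] form reconstructed above) might be computed as follows. This is a hedged sketch for homogeneous media; the parameter names and the choice of an Epanechnikov blur kernel are ours, not the patent's.

```cpp
#include <cmath>

// Normalized 1D blur kernel with support width r (Epanechnikov), an
// illustrative choice for k_r in Equation 3.
float kernel1D(float u, float r) {
    float t = u / r;
    return (std::fabs(t) < 1.0f) ? 0.75f * (1.0f - t * t) / r : 0.0f;
}

// "Beam x Beam 1D" estimate for one beam in a homogeneous medium:
//   L ~= Phi * k_r(u) * sigma_s * f(theta) * e^{-sigma_t w} * e^{-sigma_t v} / sin(theta)
// (u, v, w) are the signed distances described above; theta is the angle
// between the query ray and the beam, so 1/sin(theta) is the foreshortening.
float beamRadianceEstimate(float photonPower, float u, float v, float w,
                           float r, float sigmaS, float sigmaT,
                           float phase /* f(theta) */, float sinTheta) {
    float trToEye    = std::exp(-sigmaT * w);  // attenuation through distance w to x
    float trToPhoton = std::exp(-sigmaT * v);  // attenuation through distance v along the beam
    return photonPower * kernel1D(u, r) * sigmaS * phase * trToEye * trToPhoton / sinTheta;
}
```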
  • Observe from Equation 4 that if the same radii were used in the radiance estimate in each pass, the variance of the average error would be reduced, but the bias would remain the same.
  • the renderer computes the contribution of media radiance, L m , to pixel values c.
  • FIG. 4 illustrates the problem schematically, and Equation 6 shows this mathematically.
  • $c = \iint W(x, \vec{\omega})\,L_m(x, \vec{\omega})\,dx\,d\vec{\omega}$ (Eqn. 6)
  • Photon beams are shot from light sources and paths are traced from the eye until a diffuse surface is hit; the renderer then estimates volumetric scattering by finding the beam/ray intersections and weighting by the contribution W to the camera. In each pass, the renderer reduces the global radius scaling factor and repeats.
  • $W(x, \vec{\omega})$ is a function that weights the contribution of $L_m$ to the pixel value (accounting for antialiasing, glossy reflection, depth-of-field, etc.).
  • the renderer computes c by tracing a number of paths N from the eye, evaluating W, and evaluating the media radiance L m .
  • The error term, ε, is the difference between the true radiance, $L_m(x, \vec{\omega})$, and the radiance estimated using a photon beam with a kernel of radius r. As explained herein, this converges.
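  • Equations 7 and 8 are not reproduced in this excerpt; based on the surrounding description, the per-pass pixel estimate presumably takes a form like the following, where $\epsilon_j$ is the error of the beam radiance estimate, at kernel radius r, for the j-th eye path:

$$c \approx \frac{1}{N} \sum_{j=1}^{N} W(x_j, \vec{\omega}_j)\,\bigl(L_m(x_j, \vec{\omega}_j) + \epsilon_j(r)\bigr)$$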
  • the photon beam method generates images using more than one photon beam at a time.
  • the photon beam widths need not be equal, but could be determined adaptively per beam using, e.g., photon differentials. We can express this by generalizing Equation 8 into Equation 8A.
  • Herein we show how to enforce conditions A and B (from Equations 5A and 5B).
  • As in PPM, the variance increases slightly in each pass, but in such a way that the variance of the average error still vanishes.
  • Increasing variance allows us to reduce the kernel scale (see Eqn. 11), which in turn reduces the expected error of the radiance estimate (see Eqn. 12).
  • Convergence in PPM can be achieved by enforcing the ratio of variances between passes as indicated in Equation 13, where α is a user-specified constant between 0 and 1.
  • $\mathrm{Var}[\epsilon_{i+1}]/\mathrm{Var}[\epsilon_i] = (i+1)/(i+\alpha)$ (Eqn. 13)
  • this ratio induces a variance sequence, where the variance of the i-th pass is predicted as shown in Equation 14.
  • The variance of the average error after N passes can be expressed in terms of the variance of the first pass, $\mathrm{Var}[\epsilon_1]$, as in Equation 15, which vanishes as desired when N → ∞.
  • The renderer uses a global scaling factor, $R_i$, to scale the radius of each beam, as well as the minimum and maximum radii bounds, in pass i. Note that scaling all radii by this global factor scales their harmonic and arithmetic means by the same factor; a sketch of the scaling sequence follows.
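  • The per-pass scaling factor can be computed directly from the ratio in Equation 13. A minimal sketch follows, assuming (consistent with the appendix, where variance depends on the inverse harmonic mean of the radii) that per-pass variance is inversely proportional to the radius scale, so that $R_{i+1}/R_i = (i+\alpha)/(i+1)$:

```cpp
#include <cstdio>
#include <vector>

// Sketch: derive the global radius scaling factors R_1..R_n from Eqn. 13,
//   Var[e_{i+1}] / Var[e_i] = (i + 1) / (i + alpha),
// under the stated assumption Var ~ 1/R, giving R_{i+1}/R_i = (i+alpha)/(i+1).
std::vector<double> radiusScalingSequence(int numPasses, double alpha) {
    std::vector<double> R(numPasses);
    R[0] = 1.0;  // the first pass uses the initial radii unscaled
    for (int i = 1; i < numPasses; ++i)
        R[i] = R[i - 1] * (i + alpha) / (i + 1.0);
    return R;
}

int main() {
    // alpha in (0,1) trades variance against bias; 0.7 is an arbitrary example.
    for (double Ri : radiusScalingSequence(10, 0.7))
        std::printf("%f\n", Ri);
    return 0;
}
```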
  • the “Sphere Caustic” image contains a glass sphere, an anisotropic medium, and a point light. We shot 1K beams per pass and obtained a high-quality result in 100 passes. The rightmost image—for 100 passes—completed in 10 seconds.
  • FIG. 6 provides graphs of corresponding bias and variance of the highlighted point.
  • On the left of FIG. 6, the sample variance of the radiance estimate as a function of the iterations is shown, in particular the per-pass variance (left; upper three curves), the average variance (left; lower three curves), and bias (right) with three α settings for the highlighted point in FIG. 5.
  • Empirical results match the theoretical models derived herein well.
  • the noise in the empirical curves is due to a limited number (10K) of measurement runs.
  • the example process described in this section has an inner loop and an outer loop.
  • the inner loop can be the standard two-pass photon beam process described in [Jarosz et al. 2011] or some other method.
  • photon beams are emitted from lights and scatter at surfaces and media in the scene.
  • this pass can be effectively identical to the photon tracing in volumetric and surface-based photon mapping described by [Jensen 2001].
  • the process determines the kernel width of each beam by tracing photon differentials during the photon tracing process.
  • the process also includes automatically computing and enforcing radii bounds to avoid infinite variance or bias. Of course, as explained above, the order might not matter.
  • the renderer computes radiance along each ray using Equation 3 or equivalent.
  • this involves a couple of exponentials and scaling by the scattering coefficient and foreshortened phase function.
  • the case for heterogeneous media is described further below.
  • a user can have a single intuitive parameter to control convergence.
  • A number of parameters influence the process' performance, such as the bias-noise tradeoff α ∈ (0,1), the number of photons per pass, M, and either a number of nearest neighbors k or an initial global radius.
  • FIG. 7 is a plot of the global radius scaling factor, with varying M.
  • the standard approach produces vastly different scaling sequences for a progressive simulation using the same total number of stored photons.
  • We reduce the scale factor M times after each pass, which approximates the scaling sequence of M = 1 regardless of M.
  • An unbiased estimator can be implemented, in one example, by using mean-free-path sampling as a black box. Given a function, $d(x, \vec{\omega})$, which returns a random propagation distance from a point x in direction $\vec{\omega}$, the transmittance between x and a point s units away in direction $\vec{\omega}$ is as shown by Equation 22, where Θ is the Heaviside step function. This estimates transmittance by counting samples that successfully propagate a distance of at least s.
  • The renderer evaluates Equation 22 for each ray/beam intersection within a pixel.
  • Each evaluation actually provides enough information for an unbiased estimate of the transmittance function for all distances along the ray, and not just the function value at a single distance s.
  • a renderer can handle this by computing n propagation distances and re-evaluating Equation 22 for arbitrary values of s. This results in an unbiased, piecewise-constant representation of the transmittance function, as illustrated in the left-hand side of FIG. 8 .
  • In FIG. 8, validation of progressive deep shadow maps is shown (thin solid line) for extinction functions (dashed line) with analytically-computable transmittances (thick solid line).
  • four random propagation distances are used, resulting in a four-step approximation of transmittance in each pass.
  • The renderer can compute and store several unbiased random propagation distances along each beam. Given these distances, it can re-evaluate transmittance using Equation 20 at any distance along the beam.
  • the collection of transmittance functions across all photon beams forms an unstructured deep shadow map that converges to the correct result with many passes.
  • Equation 22 When using Equation 22 to estimate transmittance, the only effect on the error analysis is that Var[ ⁇ ] in Equations 9 and 11 increases compared to using analytic transmittance (note that bias is not affected since E[ ⁇ ] does not change with an unbiased estimator). Homogeneous media could be rendered using the analytic formula or using Equation 22. Both approaches converge to the same result (as illustrated by the top row of FIG. 8 ), but the Monte Carlo estimator for transmittance adds additional variance. In some cases then, it is preferred to use analytic transmittance in the case of homogeneous media.
  • A more general renderer combines a CPU ray tracer with a GPU rasterizer, and possibly a GPU ray tracer.
  • the CPU ray tracer handles the photon shooting process.
  • the renderer can decompose the light paths into ones that can be easily handled using GPU-accelerated rasterization, and handle all other light paths with the CPU ray tracer.
  • the renderer can rasterize all photon beams that are directly visible by the camera.
  • the CPU ray tracer then handles the remaining light paths, such as those visible only via reflections/refractions off of objects.
  • Equation 3 has a simple geometric interpretation, as illustrated in FIG. 3 , namely that each beam is an axial-billboard facing the camera. As in the standard photon beams approach, the CPU ray tracer computes ray-billboard intersections with this representation. However, for directly-visible beams, Equation 3 can be reformulated as a splatting operation amenable to GPU rasterization and thus GPU instructions can be generated.
  • C++ is used and so is OpenGL.
  • the photon beam billboard quad geometry is generated for every stored beam on the CPU.
  • This geometry is rasterized with GPU blending enabled and a simple pixel shader evaluates Equation 3 for every pixel under the support of the beam kernel on the GPU.
  • the renderer also culls the beam quads against the remaining scene geometry to avoid computing radiance from occluded beams.
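  • A sketch of the CPU-side quad construction is shown below, under the assumption that each beam's axial billboard contains the beam axis and rotates about it to face the eye; the vector helpers are illustrative, not the patent's code.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x / len, v.y / len, v.z / len};
}
static Vec3 offset(Vec3 p, float t, Vec3 d) {
    return {p.x + t*d.x, p.y + t*d.y, p.z + t*d.z};
}

// Build the axial billboard for one photon beam: a quad containing the beam
// axis, widened by the kernel support r in the direction perpendicular to
// both the axis and the view direction, so the 1D blur faces the camera.
// Corners are written in triangle-strip order.
void beamBillboard(Vec3 start, Vec3 end, float r, Vec3 eye, Vec3 quad[4]) {
    Vec3 axis  = normalize(sub(end, start));
    Vec3 toEye = normalize(sub(eye, start));
    Vec3 side  = normalize(cross(axis, toEye));
    quad[0] = offset(start, -r, side);
    quad[1] = offset(start, +r, side);
    quad[2] = offset(end,   -r, side);
    quad[3] = offset(end,   +r, side);
}
```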
  • the renderer can apply a Gaussian jitter to the camera matrix in each pass.
  • the CPU component handles all other light paths using Monte Carlo ray tracing with a single path per pixel per pass.
  • the fragment shader evaluates Equation 3 using two exponentials for the transmittance. It can use several layers of simplex noise for heterogeneous media, and follow the approach derived above for progressive deep shadow maps.
  • For transmittance along a beam, it computes and stores a fixed number, $n_b$, of random propagation distances along each beam using Woodcock tracking (in practice, $n_b$ is usually between 4 and 16). Since transmittance is constant between these distances, we can split each beam into $n_b$ quads before rasterizing and assign the appropriate transmittance to each segment using Equation 22. A sketch of the distance sampler follows.
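  • For reference, a sketch of Woodcock tracking as the propagation-distance sampler, following [Woodcock et al. 1965]; the extinction callable and the majorant are assumptions supplied by the heterogeneous-media model, not part of the patent text.

```cpp
#include <cmath>
#include <random>

// Draw one unbiased free-flight distance through a heterogeneous medium.
// 'sigmaT(t)' returns the extinction coefficient at parametric distance t
// along the beam; 'sigmaMax' is a majorant with sigmaT(t) <= sigmaMax.
template <typename ExtinctionFn>
float woodcockTrack(ExtinctionFn&& sigmaT, float sigmaMax,
                    float maxDist, std::mt19937& rng) {
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    float t = 0.0f;
    for (;;) {
        // tentative step through a fictitious homogeneous medium
        t -= std::log(1.0f - u01(rng)) / sigmaMax;
        if (t >= maxDist)
            return maxDist;                       // escaped without a collision
        // accept a real collision with probability sigmaT(t) / sigmaMax
        if (u01(rng) < sigmaT(t) / sigmaMax)
            return t;
    }
}
```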
  • the OptiX GPU ray tracing API described by [Parker et al. 2010] is used. That OptiX renderer implements two kernels: one for photon beam shooting, and one for eye ray tracing and progressive accumulation. The renderer shoots and stores photon beams, in parallel, on the GPU. The shading kernel traces against all scene geometry and photon beams, each stored in their own BVH, with volumetric shading computed using Equation 3 at each beam intersection.
  • A real-time GPU renderer only uses OpenGL rasterization in scenes with a limited number of specular bounces. Shadow mapping is extended to trace and generate beam quads that are visualized with GPU rasterization as above. [McGuire et al. 2009] also used shadow maps, but that was limited to photon splatting on surfaces. This renderer generates and splats beams, possibly exploiting a progressive framework to obtain convergent results.
  • a light-space projection transform (as in standard shadow mapping) can be used to rasterize the scene from the light's viewpoint. Instead of recording depth for each shadow map texel, each texel instead produces a photon beam.
  • the renderer computes the origin and direction of the central beam as well as the auxiliary differential rays.
  • the differential ray intersection points are computed using differential properties stored at the scene geometry as vertex attributes, interpolated during rasterization.
  • beam directions are reflected and/or refracted at the scene geometry's interface. This entire process can be implemented in a simple pixel shader that outputs to multiple render-targets. Depending on the number of available render targets, several (reflected, refracted, or light) beams can be generated per render pass.
  • The render-target outputs can be snapped to vertex buffers and rendered as points at the beam origins.
  • the remainder of the beam data is passed as vertex attributes to the point geometry, and a geometry shader converts each point into an axial billboard.
  • These quads can then be rendered using the same homogeneous shader as the hybrid example described above.
  • the shadow map grid can be jittered to produce a different set of beams each pass.
  • This end-to-end rendering procedure can be carried out entirely with GPU rasterization, and can render photon beams that emanate from the light source as well as those due to a single surface reflection/refraction.
  • A hybrid implementation might use a 12-core 2.66 GHz Intel Xeon™ with 12 GB of RAM and an ATI Radeon HD 5770.
  • the examples of FIG. 1 were generated in that manner, in both homogeneous and heterogeneous media, including zoomed insets of the media illumination showing the progressive refinement of the process.
  • The scenes are rendered at 1280×720, and they include depth-of-field and antialiasing.
  • the lights in these scenes are all modeled realistically with light sources inside reflective/refractive fixtures. Illumination encounters several specular bounces before arriving at surfaces or media, making these scenes impractical for path tracing.
  • PPM can be used for surface shading, while progressive photon beams (PPB) are used to get performance and quality on the media scattering.
  • the Disco scene of FIG. 1 contains a mirror disco ball illuminated by six lights inside realistic Fresnel-lens casings. Each Fresnel light has a complex emission distribution due to refraction, and the reflections off of the faceted sphere produce intricate volume caustics.
  • the media radiance was rendered in three minutes in homogeneous media and 5.7 minutes in heterogeneous media.
  • the surface caustics on the wall of the scene require another 7.5 minutes.
  • the Flashlights scene renders in 8.0 minutes and 10.8 minutes respectively using 2.1 M beams (diffuse shading takes an additional 124 minutes).
  • Beam storage includes start and end points (2×3 floats), differential rays (2×2×3 floats), and power (3 floats).
  • A scene-dependent acceleration structure might also be necessary, and even for a single bounding box per beam, this is 2×3 floats (it can use a BVH and implicitly split beams as described in [Jarosz, et al. 2011]).
  • the implementation need not be optimized for memory usage with the progressive approach, but even with careful tuning this would likely be above 100 bytes per beam. Thus, even in simple scenes, beam storage can quickly exceed available memory.
  • a scene with intricate refractions might require over 50 M beams for high-quality results and that would exceed 5 GB of memory even with the conservative 100 bytes/beam estimate. Using the progressive approach, this is not a problem.
  • Our beams use adaptive kernels with ray differentials, which may allow for higher quality results using fewer beams. Also, rasterization is used for large portions of the illumination, which improves performance.
  • FIG. 9 illustrates the OCEAN scene, where the viewer sees light beams refracted through the ocean surface and scattering in the ocean's media.
  • The progressive rasterization converges in less than a second. A single pass with 4K beams renders in real-time using GPU rasterization at around 600 FPS (under 2 ms per pass). The image after 20 passes (top) renders at around 30 FPS (33 ms). The high-quality result renders in less than a second (bottom, 450 passes).
  • FIG. 10 is a block diagram of hardware that might be used to implement a renderer.
  • the renderer can use a dedicated computer system that only renders, but might also be part of a computer system that performs other actions, such as executing a real-time game or other experience with rendering images being one part of the operation.
  • Rendering system 800 is illustrated including a processing unit 820 coupled to one or more display devices 810 , which might be used to display the intermediate images or accumulated images or final images, as well as allow for interactive specification of scene elements and/or rendering parameters.
  • a variety of user input devices, 830 and 840 may be provided as inputs.
  • a data interface 850 may also be provided.
  • user input device 830 includes wired-connections such as a computer-type keyboard, a computer mouse, a trackball, a track pad, a joystick, drawing tablet, microphone, and the like; and user input device 840 includes wireless connections such as wireless remote controls, wireless keyboards, wireless mice, and the like.
  • user input devices 830 - 840 typically allow a user to select objects, icons, text and the like that graphically appear on a display device (e.g., 810 ) via a command such as a click of a button or the like.
  • Other embodiments of user input devices include front-panel buttons on processing unit 820 .
  • Embodiments of data interfaces 850 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, FireWire interface, USB interface, and the like.
  • data interfaces 850 may be coupled to a computer network, to a FireWire bus, a satellite cable connection, an optical cable, a wired-cable connection, or the like.
  • Processing unit 820 might include one or more CPUs and one or more GPUs.
  • Processing unit 820 may include familiar computer-type components such as a processor 860, and memory storage devices, such as a random access memory (RAM) 870, disk drives 880, and a system bus 890 interconnecting the above components.
  • RAM: random access memory
  • The CPU(s) and/or GPU(s) can execute instructions representative of process steps described herein.
  • RAM 870 and hard-disk drive 880 are examples of tangible media configured to store data such as images, scene data, instructions and the like.
  • Other types of tangible media includes removable hard disks, optical storage media such as CD-ROMS, DVD-ROMS, and bar codes, semiconductor memories such as flash memories, read-only-memories (ROMS), battery-backed volatile memories, networked storage devices, and the like ( 825 ).
  • FIG. 10 is representative of a processing unit 820 capable of rendering or otherwise generating images. It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention.
  • processing unit 820 may be a personal computer, handheld computer, server farm, or similar hardware.
  • the techniques described below may be implemented upon a chip or an auxiliary processing board.
  • The variance of the error in Equation 8 is as shown by Equation A1.
  • The term remaining in square brackets is just a constant associated with the kernel, which we denote $C_1$.
  • Following Equation B3, the expected error evaluates to Equation B5:

$$E[\epsilon(x, \vec{w}, r)] = E[\Phi]\bigl(p_{U_{\vec{w}}}(0) + r\,C_2\bigr) - E[\Phi]\,p_{U_{\vec{w}}}(0) = r\,E[\Phi]\,C_2 \quad \text{(Eqn. B5)}$$
  • For M photons in the photon beams estimate, each with their own kernel radius, the variance is as shown in Equation C1, where we use the harmonic mean of the radii, $\bar{r}_H = M / \sum_{j=1}^{M} (1/r_j)$, in the last step.
  • In Equation D1, $r_A$ denotes the arithmetic mean of the beam radii.
  • the Monte Carlo transmittance estimator increases variance per pass.
  • the variance of the transmittance estimate increases with distance (where fewer random samples propagate). More precisely, at a distance where transmittance is 1%, only 1% of the beams contribute, which results in higher variance.
  • the worst-case scenario is if both the camera and light source are very far away from the subject (or, conversely, if the medium is optically thick) since most of the beams terminate before reaching the subject, and most of the deep shadow map distances to the camera result in zero contribution.
  • An unbiased transmittance estimator which falls off to zero in a piecewise-continuous, and not piecewise-constant, fashion might be used if this is an issue. Markov Chain Monte Carlo or adaptive sampling techniques could be used to reduce variance.
  • Adaptive techniques might be used to choose α to optimize convergence.
  • Progressive photon beam processing can render complex illumination in participating media. It converges to the gold standard of rendering, i.e., unbiased, noise-free solutions of the radiative transfer and the rendering equation, while being robust to complex light paths including SDS and SMS subpaths. Such processing can be combined with photon beams in a simple and elegant way. In each iteration of a progressive process, a global scaling factor is applied to the beam radii and reduced each pass.
  • Embodiments disclosed herein describe progressive photon beams, a new algorithm to render complex illumination in participating media.
  • the main advantage of the algorithm disclosed herein is that it converges to the gold standard of rendering, i.e., unbiased, noise-free solutions of the radiative transfer and the rendering equation, while being robust to complex light paths including specular-diffuse-specular subpaths.
  • The α parameter that controls the trade-off between reducing variance and bias is set to a constant. Some embodiments may adaptively determine this parameter to optimize convergence.
  • photon beams can be sampled in various manners.
  • combinations or sub-combinations of the above disclosed embodiments can be advantageously made.
  • the block diagrams of the architecture and flow charts are grouped for ease of understanding. However it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Generation (AREA)

Abstract

A method for rendering radiance for a volumetric medium is provided. A photon simulation produces a representation of photon beams in a scene. The photon beams are rendered with respect to a camera viewpoint, by computing an estimated radiance associated with the photon beams. A global radius scaling factor can be applied to obtain different radii for the photon beams. Over multiple applications of these steps, the global radius scaling factor can be decreased, thereby reducing overall error by facilitating convergence. Finally, the renderer can be efficiently implemented on the GPU as a splatting operation, for use in interactive and real-time applications.

Description

CROSS-REFERENCES TO RELATED APPLICATIONS
The present application is a continuation of U.S. patent application Ser. No. 13/235,299 filed Sep. 16, 2011, the entire contents of which are incorporated herein by reference for all purposes.
BACKGROUND
Computer-generated imagery typically involves using software and/or hardware to generate one or more images from a geometric model. A geometric model defines objects, light sources, and other elements of a virtual scene and a rendering system or other computer/software/hardware system will read in the geometric model and determine what colors are needed in what portions of the image. A renderer will generate a two-dimensional (“2D”) or three-dimensional (“3D”) array of pixel color values that collectively result in the desired image or images.
For a simple geometric model, such as a cube in a vacuum with a single light source, a simple computer program running on most computer hardware could render the corresponding image in a reasonable amount of time without much optimization effort. However, there are many needs—in the entertainment industry and beyond—for methods and apparatus that can efficiently process complex interactions of virtual objects to generate imagery in constrained timeframes where the images might need to convey realistic light interactions, such as light interacting with a participating medium.
Examples of such interaction include the scattering of light, which results in visual complexity, such as when the light interacts with participating media such as clouds, fog, and even air. Rendering this complex light transport typically involves solving a radiative transport equation [Chandrasekar 1960] combined with a rendering equation [Kajiya 1986] as a boundary condition. Other light interactions might include light passing through objects, such as refractive objects.
Rendering preferably involves computing unbiased, noise-free images. Unfortunately, the typical options to achieve this are variants of brute force path tracing [Kajiya 1986; Lafortune and Willems 1993; Veach and Guibas 1994; Lafortune and Willems 1996] and Metropolis light transport [Veach and Guibas 1997; Pauly et al. 2000], which are notoriously slow to converge to noise-free images despite recent advances [Raab et al. 2008; Yue et al. 2010]. This becomes particularly problematic when the scene contains so-called specular-diffuse-specular (“SDS”) subpaths or specular-media-specular (“SMS”) subpaths, which are actually quite common in physical scenes (e.g., illumination due to a light source inside a glass fixture). Unfortunately, path tracing methods cannot robustly handle these situations, especially in the presence of small light sources.
Methods based on volumetric photon mapping [Jensen and Christensen 1998] do not suffer from these problems. They can robustly handle SDS and SMS subpaths, and generally produce less noise. However, these methods suffer from bias. Bias can be eliminated, in theory, by using infinitely many photons, but in practice this is not feasible since tracking infinitely many photons requires unlimited memory.
Volumetric photon mapping is an approach to dealing with light in a participating medium. It is described in [Jensen and Christensen 1998] and subsequently improved by [Jarosz et al. 2008] to avoid costly and redundant density queries due to ray marching. In those approaches, a renderer formulates a “beam radiance estimate” that considers all photons along the length of a ray in one query. [Jarosz et al. 2011] showed how to apply the beam concept not just to the query operation but also to the photon data representation. In that approach, the entire photon path is used instead of just photon points, to obtain significant quality and performance improvement. This is similar in spirit to the concept of ray maps for surface illumination [Lastra et al. 2002; Havran et al. 2005; Herzog et al. 2007] as well as the recent line-space gathering technique [Sun et al. 2010]. These methods result in bias, which allows for more efficient simulation; however, when the majority of the illumination is due to caustics (which is often the case with realistic lighting fixtures or when there are specular surfaces), the photons are visualized directly and a large number is required to obtain high-quality results. Though these methods converge to an exact solution as the number of photons increases, obtaining a converged solution requires storing an infinite collection of photons, which is not feasible.
Progressive photon mapping [Hachisuka et al. 2008] alleviates this memory constraint. Instead of storing all photons needed to obtain a converged result, it updates progressive estimates of radiance at measurement points in the scene [Hachisuka et al. 2008; Hachisuka and Jensen 2009] or on the image plane [Knaus and Zwicker 2011]. Photons are traced and discarded progressively, and the radiance estimates are updated after each photon tracing pass in such a way that the approximation converges to the correct solution in the limit.
Previous progressive techniques have primarily focused on surface illumination, but [Knaus and Zwicker 2011] demonstrated results for traditional volumetric photon mapping [Jensen and Christensen 1998]. Unfortunately, volumetric photon mapping with photon points produces inferior results to photon beams [Jarosz et al. 2011], especially in the presence of complex specular interactions that benefit most from progressive estimation.
Thus, even with those techniques, there was room for improvement.
REFERENCES
  • AKENINE-MÖLLER, T., HAINES, E., and HOFFMAN, N. 2008. Real-Time Rendering, 3rd Edition. A. K. Peters, Ltd., Natick, Mass., USA.
  • CHANDRASEKAR, S. 1960. Radiative Transfer. Dover Publications.
  • ENGELHARDT, T., NOVAK, J., and DACHSBACHER, C. 2010. Instant multiple scattering for interactive rendering of heterogeneous participating media. Tech. Rep., Karlsruhe Institute of Technology (December).
  • HACHISUKA, T., and JENSEN, H. W. 2009. Stochastic progressive photon mapping. ACM Transactions on Graphics (December).
  • HACHISUKA, T., OGAKI, S., and JENSEN, H. W. 2008. Progressive photon mapping. ACM Transactions on Graphics (December).
  • HAVRAN, V., BITTNER, J., HERZOG, R., and SEIDEL, H.-P. 2005. Ray maps for global illumination. In Rendering Techniques, 43-54.
  • HERZOG, R., HAVRAN, V., KINUWAKI, S., MYSZKOWSKI, K., and SEIDEL, H.-P. 2007. Global illumination using photon ray splatting. Computer Graphics Forum 26, 3 (September), 503-513.
  • HU, W., DONG, Z., IHRKE, I., GROSCH, T., YUAN, G., and SEIDEL, H.-P. 2010. Interactive volume caustics in single-scattering media. In I3D, ACM.
  • JAROSZ, W., ZWICKER, M., and JENSEN, H. W. 2008. The beam radiance estimate for volumetric photon mapping. Computer Graphics Forum 27, 2 (April), 557-566.
  • JAROSZ, W., NOWROUZEZAHRAI, D., SADEGHI, I., and JENSEN, H. W. 2011. A comprehensive theory of volumetric radiance estimation using photon points and beams. ACM Transactions on Graphics.
  • JENSEN, H. W., and CHRISTENSEN, P. H. 1998. Efficient simulation of light transport in scenes with participating media using photon maps. In Proceedings of SIGGRAPH.
  • JENSEN, H. W. 2001. Realistic Image Synthesis Using Photon Mapping. A. K. Peters, Ltd., Natick, Mass., USA.
  • KAJIYA, J. T. 1986. The rendering equation. In Computer Graphics (Proceedings of SIGGRAPH 86), 143-150.
  • KNAUS, C., and ZWICKER, M. 2011. Progressive photon mapping: A probabilistic approach. ACM Transactions on Graphics.
  • KRÜGER, J., BÜRGER, K., and WESTERMANN, R. 2006. Interactive screen-space accurate photon tracing on GPUs. In Rendering Techniques.
  • LAFORTUNE, E. P., and WILLEMS, Y. D. 1993. Bi-directional path tracing. In Compugraphics.
  • LAFORTUNE, E. P., and WILLEMS, Y. D. 1996. Rendering participating media with bidirectional path tracing. In EG Rendering Workshop.
  • LASTRA, M., UREÑA, C., REVELLES, J., and MONTES, R. 2002. A particle-path based method for Monte Carlo density estimation. In EG Workshop on Rendering, EG Association.
  • LIKTOR, G., and DACHSBACHER, C. 2011. Real-time volume caustics with adaptive beam tracing. In Symposium on Interactive 3D Graphics and Games, ACM, New York, N.Y., USA, I3D '11, 47-54.
  • LOKOVIC, T., and VEACH, E. 2000. Deep shadow maps. In SIGGRAPH, ACM Press, New York, N.Y., USA, 385-392.
  • MCGUIRE, M., and LUEBKE, D. 2009. Hardware-accelerated global illumination by image space photon mapping. In HPG, ACM.
  • PARKER, S. G., BIGLER, J., DIETRICH, A., FRIEDRICH, H., HOBEROCK, J., LUEBKE, D., MCALLISTER, D., MCGUIRE, M., MORLEY, K., ROBISON, A., and STICH, M. 2010. OptiX: A general purpose ray tracing engine. ACM Transactions on Graphics (July).
  • PAULY, M., KOLLIG, T., and KELLER, A. 2000. Metropolis light transport for participating media. In Rendering Techniques, 11-22.
  • PERLIN, K. 2001. Noise hardware. In Real-time Shading, ACM SIGGRAPH Course Notes.
  • RAAB, M., SEIBERT, D., and KELLER, A. 2008. Unbiased global illumination with participating media. In Monte Carlo and Quasi-Monte Carlo Methods 2006. Springer, 591-606.
  • SCHJØTH, L., FRISVAD, J. R., ERLEBEN, K., and SPORRING, J. 2007. Photon differentials. In GRAPHITE, ACM, New York.
  • SILVERMAN, B. 1986. Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. Chapman and Hall, New York.
  • SUN, X., ZHOU, K., LIN, S., and GUO, B. 2010. Line space gathering for single scattering in large scenes. ACM Transactions on Graphics.
  • SZIRMAY-KALOS, L., TÓTH, B., and MAGDICS, M. 2011. Free path sampling in high resolution inhomogeneous participating media. Computer Graphics Forum 30, 1, 85-97.
  • VEACH, E., and GUIBAS, L. 1994. Bidirectional estimators for light transport. In Fifth Eurographics Workshop on Rendering, 147-162.
  • VEACH, E., and GUIBAS, L. J. 1997. Metropolis light transport, In Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, 65-76.
  • WALTER, B., ZHAO, S., HOLZSCHUCH, N., and BALA, K. 2009. Single scattering in refractive media with triangle mesh boundaries. ACM Transactions on Graphics 28, 3 (July), 92:1-92:8.
  • WILLIAMS, L. 1978. Casting curved shadows on curved surfaces. In Computer Graphics (Proceedings of SIGGRAPH 78), 270-274.
  • WOODCOCK, E., MURPHY, T., HEMMINGS, P., and LONGWORTH, T. C. 1965. Techniques used in the GEM code for Monte Carlo neutronics calculations in reactors and other systems of complex geometry. In App. of Computing Methods to Reactor Prob., Argonne National Laboratory.
  • YUE, Y., IWASAKI, K., CHEN, B.-Y., DOBASHI, Y., and NISHITA, T. 2010. Unbiased, adaptive stochastic sampling for rendering inhomogeneous participating media. ACM Transactions on Graphics.
SUMMARY
A method and system for progressively rendering radiance for a volumetric medium is provided, such that computing images can be done in software and/or hardware efficiently and still represent the desired image effects. Given a geometric model of a virtual space, or another description of a scene, a process or apparatus generates an image or multiple images, in part using a photon simulation process to produce a representation of photon beams in a scene. The photon beams are rendered with respect to a camera viewpoint, iteratively, by computing an estimated radiance associated with the photon beams. Over multiple iterations, the global radius scaling factor is progressively decreased, thereby reducing overall error by facilitating convergence. Finally, a representation of the computed average estimated radiance at each pixel in the scene is stored. In each iteration, there might be different samplings of photon beams and different beam radii for different beams in the iterations. In a specific embodiment, a global radius parameter is set for each iteration so that all beams share the same radius within one intermediate image, with the radius differing from image to image.
In some embodiments, the process is implemented using a graphics processing unit (“GPU”) and formulating the process as a splatting operation, for use in interactive and real-time applications.
In some embodiments, heterogeneous participating media are handled. One method is to handle the beams piecewise, including setting a termination point for each beam, as in shadow mapping. Such techniques can be expanded beyond photon beams.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates example scenes rendered using an embodiment of the invention; FIG. 1( a) relates to a disco ball scene, the results for independent render passes, and averages of the multiple render passes; FIG. 1( b) relates to a flashlights scene and corresponding results.
FIG. 2 illustrates a flowchart of a disclosed embodiment.
FIG. 3 illustrates the geometric configuration of estimating radiance in some embodiments.
FIG. 4 illustrates estimation of volumetric scattering in some embodiments.
FIG. 5 illustrates three versions of an example scene rendered using an embodiment of the invention.
FIG. 6 shows two graphs plotting sample variance of error, sample variance of average error, and expected value of the average error with three different α settings for the highlighted point in the example scene in FIG. 5.
FIG. 7 shows a graph plotting the global radius scaling factor with variations in the number of photon beams per pass, M.
FIG. 8 illustrates media radiance in an example scene rendered using an embodiment of the invention.
FIG. 9 illustrates two versions of an example scene, in this case rendered entirely on a GPU, using an embodiment of the invention.
FIG. 10 illustrates an example of a hardware system that might be used for rendering.
DETAILED DESCRIPTION
The present invention is described herein, in many places, as a set of computations. It should be understood that these computations are not performable manually, but are performed by an appropriately programmed computer, computing device, electronic device, or the like, that might be a general purpose computer, a graphical processing unit, and/or other hardware. As with any physical system, there are constraints as to the memory available and the number of calculations that can be done in a given amount of time. Embodiments of the present invention might be described in mathematical terms, but one of ordinary skill in the art, such as one familiar with computer graphics, would understand that the mathematical steps are to be implemented for execution in some sort of hardware environment. Therefore, it will be assumed that such hardware and/or associated software or instructions are present and the description below will not be burdened with constant mention of same. Embodiments of the present invention might be implemented entirely in software stored on tangible, non-transitory or transitory media or systems, such that it is electronically readable. While in places, process steps might be described by language such as “we calculate” or “we evaluate” or “we determine”, it should be apparent in some contexts herein that such steps are performed by computer hardware and/or defined by computer hardware instructions and not persons.
The essential task of such computer software and/or hardware, in this instance, is to generate images from a geometric model or other description of a virtual scene. In such instances, it will be assumed that the model/description includes a description of a space, a virtual camera location and virtual camera details, objects, and light sources. Also, here we will assume some participating media, i.e., virtual media in the space that light from the virtual light sources passes through and interacts with. Such effects would cause some light from the light source to be deflected from its original path in some direction, deflected off of some part of the media and toward the camera viewpoint. Of course, there need not be actual light and actual reflection, but it is often desirable that the image be generated with realism, i.e., creating images where the effects appear to behave in a manner similar to how light behaves and interacts with objects in a physical space.
As explained above, simple images are easy to render, but often that is not what is desired. Complex images might include participating media such as fog, clouds, etc. but light from transformative objects might also need to be dealt with. An example is caustics, light paths created by light being bent, for example, by passing through glass shapes.
In many descriptions of image generators, the output image is in the form of a two-dimensional (“2D”) or three-dimensional (“3D”) array of pixel values and in such cases, the software and/or hardware used to generate those images is referred to as a renderer. There might be image generators that generate other representations of images, so it should be understood that unless otherwise indicated, a renderer outputs images in a suitable form. In some cases, the renderer renders an image or images in part, leaving some other module, component or system to perform additional steps on the image or images to form a completed image or images.
As explained, a renderer takes as its input a geometric model or some representation of the objects, lighting, effects, etc. present in a virtual scene and derives one or more images of that virtual scene from a camera viewpoint. As such, the renderer is expected to have some mechanism for reading in that model or representation in an electronic form, store those inputs in some accessible memory and have computing power to make computations. The renderer will also have memory for storing intermediate variables, program variables, as well as storage for data structures related to lighting, photon beams and the like, as well as storage for intermediate images. Thus, it should be understood that when the renderer is described, for example, as having generated multiple intermediate images and averaging them, it should be understood that the corresponding computational operations are performed, e.g., a processor reads values of pixels of the intermediate images, averages pixels and stores a result as another intermediate image, an accumulation image, or the final image, etc. The renderer might also include or have access to a random number generator.
In approaches described below, a progressive photon beam process is used. In such a process, photon beams are used in multiple rendering passes, where the rendering passes use different photon beam radii and the results are combined. It might be such that two or more rendering passes use the same photon beam radius, and while most of the examples described herein assume a different photon beam radius for each pass, the invention is not so limited. These processes are efficient, robust to complex light paths, and handle heterogeneous media and anisotropic scattering while provably converging to the correct solution using a bounded memory footprint.
In many examples herein, a progressive mapping with multiple iterations starts with a pass with a given beam radius, followed by a pass with a smaller radius, and so on, until sufficient convergence is obtained. Each pass can be independent of other passes, so the order can be changed among the intermediate images that are averaged together without affecting the final result. In fact, in some embodiments, the radii might vary within an iteration, with the radius of a given beam varying from iteration to iteration, with a similar final result as the case where all of the beams of one radius are in the same iterative pass. Of course, nothing requires that, from iteration to iteration, the same beams be used. In a typical embodiment, suppose a light source is to be represented by 100 beams. Of all the possible beam directions (and beam origins, perhaps, for non-point sources), 100 beams are randomly selected for one iterative pass, and for the next iterative pass, another 100 beams are randomly selected, so they are likely to be different from the beams in the first pass. However, when all the intermediate images are combined, their effects average out and converge to the correct solution.
Progressive photon beam methods can robustly handle situations that are difficult for most other algorithms, such as scenes containing participating media and specular interfaces, with realistic light sources completely enclosed by refractive and reflective materials. Our technique described herein handles heterogeneous media and also trivially supports stochastic effects, such as depth-of-field and glossy materials. As explained herein, progressive photon beams can be implemented efficiently on a GPU as a splatting operation, making it applicable to interactive and real-time applications. These features can provide scalability, provide the same physically-based algorithm for interactive feedback and reference-quality, and unbiased solutions.
Convergence is achieved with less computational effort than, say, path tracing, and the approach is robust to SDS (specular-diffuse-specular) or SMS (specular-media-specular) subpaths, and has a bounded memory footprint. GPU acceleration allows for interactive lighting design in the presence of complex light sources and participating media. This makes it possible to produce interactive previews with the same technique used for a high-quality final render, providing visual consistency, an essential property for interactive lighting design tools.
Photon beam handling is a generalization of volumetric photon mapping, which accelerates participating media rendering by considering the full path of photons (beams), instead of just photon scattering locations. Typically, photon beams are blurred with a finite width, leading to bias. Reducing this width reduces bias, but unfortunately increases noise. Progressive photon mapping (“PPM”) provides a way to eliminate bias and noise simultaneously in photon mapping. Unfortunately, naively applying PPM to photon beams is not possible due to the fundamental differences between density estimation using points and beams, so convergence guarantees need to be re-derived for this more complicated case. Moreover, previous PPM derivations only apply to fixed-radius or k-nearest neighbor density estimation, which are commonly used for surface illumination. Photon beams, on the other hand, are formulated using variable kernel density estimation, where each beam has an associated kernel. However, using methods and apparatus described herein, the challenges of rendering beams progressively can be overcome. Described herein is an efficient density estimation framework for participating media that is robust to SDS and SMS subpaths, and which converges to ground truth with bounded memory usage. Additionally, it is described how photon beams can be applied to efficiently handle heterogeneous media.
FIG. 1 illustrates example scenes rendered using iterative photon beam rendering. FIG. 1( a) relates to a disco ball scene, the results for independent render passes, and averages of the multiple render passes; FIG. 1( b) relates to a flashlights scene and corresponding results. The left half of the top image shows results for the case where homogeneous media is assumed and the right half of the top image shows results for the case where heterogeneous media is assumed. The middle sequence of images is intermediate images, each rendered with different beam radii, while the bottom sequence of images is the averaging of the intermediate images. In this example, the photon beam radii are progressive, in that the first image has wide radii and each successive one has smaller radii. As explained herein, it may be that the order is not important, such as where all of the intermediate images are averaged. However, it is sometimes easier to understand if it is assumed that intermediate images are generated in a progressive order.
In such examples, the renderer renders each pass using a collection of stochastically generated photon beams. As a key step, the radii of the photon beams are reduced using a global scaling factor after each pass. Therefore each subsequent image has less bias, but slightly more noise. As more passes are added, however, the average of these intermediate images converges to the correct solution, generating an unbiased solution with finite memory. As explained herein, a theoretical error analysis of density estimation using photon beams derives the necessary conditions for convergence, and a numerical validation of the theory is provided herein. A progressive generalization of deep shadow maps handles heterogeneous media efficiently. The photon beam radiance estimate formulated as a splatting operation exploits GPU rasterization, allowing simple scenes to be rendered with multiple specular reflections in real-time.
FIG. 2 illustrates a flowchart of an example embodiment comprising multiple iterations of steps 210 through 240. In step 210, a photon simulation is performed, resulting in a plurality of photon beams in a scene. The photon simulation may be performed in any conventional manner, such as by performing, for example, a shadow-mapping operation, a rasterization operation, a ray-marching operation, or a ray-tracing operation. In some embodiments, the performing photon simulation includes calculating an interaction of the plurality of photon beams with geometry and participating media in the scene.
In step 220, the photon beams are rendered with respect to a camera viewpoint. Rendering the photon beams includes computing estimated radiance associated with the photon beams. Rendering may be performed in any conventional manner, such as by performing, for example, a splatting operation, a ray-tracing operation, a ray-marching operation, or a rasterization operation. In some embodiments, rendering the photon beams comprises determining a contribution of the plurality of photon beams to illumination of one or more pixels in the scene based on a progressive deep shadow map.
In step 230, a global radius scaling factor is applied to the radius used for photon beams, relative to the radius used for the prior iteration. In step 240, the global radius scaling factor is progressively decreased over iterations of steps 210 through 240. In some embodiments, determining a progressive decrease in the global radius scaling factor comprises decreasing a kernel width of the photon beam. In some embodiments, determining a progressive decrease in the global radius scaling factor comprises decreasing a kernel width of a query ray. In some embodiments, determining a progressive decrease in the global radius scaling factor comprises enforcing a ratio of variance between successive iterations.
In some embodiments, a representation of the computed average estimated radiance at each pixel in the scene is stored in a computer-readable storage medium, in effect averaging the intermediate images that are rendered in each pass. In some embodiments, the rendered photon beams are discarded after each iteration of steps 210 through 240 in order to reduce memory usage. In some embodiments, rather than reducing the radii each time, the radii are varied in another manner. In some embodiments, rather than using the same radius for each beam in an intermediate image, the radii are varied, but are varied so that the same radius is not used repeatedly for the same beam. Beams may be randomly selected among possible beams.
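The following C++ sketch outlines one possible implementation of the loop of steps 210 through 240. It is a minimal illustration under stated assumptions: the types and the helpers shootPhotonBeams( ) and renderBeams( ) are hypothetical stand-ins for the photon simulation (step 210) and rendering (step 220) described above.

#include <cstddef>
#include <random>
#include <vector>

struct PhotonBeam { /* origin, direction, power, per-beam radius, etc. */ };

// Hypothetical helpers assumed to exist elsewhere in the renderer:
std::vector<PhotonBeam> shootPhotonBeams(std::mt19937& rng, int count);   // step 210
void renderBeams(const std::vector<PhotonBeam>& beams, float radiusScale,
                 std::vector<float>& passImage);                          // step 220

void progressiveRender(int numPasses, int beamsPerPass, float alpha,
                       std::vector<float>& accumImage)
{
    std::mt19937 rng(12345);
    float R = 1.0f;  // global radius scaling factor; starts at 1
    for (int i = 1; i <= numPasses; ++i) {
        // Step 210: a fresh, independently sampled set of photon beams.
        std::vector<PhotonBeam> beams = shootPhotonBeams(rng, beamsPerPass);

        // Steps 220-230: render this pass with all beam radii scaled by R.
        std::vector<float> passImage(accumImage.size(), 0.0f);
        renderBeams(beams, R, passImage);

        // Running average of the intermediate (per-pass) images.
        for (std::size_t p = 0; p < accumImage.size(); ++p)
            accumImage[p] += (passImage[p] - accumImage[p]) / float(i);

        // Step 240: progressively decrease the global scaling factor
        // (per the radius reduction sequence derived later herein).
        R *= (i + alpha) / (i + 1);
        // Beams go out of scope here, bounding memory use across passes.
    }
}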
Background Explanation of Light Transport
Technical details of light transport in participating media are explained at the outset below, and relevant equations related to photon beams and progressive photon mapping provided.
Light Transport in Participating Media
In the presence of participating media, the light incident at any point, x, in a scene (e.g., the camera view point) from a direction, $\vec{w}$ (the over arrow here signals a ray or vector), such as the direction through a pixel, can be expressed using a radiative transport equation [Chandrasekar 1960] as the sum of two terms, as in Equation 1.

$$L(x, \vec{w}) = T_r(s)\,L_s(x_s, \vec{w}) + L_m(x, \vec{w}) \qquad \text{(Eqn. 1)}$$
The first term is the outgoing reflected radiance from a surface, $L_s$, at the endpoint of the ray, $x_s = x - s\vec{w}$, after attenuation due to the transmittance $T_r$. In homogeneous media, $T_r(s) = e^{-s\sigma_t}$, where $\sigma_t$ is the extinction coefficient. In heterogeneous media, transmittance accounts for the extinction coefficient along the entire segment between the two points, but we use this simple one-parameter notation here for brevity. The second term is the medium radiance, shown in Equation 2, where f is the normalized phase function, $\sigma_s$ is the scattering coefficient, and w is a scalar distance along the camera direction $\vec{w}$ (which is itself a vector, as indicated by the superimposed arrow).
$$L_m(x, \vec{w}) = \int_0^s \sigma_s(x_w)\,T_r(w) \int_\Omega f(\vec{w} \cdot \vec{w}')\,L(x_w, \vec{w}')\,d\vec{w}'\,dw \qquad \text{(Eqn. 2)}$$
Equation 2 integrates scattered light at all points along $x_w = x - w\vec{w}$ until the nearest surface at distance s. The inner integral corresponds to the in-scattered radiance, which recursively depends on radiance arriving at $x_w$ from directions $\vec{w}'$ on the sphere Ω.
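Concretely, the transmittance reduces to a single exponential in homogeneous media, while in heterogeneous media it depends on the extinction coefficient along the whole segment. The following minimal C++ sketch illustrates both cases; the fixed-step quadrature shown for the heterogeneous case is the simple but biased ray-marching approach revisited later in this document, and sigmaT is a hypothetical callback.

#include <cmath>
#include <functional>

// Homogeneous media: Tr(s) = exp(-s * sigma_t).
inline double transmittanceHomogeneous(double s, double sigmaT)
{
    return std::exp(-s * sigmaT);
}

// Heterogeneous media: Tr(s) = exp(-integral of sigma_t along the segment).
// Midpoint-rule ray marching; simple, but biased for finite step counts.
inline double transmittanceRayMarched(const std::function<double(double)>& sigmaT,
                                      double s, int steps)
{
    double opticalDepth = 0.0;
    const double dt = s / steps;
    for (int i = 0; i < steps; ++i)
        opticalDepth += sigmaT((i + 0.5) * dt) * dt;
    return std::exp(-opticalDepth);
}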
Photon Beams
Photon mapping methods approximate the medium radiance (see, Eqn. 2) using a collection of photons, each with a power, position, and direction. Instead of performing density estimation on just the positions of the photons, the recent photon beams approach [Jarosz et al. 2011] treats each photon as a beam of light starting at the photon position and shooting in the photon's outgoing direction. [Jarosz et al. 2011] derived a “Beam×Beam 1D” estimate that directly estimates medium radiance due to photon beams along a query ray. Herein, a coordinate system $(\vec{u}, \vec{v}, \vec{w})$ is used, where $\vec{w}$ is the query ray direction, $\vec{v}$ is the direction of the photon beam, and $\vec{u} = \vec{v} \times \vec{w}$ is the direction perpendicular to both the query ray and the photon beam.
FIG. 3 illustrates a use of this coordinate system and radiance estimation with one photon beam as viewed from the side (left) and in the plane perpendicular to the query ray (right). The direction $\vec{u} = \vec{v} \times \vec{w}$ extends out of the page (left). To estimate radiance due to a photon beam, treat the beam as an infinite number of imaginary photon points along its length (as shown in the right-hand side of FIG. 3). The power of the photons is blurred in 1D, along $\vec{u}$.
An estimate of the incident radiance along the direction $\vec{w}$ using one photon beam can be expressed as shown in Equation 3, where Φ is the power of the photon, θ is the angle between the photon beam and the query ray, and the scalars (u, v, w) are signed distances along the three axes to the imaginary photon point closest to the query ray (the point on the beam closest to the ray $\vec{w}$).

$$L_m(x, \vec{w}, r) \approx k_r(u)\,\sigma_s(x_w)\,\Phi\,T_r(w)\,T_r(v)\,\frac{f(\theta)}{\sin\theta} \qquad \text{(Eqn. 3)}$$

The first transmittance term accounts for attenuation through a distance w to x, and the second computes the transmittance through a distance v to the position of the photon. The photon beam is blurred using a 1D kernel $k_r$ centered on the beam with a support width of r along direction $\vec{u}$.
In practice, Equation 3 is evaluated for many beams to obtain a high quality image and is a consistent estimator like standard photon mapping. In other words, it produces an unbiased solution when using an infinite number of beams with an infinitesimally small blur kernel. This is an important property which we will use later on.
However, obtaining an unbiased result requires storing an infinite number of beams, and using an infinitesimal blurring kernel; neither of which are feasible in practice. Below is an analysis of the variance and bias of Equation 3 usable to develop a progressive, memory-bounded consistent estimator.
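The following C++ sketch shows how Equation 3 might be evaluated for a single beam once the geometric quantities have been computed. The box kernel and the parameter names are illustrative assumptions; any normalized 1D kernel of support width r could be substituted.

#include <cmath>

// One simple choice for the normalized 1D blur kernel k_r: a box kernel of
// half-width r.
inline float kernel1D(float u, float r)
{
    return (std::fabs(u) <= r) ? 0.5f / r : 0.0f;
}

// "Beam x Beam 1D" estimate of Eqn. 3 for one photon beam. u is the signed
// distance between beam and ray, theta the angle between them; trW and trV
// are the two transmittance terms (which account for the distances w and v),
// and phase is the phase function value f for this beam/ray configuration.
float beamRadianceEstimate(float u, float theta, float power, float sigmaS,
                           float trW, float trV, float phase, float r)
{
    const float sinTheta = std::sin(theta);
    if (sinTheta <= 1e-6f)
        return 0.0f;  // near-parallel beam and ray; the 1D estimate degenerates
    return kernel1D(u, r) * power * sigmaS * trW * trV * phase / sinTheta;
}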
Progressive Photon Mapping
Progressive photon mapping (“PPM”) is known and need not be explained in detail here. [Knaus and Zwicker 2011] builds on a probabilistic analysis of the error in radiance estimation and models the error as consisting of two components: its variance (noise) and its expected value (bias). The main idea in PPM is to average a sequence of images generated using independent photon maps. Radiance estimation can be performed such that both the variance and the expected value of the average error are reduced continuously as more images are added. Therefore, PPM achieves noise- and bias-free results in the limit.
Denote the error of radiance estimation in pass i at some point in the scene being rendered by $\varepsilon_i$. Herein, ε refers to an error term. The average error after N passes is

$$\bar{\varepsilon}_N = \frac{1}{N}\sum_{i=1}^{N}\varepsilon_i.$$
Since each image i uses a different photon map, the errors εi can be interpreted as samples of independent random variables. Hence, the variance (noise) and expected value (bias) of average error are as shown in Equations 4A and 4B, respectively.
$$\mathrm{Var}[\bar{\varepsilon}_N] = \frac{1}{N^2}\sum_{i=1}^{N}\mathrm{Var}[\varepsilon_i] \qquad \text{(Eqn. 4A)}$$

$$E[\bar{\varepsilon}_N] = \frac{1}{N}\sum_{i=1}^{N}E[\varepsilon_i] \qquad \text{(Eqn. 4B)}$$
Convergence in progressive photon mapping is obtained if the average noise and bias go to zero simultaneously as the number of passes goes to infinity, i.e., as N→∞, as in Equations 5A and 5B that define convergence conditions A and B used herein.
$$\mathrm{Var}[\bar{\varepsilon}_N] \to 0 \text{ as } N \to \infty \qquad \text{(Eqn. 5A)}$$

$$E[\bar{\varepsilon}_N] \to 0 \text{ as } N \to \infty \qquad \text{(Eqn. 5B)}$$
Observe from Equations 4A and 4B that if the same radii were used in the radiance estimate in each pass, the variance of the average error would be reduced, but the bias would remain the same. PPM therefore decreases (or varies) the radiance estimation radii in each pass by a small factor. This increases variance in each pass, but only so much that the variance of the average still vanishes. At the same time, reducing the radii decreases the bias, hence achieving the desired convergence.
Applying this to radiance estimation using photon beams provides novel benefits.
Progressive Photon Beams
The renderer computes the contribution of media radiance, Lm, to pixel values c. FIG. 4 illustrates the problem schematically, and Equation 6 shows this mathematically.
$$c = \iint W(x, \vec{w})\,L_m(x, \vec{w})\,dx\,d\vec{w} \qquad \text{(Eqn. 6)}$$
As illustrated in FIG. 4, photon beams are shot from light sources, and paths are traced from the eye until a diffuse surface is hit; the renderer then estimates volumetric scattering by finding the beam/ray intersections and weighting by the contribution W to the camera. In each pass, the renderer reduces a global radius scaling factor and repeats.
In Equation 6, W( ) is a function that weights the contribution of Lm to the pixel value (accounting for antialiasing, glossy reflection, depth-of-field, etc.). The renderer computes c by tracing a number of paths N from the eye, evaluating W, and evaluating the media radiance Lm. This forms a Monte Carlo estimate $\bar{c}_N$ for c, as in Equation 7, where $p(x_i, \vec{w}_i)$ denotes the probability density of generating a particular position and direction when tracing paths from the eye and Lm is approximated using photon beams.

$$\bar{c}_N = \frac{1}{N}\sum_{i=1}^{N}\frac{W(x_i, \vec{w}_i)\,L_m(x_i, \vec{w}_i)}{p(x_i, \vec{w}_i)} \qquad \text{(Eqn. 7)}$$
The photon beam approximation of the media radiance introduces an error, which we can make explicit by redefining Equation 3 as in Equation 8, where all terms in Equation 3 except for the kernel have been folded into γ.

$$L_m(x, \vec{w}, r) = k_r(u)\,\gamma + \varepsilon(x, \vec{w}, r) \qquad \text{(Eqn. 8)}$$
The error term, ε, is the difference between the true radiance, $L_m(x, \vec{w})$, and the radiance estimated using a photon beam with a kernel of radius r. As explained herein, this converges.
Error Analysis of Photon Beam Density Estimation
We analyze the variance $\mathrm{Var}[\varepsilon(x, \vec{w}, r)]$ of the error (noise), and the expected value $E[\varepsilon(x, \vec{w}, r)]$ of the error (bias), for the case of one beam first. We then generalize the result for the case of many beams. This allows for a derivation of a progressive estimate for photon beams, and allows us to assign different kernel radii to each beam. Some steps are explained mathematically in more detail in a derivation section below.

Since photon beams are generated using stochastic ray tracing, we can interpret the 1D distance u between a photon beam and a query ray $\vec{w}$ as independent and identically distributed samples of a random variable U with probability density $p_U^{\vec{w}}$. The photon contributions γ can take on different values, which we again model as samples of a random variable. We assume that u and γ are independent. The variable γ incorporates several terms: $\sigma_s$, Φ, and the transmittance along w are all independent of u, and the remaining terms depend on v. Graphically, we assume that if we fix our query location and generate a random beam, the distances u and v (see right-hand side of FIG. 3) are mutually independent (note that these measure distance along orthogonal directions). This assumption need only hold locally since only beams close to a query ray contribute. Additionally, as the beam radii decrease, the accuracy of this assumption increases.
Variance Using One Photon Beam
To derive the variance of the error, we also assume that locally (within the 1D kernel's support at $\vec{w}$), the distance u between the beam and view ray is a uniformly distributed random variable. This is similar to the uniform local density assumption used in previous PPM methods [Hachisuka et al. 2008; Knaus and Zwicker 2011]. We show in the derivation section below that under these assumptions, the variance can be expressed as shown by Equation 9, wherein $C_1$ is a constant derived from the kernel, and $p_U^{\vec{w}}(0)$ is the probability density of a photon beam intersecting the view ray $\vec{w}$ exactly.

$$\mathrm{Var}[\varepsilon(x, \vec{w}, r)] = (\mathrm{Var}[\gamma] + E[\gamma]^2)\,p_U^{\vec{w}}(0)\,\frac{C_1}{r} \qquad \text{(Eqn. 9)}$$
This result states that the variance of beam radiance estimation increases linearly if we reduce the kernel radius r.
Expected Error Using One Photon Beam
On the other hand, in the derivation section below, we show that, for some constant C2, the expected error decreases linearly as we reduce the kernel support r, as in Equation 10.
$$E[\varepsilon(x, \vec{w}, r)] = r\,E[\gamma]\,C_2 \qquad \text{(Eqn. 10)}$$
Using Many Beams
In practice, the photon beam method generates images using more than one photon beam at a time. Moreover, the photon beam widths need not be equal, but could be determined adaptively per beam using, e.g., photon differentials. We can express this by generalizing Equation 8 into Equation 8A.

$$L_m(x, \vec{w}, r_1 \ldots r_M) = \frac{1}{M}\sum_{j=1}^{M} k_{r_j}(u_j)\,\gamma_j - \varepsilon(x, \vec{w}, r_1 \ldots r_M) \qquad \text{(Eqn. 8A)}$$
In the derivation section below, we show that if we use M beams, each with their own radius $r_j$ at $\vec{w}$, the variance of the error is as shown in Equation 11, where $r_H$ is the harmonic mean of the radii, $1/r_H = \frac{1}{M}\sum_{j=1}^{M} 1/r_j$. This includes the expected behavior that variance decreases linearly with the number of emitted photon beams M.

$$\mathrm{Var}[\varepsilon(x, \vec{w}, r_1 \ldots r_M)] = \frac{1}{M}(\mathrm{Var}[\gamma] + E[\gamma]^2)\,p_U^{\vec{w}}(0)\,\frac{C_1}{r_H} \qquad \text{(Eqn. 11)}$$
Furthermore, we show in subsection D of the derivation section below that the expected error is now as given in Equation 12, where $r_A$ is the arithmetic mean of the photon radii. Note that this does not depend on M, so the expected error (bias) will not decrease as we use more photon beams.

$$E[\varepsilon(x, \vec{w}, r_1 \ldots r_M)] = r_A\,E[\gamma]\,C_2 \qquad \text{(Eqn. 12)}$$
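For concreteness, the two means that govern variance (Eqn. 11) and bias (Eqn. 12) can be computed as in the following C++ sketch:

#include <vector>

// Harmonic mean r_H of the per-beam radii: 1/r_H = (1/M) * sum_j (1/r_j).
double harmonicMeanRadius(const std::vector<double>& radii)
{
    double invSum = 0.0;
    for (double r : radii) invSum += 1.0 / r;
    return double(radii.size()) / invSum;
}

// Arithmetic mean r_A of the per-beam radii.
double arithmeticMeanRadius(const std::vector<double>& radii)
{
    double sum = 0.0;
    for (double r : radii) sum += r;
    return sum / double(radii.size());
}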
To be absolutely precise, this analysis requires that the minimum radius in any pass be non-zero (to avoid infinite variance) and the maximum radius be finite (to avoid infinite bias). In practice, enforcing some bounds is not a problem and the high-level intuition remains the same: the variance increases linearly and expected error decreases linearly if we globally decrease all the beam radii as well as these radii bounds. It is also worth highlighting that for photon beams this relationship is linear due to the 1D blurring kernel. This is in contrast to the analysis performed by Knaus and Zwicker, which resulted in a quadratic relationship for the standard radiance estimate on surfaces and a cubic relationship for the volumetric radiance estimate using photon points.
Our analysis generalizes previous work in two important ways. Firstly, we consider the more complicated case of photon beams, and secondly, we allow the kernel radius to be associated with data elements (e.g., photons or photon beams) instead of the query location (as in k-nearest neighbor estimation). This second feature not only applies to photon beams but also allows variable kernel density estimation in any PPM context, such as density estimation on surfaces or in media using photons. The blur dimensionality (1D, 2D, or 3D) will dictate whether the variance and bias relationship is linear, quadratic or cubic.
Achieving Convergence
To obtain convergence using the approach outlined above, we show how to enforce conditions A and B (from Equations 5A and 5B). With PPM, the variance increases slightly in each pass, but in such a way that the variance of the average error still vanishes. Increasing variance allows us to reduce the kernel scale (see Eqn. 11), which in turn reduces the expected error of the radiance estimate (see Eqn. 12).
The convergence in PPM can be achieved by enforcing the ratio of variance between passes as indicated in Equation 13, where α is a user specified constant between 0 and 1.
$$\mathrm{Var}[\varepsilon_{i+1}]/\mathrm{Var}[\varepsilon_i] = (i+1)/(i+\alpha) \qquad \text{(Eqn. 13)}$$
Given the variance of the first pass, this ratio induces a variance sequence, where the variance of the i-th pass is predicted as shown in Equation 14.
$$\mathrm{Var}[\varepsilon_i] = \mathrm{Var}[\varepsilon_1]\left(\prod_{k=1}^{i-1}\frac{k}{k+\alpha}\right) i \qquad \text{(Eqn. 14)}$$
Using this ratio, the variance of the average error after N passes can be expressed in terms of the variance of the first pass, $\mathrm{Var}[\varepsilon_1]$, as in Equation 15, which vanishes as desired when N→∞. Hence, if we could enforce such a variance sequence, we would satisfy condition A.

$$\mathrm{Var}[\bar{\varepsilon}_N] = \frac{\mathrm{Var}[\varepsilon_1]}{N^2}\left(1 + \sum_{i=2}^{N}\left(\prod_{k=1}^{i-1}\frac{k}{k+\alpha}\right) i\right) \qquad \text{(Eqn. 15)}$$
Radius Reduction Sequence
In one particular approach, the renderer uses a global scaling factor, Ri, to scale the radius of each beam, as well as the minimum and maximum radii bounds, in pass i. Note that by scaling all radii by that global scaling factor, that scales their harmonic and arithmetic means by that factor as well. To modify the radii such that the increase in variance between passes corresponds to Equation 13, the renderer can use the expression for variance of Equation 11 in this ratio. Since variance is inversely proportional to beam radius, Equation 16 would hold.
$$\frac{R_{i+1}}{R_i} = \frac{\mathrm{Var}[\varepsilon_i]}{\mathrm{Var}[\varepsilon_{i+1}]} = \frac{i+\alpha}{i+1} \qquad \text{(Eqn. 16)}$$
Given an initial scaling factor of R1=1, this ratio induces a sequence of scaling factors according to Equation 17.
$$R_i = \left(\prod_{k=1}^{i-1}\frac{k+\alpha}{k}\right)\frac{1}{i} \qquad \text{(Eqn. 17)}$$
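As a worked example, with α=0.5 Equation 17 gives R_1=1, R_2=0.75, and R_3=0.625, which matches applying the per-pass ratio of Equation 16 twice. The sequence is naturally computed incrementally, as in this C++ sketch:

#include <vector>

// Scaling factors of Eqn. 17, computed incrementally via the ratio of
// Eqn. 16. R[i] holds R_{i+1}; R[0] = R_1 = 1.
std::vector<double> radiusScaleSequence(int numPasses, double alpha)
{
    std::vector<double> R(numPasses);
    R[0] = 1.0;
    for (int i = 1; i < numPasses; ++i)
        R[i] = R[i - 1] * (i + alpha) / (i + 1.0);  // Eqn. 16
    return R;
}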
Expected Value of Average Error
Since the expected error is proportional to the average radius (Eqn. 12), we can obtain a relation regarding the expected error of each pass from Equation 17, to get the result of Equation 18, where ε1 is the error of the first pass.
$$E[\varepsilon_i] = E[\varepsilon_1]\,R_i \qquad \text{(Eqn. 18)}$$
We can solve for the expected value of the average error in a similar way to Equation 15, with the result of Equation 19, which vanishes as N→∞. Hence, by using the radius sequence in Equation 17, we furthermore satisfy condition B.
$$E[\bar{\varepsilon}_N] = \frac{E[\varepsilon_1]}{N}\left(1 + \sum_{i=2}^{N}\left(\prod_{k=1}^{i-1}\frac{k+\alpha}{k}\right)\frac{1}{i}\right) \qquad \text{(Eqn. 19)}$$
Empirical Validation
We validated the approach described above against a reverse path tracing reference solution of the Disco scene of FIG. 1. This is an extremely difficult scene for unbiased path sampling techniques. We computed many connections between each light path and the camera to improve convergence. Unfortunately, since the light sources in this scene are point lights, mirror reflections of the media constitute SMS subpaths, which cannot be simulated. We therefore visualize only media radiance directly visible by the camera and compare to the media radiance using the techniques taught herein. The reference solution took over three hours to render, while our technique requires only three minutes (including SDS and SMS subpaths). We used α=0.7 and 19.67 million beams in total.
We also numerically validated our error analysis by examining the noise and bias behavior (for three values of α) of the highlighted point in the “Sphere Caustic” image shown as FIG. 5. The “Sphere Caustic” image contains a glass sphere, an anisotropic medium, and a point light. We shot 1K beams per pass and obtained a high-quality result in 100 passes. The rightmost image—for 100 passes—completed in 10 seconds.
FIG. 6 provides graphs of corresponding bias and variance of the highlighted point. We used progressive photon beams to simulate multiple scattering and multiple specular bounces of light in the glass. We computed 10,000 independent runs of our simulation and compared these empirical statistics to the theoretical behavior predicted by our models. On the left of FIG. 6, the sample variance of the radiance estimate as a function of the iterations is shown, in particular the per-pass variance (left; upper three curves), the average variance (left; lower three curves), and bias (right) with three α settings for the highlighted point in FIG. 5. Empirical results match the theoretical models derived herein well. The noise in the empirical curves is due to a limited number (10K) of measurement runs.
Since the theoretical error model used depends on some scene-dependent constants that are not possible to estimate in the general case, we fitted these constants to the empirical data. We gather statistics for α=0.1, 0.5, and 0.9, showing the effect of this parameter on the convergence. The top curves in FIG. 6 (left) show that the variance of each pass increases, as predicted by Equation 14. The lower three curves in FIG. 6 (left) show that variance of the average error decreases after each pass as Equation 15 predicts. We also examined the expected average error (bias) in FIG. 6 (right). These experiments show that both the bias and noise decay with the number of passes.
Example Progressive Photon Beams Process
The example process described in this section has an inner loop and an outer loop. The inner loop can be the standard two-pass photon beam process described in [Jarosz et al. 2011] or some other method.
In the first pass, photon beams are emitted from lights and scatter at surfaces and media in the scene. Other than computing appropriate beam widths, this pass can be effectively identical to the photon tracing in volumetric and surface-based photon mapping described by [Jensen 2001]. The process determines the kernel width of each beam by tracing photon differentials during the photon tracing process. The process also includes automatically computing and enforcing radii bounds to avoid infinite variance or bias. Of course, as explained above, the order might not matter.
In the second pass, the renderer computes radiance along each ray using Equation 3 or equivalent. For homogeneous media, this involves a couple of exponentials and scaling by the scattering coefficient and foreshortened phase function. The case for heterogeneous media is described further below.
In the progressive estimation framework, these two steps are repeated in each progressive pass, scaling the widths of all beams (and the radii bounds) by the global scaling factor. Other approaches are possible instead.
User Parameters
In some embodiments, a user (artist, image maker, etc.) can have a single intuitive parameter to control convergence. In standard PPM, a number of parameters influence the process' performance, such as the bias-noise tradeoff α∈(0,1), the number of photons per pass, M, and either a number of nearest neighbors k or an initial global radius.
Unfortunately, the parameters α and M both influence the bias-noise tradeoff in an interconnected way. Since the radius is updated each pass, increasing M means that the radius is scaled more slowly with respect to the total number of photons after N passes. This is illustrated by the differences between curves in FIG. 7.
FIG. 7 is a plot of the global radius scaling factor, with varying M. The standard approach produces vastly different scaling sequences for a progressive simulation using the same total number of stored photons. We reduce the scale factor M times after each pass, which approximates the scaling sequence of M=1 regardless of M.
Given two progressive runs, we would hope to obtain the same output image if the same total number of beams has been shot, regardless of the number M per pass. We provide a simple improvement that achieves this intuitive behavior by reducing the effect of M on the bias-noise tradeoff. That solution is to always apply M progressive radius updates after each pass. This induces a piecewise constant approximation of M=1 (the smooth lowest curve in FIG. 7) for the radius reduction, at any arbitrary setting of M (the lower stepwise curves in FIG. 7), as compared to the upper two stepwise curves that correspond to the standard approach for M=15 and M=100. This modifies Equations 16 and 17 to Equations 20 and 21, respectively.
$$\frac{R_{i+1}}{R_i} = \prod_{j=1}^{M}\frac{(i-1)M + j + \alpha}{(i-1)M + j + 1} \qquad \text{(Eqn. 20)}$$

$$R_i = \left(\prod_{k=1}^{Mi-1}\frac{k+\alpha}{k}\right)\frac{1}{Mi} \qquad \text{(Eqn. 21)}$$
It is clear that variance still vanishes in this modified scheme, since the beams in each pass have radii larger than (or, for the first pass, equal to) those of the “single beam per pass” (M=1) approach. The expected error also vanishes, because the scaling factor still converges to zero.
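A minimal C++ sketch of this modified update, applying M single-beam radius reductions after pass i (names are illustrative):

// One application per pass of Eqn. 20: M steps of the single-beam ratio,
// so the sequence tracks the M = 1 curve regardless of the per-pass count M.
double updateRadiusScale(double R, int passIndex /* 1-based */, int M, double alpha)
{
    for (int j = 1; j <= M; ++j) {
        const double k = double(passIndex - 1) * M + j;  // global update counter
        R *= (k + alpha) / (k + 1.0);
    }
    return R;
}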
Evaluating γ in Heterogeneous Media
In theory, our error analysis and convergence derivation above applies to both homogeneous and heterogeneous media—the properties of the medium are encapsulated in the random variable γ. To realize a convergent process for heterogeneous media, however, the scattering properties should be evaluated and the transmittances contained in γ computed. Unfortunately, the standard approach in graphics for computing transmittance, ray marching, is biased, and would compromise the process. To ensure convergence, the renderer's transmittance estimator should be unbiased.
An unbiased estimator can be implemented, in one example, by using mean-free path sampling as a black box. Given a function, $d(x, \vec{w})$, which returns a random propagation distance from a point x in direction $\vec{w}$, the transmittance between x and a point s units away in direction $\vec{w}$ is as shown by Equation 22, where δ is the Heaviside step function. This estimates transmittance by counting samples that successfully propagate a distance ≥ s.
$$T_r(x, \vec{w}, s) = E\left[\frac{1}{n}\sum_{j=1}^{n}\delta\big(d_j(x, \vec{w}) - s\big)\right] \qquad \text{(Eqn. 22)}$$
A naive solution would be to use this algorithm to compute the two transmittance terms when evaluating Equation 8. This approach can be applied both to homogeneous media, where $d(x, \vec{w}) = -\log(1-\xi)/\sigma_t$ for a uniform random number ξ, and to heterogeneous media, where $d(x, \vec{w})$ can be implemented using Woodcock tracking [Woodcock et al. 1965]. Woodcock tracking is an unbiased technique for sampling mean-free paths within heterogeneous media used in the particle physics field and recently introduced to graphics by [Raab et al. 2008].
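The following C++ sketch illustrates one way the black-box distance sampler and the estimator of Equation 22 might be implemented. The callback sigmaT and the majorant sigmaTMax (which must bound the extinction coefficient from above along the ray) are assumed inputs.

#include <cmath>
#include <random>

// Woodcock tracking: sample a propagation distance in a heterogeneous medium.
template <typename SigmaT>
float woodcockTrack(SigmaT sigmaT, float sigmaTMax, const float o[3],
                    const float d[3], float maxDist, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float t = 0.0f;
    for (;;) {
        // Tentative free-flight distance in a fictitious homogeneous medium.
        t -= std::log(1.0f - uni(rng)) / sigmaTMax;
        if (t >= maxDist)
            return maxDist;  // no collision within the segment of interest
        const float p[3] = { o[0] + t * d[0], o[1] + t * d[1], o[2] + t * d[2] };
        // Accept a real collision with probability sigmaT(p) / sigmaTMax.
        if (uni(rng) < sigmaT(p) / sigmaTMax)
            return t;
    }
}

// Eqn. 22: the fraction of n propagation samples that travel at least s.
template <typename SigmaT>
float transmittanceEstimate(SigmaT sigmaT, float sigmaTMax, const float o[3],
                            const float d[3], float s, int n, std::mt19937& rng)
{
    int reached = 0;
    for (int j = 0; j < n; ++j)
        if (woodcockTrack(sigmaT, sigmaTMax, o, d, s, rng) >= s)
            ++reached;
    return float(reached) / float(n);
}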
Though this approach converges to the correct result, it is inefficient, since obtaining low variance requires many samples, and there are many ray/beam intersections to evaluate. An improved process will efficiently evaluate transmittance for all ray/beam intersections. We first consider transmittance towards the eye, and then transmittance along each photon beam.
Progressive Deep Shadow Maps
The naive approach evaluates Equation 22 for each ray/beam intersection within a pixel. Each evaluation, however, actually provides enough information for an unbiased estimate of the transmittance function for all distances along the ray, and not just the function value at a single distance s. A renderer can handle this by computing n propagation distances and re-evaluating Equation 22 for arbitrary values of s. This results in an unbiased, piecewise-constant representation of the transmittance function, as illustrated in the left-hand side of FIG. 8. There, validation of progressive deep shadow maps is shown (thin solid line) for extinction functions (dashed line) with analytically-computable transmittances (thick solid line). In one embodiment, four random propagation distances are used, resulting in a four-step approximation of transmittance in each pass.
The collection of transmittance functions across all primary rays could be viewed as a deep shadow map from the camera. Deep shadow maps also store multiple distances to approximate transmittance; however, the key difference here is that our transmittance estimator remains unbiased, and will converge to the correct solution when averaged across passes. In subsection E of the derivation section below, we prove this convergence and validate it empirically for heterogeneous media with closed-form solutions to transmittance in FIG. 8.
In our approach, we similarly accelerate transmittance along each beam. Instead of repeatedly evaluating Equation 22 at each ray/beam intersection, the renderer can compute and store several unbiased random propagation distances along each beam. Given these distances, it can re-evaluate transmittance using Equation 22 at any distance along the beam. The collection of transmittance functions across all photon beams forms an unstructured deep shadow map that converges to the correct result with many passes.
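One possible representation of such an entry is sketched below in C++: n unbiased propagation distances are sampled once (e.g., with Woodcock tracking) and then reused to evaluate the piecewise-constant transmittance of Equation 22 at any query distance s. The structure and names are illustrative.

#include <vector>

struct DeepShadowEntry {
    std::vector<float> distances;  // n sampled propagation distances

    // Piecewise-constant, unbiased transmittance estimate at distance s:
    // the fraction of stored distances that reach at least s.
    float transmittance(float s) const {
        if (distances.empty()) return 1.0f;
        int reached = 0;
        for (float d : distances)
            if (d >= s) ++reached;
        return float(reached) / float(distances.size());
    }
};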
Effect on Error Convergence
When using Equation 22 to estimate transmittance, the only effect on the error analysis is that Var[γ] in Equations 9 and 11 increases compared to using analytic transmittance (note that bias is not affected since E[γ] does not change with an unbiased estimator). Homogeneous media could be rendered using the analytic formula or using Equation 22. Both approaches converge to the same result (as illustrated by the top row of FIG. 8), but the Monte Carlo estimator for transmittance adds additional variance. In some cases then, it is preferred to use analytic transmittance in the case of homogeneous media.
Specific Constructions
In this section, we discuss several efficient implementations of our theory, demonstrating its flexibility and generality. First, we introduce our most general implementation, which is a CPU-GPU hybrid capable of rendering arbitrary surface and volumetric shading effects, including complex paths with multiple reflective, refractive and volumetric interactions in homogeneous or heterogeneous media. We also present two GPU-only implementations: a GPU ray-tracer capable of supporting general lighting effects, and a rasterization-only implementation that uses a custom extension of shadow-mapping to accelerate beam tracing. Both the hybrid and rasterization demos exploit a reformulation of the beam radiance estimate as a splatting operation, described in the next section.
Hybrid Beam Splatting and Ray-Tracing Renderer
A more general renderer combines a CPU ray tracer with a GPU rasterizer, possibly along with a GPU that can handle general-purpose GPU computation. The CPU ray tracer handles the photon shooting process. For radiance estimation, the renderer can decompose the light paths into ones that can be easily handled using GPU-accelerated rasterization, and handle all other light paths with the CPU ray tracer. The renderer can rasterize all photon beams that are directly visible by the camera. The CPU ray tracer then handles the remaining light paths, such as those visible only via reflections/refractions off of objects.
Note that Equation 3 has a simple geometric interpretation, as illustrated in FIG. 3, namely that each beam is an axial-billboard facing the camera. As in the standard photon beams approach, the CPU ray tracer computes ray-billboard intersections with this representation. However, for directly-visible beams, Equation 3 can be reformulated as a splatting operation amenable to GPU rasterization and thus GPU instructions can be generated.
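A CPU-side sketch of constructing such an axial billboard is shown below; the minimal vector helpers are included only to keep the example self-contained, and a real renderer would use its own math library.

#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v)
{
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return mul(v, 1.0f / len);
}

// Build the four corners of an axial billboard: a quad containing the beam's
// axis, widened to the kernel radius, and oriented to face the camera.
// (Degenerate when the beam points directly at the camera; a production
// renderer would guard that case.)
void beamBillboard(Vec3 start, Vec3 end, float radius, Vec3 cameraPos, Vec3 quad[4])
{
    const Vec3 axis = normalize(sub(end, start));
    const Vec3 toCam = normalize(sub(cameraPos, start));
    const Vec3 side = mul(normalize(cross(axis, toCam)), radius);
    quad[0] = sub(start, side);
    quad[1] = add(start, side);
    quad[2] = add(end, side);
    quad[3] = sub(end, side);
}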
In one implementation of the hybrid approach, C++ and OpenGL are used. In that implementation, after CPU beam shooting, the photon beam billboard quad geometry is generated for every stored beam on the CPU. This geometry is rasterized with GPU blending enabled and a simple pixel shader evaluates Equation 3 for every pixel under the support of the beam kernel on the GPU. The renderer also culls the beam quads against the remaining scene geometry to avoid computing radiance from occluded beams. To handle stochastic effects such as anti-aliasing and depth-of-field, the renderer can apply a Gaussian jitter to the camera matrix in each pass. The CPU component handles all other light paths using Monte Carlo ray tracing with a single path per pixel per pass.
For homogeneous media, the fragment shader evaluates Equation 3 using two exponentials for the transmittance. It can use several layers of simplex noise for heterogeneous media, and follow the approach derived above for progressive deep shadow maps. For transmittance along a beam, it computes and stores a fixed number, nb, of random propagation distances along each beam using Woodcock tracking (in practice, nb is usually between 4 and 16). Since transmittance is constant between these distances, we can split each beam into nb quads before rasterizing and assign the appropriate transmittance to each segment using Equation 22.
For transmittance towards the camera, we construct a progressive deep shadow map on the GPU using Woodcock tracking. This is updated before each pass, and accessed by the beam fragment shader to evaluate transmittance to the camera. We implement this by rendering to an off-screen render target and packing four propagation distances per pixel into a single RGBA output. We can use the same random sequence to compute Woodcock tracking for each pixel within a single pass. This replaces high-frequency noise with coherent banding while remaining unbiased across many passes. This also improves performance slightly due to more coherent branching behavior in the fragment shader. Some implementations can support up to nc=12 distances per pixel (using three render targets). However, it may be that using a single RGBA texture is a good performance-quality compromise.
Raytracing on the GPU
In another implementation, the OptiX GPU ray tracing API described by [Parker et al. 2010] is used. That OptiX renderer implements two kernels: one for photon beam shooting, and one for eye ray tracing and progressive accumulation. The renderer shoots and stores photon beams, in parallel, on the GPU. The shading kernel traces against all scene geometry and photon beams, each stored in their own BVH, with volumetric shading computed using Equation 3 at each beam intersection.
Augmented Shadow Mapping for Beam Shooting
In another implementation, a real-time GPU renderer uses only OpenGL rasterization in scenes with a limited number of specular bounces. Shadow mapping is extended to trace and generate beam quads that are visualized with GPU rasterization as above. [McGuire and Luebke 2009] also used shadow maps, but that approach was limited to photon splatting on surfaces. In this renderer, beams are generated and splatted, possibly exploiting the progressive framework to obtain convergent results.
A light-space projection transform (as in standard shadow mapping) can be used to rasterize the scene from the light's viewpoint. Instead of recording depth for each shadow map texel, each texel instead produces a photon beam. At the hit-point, the renderer computes the origin and direction of the central beam as well as the auxiliary differential rays. The differential ray intersection points are computed using differential properties stored at the scene geometry as vertex attributes, interpolated during rasterization. Depending on whether the light is inside or outside a participating medium, beam directions are reflected and/or refracted at the scene geometry's interface. This entire process can be implemented in a simple pixel shader that outputs to multiple render-targets. Depending on the number of available render targets, several (reflected, refracted, or light) beams can be generated per render pass.
The render target outputs can be bound to vertex buffers to render points at the beam origins. The remainder of the beam data is passed as vertex attributes to the point geometry, and a geometry shader converts each point into an axial billboard. These quads can then be rendered using the same homogeneous shader as the hybrid example described above. To ensure convergence, the shadow map grid can be jittered to produce a different set of beams each pass. This end-to-end rendering procedure can be carried out entirely with GPU rasterization, and can render photon beams that emanate from the light source as well as those due to a single surface reflection/refraction.
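A CPU-side C++ sketch of the per-texel beam generation with jittering follows; in the implementation described above this logic runs in shaders, and the helper traceTexel( ) is a hypothetical stand-in for the light-space projection and interface interaction.

#include <random>
#include <vector>

struct BeamSeed { float origin[3]; float dir[3]; };

// Assumed helper: traces a jittered light-space texel into the scene and
// returns the beam emitted at the first surface hit (reflected/refracted).
BeamSeed traceTexel(float u, float v);

// One beam per texel of a gridRes x gridRes shadow map. Jittering within each
// texel yields a different beam set every pass, as required for convergence.
std::vector<BeamSeed> generateShadowMapBeams(int gridRes, std::mt19937& rng)
{
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    std::vector<BeamSeed> beams;
    beams.reserve(gridRes * gridRes);
    for (int y = 0; y < gridRes; ++y)
        for (int x = 0; x < gridRes; ++x)
            beams.push_back(traceTexel((x + jitter(rng)) / gridRes,
                                       (y + jitter(rng)) / gridRes));
    return beams;
}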
Results
A hybrid implementation might use a 12-core 2.66 GHz Intel Xeon™ machine with 12 GB of RAM and an ATI Radeon HD 5770. The examples of FIG. 1 were generated in that manner, in both homogeneous and heterogeneous media, including zoomed insets of the media illumination showing the progressive refinement of the process. The scenes are rendered at 1280×720, and they include depth-of-field and antialiasing. The lights in these scenes are all modeled realistically with light sources inside reflective/refractive fixtures. Illumination encounters several specular bounces before arriving at surfaces or media, making these scenes impractical for path tracing. PPM can be used for surface shading, while progressive photon beams (PPB) are used to get performance and quality on the media scattering.
The Disco scene of FIG. 1 contains a mirror disco ball illuminated by six lights inside realistic Fresnel-lens casings. Each Fresnel light has a complex emission distribution due to refraction, and the reflections off of the faceted sphere produce intricate volume caustics. In an implementation being tested, the media radiance was rendered in three minutes in homogeneous media and 5.7 minutes in heterogeneous media. The surface caustics on the wall of the scene require another 7.5 minutes. The Flashlights scene renders in 8.0 minutes and 10.8 minutes respectively using 2.1 M beams (diffuse shading takes an additional 124 minutes). These results show that heterogeneous media incurs only a marginal performance penalty over homogeneous media using photon beams.
Beam storage includes start and end points (2×3 floats), differential rays (2×2×3 floats), and power (3 floats). A scene-dependent acceleration structure might also be necessary, and even for a single bounding box per beam, this is 2×3 floats (it can use a BVH and implicitly split beams as described in [Jarosz, et al. 2011]). The implementation need not be optimized for memory usage with the progressive approach, but even with careful tuning this would likely be above 100 bytes per beam. Thus, even in simple scenes, beam storage can quickly exceed available memory. A scene with intricate refractions might require over 50 M beams for high-quality results and that would exceed 5 GB of memory even with the conservative 100 bytes/beam estimate. Using the progressive approach, this is not a problem.
Our beams use adaptive kernels with ray differentials, which may allow for higher quality results using fewer beams. Also, rasterization is used for large portions of the illumination, which improves performance.
Shadow Mapping Implementation Results
In one test implementation, for interactive and accurate lighting design, we used a 64 by 64 shadow map, generating 4K beams, and rendered progressive passes in less than two milliseconds per frame for the ocean geometry of FIG. 9 with 13 M triangles. By jittering the perspective projection, antialiasing and depth-of-field effects can be incorporated. Since every progressive pass reduces the beam width, higher passes render significantly faster. FIG. 9 illustrates the OCEAN scene, where the viewer sees light beams refracted through the ocean surface and scattering in the ocean's media. In the test implementation, the progressive rasterization converges in less than a second. It can be run in real time using GPU rasterization with 4K beams at around 600 FPS (less than 2 ms per pass). The image after 20 passes (top) renders at around 30 FPS (33 ms). The high-quality result renders in less than a second (bottom, 450 passes).
Hardware Example
FIG. 10 is a block diagram of hardware that might be used to implement a renderer. The renderer can use a dedicated computer system that only renders, but might also be part of a computer system that performs other actions, such as executing a real-time game or other experience with rendering images being one part of the operation.
Rendering system 800 is illustrated including a processing unit 820 coupled to one or more display devices 810, which might be used to display the intermediate images or accumulated images or final images, as well as allow for interactive specification of scene elements and/or rendering parameters. A variety of user input devices, 830 and 840, may be provided as inputs. In one embodiment, a data interface 850 may also be provided.
In various embodiments, user input device 830 includes wired-connections such as a computer-type keyboard, a computer mouse, a trackball, a track pad, a joystick, drawing tablet, microphone, and the like; and user input device 840 includes wireless connections such as wireless remote controls, wireless keyboards, wireless mice, and the like. In the various embodiments, user input devices 830-840 typically allow a user to select objects, icons, text and the like that graphically appear on a display device (e.g., 810) via a command such as a click of a button or the like. Other embodiments of user input devices include front-panel buttons on processing unit 820.
Embodiments of data interfaces 850 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, FireWire interface, USB interface, and the like. In various embodiments, data interfaces 850 may be coupled to a computer network, to a FireWire bus, a satellite cable connection, an optical cable, a wired-cable connection, or the like.
Processing unit 820 might include one or more CPUs and one or more GPUs. In various embodiments, processing unit 820 may include familiar computer-type components such as a processor 860, and memory storage devices, such as a random access memory (RAM) 870, disk drives 880, and a system bus 890 interconnecting the above components. The CPU(s) and/or GPU(s) can execute instructions representative of process steps described herein.
RAM 870 and hard-disk drive 880 are examples of tangible media configured to store data such as images, scene data, instructions and the like. Other types of tangible media include removable hard disks, optical storage media such as CD-ROMs, DVD-ROMs, and bar codes, semiconductor memories such as flash memories, read-only memories (ROMs), battery-backed volatile memories, networked storage devices, and the like (825).
FIG. 10 is representative of a processing unit 820 capable of rendering or otherwise generating images. It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention. For example, processing unit 820 may be a personal computer, handheld computer, server farm, or similar hardware. In still other embodiments, the techniques described herein may be implemented upon a chip or an auxiliary processing board.
Derivations
While possibly not necessary for a full understanding of the inventions described above, a number of calculations and equations are provided in this section. Some of these are performed by the apparatus doing the rendering, while others are performed only here to show some analysis or result that is expected to occur by operation of the renderer.
Derivation A—Variance of Beam Radiance Estimation
The variance of the error in Equation 8 is as shown by Equation A1.
$$\mathrm{Var}[\varepsilon(x,\vec{w},r)] = \mathrm{Var}[k_r(U)\,\gamma - L(\vec{w})] = \mathrm{Var}[k_r(U)]\left(\mathrm{Var}[\gamma] + E[\gamma]^2\right) + \mathrm{Var}[\gamma]\,E[k_r(U)]^2 \qquad \text{(Eqn. A1)}$$
Using the definition of variance, we have the result of Equation A2, where Ω is the kernel's support.

$$\mathrm{Var}[k_r(U)] = \int_\Omega k_r(\xi)^2\,p_U^{\vec{w}}(\xi)\,d\xi - \left[\int_\Omega k_r(\xi)\,p_U^{\vec{w}}(\xi)\,d\xi\right]^2 \qquad \text{(Eqn. A2)}$$
We assume that locally (within Ω) the distance between the beam and the ray is a uniformly distributed random variable. Hence, the probability density function within the kernel support is constant and equal to the probability density of an imaginary photon landing at a distance of 0 from our view ray $\vec{w}$: $p_U^{\vec{w}}(\xi) = p_U^{\vec{w}}(0)$, and we get the result of Equation A3.
$$\begin{aligned}
\mathrm{Var}[k_r(U)] &= p_U^{\vec{w}}(0)\int_\Omega k_r(\xi)^2\,d\xi - \left[p_U^{\vec{w}}(0)\int_\Omega k_r(\xi)\,d\xi\right]^2 \\
&= p_U^{\vec{w}}(0)\int_\Omega k_r(\xi)^2\,d\xi - p_U^{\vec{w}}(0)^2 \\
&= p_U^{\vec{w}}(0)\left[\int_\Omega k_r(\xi)^2\,d\xi - p_U^{\vec{w}}(0)\right] \\
&= p_U^{\vec{w}}(0)\left[\frac{1}{r}\int k(\psi)^2\,d\psi - p_U^{\vec{w}}(0)\right] \qquad \text{(Eqn. A3)}
\end{aligned}$$
The last step replaces k_r with an equivalently-shaped unit kernel, k. Inserting into Equation A1, and noting that $E[k_r(U)] = p_U^{\vec{w}}(0)$ under the uniform density assumption, we have the result of Equation A4, where the second line assumes the kernel overlaps only a small portion of the scene and hence $p_U^{\vec{w}}(0)^2$ is negligible. The term remaining in square brackets is just a constant associated with the kernel, which we denote $C_1$.
$$\begin{aligned}
\mathrm{Var}[\varepsilon(x,\vec{w},r)] &= p_U^{\vec{w}}(0)\left[\frac{1}{r}\int k(\psi)^2\,d\psi - p_U^{\vec{w}}(0)\right]\left(\mathrm{Var}[\gamma] + E[\gamma]^2\right) + \mathrm{Var}[\gamma]\,p_U^{\vec{w}}(0)^2 \\
&\approx \left(\mathrm{Var}[\gamma] + E[\gamma]^2\right)\frac{p_U^{\vec{w}}(0)}{r}\left[\int k(\psi)^2\,d\psi\right] \\
&= \left(\mathrm{Var}[\gamma] + E[\gamma]^2\right)\frac{p_U^{\vec{w}}(0)}{r}\,C_1 \qquad \text{(Eqn. A4)}
\end{aligned}$$
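As a quick numerical sanity check of the 1/r scaling in Equations A3 and A4 (an editorial illustration, not part of the patent text; it assumes a locally uniform density on [−1, 1] and an Epanechnikov unit kernel, for which $C_1 = \int k(\psi)^2\,d\psi = 0.6$):

```python
import numpy as np

rng = np.random.default_rng(0)

def k(psi):
    """Epanechnikov unit kernel on [-1, 1]."""
    return 0.75 * (1.0 - psi**2) * (np.abs(psi) <= 1.0)

C1 = 0.6   # integral of k(psi)^2 over its support
p0 = 0.5   # uniform density p_U(0) of U on [-1, 1]

for r in (0.2, 0.1, 0.05):
    U = rng.uniform(-1.0, 1.0, 2_000_000)  # beam-to-ray distances
    kr = k(U / r) / r                      # scaled kernel k_r(xi) = k(xi/r)/r
    predicted = p0 * (C1 / r - p0)         # Eqn. A3 with p_U(0) = p0
    print(f"r={r:4.2f}  empirical Var={kr.var():7.3f}  predicted={predicted:7.3f}")
```

Halving r roughly doubles the variance of a single-beam estimate, which is the behavior Equation A4 captures.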
Derivation B—Bias of Beam Radiance Estimation
The expected error of our single-photon beam radiance estimate is as shown by Equation B1.
$$E[\varepsilon(x,\vec{w},r)] = E[k_r(U)\,\gamma - L(\vec{w})] = E[\gamma]\,E[k_r(U)] - L(\vec{w}) \qquad \text{(Eqn. B1)}$$
In the variance analysis in Derivation A above, we assumed locally uniform densities. This is too restrictive here, since it leads to zero expected error (bias). To analyze the expected error, we instead use a Taylor expansion of the density around ξ: $p_U^{\vec{w}}(\xi) = p_U^{\vec{w}}(0) + \xi\,\nabla p_U^{\vec{w}}(0) + O(\xi^2)$, and insert this into the integral for the expected value of the kernel, yielding the quantity shown in Equation B2, where we have used the same change of variable to a canonical kernel k.
$$\begin{aligned}
E[k_r(U)] &= \frac{1}{r}\int k\!\left(\frac{\xi}{r}\right) p_U^{\vec{w}}(\xi)\,d\xi \\
&= \frac{1}{r}\int k\!\left(\frac{\xi}{r}\right)\left(p_U^{\vec{w}}(0) + \xi\,\nabla p_U^{\vec{w}}(0) + O(\xi^2)\right) d\xi \qquad \text{(Eqn. B2)}
\end{aligned}$$
If we assume a kernel with a vanishing first moment, then the middle term drops out, resulting in Equation B3, for some constant $C_2$.
$$\begin{aligned}
E[k_r(U)] &= p_U^{\vec{w}}(0) + \frac{1}{r}\int k\!\left(\frac{\xi}{r}\right) O(\xi^2)\,d\xi \\
&= p_U^{\vec{w}}(0) + \frac{1}{r}\int k(\psi)\,r\,O(\psi^2)\,r\,d\psi \\
&= p_U^{\vec{w}}(0) + r \underbrace{\int k(\psi)\,O(\psi^2)\,d\psi}_{C_2} = p_U^{\vec{w}}(0) + r\,C_2 \qquad \text{(Eqn. B3)}
\end{aligned}$$
Combining with Equation B1, and noting that an infinitesimal kernel yields the exact radiance as in Equation B4, we obtain the result of Equation B5.
$$L(\vec{w}) = E[\gamma]\,E[\delta(U)] = E[\gamma]\,p_U^{\vec{w}}(0) \qquad \text{(Eqn. B4)}$$
$$E[\varepsilon(x,\vec{w},r)] = E[\gamma]\left(p_U^{\vec{w}}(0) + r\,C_2\right) - E[\gamma]\,p_U^{\vec{w}}(0) = r\,E[\gamma]\,C_2 \qquad \text{(Eqn. B5)}$$
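Taken together, Equations A4 and B5 expose the fixed-radius trade-off. Writing the mean squared error of a single pass as variance plus squared bias, with $K_1 = (\mathrm{Var}[\gamma]+E[\gamma]^2)\,p_U^{\vec{w}}(0)\,C_1$ (an editorial summary of the two results above, not additional patent text):

$$\mathrm{MSE}(r) = \frac{K_1}{r} + \left(r\,E[\gamma]\,C_2\right)^2,$$

which is minimized at the strictly positive radius $r^{\ast} = \left(K_1 / (2\,E[\gamma]^2 C_2^2)\right)^{1/3}$. A single pass with a fixed radius therefore always retains residual bias, which is why the progressive scheme reduces the radii across passes, as analyzed next.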
Derivation C—Variance Using Many Photons
For M photons in the photon beams estimate, each with its own kernel radius, the variance is as shown in Equation C1, where we use the harmonic mean of the radii in the last step, i.e.,

$$\frac{1}{r_H} = \frac{1}{M}\sum_{j=1}^{M}\frac{1}{r_j}.$$
$$\begin{aligned}
\mathrm{Var}[\varepsilon(x,\vec{w},r_1 \ldots r_M)] &= \mathrm{Var}\!\left[\frac{1}{M}\sum_{j=1}^{M}\varepsilon(x,\vec{w},r_j)\right] = \frac{1}{M^2}\sum_{j=1}^{M}\mathrm{Var}[\varepsilon(x,\vec{w},r_j)] \\
&= \frac{1}{M^2}\sum_{j=1}^{M}\left[\left(\mathrm{Var}[\gamma] + E[\gamma]^2\right)\frac{p_U^{\vec{w}}(0)\,C_1}{r_j}\right] \\
&= \left(\mathrm{Var}[\gamma] + E[\gamma]^2\right)\frac{p_U^{\vec{w}}(0)\,C_1}{r_H\,M} \qquad \text{(Eqn. C1)}
\end{aligned}$$
Derivation D—Expected Error Using Many Photons
We apply a similar procedure for the expected error using many photon beams. We have Equation D1, where $r_A$ denotes the arithmetic mean of the beam radii.
$$E[\varepsilon(x,\vec{w},r_1 \ldots r_M)] = E\!\left[\frac{1}{M}\sum_{j=1}^{M}\varepsilon(x,\vec{w},r_j)\right] = \frac{1}{M}\sum_{j=1}^{M}E[\varepsilon(x,\vec{w},r_j)] = \frac{E[\gamma]\,C_2}{M}\sum_{j=1}^{M} r_j = r_A\,E[\gamma]\,C_2 \qquad \text{(Eqn. D1)}$$
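To see why these two means matter (a sketch of the standard convergence argument under the radius sequence $R_{i+1}/R_i = (i+\alpha)/(i+1)$ recited in claim 5 below; this paragraph is editorial, not patent text), note that the per-pass radius telescopes into a ratio of gamma functions:

$$r_i = r_1 \prod_{j=1}^{i-1}\frac{j+\alpha}{j+1} = r_1\,\frac{\Gamma(i+\alpha)}{\Gamma(1+\alpha)\,\Gamma(i+1)} = O\!\left(i^{\alpha-1}\right).$$

For 0 < α < 1 the arithmetic mean thus satisfies $r_A = O(M^{\alpha-1}) \to 0$, driving the expected error of Equation D1 to zero, while $1/r_j = O(j^{1-\alpha})$ gives $1/(r_H M) = O(M^{-\alpha}) \to 0$, driving the variance of Equation C1 to zero. The constant α controls how quickly variance decays relative to bias.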
Derivation E—Unbiased Progressive Deep Shadow Maps
Progressive deep shadow maps simply count the number of stored propagation distances, $d_j$, that travel further than s in each pass using Equation 22. If $p(d_j)$ denotes the probability density function ("PDF") of the propagation distance $d_j$, we have the result of Equation E1, where $\tau(d) = \int_0^d \sigma_t(x + t\vec{w})\,dt$ denotes the optical depth and $\Theta$ denotes the Heaviside step function that performs the counting.
$$E\!\left[\frac{1}{n}\sum_{j=1}^{n}\Theta(d_j - s)\right] = \int_s^{\infty} p(d)\,\mathrm{d}d = \int_s^{\infty}\sigma_t(d)\,e^{-\tau(d)}\,\mathrm{d}d = e^{-\tau(s)} \qquad \text{(Eqn. E1)}$$
The final result, $e^{-\tau(s)}$, is simply the definition of transmittance, confirming that averaging progressive deep shadow maps produces an unbiased estimator for transmittance.
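For a homogeneous medium, this unbiasedness is easy to verify numerically (a minimal sketch, not the patent's implementation; free paths are sampled analytically here rather than with Woodcock tracking):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_t = 2.0                    # extinction coefficient, homogeneous medium
n = 1_000_000                    # number of stored propagation distances

# Free paths with pdf p(d) = sigma_t * exp(-sigma_t * d).
d = rng.exponential(1.0 / sigma_t, n)

for s in (0.25, 0.5, 1.0, 2.0):
    estimate = np.mean(d > s)    # fraction of paths that travel past s
    exact = np.exp(-sigma_t * s) # transmittance e^{-tau(s)}
    print(f"s={s:4.2f}  estimate={estimate:.4f}  exact={exact:.4f}")
```

The counting estimator matches $e^{-\tau(s)}$ to within Monte Carlo noise at every depth, as Equation E1 predicts.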
Additional Variations
Even with progressive deep shadow maps, rendering heterogeneous media is slower than rendering the homogeneous form. The per-pass performance loss is primarily due to multiple evaluations of Perlin noise, which is quite expensive on both the GPU and the CPU. Accelerated variants of Woodcock tracking might be used to reduce the number of density evaluations needed for free-path sampling.
In addition to increased cost per pass, the Monte Carlo transmittance estimator increases variance per pass. The variance of the transmittance estimate increases with distance (where fewer random samples propagate). More precisely, at a distance where transmittance is 1%, only 1% of the beams contribute, which results in higher variance. The worst-case scenario is if both the camera and light source are very far away from the subject (or, conversely, if the medium is optically thick) since most of the beams terminate before reaching the subject, and most of the deep shadow map distances to the camera result in zero contribution. An unbiased transmittance estimator which falls off to zero in a piecewise-continuous, and not piecewise-constant, fashion might be used if this is an issue. Markov Chain Monte Carlo or adaptive sampling techniques could be used to reduce variance.
In some implementations, adaptive techniques might be used to choose α to optimize convergence.
CONCLUSION
As explained herein, progressive photon beam processing can render complex illumination in participating media. Its main advantage is that it converges to the gold standard of rendering, i.e., unbiased, noise-free solutions of the radiative transfer equation and the rendering equation, while being robust to complex light paths, including specular-diffuse-specular (SDS) and specular-media-specular (SMS) subpaths. Such processing can be combined with photon beams in a simple and elegant way: in each iteration of a progressive process, a global scaling factor is applied to the beam radii and reduced each pass.
In some embodiments, the α parameter that controls the trade-off between reducing variance and reducing bias is set to a constant. Some embodiments may adaptively determine this parameter to optimize convergence.
While heterogeneous media can be incorporated into this approach in a straightforward way using ray marching along the beams, extensions of this brute-force approach may be more computationally efficient. Some embodiments assume that photon beams are sampled using independent, identically distributed random variables. Other embodiments may include adaptive algorithms such as Markov Chain Monte Carlo and adaptive importance sampling.
Further embodiments can be envisioned by one of ordinary skill in the art after reading the attached documents. For example, photon beams can be sampled in various manners. In other embodiments, combinations or sub-combinations of the above disclosed embodiments can be advantageously made. The block diagrams of the architecture and flow charts are grouped for ease of understanding. However, it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.

Claims (20)

What is claimed is:
1. A computer-implemented method of rendering a resulting image of a scene from a plurality of intermediate images, wherein the scene comprises a volumetric medium, and wherein the scene, the volumetric medium and lighting of such volumetric medium are represented by electronically readable representative data, the method comprising:
(a) performing a plurality of photon beam simulations, each photon beam simulation resulting in a representation of a plurality of photon beams in the scene, wherein performance of a photon beam simulation includes selecting a radius for each of the plurality of photon beams;
(b) for each photon beam simulation, rendering the plurality of photon beams of the photon beam simulation to produce an intermediate image, wherein the rendering comprises computing, from the plurality of photon beams, an estimated radiance for each of a plurality of rays with respect to a camera viewpoint, and wherein computing an estimated radiance of a ray from the plurality of photon beams includes calculating a volumetric radiance associated with an intersection of the ray with a photon beam;
(c) accumulating corresponding pixel values from at least two produced intermediate images to generate the resulting image of the scene.
2. The computer-implemented method of claim 1, wherein the radius selected for each photon beam in a photon beam simulation is equal to a single radius value corresponding to that photon beam simulation.
3. The computer-implemented method of claim 2, wherein each photon beam simulation uses a respective single radius value.
4. The computer-implemented method of claim 3, wherein the respective single radius value of a first photon beam simulation is multiplied by a positive global scaling factor less than 1 to determine the respective single radius value of a second photon beam simulation.
5. The computer-implemented method of claim 4, wherein the respective global radius scaling factor is determined by
$$R_{i+1}/R_i = \frac{i+\alpha}{i+1},$$
i is a first index of the first photon beam simulation, i+1 is a second index of the second photon beam simulation, Ri is the single radius value of the first photon beam simulation, Ri+1 is the single radius value of the second photon beam simulation, and α is a constant with value greater than 0 and less than 1.
6. The computer-implemented method of claim 1, wherein accumulating comprises pixel-by-pixel averaging.
7. The computer-implemented method of claim 1, wherein performing a photon beam simulation comprises performing a shadow-mapping operation, a rasterization operation, a ray-marching operation, and/or a ray-tracing operation.
8. The computer-implemented method of claim 1, wherein performing a photon beam simulation comprises calculating an interaction of the plurality of photon beams with geometry and participating media in the scene.
9. The computer-implemented method of claim 1, wherein rendering the plurality of photon beams comprises performing a splatting operation, a ray-tracing operation, a ray-marching operation, and/or a rasterization operation.
10. The computer-implemented method of claim 1, wherein rendering the plurality of photon beams comprises determining a contribution of the plurality of photon beams to illumination of one or more pixels in the scene based on a progressive deep shadow map.
11. A computer-program product embodied on a non-transitory computer readable medium and comprising instructions that when executed causes one or more processors to perform a method of rendering a resulting image of a scene from a plurality of intermediate images, wherein the scene comprises a volumetric medium, and wherein the scene, the volumetric medium and lighting of such volumetric medium are represented by electronically readable representative data, the method comprising:
(a) performing a plurality of photon beam simulations, each photon beam simulation resulting in a representation of a plurality of photon beams in the scene, wherein performance of the photon beam simulation includes selecting a radius for each of the plurality of photon beams;
(b) for each photon beam simulation, rendering the plurality of photon beams of the photon beam simulation to produce an intermediate image, wherein the rendering comprises computing, from the plurality of photon beams, an estimated radiance for each of a plurality of rays with respect to a camera viewpoint, and wherein computing an estimated radiance of a ray from the plurality of photon beams includes calculating a volumetric radiance associated with an intersection of the ray with a photon beam;
(c) accumulating corresponding pixel values from at least two produced intermediate images to generate the resulting image of the scene.
12. The computer-program product of claim 11, wherein the radius selected for each photon beam in a photon beam simulation is equal to a single radius value corresponding to that photon beam simulation.
13. The computer-program product of claim 12, wherein each photon beam simulation uses a respective single radius value.
14. The computer-program product of claim 13, wherein the respective single radius value of a first photon beam simulation is multiplied by a positive global scaling factor less than 1 to determine the respective single radius value of a second photon beam simulation.
15. The computer-program product of claim 14, wherein the respective global radius scaling factor is determined by
$$R_{i+1}/R_i = \frac{i+\alpha}{i+1},$$
i is a first index of the first photon beam simulation, i+1 is a second index of the second photon beam simulation, Ri is the single radius value of the first photon beam simulation, Ri+1 is the single radius value of the second photon beam simulation, and α is a constant with value greater than 0 and less than 1.
16. The computer-program product of claim 11, wherein accumulating comprises pixel-by-pixel averaging.
17. The computer-program product of claim 11, wherein performing a photon beam simulation comprises performing a shadow-mapping operation, a rasterization operation, a ray-marching operation, and/or a ray-tracing operation.
18. The computer-program product of claim 11, wherein performing a photon beam simulation comprises calculating an interaction of the plurality of photon beams with geometry and participating media in the scene.
19. The computer-program product of claim 11, wherein rendering the plurality of photon beams comprises performing a splatting operation, a ray-tracing operation, a ray-marching operation, and/or a rasterization operation.
20. The computer-program product of claim 11, wherein rendering the plurality of photon beams comprises determining a contribution of the plurality of photon beams to illumination of one or more pixels in the scene based on a progressive deep shadow map.
US14/164,358 2011-09-16 2014-01-27 Image processing using progressive generation of intermediate images using photon beams of varying parameters Expired - Fee Related US9245377B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/164,358 US9245377B1 (en) 2011-09-16 2014-01-27 Image processing using progressive generation of intermediate images using photon beams of varying parameters

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/235,299 US8638331B1 (en) 2011-09-16 2011-09-16 Image processing using iterative generation of intermediate images using photon beams of varying parameters
US14/164,358 US9245377B1 (en) 2011-09-16 2014-01-27 Image processing using progressive generation of intermediate images using photon beams of varying parameters

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/235,299 Continuation US8638331B1 (en) 2011-06-10 2011-09-16 Image processing using iterative generation of intermediate images using photon beams of varying parameters

Publications (1)

Publication Number Publication Date
US9245377B1 true US9245377B1 (en) 2016-01-26

Family

ID=49957951

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/235,299 Active 2032-01-20 US8638331B1 (en) 2011-06-10 2011-09-16 Image processing using iterative generation of intermediate images using photon beams of varying parameters
US14/164,358 Expired - Fee Related US9245377B1 (en) 2011-09-16 2014-01-27 Image processing using progressive generation of intermediate images using photon beams of varying parameters

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/235,299 Active 2032-01-20 US8638331B1 (en) 2011-06-10 2011-09-16 Image processing using iterative generation of intermediate images using photon beams of varying parameters

Country Status (1)

Country Link
US (2) US8638331B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150035831A1 (en) * 2013-08-02 2015-02-05 Disney Enterprises, Inc. Methods and systems of joint path importance sampling
US20160042553A1 (en) * 2014-08-07 2016-02-11 Pixar Generating a Volumetric Projection for an Object
US20220051786A1 (en) * 2017-08-31 2022-02-17 Gmeditec Co., Ltd. Medical image processing apparatus and medical image processing method which are for medical navigation device
US11494966B2 (en) * 2020-01-07 2022-11-08 Disney Enterprises, Inc. Interactive editing of virtual three-dimensional scenes

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130100135A1 (en) * 2010-07-01 2013-04-25 Thomson Licensing Method of estimating diffusion of light
US9070208B2 (en) 2011-05-27 2015-06-30 Lucasfilm Entertainment Company Ltd. Accelerated subsurface scattering determination for rendering 3D objects
US8638331B1 (en) * 2011-09-16 2014-01-28 Disney Enterprises, Inc. Image processing using iterative generation of intermediate images using photon beams of varying parameters
US9013484B1 (en) * 2012-06-01 2015-04-21 Disney Enterprises, Inc. Progressive expectation-maximization for hierarchical volumetric photon mapping
GB2513698B (en) * 2013-03-15 2017-01-11 Imagination Tech Ltd Rendering with point sampling and pre-computed light transport information
US9953457B2 (en) * 2013-04-22 2018-04-24 Nvidia Corporation System, method, and computer program product for performing path space filtering
US10198856B2 (en) * 2013-11-11 2019-02-05 Oxide Interactive, LLC Method and system of anti-aliasing shading decoupled from rasterization
US10198788B2 (en) * 2013-11-11 2019-02-05 Oxide Interactive Llc Method and system of temporally asynchronous shading decoupled from rasterization
US9607426B1 (en) 2013-12-20 2017-03-28 Imagination Technologies Limited Asynchronous and concurrent ray tracing and rasterization rendering processes
US9697640B2 (en) 2014-04-21 2017-07-04 Qualcomm Incorporated Start node determination for tree traversal in ray tracing applications
US10235338B2 (en) * 2014-09-04 2019-03-19 Nvidia Corporation Short stack traversal of tree data structures
JP6393153B2 (en) * 2014-10-31 2018-09-19 株式会社スクウェア・エニックス Program, recording medium, luminance calculation device, and luminance calculation method
EP3057067B1 (en) * 2015-02-16 2017-08-23 Thomson Licensing Device and method for estimating a glossy part of radiation
CN105118083B (en) * 2015-08-11 2018-03-16 浙江大学 A kind of Photon Mapping method for drafting of unbiased
US9818221B2 (en) 2016-02-25 2017-11-14 Qualcomm Incorporated Start node determination for tree traversal for shadow rays in graphics processing
US9905054B2 (en) * 2016-06-09 2018-02-27 Adobe Systems Incorporated Controlling patch usage in image synthesis
US10269172B2 (en) * 2016-10-24 2019-04-23 Disney Enterprises, Inc. Computationally efficient volume rendering in computer-generated graphics
US10943390B2 (en) * 2017-11-20 2021-03-09 Fovia, Inc. Gradient modulated shadow mapping
CN108961372B (en) * 2018-03-27 2022-10-14 北京大学 Progressive photon mapping method based on statistical model test
US11010963B2 (en) * 2018-04-16 2021-05-18 Nvidia Corporation Realism of scenes involving water surfaces during rendering
CN109509248B (en) * 2018-09-28 2023-07-18 北京大学 Photon mapping rendering method and system based on neural network
US11436783B2 (en) 2019-10-16 2022-09-06 Oxide Interactive, Inc. Method and system of decoupled object space shading

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8064726B1 (en) 2007-03-08 2011-11-22 Nvidia Corporation Apparatus and method for approximating a convolution function utilizing a sum of gaussian functions
US8638331B1 (en) * 2011-09-16 2014-01-28 Disney Enterprises, Inc. Image processing using iterative generation of intermediate images using photon beams of varying parameters

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8064726B1 (en) 2007-03-08 2011-11-22 Nvidia Corporation Apparatus and method for approximating a convolution function utilizing a sum of gaussian functions
US8638331B1 (en) * 2011-09-16 2014-01-28 Disney Enterprises, Inc. Image processing using iterative generation of intermediate images using photon beams of varying parameters

Non-Patent Citations (36)

* Cited by examiner, † Cited by third party
Title
Engelhardt, T., Novak, J., and Dachsbacher, C., "Instant multiple scattering for interactive rendering of heterogeneous participating media." Technical Report, Karlsruhe Institute of Technology, Dec. 8, 2010. 9 pages. Retrieved from https://cg.ibds.kit.edu/downloads/InstantMultipleScattering_TechReport08Dec2010.pdf on Aug. 20, 2013.
Hachisuka, T., and Jensen, H. W., "Stochastic progressive photon mapping." ACM Transactions on Graphics (TOG), Dec. 2009. 8 pages. Retrieved from https://cs.au.dk/~toshiya/sppm.pdf on Aug. 20, 2013.
Hachisuka, T., Ogaki, S., and Jensen, H. W., "Progressive photon mapping." ACM Transactions on Graphics (TOG), vol. 27, No. 5, ACM, 2008. 7 pages. Retrieved from https://cs.au.dk/~toshiya/ppm.pdf on Aug. 20, 2013.
Hachisuka, Toshiya, Shinji Ogaki, and Henrik Wann Jensen. "Progressive photon mapping." ACM Transactions on Graphics (TOG), vol. 27, No. 5, ACM, 2008. *
Hachisuka, Toshiya, Wojciech Jarosz, and Henrik Wann Jensen. "Stochastic Progressive Photon Mapping." ACM Transactions on Graphics (TOG), Dec. 2009. *
Havran, V., Bittner, J., Herzog, R., and Seidel, H.-P., "Ray maps for global illumination." Rendering Techniques, 2005. 14 pages. Retrieved from https://www.mpi-inf.mpg.de/~rherzog/Papers/raymapsEGSR05.pdf on Aug. 20, 2013.
Herzog, R., Havran, V., Kinuwaki, S., Myszkowski, K., and Seidel, H.-P., "Global illumination using photon ray splatting." Computer Graphics Forum, vol. 26, No. 3, Sep. 2007 (Blackwell Publishing Ltd.), 11 pages. Retrieved from https://www.mpi-inf.mpg.de/~rherzog/Papers/herzog07EG.pdf on Aug. 20, 2013.
Herzog, Robert, et al. "Global illumination using photon ray splatting." Computer Graphics Forum, vol. 26, No. 3, Blackwell Publishing Ltd, 2007. *
Hu, W., Dong, Z., Ihrke, I., Grosch, T., Yuan, G., and Seidel, H.-P., "Interactive volume caustics in single-scattering media." I3D, ACM, 2010. 9 pages. Retrieved from https://www.graphics.cornell.edu/~zd/download/I3DFinal/I3D2020_Sub.pdf on Aug. 20, 2013.
Jarosz, W., Nowrouzezahrai, D., Sadeghi, I., and Jensen, H. W., "A comprehensive theory of volumetric radiance estimation using photon points and beams." ACM Transactions on Graphics, 2011. 19 pages. Retrieved from https://zurich.disneyresearch.com/~wjarosz/publications/jarosz11comprehensive.pdf on Aug. 20, 2013.
Jarosz, W., Zwicker, M., and Jensen, H. W., "The beam radiance estimate for volumetric photon mapping." Computer Graphics Forum, vol. 27, No. 2, Apr. 2008. 10 pages. Retrieved from https://zurich.disneyresearch.com/~wjarosz/publications/jarosz08beam.pdf on Aug. 20, 2013.
Jarosz, Wojciech, Derek Nowrouzezahrai, Robert Thomas, Peter-Pike Sloan, and Matthias Zwicker, "Progressive Photon Beams", ACM SIGGRAPH Asia 2011, vol. 30, Issue 6, Article No. 181, ACM (Dec. 2011), 11 pages.
Jarosz, Wojciech, Matthias Zwicker, and Henrik Wann Jensen. "The beam radiance estimate for volumetric photon mapping." ACM SIGGRAPH 2008 classes, ACM, 2008. *
Jensen, H. W., and Christensen, P. H., "Efficient simulation of light transport in scenes with participating media using photon maps." Proceedings of SIGGRAPH, 1998. 10 pages. Retrieved from https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.118.6575&rep=rep1&type=pdf on Aug. 20, 2013.
Kajiya, J. T., "The rendering equation." Computer Graphics, Proceedings of SIGGRAPH 86, 1986. 8 pages. Retrieved from https://x86.cs.duke.edu/courses/cps124/compsci344/.../16...p143-kajiya.pdf on Aug. 20, 2013.
Knaus, C., and Zwicker, M., "Progressive photon mapping: A probabilistic approach." ACM Transactions on Graphics, 2011. 14 pages. Retrieved from https://www.cs.jhu.edu/~misha/ReadingSeminar/Papers/Knaus11.pdf on Aug. 20, 2013.
Kruger, J., Burger, K., and Westermann, R., "Interactive screen-space accurate photon tracing on GPUs." In Rendering Techniques, 2006. 12 pages. Retrieved from https://wwwcg.in.tum.de/research/research/publications/2006/interactive-screen-space-accurate-photon-tracing-on-gpus.html on Aug. 20, 2013.
LaFortune, E. P., and Willems, Y. D., "Bi-directional path tracing." Proceedings of Compugraphics, vol. 93, 1993. 8 pages.
LaFortune, E. P., and Willems, Y. D., "Rendering participating media with bidirectional path tracing." EG Rendering Workshop, 1996. 11 pages. Retrieved from https://luthuli.cs.uiuc.edu/~daf/courses/Rendering/Papers/lafortune96rendering.pdf on Aug. 20, 2013.
Lastra, M., Urena, C., Revelles, J., and Montes, R., "A particle-path based method for Monte-Carlo density estimation." EG Workshop on Rendering, 2002. 8 pages. Retrieved from https://lsi.ugr.es/~curena/inves/egrw02/lastra-egrw02.ps.gz on Aug. 20, 2013.
Liktor, G., and Dachsbacher, C., "Real-time volume caustics with adaptive beam tracing." Symposium on Interactive 3D Graphics and Games, ACM Press, New York, NY, 2011. 8 pages. Retrieved from https://cg.ibds.kit.edu/downloads/VolumeCaustics_Preprint.pdf on Aug. 20, 2013.
Lokovic, Tom, and Eric Veach. "Deep shadow maps." Proceedings of the 27th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co., 2000. *
Lokovic, Tom, and Eric Veach, "Deep shadow maps." Proceedings of the 27th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co., 2000.
McGuire, M., and Luebke, D., "Hardware-accelerated global illumination by image space photon mapping." HPG, ACM, 2009. 12 pages. Retrieved from https://graphics.cs.williams.edu/papers/PhotonHPG09/ISPM-HPG09.pdf on Aug. 20, 2013.
Parker, S. G., Bigler, J., Dietrich, A., Friedrich, H., Hoberock, J., Luebke, D., McAllister, D., McGuire, M., Morley, K., Robison, A., and Stich, M., "OptiX: A general purpose ray tracing engine." ACM Transactions on Graphics, Jul. 2010. Retrieved from https://graphics.cs.williams.edu/papers/OptiXSIGGRAPH10/Parker10OptiX.pdf on Aug. 20, 2013.
Pauly, M., Kollig, T., and Keller, A., "Metropolis light transport for participating media." Rendering Techniques, 2000. 13 pages. Retrieved from https://www.cse.ohio-state.edu/~parent/classes/782/Papers/PhotonMap/metropolis.pdf on Aug. 20, 2013.
Perlin, K., "Noise hardware." Real-Time Shading, ACM SIGGRAPH Course Notes, 2001. 26 pages. Retrieved from https://reality.sgiweb.org/olano/s2002c36/ch02.pdf on Aug. 20, 2013.
Raab, M., Seibert, D., and Keller, A., "Unbiased global illumination with participating media." Monte Carlo and Quasi-Monte Carlo Methods, 2006. 16 pages. Retrieved from https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/ugiwpm.pdf on Aug. 20, 2013.
Schjoth, L., Frisvad, J. R., Erleben, K., and Sporring, J., "Photon differentials." Graphite, ACM, New York, 2007. 8 pages. Retrieved from https://orbit.dtu.dk/fedora/objects/orbit:63065/datastreams/file_5504929/content on Aug. 20, 2013.
Sun, X., Zhou, K., Lin, S., and Guo, B., "Line space gathering for single scattering in large scenes." ACM Transactions on Graphics, 2010. 8 pages. Retrieved from https://www.kunzhou.net/2010/LSG.pdf on Aug. 20, 2013.
Szirmay-Kalos, L., Toth, B., and Magdics, M., "Free path sampling in high resolution inhomogeneous participating media." Computer Graphics Forum, 2011. 12 pages. Retrieved from https://sirkan.iit.bme.hu/~szirmay/woodcockperlin7.pdf on Aug. 20, 2013.
Veach, E., and Guibas, L. J., "Metropolis light transport." Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, 1997. 13 pages. Retrieved from https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.88.944&rep=rep1&type on Aug. 20, 2013.
Veach, E., and Guibas, L., "Bidirectional estimators for light transport." Fifth Eurographics Workshop on Rendering, 1994. 20 pages. Retrieved from https://www0.cs.ucl.ac.uk/research/vr/Projects/VLF/vlfpapers/multi-pass_hybrid/Veach_E_Bi-directional_Estimators_for_Light_Transport.pdf on Aug. 20, 2013.
Walter, B., Zhao, S., Holzschuch, N., and Bala, K., "Single scattering in refractive media with triangle mesh boundaries." ACM Transactions on Graphics 28, 3, 92:1-92:8, Jul. 2009. 7 pages. Retrieved from https://shuangz.com/projects/amber-sg09/amber.pdf on Aug. 20, 2013.
Williams, L., "Casting curved shadows on curved surfaces." Computer Graphics, Proceedings of SIGGRAPH 78, 1978. 5 pages. Retrieved from https://www.cs.berkeley.edu/~ravir/6160-fall04/papers/p270-williams.pdf on Aug. 20, 2013.
Yue, Y., Iwasaki, K., Chen, B.-Y., Dobashi, Y., and Nishita, T., "Unbiased, adaptive stochastic sampling for rendering inhomogeneous participating media." ACM Transactions on Graphics, 2010. 7 pages. Retrieved from https://nishitalab.org/user/egaku/sigasia10/177.pdf on Aug. 20, 2013.

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150035831A1 (en) * 2013-08-02 2015-02-05 Disney Enterprises, Inc. Methods and systems of joint path importance sampling
US9665974B2 (en) * 2013-08-02 2017-05-30 Disney Enterprises, Inc. Methods and systems of joint path importance sampling
US20160042553A1 (en) * 2014-08-07 2016-02-11 Pixar Generating a Volumetric Projection for an Object
US10169909B2 (en) * 2014-08-07 2019-01-01 Pixar Generating a volumetric projection for an object
US20220051786A1 (en) * 2017-08-31 2022-02-17 Gmeditec Co., Ltd. Medical image processing apparatus and medical image processing method which are for medical navigation device
US11676706B2 (en) * 2017-08-31 2023-06-13 Gmeditec Co., Ltd. Medical image processing apparatus and medical image processing method which are for medical navigation device
US11494966B2 (en) * 2020-01-07 2022-11-08 Disney Enterprises, Inc. Interactive editing of virtual three-dimensional scenes
US11847731B2 (en) 2020-01-07 2023-12-19 Disney Enterprises, Inc. Interactive editing of virtual three-dimensional scenes

Also Published As

Publication number Publication date
US8638331B1 (en) 2014-01-28

Similar Documents

Publication Publication Date Title
US9245377B1 (en) Image processing using progressive generation of intermediate images using photon beams of varying parameters
US10290142B2 (en) Water surface rendering in virtual environment
Jarosz et al. A comprehensive theory of volumetric radiance estimation using photon points and beams
Zeltner et al. Monte Carlo estimators for differential light transport
US8493383B1 (en) Adaptive depth of field sampling
US7952583B2 (en) Quasi-monte carlo light transport simulation by efficient ray tracing
Heckbert Simulating global illumination using adaptive meshing
Keller Quasi-Monte Carlo image synthesis in a nutshell
US7940268B2 (en) Real-time rendering of light-scattering media
US9013484B1 (en) Progressive expectation-maximization for hierarchical volumetric photon mapping
US9208610B2 (en) Alternate scene representations for optimizing rendering of computer graphics
US20210142555A1 (en) Rendering images using modified multiple importance sampling
US10249077B2 (en) Rendering the global illumination of a 3D scene
Belcour et al. A local frequency analysis of light scattering and absorption
Dietrich et al. Massive-model rendering techniques: a tutorial
US12067667B2 (en) Using directional radiance for interactions in path tracing
Frolov et al. Light transport in realistic rendering: state-of-the-art simulation methods
JP5718934B2 (en) Method for estimating light scattering
Papaioannou Real-time diffuse global illumination using radiance hints
Patel et al. Instant convolution shadows for volumetric detail mapping
Pan et al. Transient instant radiosity for efficient time-resolved global illumination
Wang et al. Rendering transparent objects with caustics using real-time ray tracing
Reza Efficient Sample Reusage in Path Space for Real-Time Light Transport
Belcour A Frequency Analysis of Light Transport: from Theory to Implementation
Apers et al. Interactive Light Map and Irradiance Volume Preview in Frostbite

Legal Events

Date Code Title Description
AS Assignment

Owner name: THE WALT DISNEY COMPANY (SWITZERLAND) GMBH, SWITZERLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAROSZ, WOJCIECH;NOWROUZEZAHRAI, DEREK;THOMAS, ROBERT;AND OTHERS;SIGNING DATES FROM 20110919 TO 20111101;REEL/FRAME:032119/0469

Owner name: DISNEY ENTERPRISES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THE WALT DISNEY COMPANY (SWITZERLAND) GMBH;REEL/FRAME:032119/0678

Effective date: 20120101

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20240126