US9245377B1 - Image processing using progressive generation of intermediate images using photon beams of varying parameters - Google Patents
- Publication number
- US9245377B1 (U.S. application Ser. No. 14/164,358)
- Authority
- US
- United States
- Prior art keywords
- photon
- photon beam
- beams
- computer
- beam simulation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/55—Radiosity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/06—Ray-tracing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/506—Illumination models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/60—Shadow generation
Definitions
- Computer-generated imagery typically involves using software and/or hardware to generate one or more images from a geometric model.
- a geometric model defines objects, light sources, and other elements of a virtual scene and a rendering system or other computer/software/hardware system will read in the geometric model and determine what colors are needed in what portions of the image.
- a renderer will generate a two-dimensional (“2D”) or three-dimensional (“3D”) array of pixel color values that collectively result in the desired image or images.
- Examples of such interaction include the scattering of light, which results in visual complexity, such as when the light interacts with participating media such as clouds, fog, and even air.
- Rendering this complex light transport typically involves solving a radiative transport equation [Chandrasekar 1960] combined with a rendering equation [Kajiya 1986] as a boundary condition.
- Other light interactions might include light passing through objects, such as refractive objects.
- Rendering preferably involves computing unbiased, noise-free images.
- the typical options to achieve this are variants of brute force path tracing [Kajiya 1986; Lafortune and Willems 1993; Veach and Guibas 1994; Lafortune and Willems 1996] and Metropolis light transport [Veach and Guibas 1997; Pauly et al. 2000], which are notoriously slow to converge to noise-free images despite recent advances [Raab et al. 2008; Yue et al. 2010].
- SDS specular-diffuse-specular
- SMS specular-media-specular
- Volumetric photon mapping is an approach to dealing with light in a participating medium. It is described in [Jensen and Christensen 1998] and subsequently improved by [Jarosz et al. 2008] to avoid costly and redundant density queries due to ray marching. In those approaches, a renderer formulates a “beam radiance estimate” that considers all photons along the length of a ray in one query. [Jarosz et al. 2011] showed how to apply the beam concept not just to the query operation but also to the photon data representation. In that approach, the entire photon path is used instead of just photon points, to obtain significant quality and performance improvement. This is similar in spirit to the concept of ray maps for surface illumination [Lastra et al.].
- a method and system for progressively rendering radiance for a volumetric medium is provided, such that computing images can be done in software and/or hardware efficiently and still represent the desired image effects.
- Given a geometric model of a virtual space, or another description of a scene, a process or apparatus generates an image or multiple images, in part using a photon simulation process to produce a representation of photon beams in a scene.
- the photon beams are rendered with respect to a camera viewpoint, iteratively, by computing an estimated radiance associated with the photon beams. Over multiple iterations, the global radius scaling factor is progressively decreased, thereby reducing overall error by facilitating convergence.
- a representation of the computed average estimated radiance at each pixel in the scene is stored.
- a global radius parameter is set for each iteration so that all beams share the same radius within one intermediate image, with the radius differing from image to image.
- the process is implemented using a graphics processing unit (“GPU”) and formulating the process as a splatting operation, for use in interactive and real-time applications.
- GPU graphics processing unit
- heterogeneous participating media is handled.
- One method of handling it is to use piecewise handling of the beams, including setting a termination point for the beam, as in shadow mapping. Such techniques can be expanded beyond photon beams.
- FIG. 1 illustrates example scenes rendered using an embodiment of the invention
- FIG. 1( a ) relates to a disco ball scene, the results for independent render passes, and averages of the multiple render passes
- FIG. 1( b ) relates to a flashlights scene and corresponding results.
- FIG. 2 illustrates a flowchart of a disclosed embodiment.
- FIG. 3 illustrates the geometric configuration of estimating radiance in some embodiments.
- FIG. 4 illustrates estimation of volumetric scattering in some embodiments.
- FIG. 5 illustrates three versions of an example scene rendered using an embodiment of the invention.
- FIG. 6 shows two graphs plotting sample variance of error, sample variance of average error, and expected value of the average error with three different α settings for the highlighted point in the example scene in FIG. 5 .
- FIG. 7 shows a graph plotting the global radius scaling factor with variations in scale factor, M.
- FIG. 8 illustrates media radiance in an example scene rendered using an embodiment of the invention.
- FIG. 9 illustrates two versions of an example scene, in this case rendered entirely on a GPU, using an embodiment of the invention.
- FIG. 10 illustrates an example of a hardware system that might be used for rendering.
- the present invention is described herein, in many places, as a set of computations. It should be understood that these computations are not performable manually, but are performed by an appropriately programmed computer, computing device, electronic device, or the like, that might be a general purpose computer, a graphical processing unit, and/or other hardware. As with any physical system, there are constraints as to the memory available and the number of calculations that can be done in a given amount of time. Embodiments of the present invention might be described in mathematical terms, but one of ordinary skill in the art, such as one familiar with computer graphics, would understand that the mathematical steps are to be implemented for execution in some sort of hardware environment. Therefore, it will be assumed that such hardware and/or associated software or instructions are present and the description below will not be burdened with constant mention of same.
- Embodiments of the present invention might be implemented entirely in software stored on tangible, non-transitory or transitory media or systems, such that it is electronically readable. While in places, process steps might be described by language such as “we calculate” or “we evaluate” or “we determine”, it should be apparent in some contexts herein that such steps are performed by computer hardware and/or defined by computer hardware instructions and not persons.
- the essential task of such computer software and/or hardware is to generate images from a geometric model or other description of a virtual scene.
- the model/description includes description of a space, a virtual camera location and virtual camera details, objects and light sources.
- some participating media, i.e., virtual media in the space that light from the virtual light sources passes through and interacts with.
- Such effects would cause some light from the light source to be deflected from the original path from the light source in some direction, deflected off of some part of the media and toward the camera viewpoint.
- the output image is in the form of a two-dimensional (“2D”) or three-dimensional (“3D”) array of pixel values and in such cases, the software and/or hardware used to generate those images is referred to as a renderer.
- a renderer outputs images in a suitable form.
- the renderer renders an image or images in part, leaving some other module, component or system to perform additional steps on the image or images to form a completed image or images.
- a renderer takes as its input a geometric model or some representation of the objects, lighting, effects, etc. present in a virtual scene and derives one or more images of that virtual scene from a camera viewpoint.
- the renderer is expected to have some mechanism for reading in that model or representation in an electronic form, store those inputs in some accessible memory and have computing power to make computations.
- the renderer will also have memory for storing intermediate variables, program variables, as well as storage for data structures related to lighting, photon beams and the like, as well as storage for intermediate images.
- when the renderer is described, for example, as having generated multiple intermediate images and averaging them, it should be understood that the corresponding computational operations are performed, e.g., a processor reads values of pixels of the intermediate images, averages pixels and stores a result as another intermediate image, an accumulation image, or the final image, etc.
- the renderer might also include or have access to a random number generator.
- a progressive photon beam process is used.
- photon beams are used in multiple rendering passes, where the rendering passes use different photon beam radii and the results are combined. It might be such that two or more rendering passes use the same photon beam radius, and while most of the examples described herein assume a different photon beam radius for each pass, the invention is not so limited.
- These processes are efficient, robust to complex light paths, and handle heterogeneous media and anisotropic scattering while provably converging to the correct solution using a bounded memory footprint.
- a progressive mapping with multiple iterations starts with a pass with a given beam radius, followed by a pass with a smaller radius, and so on, until sufficient convergence is obtained.
- Each pass can be independent of other passes, so the order can be changed among the intermediate images that are averaged together without affecting the final result.
- the radii might vary within an iteration, with the radius for a given beam varying from iteration to iteration, yielding a similar final result to the case where all beams of one radius are in the same iterative pass. Of course, nothing requires that the same beams be used from iteration to iteration. In a typical embodiment, suppose a light source is to be represented by 100 beams.
- 100 beams are randomly selected for one iterative pass, and for the next iterative pass, 100 beams are randomly selected, so they are likely to be different from the beams in the first pass.
- their effects average out and converge to the correct solution.
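The iterative scheme described above, with new random beams each pass, a shrinking global radius, and a running average of intermediate images, can be sketched as follows. This is a minimal illustration, not the patented implementation: `shoot_beams` and `render_pass` are hypothetical stand-ins for the photon-shooting and rendering stages, and the square-root radius schedule with parameter `alpha` is an assumption based on the PPM-style variance-ratio analysis discussed later in this document.

```python
def progressive_render(shoot_beams, render_pass, num_passes, r0, alpha=0.7):
    """Progressive outer loop: average intermediate images rendered with a
    shrinking global beam radius.

    shoot_beams() returns a fresh random set of photon beams each pass;
    render_pass(beams, radius) returns an intermediate image as a flat list
    of pixel values.  Both are caller-supplied stand-ins here.
    """
    accumulated = None
    radius = r0
    for i in range(1, num_passes + 1):
        image = render_pass(shoot_beams(), radius)
        if accumulated is None:
            accumulated = list(image)
        else:
            # running average of the intermediate images
            accumulated = [(a * (i - 1) + b) / i
                           for a, b in zip(accumulated, image)]
        # shrink the radius so per-pass variance grows as (i+1)/(i+alpha)
        radius *= ((i + alpha) / (i + 1)) ** 0.5
    return accumulated
```

Because each pass is independent, the passes could be computed in any order (or in parallel) and averaged afterward without changing the result.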
- Progressive photon beam methods can robustly handle situations that are difficult for most other algorithms, such as scenes containing participating media and specular interfaces, with realistic light sources completely enclosed by refractive and reflective materials. Our technique described herein handles heterogeneous media and also trivially supports stochastic effects, such as depth-of-field and glossy materials. As explained herein, progressive photon beams can be implemented efficiently on a GPU as a splatting operation, making it applicable to interactive and real-time applications. These features can provide scalability, provide the same physically-based algorithm for interactive feedback and reference-quality, and unbiased solutions.
- Convergence is achieved with less computational effort than, say, path tracing, and is robust to SDS or SMS subpaths, and has a bounded memory footprint.
- GPU acceleration allows for interactive lighting design in the presence of complex light sources and participating media. This makes it possible to produce interactive previews with the same technique used for a high-quality final render—providing visual consistency, an essential property for interactive lighting design tools.
- Photon beam handling is a generalization of volumetric photon mapping, which accelerates participating media rendering by considering the full path of photons (beams), instead of just photon scattering locations.
- photon beams are blurred with a finite width, leading to bias. Reducing this width reduces bias, but unfortunately increases noise.
- Progressive photon mapping (“PPM”) provides a way to eliminate bias and noise simultaneously in photon mapping.
- PPM Progressive photon mapping
- Unfortunately, naively applying PPM to photon beams is not possible due to the fundamental differences between density estimation using points and beams, so convergence guarantees need to be re-derived for this more complicated case.
- previous PPM derivations only apply to fixed-radius or k-nearest neighbor density estimation, which are commonly used for surface illumination.
- Photon beams are formulated using variable kernel density estimation, where each beam has an associated kernel.
- the challenges of rendering beams progressively can be overcome. Described herein is an efficient density estimation framework for participating media that is robust to SDS and SMS subpaths, and which converges to ground truth with bounded memory usage. Additionally, it is described how photon beams can be applied to efficiently handle heterogeneous media.
- FIG. 1 illustrates example scenes rendered using iterative photon beam rendering.
- FIG. 1( a ) relates to a disco ball scene, the results for independent render passes, and averages of the multiple render passes;
- FIG. 1( b ) relates to a flashlights scene and corresponding results.
- the left half of the top image shows results for the case where homogeneous media is assumed and the right half of the top image shows results for the case where heterogeneous media is assumed.
- the middle sequence of images is intermediate images, each done with different beam radii, while the bottom sequence of images are the averaging of the intermediate images.
- the photon beam radii are progressive, in that the first pass uses wide radii and each successive pass uses smaller radii.
- the renderer renders each pass using a collection of stochastically generated photon beams.
- the radii of the photon beams are reduced using a global scaling factor after each pass. Therefore each subsequent image has less bias, but slightly more noise.
- the average of these intermediate images converges to the correct solution, generating an unbiased solution with finite memory.
- a theoretical error analysis of density estimation using photon beams derives the necessary conditions for convergence, and a numerical validation of the theory is provided herein.
- a progressive generalization of deep shadow maps handles heterogeneous media efficiently.
- the photon beam radiance estimate formulated as a splatting operation exploits GPU rasterization, allowing simple scenes to be rendered with multiple specular reflections in real-time.
- FIG. 2 illustrates a flowchart of an example embodiment comprising multiple iterations of steps 210 through 240 .
- a photon simulation is performed, resulting in a plurality of photon beams in a scene.
- the photon simulation may be performed in any conventional manner, such as by performing, for example, a shadow-mapping operation, a rasterization operation, a ray-marching operation, or a ray-tracing operation.
- the performing photon simulation includes calculating an interaction of the plurality of photon beams with geometry and participating media in the scene.
- the photon beams are rendered with respect to a camera viewpoint.
- Rendering the photon beams includes computing estimated radiance associated with the photon beams. Rendering may be performed in any conventional manner, such as by performing, for example, a splatting operation, a ray-tracing operation, a ray-marching operation, or a rasterization operation.
- rendering the photon beams comprises determining a contribution of the plurality of photon beams to illumination of one or more pixels in the scene based on a progressive deep shadow map.
- a global radius scaling factor is applied to the radius used for photon beams, relative to the radius used for the prior iteration.
- the global radius scaling factor is progressively decreased over iterations of steps 210 through 240 .
- determining a progressive decrease in the global radius scaling factor comprises decreasing a kernel width of the photon beam.
- determining a progressive decrease in the global radius scaling factor comprises decreasing a kernel width of a query ray.
- determining a progressive decrease in the global radius scaling factor comprises enforcing a ratio of variance between successive iterations.
- a representation of the computed average estimated radiance at each pixel in the scene is stored in a computer-readable storage media, in effect averaging the intermediate images that are rendered in each pass.
- the rendered photon beams are discarded after each iteration of steps 210 through 240 in order to reduce memory usage.
- the radii are varied in another manner.
- the radii are varied, but are varied so that the same radius is not used repeatedly for the same beam. Beams may be randomly selected among possible beams.
- the light incident at any point, x, in a scene (e.g., the camera viewpoint) from a direction ω⃗ (the over-arrow signals a ray or vector), such as the direction through a pixel, can be expressed using a radiative transport equation [Chandrasekar 1960] as the sum of two terms, as in Equation 1.
- L(x, ω⃗) = T_r(s) L_s(x_s, ω⃗) + L_m(x, ω⃗) (Eqn. 1)
- T_r(s) = e^(−s σ_t), where σ_t is the extinction coefficient.
- transmittance accounts for the extinction coefficient along the entire segment between the two points, but we use this simple one-parameter notation here for brevity.
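For a homogeneous medium, the transmittance above reduces to this simple exponential, which a renderer can evaluate directly. A minimal sketch (an illustration, not the patented code):

```python
import math

def transmittance(sigma_t, s):
    """Homogeneous transmittance T_r(s) = exp(-s * sigma_t), where sigma_t
    is the extinction coefficient and s is the distance traveled."""
    return math.exp(-s * sigma_t)
```

For heterogeneous media, σ_t varies along the segment and the exponent becomes an integral of the extinction coefficient; that case is handled later via progressive deep shadow maps.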
- The second term is the medium radiance, shown in Equation 2, where f is the normalized phase function, σ_s is the scattering coefficient, and w is a scalar distance along the camera direction ω⃗.
- L_m(x, ω⃗) = ∫_0^s σ_s(x_w) T_r(w) ∫_(Ω4π) f(θ) L(x_w, ω⃗′) dω⃗′ dw (Eqn. 2)
- the inner integral corresponds to the in-scattered radiance, which recursively depends on radiance arriving at x_w from directions ω⃗′ on the sphere Ω_4π.
- Photon mapping methods approximate the medium radiance (see Eqn. 2) using a collection of photons, each with a power, position, and direction. Instead of performing density estimation on just the positions of the photons, the recent photon beams approach [Jarosz et al. 2011] treats each photon as a beam of light starting at the photon position and shooting in the photon's outgoing direction. [Jarosz et al. 2011] derived a “Beam×Beam 1D” estimate that directly estimates medium radiance due to photon beams along a query ray.
- FIG. 3 illustrates a use of this coordinate system and radiance estimation with one photon beam as viewed from the side (left) and in the plane perpendicular to the query ray.
- the query ray direction extends out of the page (left).
- To estimate radiance due to a photon beam, the beam is treated as an infinite number of imaginary photon points along its length (as shown in the right-hand side of FIG. 3).
- the power of the photons is blurred in 1D, along the u axis.
- An estimate of the incident radiance along the direction ω⃗ using one photon beam can be expressed as shown in Equation 3, where Φ is the power of the photon, θ is the angle between the beam and the query ray, and the scalars (u, v, w) are signed distances along the three axes to the imaginary photon point closest to the query ray (the point on the beam closest to the ray):
- L(x, ω⃗) ≈ Φ k_r(u) T_r(w) T_r(v) σ_s f(θ)/sin θ (Eqn. 3)
- the first transmittance term accounts for attenuation through a distance w to x
- the second computes the transmittance through a distance v to the position of the photon.
- the photon beam is blurred using a 1D kernel k_r centered on the beam with a support width of r along the u axis.
- Equation 3 is evaluated for many beams to obtain a high quality image and is a consistent estimator like standard photon mapping. In other words, it produces an unbiased solution when using an infinite number of beams with an infinitesimally small blur kernel. This is an important property which we will use later on.
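The one-beam estimate just described can be sketched directly. The product form below (photon power, a 1D kernel, two transmittance terms, the scattering coefficient, and the foreshortened phase function) follows the “Beam×Beam 1D” estimate of [Jarosz et al. 2011]; the normalized box kernel is an illustrative assumption, since in practice other kernels may be used.

```python
import math

def beam_radiance_estimate(power, sigma_s, sigma_t, phase, u, v, w, theta, r):
    """Single-beam 'Beam x Beam 1D' radiance estimate (a sketch).

    power:   photon power (Phi)
    phase:   normalized phase function f(theta)
    u, v, w: signed distances to the imaginary photon point closest to the ray
    theta:   angle between beam and query ray (1/sin gives the foreshortening)
    r:       support width of the 1D blur kernel along the u axis
    """
    if abs(u) > r:
        return 0.0                       # outside the kernel support
    k_r = 1.0 / (2.0 * r)                # normalized box kernel on [-r, r]
    return (power * k_r
            * math.exp(-sigma_t * w)     # transmittance over distance w to x
            * math.exp(-sigma_t * v)     # transmittance over distance v to the photon
            * sigma_s * phase(theta)
            / math.sin(theta))
```

Summing this estimate over many beams, with the kernel width shrinking pass by pass, is what makes the estimator consistent.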
- PPM Progressive photon mapping
- Observe from Equation 4 that if the same radii were used in the radiance estimate in each pass, the variance of the average error would be reduced, but the bias would remain the same.
- the renderer computes the contribution of media radiance, L m , to pixel values c.
- FIG. 4 illustrates the problem schematically, and Equation 6 shows this mathematically.
- c = ∫∫ W(x, ω⃗) L_m(x, ω⃗) dx dω⃗ (Eqn. 6)
- photon beams are shot from light sources and paths are traced from the eye until a diffuse surface is hit; the renderer then estimates volumetric scattering by finding the beam/ray intersections and weighting by the contribution W to the camera. In each pass, the renderer reduces a global radius scaling factor and repeats.
- W(x, ω⃗) is a function that weights the contribution of L_m to the pixel value (accounting for antialiasing, glossy reflection, depth-of-field, etc.).
- the renderer computes c by tracing a number of paths N from the eye, evaluating W, and evaluating the media radiance L m .
- the error term, ε, is the difference between the true radiance, L_m(x, ω⃗), and the radiance estimated using a photon beam with a kernel of radius r. As explained herein, this converges.
- the photon beam method generates images using more than one photon beam at a time.
- the photon beam widths need not be equal, but could be determined adaptively per beam using, e.g., photon differentials. We can express this by generalizing Equation 8 into Equation 8A.
- we show how to enforce conditions A and B (from Equations 5A and 5B).
- In PPM, the variance increases slightly in each pass, but in such a way that the variance of the average error still vanishes.
- Increasing variance allows us to reduce the kernel scale (see Eqn. 11), which in turn reduces the expected error of the radiance estimate (see Eqn. 12).
- The convergence in PPM can be achieved by enforcing the ratio of variance between passes as indicated in Equation 13, where α is a user-specified constant between 0 and 1.
- Var[ε_(i+1)]/Var[ε_i] = (i+1)/(i+α) (Eqn. 13)
- this ratio induces a variance sequence, where the variance of the i-th pass is predicted as shown in Equation 14.
- the variance of the average error after N passes can be expressed in terms of the variance of the first pass, Var[ε_1], as in Equation 15, which vanishes as desired when N→∞.
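The behavior of Equations 13 through 15 can be checked numerically: with the variance ratio (i+1)/(i+α), the per-pass variance grows slowly while the variance of the average error over N independent passes still vanishes as N grows. A minimal sketch (an illustration of the error analysis, not the patented code):

```python
def variance_sequence(var1, n, alpha):
    """Per-pass variances induced by the ratio in Equation 13:
    Var[e_{i+1}] / Var[e_i] = (i + 1) / (i + alpha)."""
    variances = [var1]
    for i in range(1, n):
        variances.append(variances[-1] * (i + 1) / (i + alpha))
    return variances

def variance_of_average(var1, n, alpha):
    """Variance of the average error over n independent passes:
    (1/n^2) * sum of per-pass variances."""
    return sum(variance_sequence(var1, n, alpha)) / n ** 2
```

For 0 < α < 1 the per-pass variance grows roughly like i^(1−α), so the variance of the average behaves like N^(−α) and tends to zero, matching the convergence claim above.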
- the renderer uses a global scaling factor, R_i, to scale the radius of each beam, as well as the minimum and maximum radii bounds, in pass i. Note that scaling all radii by that global factor scales their harmonic and arithmetic means by the same factor.
- the “Sphere Caustic” image contains a glass sphere, an anisotropic medium, and a point light. We shot 1K beams per pass and obtained a high-quality result in 100 passes. The rightmost image—for 100 passes—completed in 10 seconds.
- FIG. 6 provides graphs of corresponding bias and variance of the highlighted point.
- On the left of FIG. 6, the sample variance of the radiance estimate as a function of the iterations is shown, in particular the per-pass variance (left; upper three curves), the average variance (left; lower three curves), and bias (right) with three α settings for the highlighted point in FIG. 5 .
- Empirical results match the theoretical models derived herein well.
- the noise in the empirical curves is due to a limited number (10K) of measurement runs.
- the example process described in this section has an inner loop and an outer loop.
- the inner loop can be the standard two-pass photon beam process described in [Jarosz et al. 2011] or some other method.
- photon beams are emitted from lights and scatter at surfaces and media in the scene.
- this pass can be effectively identical to the photon tracing in volumetric and surface-based photon mapping described by [Jensen 2001].
- the process determines the kernel width of each beam by tracing photon differentials during the photon tracing process.
- the process also includes automatically computing and enforcing radii bounds to avoid infinite variance or bias. Of course, as explained above, the order might not matter.
- the renderer computes radiance along each ray using Equation 3 or equivalent.
- this involves a couple of exponentials and scaling by the scattering coefficient and foreshortened phase function.
- the case for heterogeneous media is described further below.
- a user can have a single intuitive parameter to control convergence.
- a number of parameters influence the process' performance, such as the bias-noise tradeoff α ∈ (0,1), the number of photons per pass, M, and either a number of nearest neighbors k or an initial global radius.
- FIG. 7 is a plot of the global radius scaling factor, with varying M.
- the standard approach produces vastly different scaling sequences for a progressive simulation using the same total number of stored photons.
- We reduce the scale factor M times after each pass, which approximates the scaling sequence of M = 1 regardless of M.
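The per-photon update just described can be sketched as follows. The square-root reduction per stored photon is an assumption carried over from the PPM-style variance-ratio analysis above; in this deterministic form, applying the reduction M times per pass lands on the same scale values as the M = 1 sequence at the corresponding photon counts.

```python
def radius_scale_sequence(num_passes, photons_per_pass, alpha=0.7):
    """Global radius scaling factor at the start of each pass, reducing the
    scale once per stored photon (M = photons_per_pass times per pass)."""
    scale, step, scales = 1.0, 0, []
    for _ in range(num_passes):
        scales.append(scale)
        for _ in range(photons_per_pass):
            step += 1
            # one per-photon reduction of the global scale
            scale *= ((step + alpha) / (step + 1)) ** 0.5
    return scales
```

This makes the scaling sequence depend on the total number of stored photons rather than on how they are grouped into passes, which is the property the text above describes.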
- An unbiased estimator can be implemented, in one example, by using mean-free-path sampling as a black box. Given a function, d(x, ω⃗), which returns a random propagation distance from a point x in direction ω⃗, the transmittance between x and a point s units away in direction ω⃗ is as shown by Equation 22, where Θ is the Heaviside step function. This estimates transmittance by counting samples that successfully propagate a distance ≥ s.
- the renderer evaluates Equation 22 for each ray/beam intersection within a pixel.
- Each evaluation actually provides enough information for an unbiased estimate of the transmittance function for all distances along the ray, and not just the function value at a single distance s.
- a renderer can handle this by computing n propagation distances and re-evaluating Equation 22 for arbitrary values of s. This results in an unbiased, piecewise-constant representation of the transmittance function, as illustrated in the left-hand side of FIG. 8 .
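A minimal sketch of this counting estimator, assuming a homogeneous medium so that the black-box distance sampler reduces to exponential sampling and the result can be checked against exp(−σ_t·s) (structure and names here are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Counting estimator in the spirit of Equation 22: store n random
// propagation distances, then estimate transmittance at any distance s as
// the fraction of samples that propagated at least s (the Heaviside test).
// This yields an unbiased, piecewise-constant transmittance function.
struct PiecewiseTransmittance {
    std::vector<double> distances;  // sorted propagation distances

    double operator()(double s) const {
        auto it = std::lower_bound(distances.begin(), distances.end(), s);
        return double(distances.end() - it) / double(distances.size());
    }
};

// Homogeneous medium used as the black-box distance sampler, so the
// estimate can be validated against the analytic exp(-sigma_t * s).
PiecewiseTransmittance sample_homogeneous(double sigma_t, int n, unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    PiecewiseTransmittance t;
    t.distances.reserve(n);
    for (int i = 0; i < n; ++i)
        t.distances.push_back(-std::log(1.0 - u(rng)) / sigma_t);  // mean-free path
    std::sort(t.distances.begin(), t.distances.end());
    return t;
}
```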
- In FIG. 8, validation of progressive deep shadow maps is shown (thin solid line) for extinction functions (dashed line) with analytically-computable transmittances (thick solid line).
- four random propagation distances are used, resulting in a four-step approximation of transmittance in each pass.
- the renderer can compute and store several unbiased random propagation distances along each beam. Given these distances, it can re-evaluate transmittance using Equation 20 at any distance along the beam.
- the collection of transmittance functions across all photon beams forms an unstructured deep shadow map that converges to the correct result with many passes.
- When using Equation 22 to estimate transmittance, the only effect on the error analysis is that Var[γ] in Equations 9 and 11 increases compared to using analytic transmittance (note that bias is not affected, since E[γ] does not change with an unbiased estimator). Homogeneous media could be rendered using the analytic formula or using Equation 22. Both approaches converge to the same result (as illustrated by the top row of FIG. 8), but the Monte Carlo estimator for transmittance adds variance. In some cases, then, it is preferable to use analytic transmittance for homogeneous media.
- a more general renderer combines a CPU ray tracer with a GPU rasterizer, and possibly a GPU capable of general-purpose computation.
- the CPU ray tracer handles the photon shooting process.
- the renderer can decompose the light paths into ones that can be easily handled using GPU-accelerated rasterization, and handle all other light paths with the CPU ray tracer.
- the renderer can rasterize all photon beams that are directly visible by the camera.
- the CPU ray tracer then handles the remaining light paths, such as those visible only via reflections/refractions off of objects.
- Equation 3 has a simple geometric interpretation, as illustrated in FIG. 3 , namely that each beam is an axial-billboard facing the camera. As in the standard photon beams approach, the CPU ray tracer computes ray-billboard intersections with this representation. However, for directly-visible beams, Equation 3 can be reformulated as a splatting operation amenable to GPU rasterization and thus GPU instructions can be generated.
- one such implementation uses C++ and OpenGL.
- the photon beam billboard quad geometry is generated for every stored beam on the CPU.
- This geometry is rasterized with GPU blending enabled and a simple pixel shader evaluates Equation 3 for every pixel under the support of the beam kernel on the GPU.
- the renderer also culls the beam quads against the remaining scene geometry to avoid computing radiance from occluded beams.
- the renderer can apply a Gaussian jitter to the camera matrix in each pass.
- the CPU component handles all other light paths using Monte Carlo ray tracing with a single path per pixel per pass.
- the fragment shader evaluates Equation 3 using two exponentials for the transmittance. It can use several layers of simplex noise for heterogeneous media, and follow the approach derived above for progressive deep shadow maps.
- For transmittance along a beam it computes and stores a fixed number, n b , of random propagation distances along each beam using Woodcock tracking (in practice, n b is usually between 4 and 16). Since transmittance is constant between these distances, we can split each beam into n b quads before rasterizing and assign the appropriate transmittance to each segment using Equation 22.
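Woodcock (delta) tracking itself can be sketched as follows; the extinction function and majorant below are illustrative assumptions, not the patent's:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Woodcock (delta) tracking: draw tentative steps from a homogeneous
// "majorant" medium with extinction sigma_max >= sigma_t(x) everywhere,
// and accept a collision at distance t with probability
// sigma_t(t)/sigma_max; rejected (null) collisions continue the walk.
// The returned distance is an unbiased free-path sample through the
// heterogeneous medium.
double woodcock_distance(double (*sigma_t)(double), double sigma_max,
                         std::mt19937 &rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double t = 0.0;
    for (;;) {
        t -= std::log(1.0 - u(rng)) / sigma_max;  // tentative majorant step
        if (u(rng) < sigma_t(t) / sigma_max)      // real vs. null collision
            return t;
    }
}

// Illustrative extinction: increases linearly along the ray, then clamps,
// so the finite majorant sigma_max = 0.8 bounds it everywhere.
double example_sigma(double x)
{
    double xc = x < 2.0 ? x : 2.0;
    return 0.2 + 0.3 * xc;
}
```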
- the OptiX GPU ray tracing API described by [Parker et al. 2010] is used. That OptiX renderer implements two kernels: one for photon beam shooting, and one for eye ray tracing and progressive accumulation. The renderer shoots and stores photon beams, in parallel, on the GPU. The shading kernel traces against all scene geometry and photon beams, each stored in their own BVH, with volumetric shading computed using Equation 3 at each beam intersection.
- a real-time GPU renderer uses only OpenGL rasterization in scenes with a limited number of specular bounces. Shadow mapping is extended to trace and generate beam quads that are visualized with GPU rasterization as above. [McGuire et al. 2009] also used shadow maps, but that approach was limited to photon splatting on surfaces. This renderer generates and splats beams, possibly exploiting a progressive framework to obtain convergent results.
- a light-space projection transform (as in standard shadow mapping) can be used to rasterize the scene from the light's viewpoint. Instead of recording depth for each shadow map texel, each texel instead produces a photon beam.
- the renderer computes the origin and direction of the central beam as well as the auxiliary differential rays.
- the differential ray intersection points are computed using differential properties stored at the scene geometry as vertex attributes, interpolated during rasterization.
- beam directions are reflected and/or refracted at the scene geometry's interface. This entire process can be implemented in a simple pixel shader that outputs to multiple render-targets. Depending on the number of available render targets, several (reflected, refracted, or light) beams can be generated per render pass.
- the render target outputs can be bound as vertex buffers, rendering points at the beam origins.
- the remainder of the beam data is passed as vertex attributes to the point geometry, and a geometry shader converts each point into an axial billboard.
- These quads can then be rendered using the same homogeneous shader as the hybrid example described above.
- the shadow map grid can be jittered to produce a different set of beams each pass.
- This end-to-end rendering procedure can be carried out entirely with GPU rasterization, and can render photon beams that emanate from the light source as well as those due to a single surface reflection/refraction.
- a hybrid implementation might use a 12-core 2.66 GHz Intel Xeon™ with 12 GB of RAM and an ATI Radeon HD 5770.
- the examples of FIG. 1 were generated in that manner, in both homogeneous and heterogeneous media, including zoomed insets of the media illumination showing the progressive refinement of the process.
- the scenes are rendered at 1280×720, and they include depth-of-field and antialiasing.
- the lights in these scenes are all modeled realistically with light sources inside reflective/refractive fixtures. Illumination encounters several specular bounces before arriving at surfaces or media, making these scenes impractical for path tracing.
- progressive photon mapping (PPM) can be used for surface shading, while progressive photon beams (PPB) are used to obtain performance and quality on the media scattering.
- the Disco scene of FIG. 1 contains a mirror disco ball illuminated by six lights inside realistic Fresnel-lens casings. Each Fresnel light has a complex emission distribution due to refraction, and the reflections off of the faceted sphere produce intricate volume caustics.
- the media radiance was rendered in three minutes in homogeneous media and 5.7 minutes in heterogeneous media.
- the surface caustics on the wall of the scene require another 7.5 minutes.
- the Flashlights scene renders in 8.0 minutes and 10.8 minutes respectively using 2.1 M beams (diffuse shading takes an additional 124 minutes).
- Beam storage includes start and end points (2×3 floats), differential rays (2×2×3 floats), and power (3 floats).
- a scene-dependent acceleration structure might also be necessary, and even a single bounding box per beam adds 2×3 floats (a BVH can be used, implicitly splitting beams as described in [Jarosz et al. 2011]).
- the implementation need not be optimized for memory usage with the progressive approach, but even with careful tuning this would likely be above 100 bytes per beam. Thus, even in simple scenes, beam storage can quickly exceed available memory.
- a scene with intricate refractions might require over 50 M beams for high-quality results and that would exceed 5 GB of memory even with the conservative 100 bytes/beam estimate. Using the progressive approach, this is not a problem.
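The storage arithmetic can be checked directly; this sketch assumes tightly packed 4-byte floats and the float counts quoted above (the struct layout is illustrative):

```cpp
#include <cassert>
#include <cstddef>

// Tightly packed beam record matching the float counts in the text:
// start/end points (2x3), differential rays (2x2x3), power (3), plus one
// bounding box (2x3) for a per-beam acceleration structure entry.
struct PhotonBeamRecord {
    float start[3], end[3];           // 2 x 3 floats
    float differentials[2][2][3];     // 2 x 2 x 3 floats
    float power[3];                   // 3 floats
    float bbox_min[3], bbox_max[3];   // 2 x 3 floats
};

const std::size_t kFloats = 2 * 3 + 2 * 2 * 3 + 3 + 2 * 3;   // 27 floats
const std::size_t kBytesPerBeam = kFloats * sizeof(float);   // 108 bytes
const double kGigabytesFor50M = 50e6 * kBytesPerBeam / 1e9;  // 5.4 GB
```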
- Our beams use adaptive kernels with ray differentials, which may allow for higher quality results using fewer beams. Also, rasterization is used for large portions of the illumination, which improves performance.
- FIG. 9 illustrates the OCEAN scene, where the viewer sees light beams refracted through the ocean surface and scattering in the ocean's media.
- the progressive rasterization converges in less than a second. With 4K beams, real-time GPU rasterization runs at around 600 FPS (less than 2 ms per frame). The image after 20 passes (top) renders at around 30 FPS (33 ms). The high-quality result (bottom, 450 passes) renders in less than a second.
- FIG. 10 is a block diagram of hardware that might be used to implement a renderer.
- the renderer can use a dedicated computer system that only renders, but might also be part of a computer system that performs other actions, such as executing a real-time game or other experience with rendering images being one part of the operation.
- Rendering system 800 is illustrated including a processing unit 820 coupled to one or more display devices 810 , which might be used to display the intermediate images or accumulated images or final images, as well as allow for interactive specification of scene elements and/or rendering parameters.
- a variety of user input devices 830 and 840 may be provided as inputs.
- a data interface 850 may also be provided.
- user input device 830 includes wired connections such as a computer-type keyboard, a computer mouse, a trackball, a track pad, a joystick, drawing tablet, microphone, and the like; and user input device 840 includes wireless connections such as wireless remote controls, wireless keyboards, wireless mice, and the like.
- user input devices 830 - 840 typically allow a user to select objects, icons, text and the like that graphically appear on a display device (e.g., 810 ) via a command such as a click of a button or the like.
- Other embodiments of user input devices include front-panel buttons on processing unit 820 .
- Embodiments of data interfaces 850 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, FireWire interface, USB interface, and the like.
- data interfaces 850 may be coupled to a computer network, to a FireWire bus, a satellite cable connection, an optical cable, a wired-cable connection, or the like.
- Processing unit 820 might include one or more CPUs and one or more GPUs.
- processing unit 820 may include familiar computer-type components such as a processor 860 , and memory storage devices, such as a random access memory (RAM) 870 , disk drives 880 , and system bus 890 interconnecting the above components.
- the CPU(s) and/or GPU(s) can execute instructions representative of process steps described herein.
- RAM 870 and hard-disk drive 880 are examples of tangible media configured to store data such as images, scene data, instructions and the like.
- Other types of tangible media include removable hard disks, optical storage media such as CD-ROMs, DVD-ROMs, and bar codes, semiconductor memories such as flash memories, read-only memories (ROMs), battery-backed volatile memories, networked storage devices, and the like ( 825 ).
- FIG. 10 is representative of a processing unit 820 capable of rendering or otherwise generating images. It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention.
- processing unit 820 may be a personal computer, handheld computer, server farm, or similar hardware.
- the techniques described below may be implemented upon a chip or an auxiliary processing board.
- Equation A1 The variance of the error in Equation 8 is as shown by Equation A1.
- the term remaining in square brackets is just a constant associated with the kernel, which we denote C 1 .
- Equation B3 Evaluating this expansion (the normalized kernel integrates to one, and the remaining term is linear in r) gives E[k_r(U)] = p_{U_w⃗}(0) + rC_2 + O(r²), as shown by Equation B3.
- Substituting Equations B3 and B4 then gives E[ε(x, w⃗, r)] = E[γ](p_{U_w⃗}(0) + rC_2) − E[γ]p_{U_w⃗}(0) = rE[γ]C_2 (Eqn. B5).
- Equation C1 For M photons in the photon beams estimate, each with their own kernel radius, the variance is as shown in Equation C1, where we use the harmonic mean of the radii in the last step, i.e., r_H = M(Σ_{j=1}^{M} 1/r_j)⁻¹.
- Equation D1 where r A denotes the arithmetic mean of the beam radii.
- the Monte Carlo transmittance estimator increases variance per pass.
- the variance of the transmittance estimate increases with distance (where fewer random samples propagate). More precisely, at a distance where transmittance is 1%, only 1% of the beams contribute, which results in higher variance.
- the worst-case scenario is if both the camera and light source are very far away from the subject (or, conversely, if the medium is optically thick) since most of the beams terminate before reaching the subject, and most of the deep shadow map distances to the camera result in zero contribution.
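This distance dependence follows from the Bernoulli nature of the counting estimator; as an illustrative side calculation (my own, not from the patent):

```cpp
#include <cassert>
#include <cmath>

// Each of n stored distances passes or fails the Heaviside test with
// probability T (the transmittance), i.e., a Bernoulli trial. The
// estimator's variance is T(1 - T)/n, so its *relative* standard error
// sqrt((1 - T) / (T * n)) grows without bound as T -> 0, which is why
// deep, low-transmittance regions are the noisiest.
double relative_std_error(double T, int n)
{
    return std::sqrt((1.0 - T) / (T * n));
}
```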
- An unbiased transmittance estimator which falls off to zero in a piecewise-continuous, and not piecewise-constant, fashion might be used if this is an issue. Markov Chain Monte Carlo or adaptive sampling techniques could be used to reduce variance.
- adaptive techniques might be used to choose α to optimize convergence.
- progressive photon beam processing can render complex illumination in participating media. It converges to the gold standard of rendering, i.e., unbiased, noise-free solutions of the radiative transfer and rendering equations, while being robust to complex light paths including specular-diffuse-specular (SDS) and specular-media-specular (SMS) subpaths. Such processing can be combined with photon beams in a simple and elegant way: in each iteration of a progressive process, a global scaling factor is applied to the beam radii and reduced each pass.
- Embodiments disclosed herein describe progressive photon beams, a new algorithm to render complex illumination in participating media.
- the main advantage of the algorithm disclosed herein is that it converges to the gold standard of rendering, i.e., unbiased, noise-free solutions of the radiative transfer and the rendering equation, while being robust to complex light paths including specular-diffuse-specular subpaths.
- the α parameter that controls the trade-off between reducing variance and bias is set to a constant. Some embodiments may adaptively determine this parameter to optimize convergence.
- photon beams can be sampled in various manners.
- combinations or sub-combinations of the above disclosed embodiments can be advantageously made.
- the block diagrams of the architecture and flow charts are grouped for ease of understanding. However it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Generation (AREA)
- AKENINE-MÖLLER, T., HAINES, E., and HOFFMAN, N. 2008. Real-Time Rendering, 3rd Edition. A. K. Peters, Ltd., Natick, Mass., USA.
- CHANDRASEKHAR, S. 1960. Radiative Transfer. Dover Publications.
- ENGELHARDT, T., NOVAK, J., and DACHSBACHER, C. 2010. Instant multiple scattering for interactive rendering of heterogeneous participating media. Tech. Rep., Karlsruhe Institute of Technology (December).
- HACHISUKA, T., and JENSEN, H. W. 2009. Stochastic progressive photon mapping. ACM Transactions on Graphics (December).
- HACHISUKA, T., OGAKI, S., and JENSEN, H. W. 2008. Progressive photon mapping. ACM Transactions on Graphics (December).
- HAVRAN, V., BITTNER, J., HERZOG, R., and SEIDEL, H.-P. 2005. Ray maps for global illumination. In Rendering Techniques, 43-54.
- HERZOG, R., HAVRAN, V., KINUWAKI, S., MYSZKOWSKI, K., and SEIDEL, H.-P. 2007. Global illumination using photon ray splatting. Computer Graphics Forum 26, 3 (September), 503-513.
- HU, W., DONG, Z., IHRKE, I., GROSCH, T., YUAN, G., and SEIDEL, H.-P. 2010. Interactive volume caustics in single-scattering media. In I3D, ACM.
- JAROSZ, W., ZWICKER, M., and JENSEN, H. W. 2008. The beam radiance estimate for volumetric photon mapping. Computer Graphics Forum 27, 2 (April), 557-566.
- JAROSZ, W., NOWROUZEZAHRAI, D., SADEGHI, I., and JENSEN, H. W. 2011. A comprehensive theory of volumetric radiance estimation using photon points and beams. ACM Transactions on Graphics.
- JENSEN, H. W., and CHRISTENSEN, P. H. 1998. Efficient simulation of light transport in scenes with participating media using photon maps. In Proceedings of SIGGRAPH.
- JENSEN, H. W. 2001. Realistic Image Synthesis Using Photon Mapping. A. K. Peters, Ltd., Natick, Mass., USA.
- KAJIYA, J. T. 1986. The rendering equation. In Computer Graphics (Proceedings of SIGGRAPH 86), 143-150.
- KNAUS, C., and ZWICKER, M. 2011. Progressive photon mapping: A probabilistic approach. ACM Transactions on Graphics.
- KRÜGER, J., BÜRGER, K., and WESTERMANN, R. 2006. Interactive screen-space accurate photon tracing on GPUs. In Rendering Techniques.
- LAFORTUNE, E. P., and WILLEMS, Y. D. 1993. Bi-directional path tracing. In Compugraphics.
- LAFORTUNE, E. P., and WILLEMS, Y. D. 1996. Rendering participating media with bidirectional path tracing. In EG Rendering Workshop.
- LASTRA, M., UREÑA, C., REVELLES, J., and MONTES, R. 2002. A particle-path based method for Monte Carlo density estimation. In EG Workshop on Rendering, EG Association.
- LIKTOR, G., and DACHSBACHER, C. 2011. Real-time volume caustics with adaptive beam tracing. In Symposium on Interactive 3D Graphics and Games, ACM, New York, N.Y., USA, I3D '11, 47-54.
- LOKOVIC, T., and VEACH, E. 2000. Deep shadow maps. In SIGGRAPH, ACM Press, New York, N.Y., USA, 385-392.
- MCGUIRE, M., and LUEBKE, D. 2009. Hardware-accelerated global illumination by image space photon mapping. In HPG, ACM.
- PARKER, S. G., BIGLER, J., DIETRICH, A., FRIEDRICH, H., HOBEROCK, J., LUEBKE, D., MCALLISTER, D., MCGUIRE, M., MORLEY, K., ROBISON, A., and STICH, M. 2010. Optix: A general purpose ray tracing engine. ACM Transactions on Graphics (July).
- PAULY, M., KOLLIG, T., and KELLER, A. 2000. Metropolis light transport for participating media. In Rendering Techniques, 11-22.
- PERLIN, K. 2001. Noise hardware. In Real-time Shading, ACM SIGGRAPH Course Notes.
- RAAB, M., SEIBERT, D., and KELLER, A. 2008. Unbiased global illumination with participating media. In Monte Carlo and Quasi-Monte Carlo Methods 2006. Springer, 591-606.
- SCHJØTH, L., FRISVAD, J. R., ERLEBEN, K., and SPORRING, J. 2007. Photon differentials. In GRAPHITE, ACM, New York.
- SILVERMAN, B. 1986. Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. Chapman and Hall, New York.
- SUN, X., ZHOU, K., LIN, S., and GUO, B. 2010. Line space gathering for single scattering in large scenes. ACM Transactions on Graphics.
- SZIRMAY-KALOS, L., TÓTH, B., and MAGDICS, M. 2011. Free path sampling in high resolution inhomogeneous participating media. Computer Graphics Forum 30, 1, 85-97.
- VEACH, E., and GUIBAS, L. 1994. Bidirectional estimators for light transport. In Fifth Eurographics Workshop on Rendering, 147-162.
- VEACH, E., and GUIBAS, L. J. 1997. Metropolis light transport. In Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series, 65-76.
- WALTER, B., ZHAO, S., HOLZSCHUCH, N., and BALA, K. 2009. Single scattering in refractive media with triangle mesh boundaries. ACM Transactions on Graphics 28, 3 (July), 92:1-92:8.
- WILLIAMS, L. 1978. Casting curved shadows on curved surfaces. In Computer Graphics (Proceedings of SIGGRAPH 78), 270-274.
- WOODCOCK, E., MURPHY, T., HEMMINGS, P., and T. C., L. 1965. Techniques used in the GEM code for Monte Carlo neutronics calculations in reactors and other systems of complex geometry. In App. of Computing Methods to Reactor Prob., Argonne National Laboratory.
- YUE, Y., IWASAKI, K., CHEN, B.-Y., DOBASHI, Y., and NISHITA, T. 2010. Unbiased, adaptive stochastic sampling for rendering inhomogeneous participating media. ACM Transactions on Graphics.
L(x, w⃗) = T_r(s) L_s(x_s, w⃗) + L_m(x, w⃗)  (Eqn. 1)
L_m(x, w⃗) = ∫_0^s σ_s(x_w) T_r(w) ∫_{Ω_{4π}} f(θ_w) L(x_w, w⃗′) dw⃗′ dw  (Eqn. 2)
Since each image i uses a different photon map, the errors εi can be interpreted as samples of independent random variables. Hence, the variance (noise) and expected value (bias) of average error are as shown in Equations 4A and 4B, respectively.
Var[ε̄] = (1/N²) Σ_{i=1}^{N} Var[ε_i]  (Eqn. 4A)
E[ε̄] = (1/N) Σ_{i=1}^{N} E[ε_i]  (Eqn. 4B)
c = ∫∫ W(x, w⃗) L_m(x, w⃗) dx dw⃗  (Eqn. 6)
L_m(x, w⃗, r) = k_r(U)γ = L(w⃗) + ε(x, w⃗, r)  (Eqn. 8)
Var[ε(x, w⃗, r)] = (Var[γ] + E[γ]²) p_{U_w⃗}(0) C_1 / r  (Eqn. 9)
E[ε(x, w⃗, r)] = rE[γ]C_2  (Eqn. 10)
Using Many Beams
This includes the expected behavior that variance decreases linearly with the number of emitted photon beams M.
E[ε(x, w⃗, r_1, …, r_M)] = r_A E[γ]C_2  (Eqn. 12)
Var[ε_{i+1}] / Var[ε_i] = (i+1)/(i+α)  (Eqn. 13)
Radius Reduction Sequence
R_{i+1} / R_i = Var[ε_i] / Var[ε_{i+1}] = (i+α)/(i+1)  (Eqn. 16)
Expected Value of Average Error
E[ε_i] = E[ε_1] R_i  (Eqn. 18)
Empirical Validation
Var[ε(x, w⃗, r)] = Var[k_r(U)γ − L(w⃗)]
= Var[k_r(U)](Var[γ] + E[γ]²) + Var[γ]E[k_r(U)]²  (Eqn. A1)
Var[k_r(U)] = ∫ k_r(ξ)² p_{U_w⃗}(ξ) dξ − [∫ k_r(ξ) p_{U_w⃗}(ξ) dξ]²  (Eqn. A2)
E[ε(x, w⃗, r)] = E[k_r(U)γ − L(w⃗)]
= E[γ]E[k_r(U)] − L(w⃗)  (Eqn. B1)
E[k_r(U)] = (1/r) ∫ k(ξ/r) p_{U_w⃗}(ξ) dξ
= (1/r) ∫ k(ξ/r)(p_{U_w⃗}(0) + ξ∇p_{U_w⃗}(0) + O(ξ²)) dξ  (Eqn. B2)
L(w⃗) = E[γ]E[δ(U)] = E[γ] p_{U_w⃗}(0)  (Eqn. B4)
E[ε(x, w⃗, r)] = E[γ](p_{U_w⃗}(0) + rC_2) − E[γ] p_{U_w⃗}(0) = rE[γ]C_2  (Eqn. B5)
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/164,358 US9245377B1 (en) | 2011-09-16 | 2014-01-27 | Image processing using progressive generation of intermediate images using photon beams of varying parameters |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/235,299 US8638331B1 (en) | 2011-09-16 | 2011-09-16 | Image processing using iterative generation of intermediate images using photon beams of varying parameters |
US14/164,358 US9245377B1 (en) | 2011-09-16 | 2014-01-27 | Image processing using progressive generation of intermediate images using photon beams of varying parameters |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/235,299 Continuation US8638331B1 (en) | 2011-06-10 | 2011-09-16 | Image processing using iterative generation of intermediate images using photon beams of varying parameters |
Publications (1)
Publication Number | Publication Date |
---|---|
US9245377B1 true US9245377B1 (en) | 2016-01-26 |
Family
ID=49957951
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/235,299 Active 2032-01-20 US8638331B1 (en) | 2011-06-10 | 2011-09-16 | Image processing using iterative generation of intermediate images using photon beams of varying parameters |
US14/164,358 Expired - Fee Related US9245377B1 (en) | 2011-09-16 | 2014-01-27 | Image processing using progressive generation of intermediate images using photon beams of varying parameters |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/235,299 Active 2032-01-20 US8638331B1 (en) | 2011-06-10 | 2011-09-16 | Image processing using iterative generation of intermediate images using photon beams of varying parameters |
Country Status (1)
Country | Link |
---|---|
US (2) | US8638331B1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150035831A1 (en) * | 2013-08-02 | 2015-02-05 | Disney Enterprises, Inc. | Methods and systems of joint path importance sampling |
US20160042553A1 (en) * | 2014-08-07 | 2016-02-11 | Pixar | Generating a Volumetric Projection for an Object |
US20220051786A1 (en) * | 2017-08-31 | 2022-02-17 | Gmeditec Co., Ltd. | Medical image processing apparatus and medical image processing method which are for medical navigation device |
US11494966B2 (en) * | 2020-01-07 | 2022-11-08 | Disney Enterprises, Inc. | Interactive editing of virtual three-dimensional scenes |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130100135A1 (en) * | 2010-07-01 | 2013-04-25 | Thomson Licensing | Method of estimating diffusion of light |
US9070208B2 (en) | 2011-05-27 | 2015-06-30 | Lucasfilm Entertainment Company Ltd. | Accelerated subsurface scattering determination for rendering 3D objects |
US8638331B1 (en) * | 2011-09-16 | 2014-01-28 | Disney Enterprises, Inc. | Image processing using iterative generation of intermediate images using photon beams of varying parameters |
US9013484B1 (en) * | 2012-06-01 | 2015-04-21 | Disney Enterprises, Inc. | Progressive expectation-maximization for hierarchical volumetric photon mapping |
GB2513698B (en) * | 2013-03-15 | 2017-01-11 | Imagination Tech Ltd | Rendering with point sampling and pre-computed light transport information |
US9953457B2 (en) * | 2013-04-22 | 2018-04-24 | Nvidia Corporation | System, method, and computer program product for performing path space filtering |
US10198856B2 (en) * | 2013-11-11 | 2019-02-05 | Oxide Interactive, LLC | Method and system of anti-aliasing shading decoupled from rasterization |
US10198788B2 (en) * | 2013-11-11 | 2019-02-05 | Oxide Interactive Llc | Method and system of temporally asynchronous shading decoupled from rasterization |
US9607426B1 (en) | 2013-12-20 | 2017-03-28 | Imagination Technologies Limited | Asynchronous and concurrent ray tracing and rasterization rendering processes |
US9697640B2 (en) | 2014-04-21 | 2017-07-04 | Qualcomm Incorporated | Start node determination for tree traversal in ray tracing applications |
US10235338B2 (en) * | 2014-09-04 | 2019-03-19 | Nvidia Corporation | Short stack traversal of tree data structures |
JP6393153B2 (en) * | 2014-10-31 | 2018-09-19 | 株式会社スクウェア・エニックス | Program, recording medium, luminance calculation device, and luminance calculation method |
EP3057067B1 (en) * | 2015-02-16 | 2017-08-23 | Thomson Licensing | Device and method for estimating a glossy part of radiation |
CN105118083B (en) * | 2015-08-11 | 2018-03-16 | 浙江大学 | A kind of Photon Mapping method for drafting of unbiased |
US9818221B2 (en) | 2016-02-25 | 2017-11-14 | Qualcomm Incorporated | Start node determination for tree traversal for shadow rays in graphics processing |
US9905054B2 (en) * | 2016-06-09 | 2018-02-27 | Adobe Systems Incorporated | Controlling patch usage in image synthesis |
US10269172B2 (en) * | 2016-10-24 | 2019-04-23 | Disney Enterprises, Inc. | Computationally efficient volume rendering in computer-generated graphics |
US10943390B2 (en) * | 2017-11-20 | 2021-03-09 | Fovia, Inc. | Gradient modulated shadow mapping |
CN108961372B (en) * | 2018-03-27 | 2022-10-14 | 北京大学 | Progressive photon mapping method based on statistical model test |
US11010963B2 (en) * | 2018-04-16 | 2021-05-18 | Nvidia Corporation | Realism of scenes involving water surfaces during rendering |
CN109509248B (en) * | 2018-09-28 | 2023-07-18 | 北京大学 | Photon mapping rendering method and system based on neural network |
US11436783B2 (en) | 2019-10-16 | 2022-09-06 | Oxide Interactive, Inc. | Method and system of decoupled object space shading |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8064726B1 (en) | 2007-03-08 | 2011-11-22 | Nvidia Corporation | Apparatus and method for approximating a convolution function utilizing a sum of gaussian functions |
US8638331B1 (en) * | 2011-09-16 | 2014-01-28 | Disney Enterprises, Inc. | Image processing using iterative generation of intermediate images using photon beams of varying parameters |
- 2011-09-16: US application US13/235,299, patent US8638331B1, status Active
- 2014-01-27: US application US14/164,358, patent US9245377B1, status Expired - Fee Related
Non-Patent Citations (36)
Title |
---|
Engelhardt, T., Novak, J., and Dachsbachfr, C., "Instant multiple scattering for interactive rendering of heterogeneous participating media." Technical Report. Karlsruhe Institute of Technology, Dec. 8, 2010. 9 pages. Retrieved from https://cg.ibds.kit.edu/downloads/InstantMultipleScattering.sub.--TechRepo- rt08Dec2010.pdf on Aug. 20, 2013. |
Hachisuka, T., and Jensen, H. W., "Stochastic progressive photon mapping.". ACM Transactions on Graphics (TOG), Dec. 2009. 8 pages. Retrieved from https://cs.au.dk/.about.toshiya/sppm.pdf on Aug. 20, 2013. |
Hachisuka, T., Ogaki, S., and Jensen, H. W., "Progressive photon mapping." ACM Transactions on Graphics (TOG). vol. 27. No. 5. ACM, 2008. 7 pages. Retrieved from https://cs.au.dk/.about.toshiya/ppm.pdf on Aug. 20, 2013. |
Hachisuka, Toshiya, Shinji Ogaki, and Henrik Wann Jensen. "Progressive photon mapping." ACM Transactions on Graphics (TOG). vol. 27. No. 5. ACM, 2008. * |
Hachisuka, Toshiya, Wojciech Jarosz, and Henrik Wann Jensen. "Stochastic Progressive Photon Mapping" ACM Transactions on Graphics (TOG) (Dec. 2009). * |
Havran, V,, Bittner, J., Herzog, R., and Seidel, H.-P., "Ray maps for global illumination." Rendering Techniques, 2005. 14 pages. Retrieved from https://www.mpi-inf.mpg.de/.about.rherzog/Papers/raymapsEGSR05.pdf on Aug. 20, 2013. |
Herzog, R., Havran, V,, Kinuwaki, S., Myszkowski, K., and Seidel, H.-P., "Global illumination using photon ray splatting." Computer Graphics Forum vol. 26, No. 3, Sep. 2007. (Blackwell Publishing Ltd.), 11 pages. Retrieved from https://www.mpi-inf.mpg.de/.about.rherzog/Papers/herzog07EG.pdf on Aug. 20, 2013. |
Herzog, Robert, et al. "Global illumination using photon ray splatting." Computer Graphics Forum. vol. 26. No. 3. Blackwell Publishing Ltd, 2007. * |
Hu, W., Dong, Z., Ihrke, I., Grosch, T., Yuan, G., and Seidel, H.-P., "Interactive volume caustics in single-scattering media." 13D, ACM .2010. 9 pages. Retrieved from https://www.graphics.cornell.edu/.about.zd/download/I3DFinal/I3D2020.sub.-- -Sub.pdf on Aug. 20, 2013. |
Jarosz, W., Nowrouzezahrai, D., Sadeghi, I., and Jensen, H. W., "A comprehensive theory of volumetric radiance estimation using photon points and beams." ACM Transactions on Graphics, 2011. 19 pages. Retrieved from https://zurich.disneyresearch.com/.about.wjarosz/publications/jaroz11compr- ehensive.pdf on Aug. 20, 2013. |
Jarosz, W., Zwicker, M., and Jensen, H. W., "The beam radiance estimate for volumetric photon mapping." Computer Graphics Forum vol. 27, No. 2, Apr. 2008. 10 pages. Retrieved from https://zurich.disneyresearch.com/.about.wjarosz/publications/jarosz08beam- .pdf on Aug. 20, 2013. |
Jarosz, Wojciech, Derek Nowrouzezahrai, Robert Thomas, Peter-Pike Sloan, and Matthias Zwicker, "Progressive Photon Beams", ACM SIGGRAPH Asia 2011, vol. 30, Issue 6, Article No. 181, ACM (Dec. 2011), 11 pages. |
Jarosz, Wojciech, Matthias Zwicker, and Henrik Wann Jensen. "The beam radiance estimate for volumetric photon mapping." ACM SIGGRAPH 2008 classes. ACM, 2008. * |
Jensen, H. W., and Christensen, P. H., "Efficient simulation of light transport in scenes with participating media using photon maps." Proceedings of SIGGRAPH, 1998. 10 pages. Retrieved from https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.118.6575&rep=rep1&type=pdf on Aug. 20, 2013. |
Kajiya, J. T., "The rendering equation." Computer Graphics, Proceedings of SIGGRAPH 86, 1986. 8 pages. Retrieved from https://x86.cs.duke.edu/courses/cps124/compsci344/.../16...p143-kajiya.pdf on Aug. 20, 2013. |
Knaus, C., and Zwicker, M., "Progressive photon mapping: A probabilistic approach." ACM Transactions on Graphics, 2011. 14 pages. Retrieved from https://www.cs.jhu.edu/~misha/ReadingSeminar/Papers/Knaus11.pdf on Aug. 20, 2013. |
Kruger, J., Burger, K., and Westermann, R., "Interactive screen-space accurate photon tracing on GPUs." In Rendering Techniques. 2006. 12 pages. Retrieved from https://wwwcg.in.tum.de/research/research/publications/2006/interactive-screen-space-accurate-photon-tracing-on-gpus.html on Aug. 20, 2013. |
LaFortune, E. P., and Willems, Y. D., "Bi-directional path tracing." Proceedings of Compugraphics, vol. 93. 1993. 8 pages. |
LaFortune, E. P., and Willems, Y. D., "Rendering participating media with bidirectional path tracing." EG Rendering Workshop. 1996. 11 pages. Retrieved from https://luthuli.cs.uiuc.edu/~daf/courses/Rendering/Papers/lafortune96rendering.pdf on Aug. 20, 2013. |
Lastra, M., Urena, C., Revelles, J., and Montes, R. "A particle-path based method for Monte-Carlo density estimation." EG Workshop on Rendering. 2002. 8 pages. Retrieved from https://lsi.ugr.es/~curena/inves/egrw02/lastra-egrw02.ps.gz on Aug. 20, 2013. |
Liktor, G., and Dachsbacher, C., "Real-time volume caustics with adaptive beam tracing." Symposium on Interactive 3D Graphics and Games. ACM Press. New York, NY. 2011. 8 pages. Retrieved from https://cg.ibds.kit.edu/downloads/VolumeCaustics_Preprint.pdf on Aug. 20, 2013. |
Lokovic, Tom, and Eric Veach. "Deep shadow maps." Proceedings of the 27th annual conference on Computer graphics and interactive techniques. ACM Press/Addison-Wesley Publishing Co., 2000. * |
McGuire, M., and Luebke, D., "Hardware-accelerated global illumination by image space photon mapping." HPG, ACM. 2009. 12 pages. Retrieved from https://graphics.cs.williams.edu/papers/PhotonHPG09/ISPM-HPG09.pdf on Aug. 20, 2013. |
Parker, S. G., Bigler, J., Dietrich, A., Friedrich, H., Hoberock, J., Luebke, D., McAllister, D., McGuire, M., Morley, K., Robison, A., and Stich, M., "Optix: A general purpose ray tracing engine." ACM Transactions on Graphics. Jul. 2010. Retrieved from https://graphics.cs.williams.edu/papers/OptiXSIGGRAPH10/Parker1.0OptiX.pdf on Aug. 20, 2013. |
Pauly, M., Kollig, T., and Keller, A., "Metropolis light transport for participating media." Rendering Techniques. 2000. 13 pages. Retrieved from https://www.cse.ohio-state.edu/~parent/classes/782/Papers/PhotonMap/metropolis.pdf on Aug. 20, 2013. |
Perlin, K., "Noise hardware." Real-time Shading, ACM SIGGRAPH Course Notes. 2001. 26 pages. Retrieved from https://reality.sgiweb.org/olano/s2002c36/ch02.pdf on Aug. 20, 2013. |
Raab, M., Seibert, D., and Keller, A., "Unbiased global illumination with participating media." Monte Carlo and Quasi-Monte Carlo Methods. 2006. 16 pages. Retrieved from https://www.uni-ulm.de/fileadmin/website_uni_ulm/iui.inst.100/institut/Papers/ugiwpm.pdf on Aug. 20, 2013. |
Schjoth, L., Frisvad, J. R., Erleben, K., and Sporring, J., "Photon differentials." Graphite, ACM, New York. 2007. 8 pages. Retrieved from https://orbit.dtu.dk/fedora/objects/orbit:63065/datastreams/file_5504929/content on Aug. 20, 2013. |
Sun, X., Zhou, K., Lin, S., and Guo, B., "Line space gathering for single scattering in large scenes." ACM Transactions on Graphics. 2010. 8 pages. Retrieved from https://www.kunzhou.net/2010/LSG.pdf on Aug. 20, 2013. |
Szirmay-Kalos, L., Toth, B., and Magdics, M., "Free path sampling in high resolution inhomogeneous participating media." Computer Graphics Forum. 2011. 12 pages. Retrieved from https://sirkan.iit.bme.hu/~szirmay/woodcockperlin7.pdf on Aug. 20, 2013. |
Veach, E., and Guibas, L. J., "Metropolis light transport." Proceedings of SIGGRAPH 97, Computer Graphics Proceedings, Annual Conference Series. 1997. 13 pages. Retrieved from https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.88.944&rep=rep1&type on Aug. 20, 2013. |
Veach, E., and Guibas, L., "Bidirectional estimators for light transport." Fifth Eurographics Workshop on Rendering. 1994. 20 pages. Retrieved from https://www0.cs.ucl.ac.uk/research/vr/Projects/VLF/vlfpapers/multi-pass_hybrid/Veach_E_Bi-directional_Estimators_for_Light_Transport.pdf on Aug. 20, 2013. |
Walter, B., Zhao, S., Holzschuch, N., and Bala, K., "Single scattering in refractive media with triangle mesh boundaries." ACM Transactions on Graphics 28, 3, 92:1-92:8. Jul. 2009. 7 pages. Retrieved from https://shuangz.com/projects/amber-sg09/amber.pdf on Aug. 20, 2013. |
Williams, L., "Casting curved shadows on curved surfaces." Computer Graphics, Proceedings of SIGGRAPH 78. 1978. 5 pages. Retrieved from https://www.cs.berkeley.edu/~ravir/6160-fall04/papers/p270-williams.pdf on Aug. 20, 2013. |
Yue, Y., Iwasaki, K., Chen, B.-Y., Dobashi, Y., and Nishita, T., "Unbiased, adaptive stochastic sampling for rendering inhomogeneous participating media." ACM Transactions on Graphics. 2010. 7 pages. Retrieved from https://nishitalab.org/user/egaku/sigasia10/177.pdf on Aug. 20, 2013. |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150035831A1 (en) * | 2013-08-02 | 2015-02-05 | Disney Enterprises, Inc. | Methods and systems of joint path importance sampling |
US9665974B2 (en) * | 2013-08-02 | 2017-05-30 | Disney Enterprises, Inc. | Methods and systems of joint path importance sampling |
US20160042553A1 (en) * | 2014-08-07 | 2016-02-11 | Pixar | Generating a Volumetric Projection for an Object |
US10169909B2 (en) * | 2014-08-07 | 2019-01-01 | Pixar | Generating a volumetric projection for an object |
US20220051786A1 (en) * | 2017-08-31 | 2022-02-17 | Gmeditec Co., Ltd. | Medical image processing apparatus and medical image processing method which are for medical navigation device |
US11676706B2 (en) * | 2017-08-31 | 2023-06-13 | Gmeditec Co., Ltd. | Medical image processing apparatus and medical image processing method which are for medical navigation device |
US11494966B2 (en) * | 2020-01-07 | 2022-11-08 | Disney Enterprises, Inc. | Interactive editing of virtual three-dimensional scenes |
US11847731B2 (en) | 2020-01-07 | 2023-12-19 | Disney Enterprises, Inc. | Interactive editing of virtual three-dimensional scenes |
Also Published As
Publication number | Publication date |
---|---|
US8638331B1 (en) | 2014-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9245377B1 (en) | Image processing using progressive generation of intermediate images using photon beams of varying parameters | |
US10290142B2 (en) | Water surface rendering in virtual environment | |
Jarosz et al. | A comprehensive theory of volumetric radiance estimation using photon points and beams | |
Zeltner et al. | Monte Carlo estimators for differential light transport | |
US8493383B1 (en) | Adaptive depth of field sampling | |
US7952583B2 (en) | Quasi-monte carlo light transport simulation by efficient ray tracing | |
Heckbert | Simulating global illumination using adaptive meshing | |
Keller | Quasi-Monte Carlo image synthesis in a nutshell | |
US7940268B2 (en) | Real-time rendering of light-scattering media | |
US9013484B1 (en) | Progressive expectation-maximization for hierarchical volumetric photon mapping | |
US9208610B2 (en) | Alternate scene representations for optimizing rendering of computer graphics | |
US20210142555A1 (en) | Rendering images using modified multiple importance sampling | |
US10249077B2 (en) | Rendering the global illumination of a 3D scene | |
Belcour et al. | A local frequency analysis of light scattering and absorption | |
Dietrich et al. | Massive-model rendering techniques: a tutorial | |
US12067667B2 (en) | Using directional radiance for interactions in path tracing | |
Frolov et al. | Light transport in realistic rendering: state-of-the-art simulation methods | |
JP5718934B2 (en) | Method for estimating light scattering | |
Papaioannou | Real-time diffuse global illumination using radiance hints | |
Patel et al. | Instant convolution shadows for volumetric detail mapping | |
Pan et al. | Transient instant radiosity for efficient time-resolved global illumination | |
Wang et al. | Rendering transparent objects with caustics using real-time ray tracing | |
Reza | Efficient Sample Reusage in Path Space for Real-Time Light Transport | |
Belcour | A Frequency Analysis of Light Transport: from Theory to Implementation | |
Apers et al. | Interactive Light Map and Irradiance Volume Preview in Frostbite |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: THE WALT DISNEY COMPANY (SWITZERLAND) GMBH, SWITZE
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAROSZ, WOJCIECH;NOWROUZEZAHRAI, DEREK;THOMAS, ROBERT;AND OTHERS;SIGNING DATES FROM 20110919 TO 20111101;REEL/FRAME:032119/0469

Owner name: DISNEY ENTERPRISES, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THE WALT DISNEY COMPANY (SWITZERLAND) GMBH;REEL/FRAME:032119/0678
Effective date: 20120101 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20240126 |