WO2005073924A1 - Apparatus and method for determining intersections - Google Patents

Apparatus and method for determining intersections

Info

Publication number
WO2005073924A1
Authority
WO
WIPO (PCT)
Application number
PCT/GB2005/000306
Other languages
French (fr)
Inventor
Melvyn Slater
Original Assignee
University College London
Application filed by University College London
Publication of WO2005073924A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/10 - Geometric effects
    • G06T 15/40 - Hidden part removal
    • G06T 15/06 - Ray-tracing

Definitions

  • Path tracing is a stochastic method that fires primary rays from the viewpoint through a pixel and then probabilistically follows the path of each ray through the scene, accumulating energy at each bounce from surface to surface. The ray path is terminated probabilistically.
  • Each pixel may have more than forty such random ray paths, and the final "colour" of each pixel is determined as the average over those ray paths.
  • the image renderer may allow the insertion or the deletion of objects into or from the scene. In this case, if an object is to be inserted, then there is no need to remove the object identifier from the intersection data as it will not exist. The image renderer only has to add the object identifier for the new object into the intersection data. Similarly, where an object is to be deleted, its object identifier is simply removed from the intersection data.

Abstract

A system is described for efficiently determining intersections between one or more pathways and one or more objects in a 3D scene. The 3D scene is pre-processed to generate intersection data identifying intersections between objects within the 3D scene and a plurality of volumes arranged in a pre-defined structure. The intersection data is subsequently used by a pathway processing apparatus to determine an intersection between a pathway projected from a start point and the objects within the 3D scene. The processing to determine the intersection is advantageously reduced by using the generated intersection data to identify a subset of candidate objects which the projected pathway might intersect.

Description

APPARATUS AND METHOD FOR DETERMINING INTERSECTIONS
The present invention is concerned with a system for determining intersections between one or more pathways and one or more objects in a 3D scene (or environment). The present invention has applications, for example, in rendering objects in a 3D computer model scene, in modelling heat or sound in a scene, and in facilitating navigation of an autonomous robot in a 3D scene.
Ray tracing is a common technique used to render realistic 2D images of a 3D computer model scene. In ray tracing, a ray is projected from a defined viewpoint into the 3D scene through each pixel in an image to be rendered, in order to determine a corresponding value for each pixel in the image. An object in the 3D scene which the projected ray intersects is identified, and the pixel value is calculated from the identified object's properties.
Additionally, in order to calculate shadows, reflections and refractions of objects in the scene, the projected ray is reflected/refracted from the object (depending on the object's properties) to define one or more secondary rays, and the intersections of the secondary rays with other objects in the scene are identified. The resulting image pixel value is then determined taking into account the properties of the object intersected by the primary ray and the properties of the objects intersected by the secondary rays. Such a technique is described in "An Improved Illumination Model for Shaded Display" by T. Whitted in Communications of the ACM 1980, 23(6), pages 343 to 349. One problem with the ray tracing technique is that intersection calculations must be performed between every object in the 3D scene and every ray (and secondary ray) that is projected into the 3D scene, in order to determine the objects that will contribute to the image pixel value. This results in a large number of intersection calculations that must be carried out, which significantly increases the computational requirements and the amount of time required to render an image. Additionally, because such a technique is computationally intensive, it is generally unsuitable for real-time applications.
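To make this cost concrete, the following is a minimal brute-force sketch in Python; the Sphere class, the pin-hole camera and every name in it are illustrative assumptions rather than anything taken from the patent. Every primary ray is tested against every object, so the work grows with the product of the pixel count and the object count, which is the scaling that the pre-processing described later is designed to avoid.

```python
import math

def normalise(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

class Sphere:
    """Hypothetical scene object with an analytic ray intersection test."""
    def __init__(self, centre, radius, colour):
        self.centre, self.radius, self.colour = centre, radius, colour

    def intersect(self, origin, direction):
        """Return the smallest positive ray parameter t, or None on a miss
        (direction is assumed to be unit length)."""
        oc = [o - c for o, c in zip(origin, self.centre)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0.0 else None

def render_brute_force(objects, width, height, viewpoint):
    """Test every primary ray against every object: the inner loop performs
    width * height * len(objects) intersection tests."""
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            # Pin-hole camera: the pixel centre lies on an image plane one
            # unit in front of the viewpoint along +z.
            pixel = (viewpoint[0] + (px + 0.5) / width - 0.5,
                     viewpoint[1] + (py + 0.5) / height - 0.5,
                     viewpoint[2] + 1.0)
            direction = normalise([p - v for p, v in zip(pixel, viewpoint)])
            nearest_t, colour = float("inf"), (0, 0, 0)   # background colour
            for obj in objects:                           # the costly loop
                t = obj.intersect(viewpoint, direction)
                if t is not None and t < nearest_t:
                    nearest_t, colour = t, obj.colour
            row.append(colour)
        image.append(row)
    return image
```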
A number of techniques have been developed to try to reduce the time required to render an image using ray tracing. For example, the papers "Automatic Creation of Object Hierarchy for Ray Tracing" by Goldsmith, J. and Salmon, J. (1987), IEEE CG&A, 7(5), pages 14-20, and "Ray Tracing Complex Scenes" by Kay, T.L. and Kajiya, J.T. (1986), Computer Graphics (SIGGRAPH), 20(4), pages 269-278, both describe a technique of creating hierarchies of bounding boxes around objects in the scene, such that an object can be disregarded efficiently if the projected ray does not intersect the bounding box.
As another example, the paper "Analysis of an Algorithm for Fast Ray Tracing Using Uniform Space Subdivision" by Cleary, J.G. and Wyvill, G. (1988), The Visual Computer, 4, pages 65-83, describes a technique of subdividing the 3D space into a plurality of equally sized sub-volumes or voxels, where each voxel has an associated list of the objects which lie at least partly within the voxel. During the rendering process, each voxel that the projected ray passes through is considered and only those objects on the list are checked for intersections. A similar technique is described in the paper "Space Subdivision for Fast Ray Tracing" by Glassner, A.S. (1984), IEEE Computer Graphics and Applications, 4(10), pages 15-22, where the space is subdivided into an octree data structure comprising compartments of space, each with an associated list of the objects lying at least partly within it, so that the compartments which the projected ray passes through are considered in order to identify a set of candidate objects for intersection.
However, these techniques still require significant searching along each of the projected rays in order to determine intersections between objects in the 3D scene and the rays. The conventional way of identifying intersections is described in the book "Computer Graphics" by Foley, van Dam, Feiner & Hughes, pages 702 to 704. The present invention has been made with such problems in mind.
According to one aspect , the present invention provides a computer system comprising a pre-processor operable to pre-process 3D scene data defining a plurality of objects within a 3D scene and parallel subfield data defining a plurality of subfields, each having a respective direction and a plurality of volumes extending parallel to each other in the direction, to generate, for each volume, intersection data identifying the objects within the 3D scene that intersect with the volume; and a pathway processor operable to process pathway data defining a pathway which extends from a start point through the 3D scene to determine which of said subfields has a similar direction to the direction of the pathway and to process the pathway data for the pathway and the 3D scene data for candidate objects identified from intersection data for a volume of the determined subfield through which, or adjacent which, the pathway extends, to determine which candidate object is intersected by the pathway and is closest to the start point of the pathway.
The pre-processor and the pathway processor may be provided separately from each other wherein the preprocessor is operable to generate and output intersection data and wherein the pathway processor is operable to receive the intersection data and to determine which object in a 3D scene is intersected by a pathway and is closest to a start point of the pathway, using the received intersection data.
According to the above aspects, the present invention enables efficient determination of intersections between one or more pathways and one or more objects in a 3D scene (or environment).
The above system has various uses. For example, it can be used in an image rendering system to efficiently render 2D images of the 3D scene, because the objects intersected by each of the pathways can be identified quickly and efficiently.
As another example, the above system can be used in an ambient occlusion apparatus to efficiently calculate an approximate diffuse reflection value, because the processing to identify the objects intersected by each of a large number of pathways can be performed quickly and efficiently. As yet another example, the above system can be used in an acoustic modelling system to efficiently model the propagation of sound energy within the 3D scene, because the objects intersected by pathways extending from the sound source can be identified quickly and efficiently using the present invention.
As yet another example, the above system can be used in a robot path finding system to efficiently plan a path through a 3D environment while avoiding the objects within the 3D scene, because the objects intersected by each of a sequence of pathways extending from the robot's current location can be identified quickly and efficiently.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 is a schematic block diagram illustrating the main components of an image rendering system embodying the present invention;
Figure 2 schematically illustrates the generation of a 2D image of a 3D scene from a defined viewpoint using a ray tracing technique;
Figure 3 is a schematic block diagram showing the functional components of a rendering pre-processor forming part of the system shown in Figure 1, which allows the generation of intersection data defining intersections between objects in a 3D scene and tiles of a parallel sub-field;
Figure 4, which comprises Figures 4a to 4c, illustrates a parallel sub-field as defined in an embodiment of the present invention;
Figure 5 is a flow diagram illustrating the processing steps employed by the rendering pre-processor shown in Figure 3, to generate the intersection data;
Figure 6 is a flow diagram illustrating the processing steps performed when inserting an object into a parallel subfield during the processing shown in Figure 5;
Figure 7, which comprises Figures 7a and 7b, illustrates the way in which the intersection data is generated by the rendering pre-processor shown in Figure 3;
Figure 8 illustrates the intersection data which is generated for the example objects shown in Figure 7;
Figure 9 is a schematic block diagram illustrating the main software modules of the central controller shown in Figure 3;
Figure 10 is a schematic block diagram showing the functional components of an image renderer which allows the generation of image data for a particular viewpoint using intersection data received from the rendering pre-processor shown in Figure 3;
Figure 11 is a flow diagram illustrating the processing steps employed by the image renderer shown in Figure 10 to generate image data for a particular viewpoint;
Figure 12 is a flow diagram illustrating the processing steps performed when generating 2D image data using 3D scene data and the intersection data obtained from the rendering pre-processor shown in Figure 3;
Figure 13a illustrates the way in which a projected ray is projected onto one candidate tile of a PSF, as performed by the image renderer shown in Figure 10;
Figure 13b illustrates the way in which a projected ray is projected onto a plurality of candidate tiles of a PSF, as performed by the image renderer shown in Figure 10;
Figure 14 is a schematic block diagram illustrating the main software modules of the central controller shown in Figure 10;
Figure 15 is a flow diagram illustrating the processing steps employed by the image renderer shown in Figure 10 to generate 2D image data for a particular viewpoint of the 3D scene according to another embodiment of the present invention; and
Figure 16 is a flow diagram illustrating the processing steps performed to remove an object from the intersection data, to allow dynamic movement of objects in the 3D scene.
FIRST EMBODIMENT
Overview
Figure 1 schematically illustrates the main components of an image rendering system embodying the present invention. As shown, the system includes a rendering pre-processor 1 which pre-processes received 3D scene data 3 which defines the 3D scene to be rendered and an image renderer 5 which generates a 2D image of the 3D scene from a user defined viewpoint 7. In this embodiment, the rendering pre-processor 1 identifies intersections between a plurality of predefined rays and objects within the 3D scene. In this embodiment, the rays are arranged in a plurality of sub-sets, with the rays of each sub-set being parallel to each other and having a different orientation to the rays of other sub-sets. The sub-sets of parallel rays will be referred to hereinafter as parallel sub-fields (PSFs) and these are defined by the PSF data 9. The predefined rays defined by the PSFs attempt to provide a finite approximation to all possible rays that may be projected into the 3D scene by the image renderer 5 (and subsequent secondary rays as well) from a defined viewpoint.
In operation, the rendering pre-processor 1 identifies the intersections between the objects in the 3D scene and the predefined rays defined by the PSF data 9. The rendering pre-processor 1 then stores, in a storage device 11, intersection data 13 associated with each ray, that identifies all of the objects intersected by that ray.
After the intersection data 13 has been generated for a 3D scene, the image renderer 5 operates to process the 3D scene data 3 to generate 2D images of the 3D scene from the user defined viewpoint 7. In particular, in this embodiment, the image renderer 5 uses a ray tracing technique to project rays into the scene through the pixels of an image to be rendered from the user defined viewpoint 7. This is illustrated in Figure 2 , which schematically illustrates the viewpoint 7, the 2D image 15 and the ray 14 which is projected, through pixel 16 of the 2D image 15, into the scene 18. Figure 2 also illustrates the PSF 33 whose orientation is closest to that of the projected ray 14 and the PSF ray 20 which is closest to the projected ray 14. When projecting a ray 14 into the scene 18, instead of searching for objects 22 which may intersect with the projected ray 14, the image renderer 5 identifies the PSF ray 20 which is closest to the projected ray 14. The image renderer 5 then uses the intersection data associated with the identified PSF ray 20 to determine a candidate set of objects which the projected ray 14 is likely to intersect. This candidate set of objects is typically a small fraction of the full set of objects in the 3D scene. The image renderer 5 then uses any conventional ray-object traversal technique to identify the intersections 24 between the projected ray 14 and those candidate objects 22 and determines the corresponding pixel value in the image 15 based on the object properties at the intersection point 24 for the object 22 which is nearest the user defined viewpoint 7. Further, if shadows, reflections and refractions are to be considered, then the image renderer 5 projects the above described secondary ray or rays from the identified intersection point 24 of the nearest object to find the objects intersected by each secondary ray, again using the intersection data associated with the PSF ray that is closest to the secondary ray. The 2D image data 15 generated by the image renderer 5 is written into a frame buffer 17 for display on a display 19.
The inventors have found that operating the system in the above way allows the image renderer 5 to be able to generate 2D images of a 3D scene from a user defined viewpoint 7 substantially in real time. This is because when projecting rays 14 into the 3D scene, the image renderer 5 can identify the list of candidate objects 22 which the projected ray 14 may intersect, simply by identifying the PSF ray 20 which is closest (in orientation and position) to the projected ray 14. This is a constant time lookup operation which does not require the image renderer 5 to search along the projected ray 14 to identify objects 22 that it intersects.
As discussed above the processing of the 3D scene data 3 by the rendering pre-processor 1 is performed in advance of the image rendering performed by the image renderer 5. In the following more detailed discussion, it will be assumed that the rendering pre-processor 1 and the image renderer 5 form part of separate computer systems.
Rendering Pre-Processor
Figure 3 is a block diagram illustrating the main components of a computer system 21, such as a personal computer, which is programmed to function as the rendering pre-processor 1. The computer system 21 is programmed to operate in accordance with programming instructions input, for example, as data stored on a data storage medium 23 (such as an optical CDROM, semiconductor ROM or a magnetic recording medium) and/or as a signal 25 (for example an electrical or optical signal) input to the computer system 21, for example, from a remote database, by transmission over a communication network such as the Internet and/or entered by a user via a user input device 27, such as a keyboard, mouse etc.
As shown in Figure 3, the computer system 21 is operable to receive 3D scene data 3 and PSF data 9 either as data stored on a data storage medium 29 (such as an optical CD ROM, a semiconductor ROM etc.) and/or as a signal 31 (for example an electrical or optical signal) input to the computer system 21, for example, from a remote database, by transmission over a communication network such as the Internet and/or entered by a user via a user input device 27, such as a keyboard, mouse etc. As shown, the 3D scene data 3 and PSF data 9 is received by an input data interface 37 which stores the 3D scene data 3 in a 3D scene data store 47 and which stores the PSF data 9 in a PSF data store 43.
In operation, the central controller 39 operates in accordance with the programming instructions to carry out the functions of the rendering pre-processor 1, using working memory 41 and, where necessary, displaying messages to a user via a display controller 34 and a display 38.
Parallel Sub Field
As discussed above, the rendering pre-processor 1 identifies the intersections between the objects 22 within the scene 18 defined by the 3D scene data 3 and PSF rays 20 defined by the PSF data 9. Figure 4a schematically illustrates an example of a PSF 33 which has a particular orientation relative to the x-axis, the y-axis and the z-axis. The PSF 33 comprises a plurality of parallel rays 35 which all extend from a base 36 in the same direction and which are arranged in a grid to allow for easy referencing thereof. Since the PSFs 33 are intended to provide a finite approximation to all possible rays which may be projected into the 3D scene 18 by the image renderer 5, the PSF data 9 includes a plurality of PSFs oriented in different directions. In this embodiment the centre of each PSF 33 is defined as its origin and the PSFs 33 are overlaid so that their origins coincide, although this is not essential. Figure 4b illustrates a second PSF 33 rotated through an angle φ from the vertical about the y-axis so that the parallel rays 35 in the PSF 33 are orientated at an angle φ from the vertical. Figure 4c illustrates a third PSF 33 in which the PSF shown in Figure 4b is further rotated through an angle θ about the z-axis, so that the parallel rays 35 are orientated at an angle φ from the vertical and, when resolved into the x,y plane, are at an angle θ relative to the x-axis. The different PSFs 33 can therefore be identified by the different values of the angles φ and θ. A particular PSF ray 35 is therefore identified by its grid reference within the PSF 33 and the values of φ and θ for the PSF 33.
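A PSF of this kind can be represented very compactly. The sketch below is a Python illustration only; the class name, the field names and the use of two elementary rotations to map the PSF's ray direction back onto the vertical are assumptions made for the sketch, not the patent's data layout.

```python
import math
from dataclasses import dataclass

@dataclass
class ParallelSubField:
    """Minimal PSF record: the two orientation angles and an n x n tile grid."""
    phi: float     # rotation away from the vertical (about the y-axis)
    theta: float   # subsequent rotation about the z-axis
    n: int         # the base is divided into n * n tiles

    def direction(self):
        """Unit direction shared by every ray 35 of this PSF."""
        return (math.sin(self.phi) * math.cos(self.theta),
                math.sin(self.phi) * math.sin(self.theta),
                math.cos(self.phi))

    def rotation_to_vertical(self, point):
        """Rotate a scene point so that the PSF's rays become vertical
        (undo the theta rotation about z, then the phi rotation about y)."""
        x, y, z = point
        ct, st = math.cos(-self.theta), math.sin(-self.theta)
        x, y = ct * x - st * y, st * x + ct * y          # Rz(-theta)
        cp, sp = math.cos(-self.phi), math.sin(-self.phi)
        x, z = cp * x + sp * z, -sp * x + cp * z         # Ry(-phi)
        return (x, y, z)
```

Applying rotation_to_vertical to the PSF's own direction() returns (0, 0, 1), which is the sense in which it plays the role of the pre-stored rotation matrix mentioned later in the description.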
A detailed understanding of the way in which the PSFs are generated and defined is not essential to the present invention. A more detailed description of the generation of the PSFs 33 is given later in the section entitled "Generation of Parallel Sub-Fields". The reader is referred to this more detailed description for a better understanding of the PSFs 33.
In order to calculate the above described intersection data 13, the rendering pre-processor 1 effectively locates the 3D scene 18 defined by the 3D scene data 3 into the volume of space defined by the PSFs 33. The rendering pre-processor 1 then determines, for each ray 35 within each PSF 33, which objects 22 within the 3D scene 18 the ray 35 intersects.
As those skilled in the art will appreciate, a large amount of memory would be required to store data representative of the intersections between the objects 22 in the scene 18 and every ray 35 within every PSF 33. Therefore, in order to reduce the amount of memory required to store the intersection data 13, in this embodiment, the base 36 of each PSF is partitioned into a plurality of tiles (each containing a plurality of the rays 35) and the rendering pre-processor 1 stores intersection data 13 for each tile, identifying all objects in the scene 18 which will be intersected by one or more of the rays 35 of the tile.
The way in which the rendering pre-processor 1 determines the intersection data 13 will now be described in more detail with reference to Figure 5. Figure 5 is a flow chart illustrating the processing steps performed by a central controller 39 of the computer system 21, to calculate the intersection data 13 for each tile within each PSF 33. As shown, at step s1, the central controller 39 retrieves the PSF data (e.g. the φ and θ data which defines the PSF) for the first PSF 33 from the PSF data store 43. The processing then proceeds to step s3 where the central controller 39 inserts each object 22 (as defined by object definition data 45 stored in the 3D scene data store 47) into the current PSF 33 in order to determine intersections between the rays 35 of each tile and each object 22. The processing then proceeds to step s5 where the central controller 39 stores the intersection data 13 that is determined for each tile in the current PSF 33 in an intersection data store 49. The processing then proceeds to step s7 where the central controller 39 determines whether or not there are any more PSFs 33 to be processed. If there are, then the processing proceeds to step s9 where the central controller 39 retrieves the PSF data for the next PSF 33 from the PSF data store 43 and then the processing returns to step s3. Once all of the PSFs 33 have been processed in the above way, the processing ends.
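In the terms of the illustrative ParallelSubField class sketched earlier, the loop of Figure 5 amounts to the following outline; insert_objects_into_psf is a hypothetical helper sketched after the description of Figure 6 below.

```python
def build_intersection_data(psfs, objects):
    """For every PSF, compute a per-tile list of intersected object
    identifiers (steps s1 to s9 of Figure 5, in outline)."""
    intersection_data = {}
    for psf in psfs:                                          # steps s1, s7, s9
        tile_lists = insert_objects_into_psf(psf, objects)    # step s3
        intersection_data[(psf.phi, psf.theta)] = tile_lists  # step s5
    return intersection_data
```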
Insert objects into PSF
In step s3 discussed above, the central controller 39 inserts each object 22 into the current PSF 33. The way in which this is achieved will now be described with reference to Figures 6 to 8. Figure 6 is a flow chart which illustrates in more detail the processing steps performed in step s3. As shown, in step s11, the central controller 39 retrieves the object definition data 45 for the first object 22 within the 3D scene 18, from the 3D scene data store 47. The object definition data 45 defines the location and orientation of the object in the scene 18 and comprises, in this embodiment, a list of vertex positions in three-dimensional space, a list of polygons constructed from the listed vertices and surface properties such as light reflection characteristics as defined by a bi-directional reflection distribution function (BRDF).
Figure 7a schematically illustrates the orientation and location of two objects (labelled A and B) within a current PSF 33 whose orientation is defined by the angles (φi, θj). Figure 7a also schematically illustrates the tiles 51-1 to 51-9 of the PSF 33 and, as represented by the dots 35, some of the rays 35 within each tile 51. As those skilled in the art will appreciate, although only nine tiles are defined, in practice, each PSF 33 will be divided into a larger number of smaller tiles 51. The number of tiles that each PSF 33 is divided into is a trade-off between the required storage space and the time taken to generate a 2D image 15 using the image renderer 5. In particular, smaller tiles mean that there will be fewer objects per tile and therefore the final rendering will be faster, but smaller tiles require more storage space to store the intersection data.
In order to identify the tiles 51 that have rays 35 which intersect with the objects A and B, the objects A and B are projected down onto the base 36 of the PSF 33. Since it is easier to perform this projection in a vertical direction, each object is initially rotated, in step s13, in three dimensions to take into account the orientation of the current PSF 33 into which the object is being inserted. In order to perform this rotation, the central controller 39 uses a pre-stored rotation matrix that is associated with the current PSF 33 (which forms part of the PSF data retrieved in step s1 from the PSF data store 43). This rotation matrix is then applied to the retrieved object definition data 45 to effect the rotation. The processing then proceeds to step s15 where the rotated object is projected vertically onto the base of a nominal vertically orientated PSF, which is computationally easier than projecting the object onto the base 36 of the original PSF 33.
Figure 7b illustrates the objects A' and B' after they have been rotated and projected onto the base 36 of the nominal vertically orientated PSF 33. The central controller 39 then determines, in step s17, which tiles 51 the projected object overlaps. The processing then proceeds to step s19 where the central controller 39 adds an object identifier to the intersection list for the determined tiles 51 of the current PSF 33. For example, object A' intersects with tiles 51-2 to 51-6, 51-8 and 51-9 of the PSF orientated at (φi, θj). The identifier for object A is therefore added to the intersection data for these tiles.
The processing then proceeds to step s21 where the central controller 39 determines whether or not there are any more objects 22 in the 3D scene 18 to be processed. If there are, then the processing proceeds to step s23 where the central controller 39 retrieves the object definition data 45 for the next object from the 3D scene data store 47. The processing then returns to step s13 where the next object is rotated as before. After all of the objects 22 have been processed in the above way, the processing returns to step s5 shown in Figure 5.
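A sketch of this per-object insertion (steps s11 to s23 of Figure 6) is given below, continuing the earlier illustrative ParallelSubField class. It assumes each object is supplied as a list of vertices and, for simplicity, treats the axis-aligned bounding rectangle of the projected vertices as the set of overlapped tiles; the apparatus described above determines the overlapped tiles from the projected object itself, which this conservative sketch does not attempt.

```python
def insert_objects_into_psf(psf, objects, extent=1.0):
    """Return, for each tile index (i, j), the list of object identifiers
    whose vertical projection (after rotation into the nominal vertically
    orientated PSF) overlaps that tile.  'objects' maps identifiers to
    lists of (x, y, z) vertices."""
    tile_lists = {(i, j): [] for i in range(psf.n) for j in range(psf.n)}
    tile_size = 2.0 * extent / psf.n          # the base spans [-extent, extent]

    def tile_of(x, y):
        i = min(psf.n - 1, max(0, int((x + extent) / tile_size)))
        j = min(psf.n - 1, max(0, int((y + extent) / tile_size)))
        return i, j

    for obj_id, vertices in objects.items():
        # Step s13: rotate into the nominal vertically orientated PSF;
        # step s15: project vertically by dropping the z coordinate.
        projected = [psf.rotation_to_vertical(v)[:2] for v in vertices]
        # Step s17: conservative overlap test via the bounding rectangle.
        i_min, j_min = tile_of(min(p[0] for p in projected),
                               min(p[1] for p in projected))
        i_max, j_max = tile_of(max(p[0] for p in projected),
                               max(p[1] for p in projected))
        # Step s19: add the identifier to every overlapped tile's list.
        for i in range(i_min, i_max + 1):
            for j in range(j_min, j_max + 1):
                tile_lists[(i, j)].append(obj_id)
    return tile_lists
```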
After all of the PSFs 33 have been processed in the above way, the central controller 39 will have calculated, for each tile 51 in each PSF 33, a list of objects 22 which are intersected by one or more of the rays 35 in that tile 51. Figure 8 schematically illustrates the way in which this intersection data 13 is stored in this embodiment. As shown, the intersection data 13 includes an entry 53-1 to 53-N for each of the N PSFs 33. These entries 53 are indexed by the direction (φi, θj) of the corresponding PSF 33. Figure 8 also illustrates that for each PSF entry 53, the intersection data 13 includes an entry 55-1 to 55-9 for each tile 51 in the PSF 33. Within each tile entry 55, the intersection data 13 includes a list of the objects 22 that are intersected by one or more rays 35 that belong to that tile 51.
Figure 8 illustrates the intersection data 13 that will be generated for the PSF 33 having direction (φi, θj) from the two objects A and B shown in Figure 7a. As shown, in this example, none of the rays 35 in the first tile 51-1 intersect either object A or object B, and therefore, an empty list is stored in the entry 55-1. The tiles 51-2, 51-3, 51-6 and 51-9 each include at least one ray which intersects only with object A and not with object B. Therefore, the object intersection lists stored in the corresponding entries 55 only include the object identifier for object A. Likewise, the object intersection list for tile 51-7 will only include the identifier for object B. However, tiles 51-4, 51-5 and 51-8 each include one or more rays 35 which intersect with object A and one or more rays 35 which intersect with object B. Therefore, the entries 55 for these tiles will include object identifiers for both object A and object B.
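Written out in the form used by the sketches above, the intersection data of Figure 8 for the PSF oriented at (φi, θj) would look as follows; the tile keys 1 to 9 stand for tiles 51-1 to 51-9 and the strings "A" and "B" stand for object identifiers, all purely for illustration.

```python
intersection_data_example = {
    ("phi_i", "theta_j"): {
        1: [],                                        # no ray hits either object
        2: ["A"], 3: ["A"], 6: ["A"], 9: ["A"],       # object A only
        7: ["B"],                                     # object B only
        4: ["A", "B"], 5: ["A", "B"], 8: ["A", "B"],  # both objects
    },
}
```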
In this embodiment, the central controller 39 outputs the intersection data 13 that is generated for the received 3D scene via an output data interface 57. The intersection data 13 may be output either on a storage medium 59 or as a signal 61.
The operation of the central controller 39 has been described above. Figure 9 is a schematic block diagram illustrating the main software modules of the central controller 39 that are used to generate the above described intersection data 13. As shown, the modules include: i) a first receiver module 64 which receives the 3D scene data 3 defining the 3D scene, including data defining the objects and their locations within the 3D scene; ii) a second receiver module 66 which receives the PSF data 9 defining the above described PSFs 33; iii) a processor module 68 which processes the 3D scene data 3 and the PSF data 9 to determine intersections between each object within the 3D scene and the one or more rays 35 associated with the tiles of the PSFs 33; iv) an intersection data generator module 70 which receives the output from the processor module 68 and generates the intersection data for each tile, including data identifying the intersected objects but not including data identifying the location of the intersection within the 3D scene; and v) a data output module 72 which outputs the generated intersection data 13.
As discussed above, this intersection data 13 can be used by an appropriate image renderer 5 together with the 3D scene data 3 to generate 2D images of the 3D scene. The way in which this is achieved will be described below.
Image renderer
Figure 10 is a block diagram illustrating the main components of a computer system 63, such as a personal computer, which is programmed to function as the image renderer 5. As with computer system 21, the computer system 63 is programmed to operate in accordance with programming instructions input, for example, as data stored on a data storage medium 65 and/or as a signal 67 input to the computer system 63, for example, from a remote database, by transmission over a communication network such as the Internet and/or entered by a user via a user input device 69, such as a keyboard, mouse etc.
As shown in Figure 10, the computer system 63 includes an input data interface 71 which is operable to receive the intersection data generated by the rendering pre-processor 1, the 3D scene data 3 describing the scene 18 and the PSF data 9 describing the structure of the PSFs 33. This data may be input, for example, as data stored on a data storage medium 73 and/or as a signal 25 input to the computer system 63, for example, from a remote database, by transmission over a communication network such as the Internet or from the storage device 11 shown in Figure 1. The input data interface passes the received intersection data 13 to an intersection data store 77, the received 3D scene data 3 to a 3D scene data store 81 and the received PSF data 9 to a PSF data store 81.
In operation, a central controller 83 of the computer system 63 operates in accordance with the received programming instructions to generate rendered 2D images 15 of the 3D scene from the user defined viewpoint 7 using the 3D scene data 3, the PSF data 9 and the intersection data 13. The images 15 that are generated are stored in the frame buffer 17 for display to the user on a display 19 via a display controller 87.
As shown in Figure 10, the computer system 63 also includes an output data interface 89 for outputting the 2D images 15 generated by the central controller 83. These images 15 may be recorded and output on a storage medium 91 such as a CD ROM or the like or they may be output on a carrier signal 93 for transmission to a remote processing device over, for example, a data network such as the Internet.
The way in which the central controller 83 operates to render the 2D image 15 will now be described with reference to Figures 11 and 12. Figure 11 is a flow chart illustrating the processing steps performed by the central controller 83 of the computer system 63 to generate a 2D image 15 of the 3D scene 18 from the user defined viewpoint 7. As shown, in step s25, the central controller 83 receives the 3D scene data 3 and the intersection data 13. The processing then proceeds to step s27 where the central controller 83 receives the viewpoint 7 defined by the user via the user input device 69. The processing then proceeds to step s29 where the central controller 83 generates the 2D image 15 of the 3D scene from the user defined viewpoint 7 using the received 3D scene data 3 and the intersection data 13. The processing then proceeds to step s31 where the central controller 83 displays the generated 2D image 15 on the display 19. The processing then proceeds to step s33 where the central controller 83 determines whether or not the user has defined a new viewpoint 7. If the viewpoint 7 has changed, then the processing returns to step s29. Otherwise, the processing proceeds to step s35 where the central controller 83 checks to see if the rendering process is to end. If it is not to end, then the processing returns to step s33. Otherwise, the processing ends.
Generate 2D Image
Figure 12 is a flow chart illustrating the processing steps performed by the central controller 83 in step s29 shown in Figure 11, when generating a 2D image 15 of the 3D scene 18 from the user defined viewpoint 7. As shown, in step s37, the central controller 83 identifies a current pixel 16 in the 2D image 15 to be processed. Then, in step s39, the central controller 83 projects a ray 14 from the user defined viewpoint 7 through the current pixel 16 into the 3D scene 18 defined by the 3D scene data 3. The processing then proceeds to step s41 where the central controller 83 identifies the PSF 33 whose direction is nearest to the direction of the projected ray 14 (which is defined, in this embodiment, in terms of a vector extending from the viewpoint 7) . This can be achieved in a number of different ways. However, in this preferred embodiment, the PSF data stored in the PSF data store 81 includes a look-up table (not shown) which receives the vector defining the direction of the projected ray as an input and which identifies the closest PSF 33. The inventor's earlier paper entitled "Constant Time Queries on Uniformly Distributed Points on a Hemisphere", Journal of Graphics Tools, Volume 7, Number 1, 2002 describes a technique which may be used to implement this look-up table.
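The constant-time hemisphere lookup of that paper is not reproduced here, but a simple (slower) linear search over the illustrative PSF records gives the same result and shows what the lookup table has to return.

```python
def nearest_psf(psfs, ray_direction):
    """Return the PSF whose shared ray direction is closest in angle to the
    given unit ray direction.  A PSF ray and its reverse lie along the same
    line of parallel rays, so the sign of the dot product is ignored."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(psfs, key=lambda psf: abs(dot(psf.direction(), ray_direction)))
```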
The processing then proceeds to step s43 where the central controller 83 determines one or more candidate tiles 51 in the identified PSF 33 for the projected ray 14. As will be appreciated, each tile effectively represents a volume extending from the base plane and the processing performed in step s43 tries to determine the volumes through which or adjacent which the projected ray 14 extends. In this embodiment, these candidate tiles are determined by: i) projecting the start and end point of the projected ray 14 onto the base 36 of the identified PSF 33; ii) identifying the line connecting the projected end points; and iii) identifying the tile or tiles traversed by the identified line. The number of tiles traversed by the identified line will depend on the difference between the direction of the projected ray 14 and the direction of the PSF 33 identified in step s41 and the density of tiles defined within each PSF. For example, as illustrated in Figure 13a, when the direction of the projected ray 14-1 is very close to the direction of the identified PSF 33 (i.e. when angle α is small), both the start point (viewpoint 7-1) and the end point 101-1 of the projected ray 14-1, when projected down onto the base 36 of the identified PSF 33, fall in the same tile 103-1. Therefore a single candidate tile 103-1 is determined for the projected ray 14-1. On the other hand, when the direction of the projected ray 14-2 is not close to the direction of the identified PSF 33 (i.e. when angle α is large), the end points may fall in different tiles. This is illustrated in the example shown in Figure 13b, where the start point (viewpoint 7-2) of the projected ray 14-2 is projected down onto a point which lies in tile 103-2 and the end point 101-2 of the projected ray 14-2 is projected down onto a point which lies in a different tile 103-5. Consequently, the line 105 connecting the projected end points traverses a plurality of candidate tiles, in particular tiles 103-2, 103-3, 103-4 and 103-5.
As those skilled in the art will appreciate, the number of times that projected rays 14 will traverse across different tiles of the identified PSF 33 will depend on the resolution of the PSFs 33 defined by the PSF data 9. In particular, if a sufficient number of PSFs are defined at different orientations, the resolution of the PSFs 33 will be high enough that in the vast majority of cases, both end points 7, 101 of the projected ray 14 will fall into the same tile. On the other hand, if the resolution of the PSFs 33 is low, then the projected viewpoint 7 and the projected end point 101 are more likely to fall in different tiles.
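A sketch of the candidate-tile determination of step s43 follows, reusing the illustrative helpers above. For simplicity it samples the line connecting the two projected end points at roughly quarter-tile spacing and collects the tiles the samples land in; an exact grid traversal (for example a DDA) would be the robust alternative and is what a real implementation would more likely use.

```python
def candidate_tiles(psf, start, end, extent=1.0):
    """Tiles traversed by the line joining the projections of the ray's
    start and end points onto the base of the identified PSF."""
    tile_size = 2.0 * extent / psf.n

    def tile_of(x, y):
        i = min(psf.n - 1, max(0, int((x + extent) / tile_size)))
        j = min(psf.n - 1, max(0, int((y + extent) / tile_size)))
        return i, j

    x0, y0, _ = psf.rotation_to_vertical(start)   # project the start point
    x1, y1, _ = psf.rotation_to_vertical(end)     # project the end point
    # Sample the connecting line at roughly quarter-tile spacing.
    steps = max(2, int(4 * max(abs(x1 - x0), abs(y1 - y0)) / tile_size))
    tiles = []
    for k in range(steps + 1):
        t = k / steps
        tile = tile_of(x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        if tile not in tiles:
            tiles.append(tile)
    return tiles
```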
After the candidate tile or tiles 51 have been identified, the processing proceeds to step s45 where the central controller 83 retrieves the list or lists of objects (candidate objects) from the tile entry 55 in the intersection data 13 for the identified tile or tiles 51 of the nearest PSF 33. The processing then proceeds to step s47 where the central controller 83 identifies the intersection point 24 of the projected ray 14 with each of the candidate objects 22 in the list or lists and determines which of the intersected candidate objects 22 is closest to the user defined viewpoint 7. The processing then proceeds to step s49 where the central controller 83 updates the pixel value for the current pixel 16 using the object definition data 45 for the candidate object 22 that is closest to the user defined viewpoint 7. If, at step s47, the central controller 83 determines that the projected ray 14 does not intersect with any of the identified candidate objects 22, then the pixel value for the current pixel 16 is set to that of a background colour.
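Steps s45 and s47 then reduce to a closest-hit search over the candidate objects only. The sketch below assumes, as in the brute-force sketch earlier, that each object exposes a hypothetical intersect(origin, direction) method returning the ray parameter of the nearest hit or None.

```python
def closest_candidate_hit(candidates, origin, direction):
    """Return (object, t) for the intersected candidate nearest the start
    point, or (None, None) if the ray misses every candidate, in which case
    the pixel is set to the background colour."""
    best_obj, best_t = None, float("inf")
    for obj in candidates:
        t = obj.intersect(origin, direction)
        if t is not None and t < best_t:
            best_obj, best_t = obj, t
    return (best_obj, best_t) if best_obj is not None else (None, None)
```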
In this embodiment, the central controller 83 is arranged to consider reflections, refractions and shadows of objects 22 in the 3D scene 18. To achieve this, the central controller 83 must consider secondary rays which are projected from the point of intersection 24 between the original projected ray 14 and the nearest object 22. The number of secondary rays and their directions of propagation depend on the characteristics of the object 22 (as defined by the object definition data 45) and the type of ray tracing being used. For example, if the object 22 is defined as being a specular reflector then a single secondary ray will be projected from the intersection point 24 in a direction corresponding to the angle of incidence of the projected ray on the object (i.e. like a mirror where the angle of incidence equals the angle of reflectance) . If, however, the object is a diffuse reflector, then the number of secondary rays and their directions will depend on the ray tracing technique being used. For example, in Whitted-like ray tracing no secondary rays are reflected from a diffuse reflector, instead a simple estimation of the local colour is computed by reference to the angle of direction of the light source to the intersection point. Alternatively, if Kajiya-like path tracing is being used then a single ray (at most) is reflected from the surface in a direction determined stochastically from the BRDF of the object. Therefore, in this embodiment, after step s49, the processing proceeds to step s51 where the central controller 83 determines whether or not there are any more secondary rays to consider. If there are, then the processing proceeds to step s53 where the central controller 83 determines the direction of the current secondary projected ray. The processing then returns to steps s41, s43, s45 and s47 where a similar processing is performed for the secondary ray as was performed for the original primary ray 14 projected from the user defined viewpoint 7, except with the start point of the secondary ray corresponding to the intersection point 24 between the primary ray 14 and the nearest object that it intersects.
Then, in step s49, the pixel value for the current pixel 16 is updated using the object definition data 45 for the nearest object intersected by the current secondary ray. As those skilled in the art will appreciate, this processing to update the current image pixel value can be performed according to a number of conventional techniques, for example, as described in the paper "An Improved Illumination Model for Shaded Display" by T. Whitted mentioned above.
Once all of the secondary rays have been considered in the above way, the processing proceeds to step s55 where the central controller 83 determines if there are any more pixels 16 in the 2D image to be processed. If there are, then the processing returns to step s37. Otherwise, the processing returns to step s31 shown in Figure 11.
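For the specular case mentioned above, the direction of the single secondary ray determined in step s53 is the usual mirror reflection of the incoming direction about the surface normal; the formula below is standard ray-tracing practice rather than anything specific to the patent.

```python
def reflect(direction, normal):
    """Mirror-reflect a unit incoming direction about a unit surface normal:
    r = d - 2 (d . n) n."""
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2.0 * d_dot_n * n for d, n in zip(direction, normal))
```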
The operation of the central controller 83 has been described above. Figure 14 is a schematic block diagram illustrating the main software modules of the central controller 83 that are used to generate the above described 2D images 15. As shown, the modules include: a pathway processor module 80 which determines, for each of the above described projected rays 14 (pathways), which object within the scene 18 is intersected by the ray 14 and is closest to a start point (e.g. viewpoint 7 or intersection point 24) of the ray 14; and a 2D image generator module 82 which updates the pixel value for a pixel 16 associated with the projected ray 14 as described above.
As shown in Figure 14, the pathway processor module includes: i) a first receiver module 84 which receives the 3D scene data 3 defining the 3D scene; ii) a second receiver module 86 which receives the PSF data 9; iii) a third receiver module 88 which receives the intersection data 13 generated by the above described rendering pre-processor 1; iv) a pathway data definer module 90 which defines pathway data representing a ray that is projected from the start point (viewpoint 7 or intersection point 24) through the 3D scene; and v) an object determiner module 92 which determines which object in the 3D scene is intersected by the ray and is closest to the start point.
As also shown in Figure 14, the object determiner module 92 includes: i) a first processor module 94 which processes the pathway data for a current ray 14 to identify the direction in which the ray extends; ii) a subfield determiner module 96 which determines which of the subfields, defined by the PSF data 9, has a similar direction to the direction of the current ray 14; iii) a selector module 98 which selects the intersection data for a volume of the subfield having a similar direction, through which, or adjacent which, the current ray 14 extends; and iv) a second processor module 100 which processes the pathway data representing the current ray 14 and the 3D scene data 3 for objects identified from the selected intersection data, to identify which of those objects is intersected by the ray 14 and is closest to the start point. The identity of the closest intersected object is then passed to the 2D image generator module 82 so that the associated pixel value of the 2D image can be updated as described above, using the object definition data 45 (defined in the 3D scene data) for the identified object.
As those skilled in the art will appreciate, the image renderer 5 formed by the computer system 63 has a number of advantages over conventional image rendering systems. In particular, since the objects 22 that a projected ray 14 may intersect with are identified quickly using the stored intersection data 13 calculated in advance by the rendering pre-processor 1, the image renderer 5 does not need to check for intersections between all the objects 22 in the 3D scene 18 and each projected ray 14. The generation of the 2D image from the user-defined viewpoint 7 can therefore be calculated significantly faster than with conventional techniques, which allows substantially real time 2D image 15 rendering of the 3D scene 18. Further, since the user can change the currently defined viewpoint 7, the image renderer 5 can generate a real time "walk through" of the scene 18 in response to dynamic changes of viewpoint 7 made by the user. Although existing 3D walk through systems are available, these do not employ ray tracing techniques which provide realistic renderings that include reflections, refractions and shadows.
Generation of Parallel Sub-Fields
As described above with reference to Figure 4, a volume of rays is defined by a plurality of parallel sub-fields (PSFs) 33. Each PSF 33 comprises a cubic volume in x, y, z Cartesian space defined by:
-1 < x < 1
-1 < y < 1
-1 < z < 1
A rectangular grid is imposed on the x, y plane comprising n sub-divisions of equal width in each of the x and y directions to define the n² tiles of the PSF 33.
All possible directions of rays can be described using the following ranges:
φ = 0; θ: don't care
0 < φ < π/2; 0 < θ < 2π
φ = π/2; 0 < θ < π
It should be noted that at φ = 0, the direction of the rays 35 is independent of the value of θ, and that at φ = π/2, θ need only pass through a half revolution in order to cover all possible directions. A scene 18 is capable of being defined within the volume of rays defined by the PSFs. It is convenient to confine the scene 18, for example, to a unit cube space defined by the ranges:
-0.5 < x < 0.5
-0.5 < y < 0.5
-0.5 < z < 0.5
This ensures that any orientation of the cubic volume of the PSF 33 will fully enclose the scene 18, so that no part of the scene 18 is omitted from coverage by rays in any particular direction. In fact, since the cubic volume of the PSF 33 has a smallest dimension of 2 (its edge length), any scene 18 with a longest dimension no greater than 2 can be accommodated within the PSF 33. However, at the extreme edge of the PSFs, the number of rays and ray directions may be limited relative to the density in the central region of the PSFs, so it is preferred to constrain the scene 18 to a size smaller than the theoretical maximum that the PSFs could accommodate.
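One way to enumerate a discrete set of PSF orientations covering the direction ranges listed above is sketched below; the sampling counts are illustrative only, since the text does not fix how many PSFs are used or how their directions are distributed.

```python
import math

def psf_orientations(n_phi, n_theta):
    """Enumerate (phi, theta) pairs covering all ray directions: phi = 0
    once (theta is irrelevant there), 0 < phi < pi/2 with a full turn in
    theta, and phi = pi/2 with only a half turn in theta."""
    orientations = [(0.0, 0.0)]                      # phi = 0: a single PSF
    for i in range(1, n_phi):                        # 0 < phi < pi/2
        phi = (math.pi / 2.0) * i / n_phi
        for j in range(n_theta):
            orientations.append((phi, 2.0 * math.pi * j / n_theta))
    half_turn = max(1, n_theta // 2)                 # phi = pi/2: 0 <= theta < pi
    for j in range(half_turn):
        orientations.append((math.pi / 2.0, math.pi * j / half_turn))
    return orientations
```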
SECOND EMBODIMENT
In the first embodiment described above, the image renderer 5 was able to generate 2D images 15 of the 3D scene 18 from a current user defined viewpoint 7. The user was also able to change the user defined viewpoint 7 and the image renderer 5 recalculated the 2D image 15 from the new viewpoint 7. An embodiment will now be described in which the image renderer 5 can generate a new 2D image not only for a new viewpoint 7 but also taking into account movement or deformation of one or more of the objects 22 within the 3D scene 18. The image renderer 5 of this second embodiment can therefore be used, for example, in virtual reality and game applications. The way in which the image renderer 5 achieves this will now be described with reference to Figures 15 and 16.
In the second embodiment, the image renderer 5 is also run on a conventional computer system 63 such as that shown in Figure 10. The only difference will be in the programming instructions loaded into the central controller 83 used to control its operation. Figure 15 is a flow chart defining the operation of the central controller 83, programmed in accordance with the second embodiment.
As shown, at step s57, the central controller 83 receives the 3D scene data 3, the PSF data 9 and the intersection data 13. The central controller 83 then receives, in step s59, the current viewpoint 7 defined by the user via the user input device 69. The processing then proceeds to step s61, where the central controller 83 generates a 2D image 15 of the 3D scene 18 from the current viewpoint 7, using the received 3D scene data 3, PSF data 9 and intersection data 13. The processing steps performed in step s61 are the same as those performed in step s29 (shown in Figure 11) and will not, therefore, be described again. The processing then proceeds to step s63 where the generated 2D image 15 is displayed to the user on the display 19.
The processing then proceeds to step s65 where the central controller 83 determines whether or not any objects 22 within the 3D scene 18 have been modified (e.g. moved or deformed etc.). If they have, then the processing proceeds to step s67 where the central controller 83 updates the intersection data 13 for the modified objects 22. The processing then proceeds to step s69 where the central controller 83 checks to see if the user defined viewpoint 7 has changed. In either case, the processing then proceeds to step s71 where the central controller 83 generates a new 2D image 15 of the 3D scene 18. If, at step s69, the central controller determines that the viewpoint 7 has not changed, then in step s71, the new 2D image 15 is generated from the current, unchanged viewpoint 7, but using the updated intersection data 13. If, however, the central controller determines, in step s69, that the viewpoint 7 has changed, then in step s71, the central controller 83 generates the new 2D image 15 of the 3D scene 18 from the new viewpoint 7 using the updated intersection data 13.
Returning to step s65, if the central controller 83 determines that none of the objects 22 in the 3D scene 18 is to be modified, then the processing proceeds to step s73 where the central controller 83 determines whether or not the viewpoint 7 has changed. If it has, then the processing also proceeds to step s71, where the central controller 83 generates a new 2D image 15 of the 3D scene 18 from the new viewpoint 7 using the unchanged intersection data 13. In this way, a new 2D image is generated when the viewpoint and/or one or more of the objects are changed, otherwise a new 2D image is not generated.
If a new 2D image 15 is generated in step s71, then the processing proceeds to step s75, where the new 2D image 15 is displayed to the user on the display 19. The processing then proceeds to step s77, where the central controller 83 determines whether or not the rendering process is to end. Additionally, as shown in Figure 15, if at step s73, the central controller 83 determines that the viewpoint 7 has not changed, then the processing also proceeds to step s77. If the rendering process is not to end, then the processing returns to step s65; otherwise, the processing ends.
Update Intersection Data
In step s67 discussed above, the central controller 83 updated the intersection data 13 when one or more objects 22 in the scene 18 were modified. The way in which the central controller 83 performs this update will now be described with reference to Figure 16. As shown, initially, the central controller 83 retrieves, in step s77, the current PSF intersection data 53 for the first PSF 33 (i.e. 53-1 shown in Figure 8) from the intersection data store 77. The processing then proceeds to step s79 where the central controller 83 removes the identifier for each object 22 to be modified from the current PSF intersection data 53. This is achieved, in this embodiment, using a similar projection technique to that described with reference to Figure 6 except that, at step s19, instead of adding the object identifier to the intersection list, the object identifier is removed from the intersection list for all the determined tiles 51 of the current PSF.
The processing then proceeds to step s81 where the modified PSF intersection data 53 for the current PSF is returned to the intersection data store 77. The processing then proceeds to step s83 where the central controller 83 determines whether or not there are any more PSFs 33 to be processed. If there are, then the processing proceeds to step s85 where the central controller 83 retrieves the PSF intersection data 53 for the next PSF from the intersection data store 77 and then the processing returns to step s79.
Once all of the objects 22 to be modified have been removed from the intersection data 13, the processing then proceeds to step s87 where the objects are modified by modifying their object definition data 45. The modifications that are made may include changing the object's geometrical shape and/or position within the 3D scene 18. The processing then proceeds to step s89 where the central controller 83 retrieves the PSF intersection data 53 for the first PSF 33 from the intersection data store 77. The processing then proceeds to step s91 where the central controller 83 inserts the or each modified object 22 into the current PSF 33 and determines the corresponding intersection data. In this embodiment, the steps performed in step s91 are the same as those shown in Figure 6 and will not, therefore, be described again.
Once the PSF intersection data 53 for the current PSF has been updated to take into account the modified objects 22, the processing proceeds to step s93 where the modified PSF intersection data 53 is returned to the intersection data store 77. The processing then proceeds to step s95 where the central controller 83 determines if there are any more PSFs 33 to be processed. If there are, then the processing proceeds to step s97 where the central controller 83 retrieves the PSF intersection data 53 for the next PSF 33 and then the processing returns to step s91. After all the PSFs have been processed in the above way, the processing returns to step s69 shown in Figure 13.
As those skilled in the art will appreciate, in the above way, each of the objects 22 to be modified is firstly removed from the PSF intersection data 53, modified and then reinserted back into the PSF intersection data 53. The central controller 83 can then generate the new 2D image 15 for the modified scene 18 in the manner discussed above.
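Under the assumption that the intersection data is held as a mapping from (PSF index, tile index) to a list of object identifiers, the remove/modify/reinsert cycle of Figure 14 might be sketched as follows. The method tiles_covered_by() stands in for the projection of an object onto the PSF base described with reference to Figure 6, and apply_modification() stands in for step s87; both are assumed helpers, not part of the described apparatus.

def update_intersection_data(intersection_data, scene, modified_object_ids):
    # Remove the identifiers of the objects to be modified from every PSF
    # (steps s77 to s85 of Figure 14).
    for psf in scene.psfs:
        for obj_id in modified_object_ids:
            for tile in psf.tiles_covered_by(scene.objects[obj_id]):
                tile_list = intersection_data.get((psf.index, tile), [])
                if obj_id in tile_list:
                    tile_list.remove(obj_id)

    # Modify the object definition data itself (step s87), e.g. move or
    # deform the object.
    for obj_id in modified_object_ids:
        scene.objects[obj_id].apply_modification()

    # Re-insert the modified objects into every PSF (steps s89 to s97).
    for obj_id in modified_object_ids:
        for psf in scene.psfs:
            for tile in psf.tiles_covered_by(scene.objects[obj_id]):
                intersection_data.setdefault((psf.index, tile), []).append(obj_id)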
Summary, modifications and alternatives
A rendering system has been described above which can generate 2D images of 3D scenes in real time using a new real time ray tracing technique which provides realistic 2D images of scenes that include reflections, refractions and shadows. Because the ray tracing can be performed in real time, the image renderer can generate moving images of the 3D scene which include reflections, refractions and shadows as the user "walks through" or alters the scene. This allows, for example, in a virtual reality context or in a computer game, a person to walk through a scene and see virtual shadows and reflections of his/her "virtual body" in nearby surfaces. This is not possible in real time with existing technology using general-purpose methods.
As those skilled in the art will appreciate, the software used to configure the computer systems as either the rendering pre-processor or as the image renderer may be sold separately and may form part of a general computer graphics library that other users can call and use in their own software routines. For example, the software used to create the intersection data for a 3D scene may be stored as one function which can be called by other software modules. Similarly, the software used to configure the computer device as the image renderer may also be stored as a computer graphics function which again can be called from other users' software routines.
In the above embodiments, the rendering pre-processor used parallel sub-fields (PSFs) to provide a finite approximation to all possible ray paths that may be projected into the 3D scene by the image renderer. In the above embodiment, each PSF included a plurality of parallel rays which were divided into tiles. As those skilled in the art will appreciate, other parallel sub-fields may be defined. For example, parallel sub-fields may be defined which do not include any rays, but only include the above described tiles. This could be achieved in the same way as discussed above. In particular, each of the objects may be projected on to the base of the PSF and intersection data generated for each tile that the projected object intersects. Alternatively, if PSF rays are used, then it is not essential to group the rays into tiles. Instead, intersection data may be generated for each PSF ray.
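As an illustration only, the PSF data and the per-tile (or per-ray) intersection lists might be held in memory along the following lines. The Python structures and field names below are assumptions made for the purposes of the sketch; they are not the structures used in the embodiments described above.

from dataclasses import dataclass, field

@dataclass
class ParallelSubfield:
    index: int                # identifies this PSF within the set of PSFs
    direction: tuple          # unit vector: the common direction of the PSF rays
    rotation: tuple           # pre-stored 3x3 rotation matrix for this PSF
    tiles_per_side: int       # the base is divided into tiles_per_side ** 2 tiles

@dataclass
class IntersectionData:
    # Maps (psf_index, tile_index) to the identifiers of the objects whose
    # projection onto that PSF's base covers that tile.
    lists: dict = field(default_factory=dict)

    def objects_for(self, psf_index, tile_index):
        return self.lists.get((psf_index, tile_index), [])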
Indeed, considered in general terms, each PSF effectively has a respective direction and a plurality of volumes which extend parallel to each other in that direction. Each of these "volumes" may be defined, for example, as the thickness of one of the PSF rays or as the volume of space defined by extending the tile in the direction of the PSF. In these general terms, the rendering pre-processor calculates intersection data for each of these volumes, which identifies the objects within the 3D scene that intersect with the volume.

In the above embodiments, the rendering pre-processor and the image renderer were described as having a central controller which was programmed to operate in accordance with input programming instructions to carry out the various processing steps described with reference to the attached flow-charts. The particular software modules executed by the central controller were described with reference to Figures 9 and 14. As those skilled in the art will appreciate, the software modules may instead be provided as separate processors or hardware circuits which carry out the respective processing functions. For example, some of the processing steps may be performed by the processor (GPU) on a connected graphics card.
In the above embodiment, the base of each PSF was divided into square tiles. As those skilled in the art will appreciate, it is not essential to define the PSF to have such regular shaped tiles. Tiles of any size and shape may be provided. Further, it is not essential to have the tiles arranged in a contiguous manner. The tiles may be separated from each other or they may overlap. However, regular shaped contiguous tiles are preferred since this reduces the complexity of the calculations performed by the image renderer.
In the above embodiment, the image renderer projected a ray through a 2D image plane into the 3D scene to be rendered. As those skilled in the art will appreciate, it is not essential to only project one ray through each pixel of the image to be generated. In particular, the image renderer may project several different rays through different parts of each pixel, with the pixel colour then being determined by combining the values obtained from the different rays projected through the same pixel (and any secondary rays that are considered).
In the second embodiment described above, the image renderer could update the intersection data in order to take into account the modified properties of one or more of the objects within the 3D scene. As those skilled in the art will appreciate, the rendering pre-processor may also be arranged to be able to update the intersection data to take into account modified objects. In this way, the rendering pre-processor can update the intersection data without having to recompute intersection data for objects which have not changed.
In the above embodiment, the rendering pre-processor generated and output intersection data for a 3D scene. This intersection data was provided separately from the associated PSF data which was used during its calculation. As those skilled in the art will appreciate, the intersection data and the PSF data may be combined into a single data structure which may be output and stored on a data carrier for subsequent use by the image renderer.
In the first embodiment described above, the rendering pre-processor determined intersection data for each tile of each PSF. Depending on the size of the tiles, many neighbouring tiles will include the same intersection data. In such a case, instead of storing intersection data for each tile separately, tiles having the same intersection data may be grouped to define a larger tile. This may be achieved, for example, using Quad-tree techniques which will be familiar to those skilled in the art. Alternatively, if the rendering pre-processor determines that a second tile includes the same intersection data as a first tile, then instead of including the same intersection data for the second tile, it may simply include a pointer to the intersection data of the first tile.
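As an illustration of the pointer-sharing variant mentioned above, neighbouring tiles whose lists are identical can be made to reference a single stored list. The sketch below assumes, purely for illustration, that the per-tile lists of one PSF are held in a Python dictionary keyed by tile index.

def share_identical_tile_lists(tile_lists):
    # tile_lists maps tile_index -> list of object identifiers.  Tiles whose
    # lists hold the same identifiers end up aliasing one shared list, so the
    # common content is stored only once.
    shared = {}
    result = {}
    for tile_index, object_ids in tile_lists.items():
        key = tuple(sorted(object_ids))
        if key not in shared:
            shared[key] = list(key)
        result[tile_index] = shared[key]
    return result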
In the above embodiment, the PSFs were defined and then the 3D scene was inserted into the volume of space defined by the PSFs. As those skilled in the art will appreciate, in an alternative embodiment, each of the PSFs may be defined around the 3D scene.
In the above embodiments, the PSF data, the intersection data and the 3D scene data were stored in separate data stores within the image renderer. As those skilled in the art will appreciate, when calculating real time 2D images from the 3D scene, this data will preferably be stored within the working memory of the image renderer in order to reduce the time required to access the relevant data.
In the above embodiment, the image renderer used a ray tracing technique to generate 2D images of the 3D scene using the intersection data generated by the rendering pre-processor. As those skilled in the art will appreciate, ray tracing is one specific technique which is commonly used to provide realistic 2D images of a 3D scene. However, the image renderer may instead be configured to render the 3D scene using any other rendering technique that uses a global illumination algorithm and which relies on efficient ray intersection calculations. For example, the generated intersection data could be used by the image renderer when rendering the 3D scene using the "path tracing" technique described in the paper "The Rendering Equation" by Kajiya, J.T. (1986) Computer Graphics (SIGGRAPH) 20(4), pages 145 to 170. This technique is a stochastic method that fires primary rays from the viewpoint through a pixel and then probabilistically follows the path of the ray through the scene, accumulating energy at each bounce from surface to surface. The ray path is terminated probabilistically. Each pixel may have more than forty such random ray paths and the final "colour" of each pixel is determined as the average over these ray paths.
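A deliberately simplified sketch of this stochastic path-following idea is given below. It is not the method of the cited paper in full; the scalar "colour" model, the attribute names on the hit record and the function nearest_intersection() (standing in for the PSF-accelerated intersection query described in this document) are all assumptions made for illustration.

import random

def trace_path(ray, nearest_intersection, continue_probability=0.8):
    # Follow one random ray path, accumulating energy at each bounce and
    # terminating the path probabilistically.
    colour, weight = 0.0, 1.0
    while True:
        hit = nearest_intersection(ray)      # the accelerated nearest-object query
        if hit is None:
            break
        colour += weight * hit.emitted       # accumulate energy at this bounce
        if random.random() > continue_probability:
            break                            # probabilistic termination
        weight *= hit.reflectance / continue_probability
        ray = hit.scattered_ray()            # follow the path to the next surface
    return colour

def pixel_colour(pixel_rays, nearest_intersection):
    # The final "colour" of the pixel is the average over the random paths
    # (the text above suggests more than forty such paths per pixel).
    paths = [trace_path(r, nearest_intersection) for r in pixel_rays]
    return sum(paths) / len(paths) if paths else 0.0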
The same technique may be used with Photon Mapping (Jensen, H.W. (1996) Global Illumination Using Photon Maps, Rendering Techniques '96, Proceedings of the 7th Eurographics Workshop on Rendering, pages 21-30), which follows ray paths emanating from light sources as well as from the viewpoint.
In the above embodiment, when the image renderer is generating a 2D image of the 3D scene, it determines, in step s47 shown in Figure 12, the nearest object (in the list of objects associated with the tile containing the projected ray) which intersects with the projected ray. On some occasions, none of the identified objects will intersect with the projected ray. This is most likely to happen if the projected ray intersects with the base of the closest PSF near the boundary of one of the tiles and/or if the direction of the projected ray falls approximately midway between the directions of two or more PSFs. Under these circumstances the image renderer can simply look at the object lists for neighbouring tiles of the same PSF or it can look at the intersection data for the or each other PSF having a similar direction. In a further alternative, if the image renderer determines that the projected ray intersects with a tile close to the tile boundary, then instead of simply searching for the nearest object in the object list for that tile, the system may also look at the object lists for the nearest neighbouring tiles as well.
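One way the neighbouring-tile fallback just described might look is sketched below. The helpers objects_for_tile(), neighbours_of() and intersect() are assumptions introduced for the sketch and are not part of the described apparatus; intersect() is taken to return a hit record with a distance attribute, or None.

def nearest_hit_with_fallback(ray, tile, objects_for_tile, neighbours_of, intersect):
    # Try the identified tile first; if none of its objects is hit, consult
    # the neighbouring tiles of the same PSF.
    for candidate_tile in [tile, *neighbours_of(tile)]:
        hits = [(obj, intersect(ray, obj)) for obj in objects_for_tile(candidate_tile)]
        hits = [(obj, h) for obj, h in hits if h is not None]
        if hits:
            return min(hits, key=lambda pair: pair[1].distance)
    return None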
In the above embodiment, the lists of object identifiers that were stored as the intersection data were not stored in any particular order. The image renderer therefore has to identify the intersection point between the projected ray and each of the objects defined by the intersection data for the identified tile. If, however, the objects within the 3D scene are constrained so that they cannot intersect with each other, then the list of objects stored for each tile are preferably sorted according to the distance of the objects from the base of the corresponding PSF. In this way, when the image is being rendered, the image renderer only has to find the first object within the sorted list that has an intersection with the projected ray. In this way, the image renderer does not need to determine intersection points between the projected ray and objects which are located behind the nearest object.
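If the objects cannot interpenetrate and the per-tile list is sorted by distance from the PSF base, the search can stop at the first hit, as described above. A minimal sketch, again with intersect() as an assumed helper:

def nearest_hit_sorted(ray, sorted_objects, intersect):
    # sorted_objects is assumed to be ordered by distance from the base of
    # the corresponding PSF, nearest first.
    for obj in sorted_objects:
        hit = intersect(ray, obj)
        if hit is not None:
            return obj, hit      # objects behind this one need not be tested
    return None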
In the above embodiment, the rays of the PSF were divided into nine tiles or groups of rays. As those skilled in the art will appreciate, any number of tiles may be defined. For example, a less complex 3D scene may require fewer tiles to be defined because there may be fewer objects within the 3D scene. On the other hand, a more complex 3D scene may require a greater number of tiles to be defined for each PSF in order to avoid the situation that one or more of the tiles include rays that intersect with all objects within the scene (which would then not give any computational savings for the processing of that ray) . Therefore, the number of tiles within each PSF is preferably defined in dependence upon the complexity of the 3D scene to be rendered, so that each tile includes rays which intersect with only a sub-set of all of the objects within the 3D scene.
In the above embodiment, the rendering pre-processor and the image renderer were formed in two separate computer systems. As those skilled in the art will appreciate, both of these systems may be run on a single computer system.
In the second embodiment described above, when an object was modified within the scene, the tiles containing the original object were first identified by performing the same rotation and projection described with reference to Figure 6. In an alternative embodiment, the image renderer may simply search all of the lists of intersection data and remove the object identifier for the object that is to be modified from the lists.
In the second embodiment described above, objects within the scene could be moved or deformed and the image renderer generated a new 2D image for the modified scene. In an alternative embodiment, the image renderer may allow the insertion or the deletion of objects into or from the scene. In this case, if an object is to be inserted, then there is no need to remove the object identifier from the intersection data as it will not exist. The image renderer only has to add the object identifier for the new object into the intersection data. Similarly, where an object is to be deleted, its object identifier is simply removed from the intersection data.
As those skilled in the art will appreciate, in addition to allowing objects to move within the scene, the image renderer can also allow the reflective and refractive properties of the object to be modified in order to, for example, change its colour or transparency within the scene. In this case, the intersection data may not have to change, but only the object definition data for the object. The resulting change in colour or transparency of the object would then be observed in the re-rendered image of the 3D scene.
In the above embodiments, the intersection data is stored as a table having an entry for each PSF, and in which each PSF entry includes an entry for each tile in the PSF, which tile entry includes the list of objects for the tile. As those skilled in the art will appreciate, the intersection data may be stored in a variety of different data structures, for example, as a hash table or as a linked list etc.
In the above embodiments, the object definition data comprises a list of vertex positions defining the vertices of polygons and includes surface properties defining the optical characteristics of the polygons. As those skilled in the art will appreciate, the object definition data may instead define the three-dimensional shape of the objects using appropriate geometric equations.

In the above embodiment, the image renderer generated a 2D image of the 3D scene from a user defined viewpoint. As those skilled in the art will appreciate, it is not essential for the user to define the viewpoint. Instead, the viewpoint may be defined automatically by the computer system to show, for example, a predetermined walk through of the 3D scene.
In the embodiment described above, if the central controller determines in step s43 a plurality of candidate tiles in the identified PSF for the projected ray, then it retrieves the lists of objects for all of the identified tiles to determine a candidate set of objects as the union of sets of objects found in the candidate tiles traversed by the line. As those skilled in the art will appreciate, this is not necessary and the central controller could instead process the objects of the candidate tiles in a number of ways in order to further improve the efficiency.
As an example, in an embodiment where the intersection lists are stored in an unordered manner such that the objects identified in the lists are not sorted by distance to the viewpoint, the central controller could process the objects from the candidate tiles in turn along the length of the line 105, starting with the tile which contains the projected start point (viewpoint). In such an embodiment, for the example shown in Figure 13b, the central controller would process the objects associated with candidate tile 103-2 first, as this is the tile which contains the projected start point (viewpoint 7-2). If the central controller determines that the projected ray intersects with any of the objects associated with the first candidate tile 103-2, then it determines which of those intersected objects is closest to the viewpoint. There is no need to consider intersections between the projected ray and the objects associated with the other candidate tiles as the identified object will always be closer to the start point than the objects associated with the other candidate tiles.
However, if the central controller determines that the projected ray 14 does not intersect any objects associated with candidate tile 103-2, then the central controller will proceed to consider the objects associated with candidate tile 103-3. Similarly, if the projected ray 14 does not intersect any objects associated with candidate tile 103-3, then the central controller will consider the objects associated with candidate tile 103-4. Finally, if the projected ray does not intersect any objects associated with candidate tile 103-4, then the central controller will consider the objects associated with candidate tile 103-5.
In this way, the central controller will process the candidate objects from each of the candidate tiles 103-2,103-3,103-4,103-5 in turn until an object is identified and, because of the order in which the candidate tiles are considered, this identified object will be the closest object to the start point of the projected ray.
The processing by the central controller in such an embodiment would be further simplified if the lists of objects stored for each tile are sorted according to the distance of the objects from the base of the corresponding PSF. As described in the modification above, in such an embodiment, the central controller simply has to identify the first object within the sorted list for a candidate tile that has an intersection with the projected ray. The processing by the central controller is therefore more efficient because it does not need to determine intersection points between the projected ray and objects which are located behind the nearest object.
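The tile-by-tile traversal just described might be sketched as follows. The helper tiles_along_line() is assumed to return the candidate tiles ordered from the tile containing the projected start point outwards, and objects_for_tile() and intersect() are the same assumed helpers as in the earlier sketches; none of these names comes from the described embodiments.

def nearest_hit_along_line(ray, tiles_along_line, objects_for_tile, intersect):
    for tile in tiles_along_line:
        best = None
        for obj in objects_for_tile(tile):
            hit = intersect(ray, obj)
            if hit is not None and (best is None or hit.distance < best[1].distance):
                best = (obj, hit)
        if best is not None:
            # Any object listed for a later tile is necessarily further from
            # the start point, so the search can stop here.
            return best
    return None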
In the above embodiments, an image renderer was employed to generate a 2D image of the 3D scene from a user defined viewpoint. As those skilled in the art will appreciate, the present invention is not limited to the application of rendering 2D images of a 3D scene but can have many other applications where it is necessary to efficiently determine intersections between objects in a 3D scene or environment and a projected ray.
For example, the present invention may be embodied in an image processing system where a value is to be calculated for a point on a surface in a 3D scene which is representative of the approximate diffuse reflection at that point. The calculated value does not have to be used to contribute to a pixel value of a rendered 2D image. This processing is typically known as ambient occlusion and a large number of rays are projected from the point in question in different directions away from the surface. In order to calculate the approximate diffuse reflection value, the first object that each of the projected rays intersects must be determined. As those skilled in the art will appreciate, the present invention could be used to efficiently identify the first object in the 3D scene which each of the projected rays intersects, in the same way as the image renderer described above.
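For the ambient occlusion application just mentioned, the nearest-object query might be used along the following lines. This is a simplified visibility-based sketch; random_hemisphere_direction() and nearest_intersection() are assumed helpers, not part of the described apparatus.

def ambient_term(point, normal, nearest_intersection,
                 random_hemisphere_direction, samples=64):
    # Fire many rays from the surface point in directions away from the
    # surface and count the fraction that escape without hitting any object.
    unoccluded = 0
    for _ in range(samples):
        direction = random_hemisphere_direction(normal)
        if nearest_intersection(point, direction) is None:
            unoccluded += 1
    return unoccluded / samples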
As another example, the present invention may be embodied in visibility culling processing of an image rendering system. In such image rendering systems utilising visibility culling, processing must be performed in order to determine whether an object is in front of another object, in which case that object must be rendered, or whether the object is behind another object, in which case that object does not need to be rendered. As those skilled in the art will appreciate, the present invention could be used to identify the first object that is intersected by a projected ray to therefore reduce the processing steps which are required in order to perform the visibility culling.
In the embodiment described above, the pathways were used to represent light energy within a 3D scene or environment. As those skilled in the art will appreciate, the pathways could be used to model propagation of other energies, such as sound or heat. For example, the present invention may be embodied in an acoustic processing system for modelling propagation of sound energy within a 3D environment, such as in a system for optimising the placement of loudspeakers in a room. In particular, the present invention could be utilised in the way described above to identify intersections between a projected pathway radiating from a sound source and objects in the 3D environment.
As yet a further alternative, the pathways do not need to represent energy. The present invention may instead be embodied in robot collision avoidance processing such as in a path finding system where the 3D environment is modelled and stored in the robot's memory. In such a system, the robot must typically determine a sequence of paths from a starting location to a destination which avoids the objects within the environment. Therefore, the robot path finding system must be able to efficiently determine the intersection between a pathway that the robot will follow and the nearest object in its path so that the robot can turn away to avoid the nearest object. As those skilled in the art will appreciate, this processing is very much like the image processing embodiment above and the present invention may be utilised to determine the nearest object that a planned pathway will intersect. In this way, collision avoidance processing is more efficient as there is no need to search through every object in the 3D environment.

Claims

CLAIMS:
1. A computer system for determining which of a plurality of objects within a 3D scene is intersected by a pathway and is closest to a start point of the pathway, the system comprising: a pre-processor having: a first receiver operable to receive 3D scene data defining the 3D scene, which 3D scene data includes data defining objects and their locations within the 3D scene; a second receiver operable to receive parallel subfield data defining a plurality of subfields, each subfield having a respective direction and a plurality of volumes which extend parallel to each other in the direction; a first processor operable to process said 3D scene data and said parallel subfield data to determine intersections between said volumes and each object within the 3D scene; and an intersection data generator operable to generate intersection data for each volume, which intersection data includes data identifying the objects within the 3D scene that intersect with the volume; and a pathway processor having: a pathway data definer operable to define pathway data representing a pathway which extends from a start point through the 3D scene; and an object determiner operable to determine which object in the 3D scene is intersected by the pathway and is closest to the start point, using the pathway data, the intersection data and the 3D scene data, the object determiner comprising: a second processor operable to process the pathway data for the pathway, to identify the direction in which the pathway extends; a subfield determiner operable to determine which of said subfields has a similar direction to the direction of the pathway identified by the second processor; a selector operable to select the intersection data for a volume of the determined subfield through which, or adjacent which, the pathway extends; and a third processor operable to process the pathway data for the pathway and the 3D scene data for objects identified from the selected intersection data, to determine which of those objects is intersected by the pathway and is closest to said start point.
2. A system according to claim 1, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that each subfield has an associated base plane from which said parallel volumes extend, and such that the parallel volumes are arranged in an array extending perpendicular to the base plane of the subfield.
3. A system according to claim 2, wherein said second receiver is operable to receive parallel subfield data that includes an index for each subfield which is indicative of the direction of the subfield and wherein said subfield determiner is operable to determine which of said subfields has a similar direction to the direction identified by the second processor using the index of the parallel subfield data.
4. A system according to claim 2 or 3, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that the base plane of each subfield is divided into a plurality of tiles, each associated with a respective volume of the subfield, and wherein said intersection data generator is operable to generate intersection data associated with each of said tiles.
5. A system according to any of claims 1 to 3, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that each volume of a subfield represents a predetermined pathway which extends in the direction of the subfield and wherein said selector is operable to select the intersection data for the predetermined pathway of the determined subfield which is immediately adjacent the pathway.
6. A system according to any preceding claim, wherein said selector is operable to select the intersection data for a plurality of volumes of the determined subfield through which, or adjacent which, the pathway extends.
7. A system according to any preceding claim, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that each subfield has an associated base plane from which said parallel volumes extend and wherein said selector is operable to: i) project the start point and an end point of the pathway onto the base plane of the determined subfield; ii) identify a line connecting the projected start point and end point; iii) identify the tile or tiles traversed by the identified line; and iv) select the intersection data for the respective volume of the identified tile or tiles.
8. A system according to any preceding claim, wherein said pathway data definer is operable to define a vector associated with the pathway, which vector defines the direction of the pathway.
9. A system according to claim 8, wherein said subfield determiner is operable to determine which of said subfields has a similar direction to the direction of the pathway using the vector associated with the pathway.
10. A system according to any preceding claim, wherein said subfield determiner is operable to determine which of said subfields has a direction nearest to the direction of the pathway identified by said second processor.
11. A system according to any preceding claim, wherein: the intersection data associated with each volume includes data for each of the intersected objects; said intersection data generator is operable to sort the data for each intersected object according to the distance of the objects from a base plane of the associated subfield; and said third processor is operable to process the sorted intersection data to determine the first object which is intersected by the pathway.
12. A system according to any preceding claim, wherein: said pathway data definer is operable to define primary pathway data representing a primary pathway which extends from said start point through the 3D scene; said pathway data definer is further operable to define secondary pathway data representing a secondary pathway which extends through the 3D scene from an intersection point between the primary pathway and the object identified by said third processor for that primary pathway; and said object determiner is operable to determine which object in the 3D scene is intersected by the secondary pathway and is closest to the intersection point, using the secondary pathway data, the intersection data and the 3D scene data.
13. A system according to claim 12, wherein said pathway data definer is operable to define secondary pathway data representing a plurality of secondary pathways each of which extends from the same start point through the 3D scene and wherein said object determiner is operable to determine, for each of said plurality of secondary pathways, which object in the 3D scene is intersected by that secondary pathway and is closest to the start point.
14. A system according to any preceding claim, wherein said pathway processor further comprises a start point data receiver operable to receive data defining a new start point, and wherein said pathway processor is operable, in response to said received new start point, to define new pathway data representing a new pathway which extends from the new start point through the 3D scene and to determine which object in the 3D scene is intersected by the new pathway and is closest to the new start point, using the new pathway data, the intersection data and the 3D scene data.
15. A system according to any preceding claim, wherein said pathway processor further comprises: an object modification data receiver operable to receive object modification data identifying an object within the 3D scene to be modified and the modification to be made; and an intersection data updater operable to update the intersection data in response to said received object modification data; and wherein said pathway processor is operable to determine which object in the 3D scene is intersected by the pathway and is closest to the start point, using the pathway data, the updated intersection data and the 3D scene data.
16. A system according to claim 15, wherein said pathway processor further comprises a start point data receiver operable to receive data defining a new start point, and wherein: said pathway data definer is operable, in response to said received new start point, to define new pathway data representing a new pathway which extends from the new start point through the 3D scene; and said object determiner is operable to determine which object in the 3D scene is intersected by the new pathway and is closest to the new start point, using the new pathway data, the intersection data and the 3D scene data.
17. A system according to claim 15 or 16, wherein said intersection data updater comprises: a data remover operable to remove data for the object to be modified from the intersection data; a scene data modifier operable to modify the 3D scene data for the identified object in accordance with said object modification data; a fourth processor operable to process the modified 3D scene data and said parallel subfield data to determine intersections between said volumes and the modified object within the 3D scene; and an intersection data modifier operable to modify the intersection data associated with each of said volumes which intersect with the modified object, to include data identifying the modified object.
18. A system according to any preceding claim, wherein said pathway processor is an image renderer operable to generate a 2D image of a 3D scene from a defined viewpoint and wherein: said pathway data definer is operable to define pathway data representing a plurality of pathways which extend from said viewpoint through the 3D scene, each pathway being associated with a pixel of the 2D image to be generated; said object determiner is operable to determine, for each of said plurality of pathways, which object in the 3D scene is intersected by that pathway and is closest to the viewpoint, using the pathway data, the intersection data and the 3D scene data; and said image renderer further comprises: a pixel value determiner operable to determine the pixel value for the pixel associated with each of the pathways in dependence upon the 3D scene data for the object identified by the object determiner.
19. A system according to claim 18, wherein said pathway processor is operable to receive a sequence of new viewpoints and is operable to generate a corresponding 2D image of the 3D scene from each viewpoint to generate a sequence of 2D images.
20. A system according to claim 19, wherein said pathway processor is operable to generate the sequence of 2D images of the 3D scene in substantially real time.
21. An apparatus for generating intersection data, the apparatus comprising: a first receiver operable to receive 3D scene data defining a 3D scene, which 3D scene data includes data defining objects and their locations within the 3D scene; a second receiver operable to receive parallel subfield data defining a plurality of subfields, each subfield having a respective direction and a plurality of volumes which extend parallel to each other in the direction; a processor operable to process said 3D scene data and said parallel subfield data to determine intersections between said volumes and each object within the 3D scene; an intersection data generator operable to generate intersection data for each volume, which intersection data includes data identifying the objects within the 3D scene that intersect with the volume but which does not include data identifying the location of the intersection within the 3D scene; and a data outputter operable to output said intersection data.
22. Apparatus according to claim 21, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that each subfield has an associated base plane from which said parallel volumes extend, and such that the parallel volumes are arranged in an array extending perpendicular to the base plane of the subfield.
23. Apparatus according to claim 22, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that the base plane of each subfield is divided into a plurality of tiles, each associated with a respective volume of the subfield, and wherein said intersection data generator is operable to generate intersection data associated with each of said tiles.
24. Apparatus according to claim 21 or 22, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that each volume of a subfield represents a predetermined pathway which extends in the direction of the subfield.
25. Apparatus according to any one of claims 21 to 24, wherein the intersection data associated with each volume includes data for each of the intersected objects and wherein said intersection data generator is operable to sort the data for each intersected object according to the distance of the objects from a base plane of the associated subfield.
26. Apparatus according to any one of claims 21 to 25, wherein said data outputter is operable to output the 3D scene data received by the first receiver.
27. Apparatus according to any one of claims 21 to 26, wherein said data outputter is operable to output the parallel subfield data received by the second receiver.
28. Apparatus according to any one of claims 21 to 27, wherein said data outputter is operable to output data on a recording medium.
29. Apparatus according to any one of claims 21 to 27, wherein said data outputter is operable to output data on a carrier signal.
30. An apparatus for determining which of a plurality of objects within a 3D scene is intersected by a pathway and is closest to a start point of the pathway, the apparatus comprising: a first receiver operable to receive 3D scene data defining the 3D scene, which 3D scene data includes data defining objects and their locations within the 3D scene; a second receiver operable to receive parallel subfield data defining a plurality of subfields, each subfield having a respective direction and a plurality of volumes which extend parallel to each other in the direction; a third receiver operable to receive intersection data associated with each volume of the subfields, which intersection data includes data identifying the objects within the 3D scene that intersect with the volume; a pathway data definer operable to define pathway data representing a pathway which extends from a start point through the 3D scene; and an object determiner operable to determine which object in the 3D scene is intersected by the pathway and is closest to the start point, using the pathway data, the intersection data and the 3D scene data, the object determiner comprising: a first processor operable to process the pathway data for the pathway to identify the direction in which the pathway extends; a subfield determiner operable to determine which of said subfields has a similar direction to the direction of the pathway identified by the first processor; a selector operable to select the intersection data for a volume of the determined subfield through which, or adjacent which, the pathway extends; and a second processor operable to process the pathway data representing the pathway and the 3D scene data for objects identified from the selected intersection data, to identify which of those objects is intersected by the pathway and is closest to said start point.
31. Apparatus according to claim 30, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that each subfield has an associated base plane from which said parallel volumes extend, and such that the parallel volumes are arranged in an array extending perpendicular to the base plane of the subfield.
32. Apparatus according to claim 31, wherein said second receiver is operable to receive parallel subfield data that includes an index for each subfield which is indicative of the direction of the subfield and wherein said subfield determiner is operable to determine which of said subfields has a similar direction to the direction identified by the first processor using the index of the parallel subfield data.
33. Apparatus according to claim 31 or 32, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that the base plane of each subfield is divided into a plurality of tiles, each associated with a respective volume of the subfield, and wherein said third receiver is operable to receive intersection data which is associated with each of said tiles.
34. Apparatus according to any of claims 30 to 32, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that each volume of a subfield represents a predetermined pathway which extends in the direction of the subfield and wherein said selector is operable to select the intersection data for the predetermined pathway of the determined subfield which is immediately adjacent the pathway.
35. Apparatus according to any of claims 30 to 34, wherein said selector is operable to select the intersection data for a plurality of volumes of the determined subfield through which, or adjacent which, the pathway extends.
36. Apparatus according to any of claims 30 to 35, wherein said second receiver is operable to receive parallel subfield data which defines said subfields such that each subfield has an associated base plane from which said parallel volumes extend and wherein said selector is operable to: i) project the start point and an end point of the pathway onto the base plane of the determined subfield; ii) identify a line connecting the projected start point and end point; iii) identify the tile or tiles traversed by the identified line; and iv) select the intersection data for the respective volume of the identified tile or tiles.
37. Apparatus according to any of claims 30 to 36, wherein said pathway data definer is operable to define a vector associated with each pathway, which vector defines the direction of the pathway.
38. Apparatus according to claim 37, wherein said subfield determiner is operable to determine which of said subfields has a similar direction to the direction of the pathway using the vector associated with the pathway.
39. Apparatus according to any one of claims 30 to 38, wherein said subfield determiner is operable to determine which of said subfields has a direction nearest to the direction of the pathway identified by said first processor.
40. Apparatus according to any of claims 30 to 39, wherein said third receiver is operable to receive intersection data that includes data for each of the intersected objects which is sorted according to the distance of the objects from a base plane of the associated subfield and wherein said second processor is operable to process the sorted intersection data to determine the first object which is intersected by the pathway.
41. Apparatus according to any of claims 30 to 40, wherein: said pathway data definer is operable to define primary pathway data representing a primary pathway which extends from said start point through the 3D scene; said pathway data definer is further operable to define secondary pathway data representing a secondary pathway which extends through the 3D scene from an intersection point between the primary pathway and the object identified by said second processor for that primary pathway; and said object determiner is operable to determine which object in the 3D scene is intersected by the secondary pathway and is closest to the intersection point, using the secondary pathway data, the intersection data and the 3D scene data.
42. Apparatus according to claim 41, wherein said pathway data definer is operable to define secondary pathway data representing a plurality of secondary pathways each of which extends from the same start point through the 3D scene and wherein said object determiner is operable to determine, for each of said plurality of secondary pathways, which object in the 3D scene is intersected by that secondary pathway and is closest to the start point.
43. Apparatus according to any of claims 30 to 42, further comprising a start point data receiver operable to receive data defining a new start point, and wherein: said pathway data definer is operable, in response to said received new start point, to define new pathway data representing a new pathway which extends from the new start point through the 3D scene; and said object determiner is operable to determine which object in the 3D scene is intersected by the new pathway and is closest to the new start point, using the new pathway data, the intersection data and the 3D scene data.
44. Apparatus according to any one of claims 30 to 43, further comprising: an object modification data receiver operable to receive object modification data identifying an object within the 3D scene to be modified and the modification to be made; and an intersection data updater operable to update the intersection data in response to said received object modification data; and wherein said pathway processor is operable to determine which object in the 3D scene is intersected by the pathway and is closest to the start point, using the pathway data, the updated intersection data and the 3D scene data.
45. Apparatus according to claim 44, wherein said intersection data updater comprises: a data remover operable to remove data for the object to be modified from the intersection data; a scene data modifier operable to modify the 3D scene data for the identified object in accordance with said object modification data; a fourth processor operable to process the modified 3D scene data and said parallel subfield data to determine intersections between said volumes and the modified object within the 3D scene; and an intersection data modifier operable to modify the intersection data associated with each of said volumes which intersect with the modified object, to include data identifying the modified object.
46. Apparatus according to any of claims 30 to 45, wherein said apparatus is an image renderer operable to generate a 2D image of a 3D scene from a defined viewpoint and wherein: said pathway data definer is operable to define pathway data representing a plurality of pathways which extend from said viewpoint through the 3D scene, each pathway being associated with a pixel of the 2D image to be generated; said object determiner is operable to determine, for each of said plurality of pathways, which object in the 3D scene is intersected by that pathway and is closest to the viewpoint, using the pathway data, the intersection data and the 3D scene data; and said image renderer further comprises: a pixel value determiner operable to determine the pixel value for the pixel associated with each of the pathways in dependence upon the 3D scene data for the object identified by the object determiner.
47. Apparatus according to claim 46, wherein said pathway processor is operable to receive a sequence of new viewpoints and to generate a 2D image of the 3D scene from each viewpoint to generate a corresponding sequence of 2D images.
48. Apparatus according to claim 47, wherein said pathway processor is operable to generate the sequence of 2D images of the 3D scene in substantially real time.
49. Apparatus according to any of claims 45 to 48, wherein said pathway processor further comprises a 2D image output operable to output the generated 2D image data.
50. An image rendering apparatus for generating a 2D image of a 3D scene, the apparatus comprising: a receiver operable to receive user input defining a viewpoint; a pathway generator operable to generate a plurality of pathways, which extend from said viewpoint through the 3D scene, each pathway being associated with a pixel of the 2D image to be generated; an apparatus according to any of claims 30 to 49 operable to determine, for each of said plurality of pathways, which of a plurality of objects within the 3D scene is intersected by that pathway and is closest to said viewpoint; an image generator operable to generate a 2D image of the 3D scene from said defined viewpoint by determining a pixel value for each pixel associated with each of the pathways in dependence upon the 3D scene data for the objects determined by the apparatus; and an output operable to output the generated 2D image to the user.
51. An ambient occlusion apparatus for calculating the approximate diffuse reflection for a point on a surface in a 3D scene, the apparatus comprising: a pathway generator operable to generate a plurality of pathways, which extend from said point through the 3D scene; an apparatus according to any of claims 30 to 49 operable to determine, for each of said plurality of pathways, which of a plurality of objects within the 3D scene is intersected by that pathway and is closest to said point; and a calculator operable to calculate a value representative of the approximate diffuse reflection at said point, in dependence upon the 3D scene data for each object identified by the apparatus.
52. A method of pre-processing a 3D scene to generate intersection data for the scene, the method comprising: a first receiving step of receiving 3D scene data defining the 3D scene, which 3D scene data includes data defining objects and their locations within the 3D scene; a second receiving step of receiving parallel subfield data defining a plurality of subfields, each subfield having a respective direction and a plurality of volumes which extend parallel to each other in the direction; processing said 3D scene data and said parallel subfield data to determine intersections between said volumes and each object within the 3D scene; generating intersection data for each volume, which intersection data includes data identifying the objects within the 3D scene that intersect with the volume but which does not include data identifying the location of the intersection within the 3D scene; and outputting said intersection data.
53. A method according to claim 52, wherein said output step outputs the 3D scene data received in said first receiving step.
54. A method according to claim 52 or 53, wherein said output step outputs the parallel subfield data received in said second receiving step.
55. A method according to any one of claims 52 to 54, wherein said output step outputs data on a recording medium.
56. A method according to any one of claims 52 to 54, wherein said output step outputs data on a carrier signal.
57. A method of determining which of a plurality of objects within a 3D scene is intersected by a pathway and is closest to a start point of the pathway, the method comprising: a first receiving step of receiving 3D scene data defining the 3D scene, which 3D scene data includes data defining objects and their locations within the 3D scene; a second receiving step of receiving parallel subfield data defining a plurality of subfields, each subfield having a respective direction and a plurality of volumes which extend parallel to each other in the direction; a third receiving step of receiving intersection data associated with each volume of the subfields, which intersection data includes data identifying the objects within the 3D scene that intersect with the volume; defining pathway data representing a pathway which extends from a start point through the 3D scene; and determining which object in the 3D scene is intersected by the pathway and is closest to the start point, using the pathway data, the intersection data and the 3D scene data, the determining step comprising: a first processing step of processing the pathway data for the pathway to identify the direction in which the pathway extends; determining which of said subfields has a similar direction to the direction of the pathway identified in the first processing step; selecting the intersection data for a volume of the determined subfield through which or adjacent which the pathway extends; and a second processing step of processing the pathway data representing the pathway and the 3D scene data for objects identified from the selected intersection data, to identify which of those objects is intersected by the pathway and is closest to said start point.
58. A method according to claim 57, wherein: said step of defining said pathway data defines primary pathway data representing a plurality of primary pathways which extend from said start point through the 3D scene; said pathway data defining step further comprises the step of defining secondary pathway data representing a secondary pathway which extends through the 3D scene from an intersection point between the primary pathway and the object identified by said second processing step for that primary pathway; and said object determining step determines which object in the 3D scene is intersected by the secondary pathway and is closest to the start point, using the secondary pathway data, the intersection data and the 3D scene data.
59. A method according to any of claims 57 or 58, further comprising the step of generating a 2D image of a 3D scene from a defined viewpoint and wherein: said pathway data defining step defines pathway data representing a plurality of pathways which extend from said viewpoint through the 3D scene, each pathway being associated with a pixel of the 2D image to be generated; said object determining step determines, for each of said plurality of pathways, which object in the 3D scene is intersected by that pathway and is closest to the start point, using the pathway data, the intersection data and the 3D scene data; and said 2D image generating step comprises: a pixel value determining step of determining the pixel value for the pixel associated with each of the pathways in dependence upon the 3D scene data for the object identified by the object determiner.
60. A method according to claim 59, further comprising a step of outputting the generated 2D image data.
61. A method according to claim 60, wherein said generated 2D image data is output on a recording medium.
62. A method according to claim 60, wherein said generated 2D image data is output on a carrier signal.
63. A computer readable medium storing processor executable instructions for programming a computer apparatus to become configured as the apparatus of any of claims 21 to 51.
64. A signal carrying processor executable instructions for programming a computer apparatus to become configured as the apparatus of any of claims 21 to 51.
65. A computer system comprising: a pre-processor operable to pre-process 3D scene data defining a plurality of objects within a 3D scene and parallel subfield data defining a plurality of subfields, each having a respective direction and a plurality of volumes extending parallel to each other in the direction, to generate, for each volume, intersection data identifying the objects within the 3D scene that intersect with the volume; and a pathway processor operable to process pathway data defining a pathway which extends from a start point through the 3D scene to determine which of said subfields has a similar direction to the direction of the pathway and to process the pathway data for the pathway and the 3D scene data for candidate objects identified from intersection data for a volume of the determined subfield through which, or adjacent which, the pathway extends, to determine which candidate object is intersected by the pathway and is closest to the start point of the pathway.
PCT/GB2005/000306 2004-01-29 2005-01-28 Apparatus and method for determining intersections WO2005073924A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0401957.6 2004-01-29
GB0401957A GB2410663A (en) 2004-01-29 2004-01-29 3d computer graphics processing system

Publications (1)

Publication Number Publication Date
WO2005073924A1 true WO2005073924A1 (en) 2005-08-11

Family

ID=31971670

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2005/000306 WO2005073924A1 (en) 2004-01-29 2005-01-28 Apparatus and method for determining intersections

Country Status (2)

Country Link
GB (1) GB2410663A (en)
WO (1) WO2005073924A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2596566B (en) 2020-07-01 2022-11-09 Sony Interactive Entertainment Inc Image rendering using ray-tracing

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5729672A (en) * 1993-07-30 1998-03-17 Videologic Limited Ray tracing method and apparatus for projecting rays through an object represented by a set of infinite surfaces

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5864342A (en) * 1995-08-04 1999-01-26 Microsoft Corporation Method and system for rendering graphical objects to image chunks
US6023279A (en) * 1997-01-09 2000-02-08 The Boeing Company Method and apparatus for rapidly rendering computer generated images of complex structures
US6466207B1 (en) * 1998-03-18 2002-10-15 Microsoft Corporation Real-time image rendering with layered depth images

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
US5729672A (en) * 1993-07-30 1998-03-17 Videologic Limited Ray tracing method and apparatus for projecting rays through an object represented by a set of infinite surfaces

Non-Patent Citations (2)

Title
MEL SLATER: "A note on virtual light fields", RESEARCH NOTE RN/00/26/ DEPARTMENT OF COMPUTER SCIENCE, UNIVERSITY COLLEGE LONDON, 5 April 2000 (2000-04-05), pages 1 - 4, XP002325690 *
UPSON C ET AL: "V-BUFFER: VISIBLE VOLUME RENDERING", COMPUTER GRAPHICS, NEW YORK, NY, US, vol. 22, no. 4, 1988, pages 59 - 64, XP000878782, ISSN: 0097-8930 *

Also Published As

Publication number Publication date
GB2410663A (en) 2005-08-03
GB0401957D0 (en) 2004-03-03

Similar Documents

Publication Publication Date Title
Zhang et al. Visibility culling using hierarchical occlusion maps
US8411088B2 (en) Accelerated ray tracing
US7733341B2 (en) Three dimensional image processing
Cohen‐Or et al. Conservative visibility and strong occlusion for viewspace partitioning of densely occluded scenes
US6618047B1 (en) Visibility calculations for 3d computer graphics
El-Sana et al. Integrating occlusion culling with view-dependent rendering
US9208610B2 (en) Alternate scene representations for optimizing rendering of computer graphics
JPH10208077A (en) Method for rendering graphic image on display, image rendering system and method for generating graphic image on display
US6130670A (en) Method and apparatus for providing simple generalized conservative visibility
US20030160798A1 (en) Bucket-sorting graphical rendering apparatus and method
JPH05114032A (en) Shadow testing method in three-dimensional graphics
US6791544B1 (en) Shadow rendering system and method
Dietrich et al. Massive-model rendering techniques: a tutorial
Jeschke et al. Layered environment-map impostors for arbitrary scenes
KR100256472B1 (en) Efficient rendering utilizing user defined rooms and windows
Erikson et al. Simplification culling of static and dynamic scene graphs
KR100693134B1 (en) Three dimensional image processing
WO1998043208A2 (en) Method and apparatus for graphics processing
Van Kooten et al. Point-based visualization of metaballs on a gpu
WO2005073924A1 (en) Apparatus and method for determining intersections
Gomez et al. Time and space coherent occlusion culling for tileable extended 3D worlds
Aliaga Automatically reducing and bounding geometric complexity by using images
Popescu et al. Sample-based cameras for feed forward reflection rendering
Gummerus Conservative From-Point Visibility.
Selçuk et al. Walkthrough in Complex Environments at Interactive Rates using Level-of-Detail

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase