Disclosure of Invention
In order to overcome the problems in the related art, the present specification provides an augmented reality image display method, apparatus, and device.
According to a first aspect of embodiments of the present specification, there is provided an image presentation method for augmented reality, the method including:
acquiring a person image captured by a person camera module, and determining the relative position of the human eyes and the person camera module based on the relationship between the human eye region in the person image and the person image as a whole;
determining human eye position information based on at least the relative position of the human eyes and the person camera module and the position information of a scene camera module; and
rendering a three-dimensional model by using the human eye position information as the position information of the scene camera module in the rendering parameters, to obtain a projection image projected on a display screen, wherein the three-dimensional model is obtained by combining a virtual object with the real scene scanned by the scene camera module.
In one embodiment, the method is applied to an electronic device, the person camera module comprises a front-facing camera of the electronic device, and the scene camera module comprises a rear-facing camera of the electronic device.
In one embodiment, the step of constructing the three-dimensional model comprises:
performing three-dimensional reconstruction on a real scene by using a real scene image acquired by a scene camera module to obtain a scene model;
and based on a preset superposition strategy, superposing the virtual object to the scene model to obtain a three-dimensional model.
In one embodiment, the determining the human eye position information based on at least the relative position of the human eyes and the person camera module and the position information of the scene camera module comprises:
acquiring the relative position of the person camera module and the scene camera module;
converting the relative position of the human eyes and the person camera module into the relative position of the human eyes and the scene camera module by using the relative position of the person camera module and the scene camera module; and
calculating the human eye position information by combining the relative position of the human eyes and the scene camera module with the position information of the scene camera module.
In one embodiment, the method further comprises:
before the human eye position is determined, determining that the relative position of the human eyes and the person camera module has changed, by comparing the currently obtained relative position of the human eyes and the person camera module with the previously obtained relative position of the human eyes and the person camera module.
According to a second aspect of embodiments herein, there is provided an augmented reality image presentation device, the device comprising:
a relative position determination module configured to: acquire a person image captured by a person camera module, and determine the relative position of the human eyes and the person camera module based on the relationship between the human eye region in the person image and the person image as a whole;
a human eye position determination module configured to: determine human eye position information based on at least the relative position of the human eyes and the person camera module and the position information of a scene camera module; and
an image rendering module configured to: render a three-dimensional model by using the human eye position information as the position information of the scene camera module in the rendering parameters, to obtain a projection image projected on a display screen, wherein the three-dimensional model is obtained by combining a virtual object with the real scene scanned by the scene camera module.
In one embodiment, the apparatus is provided on an electronic device, the person camera module comprises a front-facing camera of the electronic device, and the scene camera module comprises a rear-facing camera of the electronic device.
In one embodiment, the apparatus further comprises a three-dimensional model building module to:
performing three-dimensional reconstruction on a real scene by using a real scene image acquired by a scene camera module to obtain a scene model;
and based on a preset superposition strategy, superposing the virtual object to the scene model to obtain a three-dimensional model.
In one embodiment, the human eye position determining module is specifically configured to:
acquiring the relative position of the person camera module and the scene camera module;
converting the relative position of the human eyes and the person camera module into the relative position of the human eyes and the scene camera module by using the relative position of the person camera module and the scene camera module; and
calculating the human eye position information by combining the relative position of the human eyes and the scene camera module with the position information of the scene camera module.
In one embodiment, the apparatus further comprises a location determination module configured to:
before the human eye position is determined, determining that the relative position of the human eyes and the person camera module has changed, by comparing the currently obtained relative position of the human eyes and the person camera module with the previously obtained relative position of the human eyes and the person camera module.
According to a third aspect of embodiments herein, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the method as in any one of the above.
The technical solutions provided by the embodiments of this specification may have the following beneficial effects:
A person image captured by a person camera module is acquired, and the relative position of the human eyes and the person camera module is determined based on the relationship between the human eye region in the person image and the person image as a whole; human eye position information is determined based on at least the relative position of the human eyes and the person camera module and the position information of the scene camera module; and the human eye position information is used as the position information of the scene camera module in the rendering parameters and, together with the other rendering parameters, the three-dimensional model is rendered to obtain a projection image projected on the display screen. In this way, the augmented reality display content is changed from a projection image at the camera module's viewing angle to a projection image at the human eye viewing angle, and the projection image changes as the human eye position changes.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this specification; rather, they are merely examples of apparatuses and methods consistent with some aspects of this specification, as recited in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of this specification, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
Augmented Reality (AR) is a technology that seamlessly integrates real-world information and virtual-world information: through computer technology, virtual information is applied to the real world, so that the real environment and virtual objects are superimposed in the same picture or space in real time and coexist.
One common application scenario of AR technology is that a user photographs a real environment through a camera module in a mobile device, such as a handheld or wearable device, and software providing the AR service renders one or more virtual objects on the captured initial image data. The key to realizing this scenario is how to combine the virtual object with the real scene actually photographed. On the one hand, the software providing the AR service may pre-configure one or more models corresponding to virtual objects, where each model specifies a state evolution rule of the corresponding virtual object so as to determine its different motion states. On the other hand, the software can determine the position of the virtual object in the real scene according to the image data captured by the device, and thus the position at which the virtual object is rendered on the image data. After the virtual object is successfully rendered, the user can view a picture in which the virtual object is superimposed on the real environment.
However, when a three-dimensional model constructed from a virtual object and a real scene is rendered, the rendering is performed from the perspective of the camera module. Such an augmented reality scheme relies on the gyroscope, accelerometer, and gravity sensor of the device to sense changes in the device's attitude. Therefore, if the camera module does not move but the photographer/viewer does, the displayed image does not respond accordingly, and the sense of immersion and the stereoscopic effect are poor.
For example, fig. 1 is a schematic diagram of capturing an AR scene according to an exemplary embodiment of this specification. In fig. 1, the virtual object is a puppy and the AR system uses an ordinary display; in this case, the user can see the fusion effect of the real environment and the virtual object on the display screen without wearing any display device. The photographer/viewer shoots the real scene with the rear camera of a mobile phone, and a projection image including the puppy is shown on the phone screen. However, when the photographer/viewer keeps the phone still and the relative position between the eyes and the phone changes, the picture displayed on the phone screen does not change.
In view of this, this specification provides an augmented reality image display method, in which the augmented reality display content of the device is changed from a composite image at the camera viewing angle to a composite image at the human eye viewing angle, so that the displayed image is closer to what the human eyes would see, enhancing the stereoscopic effect and the sense of immersion. A camera module mainly images a photographed object into an image through a lens and projects the image onto the imaging surface of a camera tube or a solid-state imaging device. The range of the scene that a lens can cover is usually expressed as an angle, which may be referred to as the field of view of the lens. The human eye viewing angle in the embodiments of this specification does not refer to the entire field of view of the human eyes, but may refer to the viewing angle that can be seen through the display screen.
The embodiments of the present specification are described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of an augmented reality image display method according to an exemplary embodiment of this specification. The method includes:
in step 202, acquiring a person image captured by a person camera module, and determining the relative position of the human eyes and the person camera module based on the relationship between the human eye region in the person image and the person image as a whole;
in step 204, determining human eye position information based on at least the relative position of the human eyes and the person camera module and the position information of the scene camera module;
in step 206, rendering a three-dimensional model by using the human eye position information as the position information of the scene camera module in the rendering parameters, to obtain a projection image projected on the display screen, wherein the three-dimensional model is obtained by combining the virtual object with the real scene scanned by the scene camera module.
In the embodiments of this specification, the person camera module and the scene camera module are different camera modules with different shooting areas. In one example, the shooting directions of the person camera module and the scene camera module are opposite, the camera of the person camera module is on the same face of the electronic device as the display screen (the lens of the person camera module may even lie in the same plane as the display screen), and the two camera modules are provided on the same electronic device. In practice, because images acquired by the rear camera are usually sharper than those acquired by the front camera, the photographer/viewer often uses the rear camera to shoot the real scene, while the lens of the front camera lies in the same plane as the display screen. Therefore, the person camera module may be the front camera and the scene camera module may be the rear camera, so that the augmented reality application is carried by the rear camera and assisted by the front camera.
It is understood that the person camera module and the scene camera module are both camera modules, named differently only to distinguish them. In other examples, some terminals have display screens on both the front and the back, so the rear camera may serve as the person camera module and the front camera as the scene camera module; alternatively, the person camera module and the scene camera module may be camera modules provided on different devices.
The human eye position information indicates the position of the photographer/viewer's eyes in space, and may be three-dimensional coordinates of the eyes in the world coordinate system or in the scene camera module's coordinate system. Steps 202 and 204 describe how the human eye position information is determined: the relative position between the human eyes and the person camera module is determined first, and then the human eye position information is determined from that relative position and the position information of the scene camera module.
Regarding step 202, the person camera module captures a person image, in particular an image of the photographer within the range that the person camera module can cover. The relative position of the human eyes and the person camera module may be a relative pose, including a relative distance and a relative direction. In one example, the relative position may be represented as a vector with a direction.
The relative position of the human eyes and the person camera module can be obtained by performing face detection on the person image with a face detection algorithm. For example, a face region in the person image may be detected, an eye region may be determined within the face region according to the positional relationship between the eyes and the face, and the relative position of the human eyes and the person camera module may then be determined from the relationship between the eye region and the image.
In one embodiment, a model trained by deep learning may be used to determine the relative position of the human eyes and the person camera module. For example, training samples may be constructed from person images annotated with the relative position of the human eyes and the camera module, and a preset initial model may be trained with these samples to obtain a detection model for detecting that relative position. In the application stage, the detection model detects the image to be measured and outputs the relative position of the human eyes and the camera module. It is understood that in other examples, each set of training samples may also include other features that help improve the detection result, such as a face region box. In addition, other methods may be used to obtain the relative position of the human eyes and the person camera module from the person image, which are not described again here.
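As an illustration only, the sketch below shows one way the relative position of the human eyes and the person camera module could be estimated from a single person image, using face and eye detection together with a pinhole camera model. It assumes an average interpupillary distance and OpenCV Haar cascades; the intrinsic parameters fx, fy, cx, cy are placeholders that would come from camera calibration. This is a minimal sketch under those assumptions, not the specific detection method of this specification.

```python
import cv2
import numpy as np

AVG_IPD_M = 0.063  # assumed average interpupillary distance in meters

def estimate_eye_position(person_image, fx, fy, cx, cy):
    """Return the midpoint of the two eyes in the person-camera coordinate system, or None."""
    gray = cv2.cvtColor(person_image, cv2.COLOR_BGR2GRAY)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                                # first detected face region
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes) < 2:
        return None
    # Pixel centers of the two eye boxes, in full-image coordinates.
    centers = np.array([[x + ex + ew / 2.0, y + ey + eh / 2.0]
                        for ex, ey, ew, eh in eyes[:2]])
    ipd_px = np.linalg.norm(centers[0] - centers[1])
    # Depth from similar triangles: Z = fx * real_IPD / pixel_IPD.
    z = fx * AVG_IPD_M / ipd_px
    u, v = centers.mean(axis=0)                          # midpoint between the eyes
    # Back-project the midpoint to a 3D point relative to the person camera module.
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```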
Regarding step 204, the position information of the scene camera module indicates the position of the scene camera module in space, and may be three-dimensional coordinates of the scene camera module in the world coordinate system or in the scene camera module's coordinate system; for example, it may be obtained when the scene camera module is calibrated. It will be appreciated that the human eye position and the scene camera module position are coordinates in the same coordinate system. In image measurement and machine vision applications, in order to relate the three-dimensional geometric position of a point on the surface of an object in space to its corresponding point in an image, a geometric model of camera imaging is established; the parameters of this geometric model are the camera parameters. The camera parameters may include intrinsic parameters, extrinsic parameters, distortion parameters, and so on. In practice, calibration methods in the related art, such as linear calibration, nonlinear optimization calibration, or Tsai's classical two-step calibration, may be used to calibrate the camera, which is not limited here.
After the relative position of the human eyes and the person camera module and the position information of the scene camera module are obtained, the human eye position information may be determined based on at least these two pieces of information.
In some application scenarios, the person camera module is mounted close to the scene camera module, so the relative position between the two modules can be ignored; in particular, this is the case when the person camera module and the scene camera module are arranged back to back. The human eye position information can then be determined directly from the relative position of the human eyes and the person camera module and the position information of the scene camera module. For example, assuming that the position of the rear camera in the scene is X and the position of the human eyes relative to the front camera is Y, the human eye position may be X + Y and the viewing direction may be -Y.
In some application scenarios, in order to improve the accuracy of the human eye position information, the relative position of the person camera module and the scene camera module may also be taken into account. When the person camera module and the scene camera module are provided on the same device, their relative position is fixed and can be determined from the device information of the device on which they are provided. Correspondingly, the determining the human eye position information based on at least the relative position of the human eyes and the person camera module and the position information of the scene camera module may include:
acquiring the relative position of the person camera module and the scene camera module;
converting the relative position of the human eyes and the person camera module into the relative position of the human eyes and the scene camera module by using the relative position of the person camera module and the scene camera module; and
calculating the human eye position information by combining the relative position of the human eyes and the scene camera module with the position information of the scene camera module.
Thus, in this embodiment, the relative position of the human eyes and the scene camera module is obtained through the relative position of the person camera module and the scene camera module, which improves the accuracy of the human eye position information.
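A minimal sketch of this conversion is given below, assuming the rigid transform from the person-camera coordinate system to the scene-camera coordinate system is known from the device geometry, and the scene camera pose in the world frame is known from calibration or tracking; all names are illustrative.

```python
import numpy as np

def eye_position_in_world(eye_in_person_cam, R_ps, t_ps, R_sw, t_sw):
    """Convert an eye position from person-camera coordinates to world coordinates."""
    # Person-camera coordinates -> scene-camera coordinates.
    eye_in_scene_cam = R_ps @ eye_in_person_cam + t_ps
    # Scene-camera coordinates -> world coordinates.
    return R_sw @ eye_in_scene_cam + t_sw

# When the two modules are mounted back to back and very close together,
# R_ps is roughly a 180-degree rotation about the vertical axis and t_ps is
# roughly zero, which reduces to the X + Y example given above.
```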
In the embodiments of this specification, the camera viewing angle is to be replaced by the human eye viewing angle, so that the background scene (the real scene) and the virtual object are dynamically rendered at the human eye viewing angle, enhancing the stereoscopic effect and the sense of immersion. Therefore, the human eye position information is used as the position information of the scene camera module in the rendering parameters to render the three-dimensional model and obtain the projection image projected on the display screen, wherein the three-dimensional model is obtained by combining the virtual object with the real scene scanned by the scene camera module, and the rendering parameters are the parameters required when rendering the three-dimensional model.
When the model is rendered to obtain the projection image, the most important rendering parameters are the camera position and the projection plane information; this embodiment mainly adjusts the camera position in the rendering parameters, so that the camera viewing angle is replaced by the human eye viewing angle. Therefore, in this embodiment, the human eye position information may be used as the position information of the scene camera module in the rendering parameters, and the three-dimensional model is rendered with the adjusted rendering parameters to obtain the projection image projected on the display screen. The projection plane information in the rendering parameters may be determined from the display screen information. The rendering parameters also include other parameters required for rendering, such as lighting parameters, which are not listed here.
In this embodiment, the rendering parameters can be adjusted according to the human eye position, and the rendering parameters and the three-dimensional model are input to a rendering module, which renders the projection image.
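The sketch below illustrates how the adjusted rendering parameters could be formed: the human eye position replaces the scene camera position as the view origin, and the display screen defines the projection plane. The look-at construction is a standard computer graphics technique, not the API of any particular rendering engine, and the renderer call at the end is hypothetical.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a right-handed 4x4 view matrix whose origin is the eye position."""
    f = target - eye
    f = f / np.linalg.norm(f)            # forward direction
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)            # right direction
    u = np.cross(s, f)                   # corrected up direction
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye    # translate the eye to the origin
    return view

# Usage sketch: the eye position computed earlier replaces the scene camera
# position, and the target can be taken as the center of the display screen so
# that the projection plane matches the screen.
# view = look_at(eye_world, screen_center_world)
# image = renderer.render(model_3d, view, projection)   # hypothetical renderer
```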
A typical AR system pipeline starts from the real world: digital imaging is performed first, and the system then perceives and understands the three-dimensional world from the image data and sensor data, while also obtaining an understanding of three-dimensional interaction. The purpose of 3D interaction understanding is to tell the system what is to be "augmented"; the purpose of 3D environment understanding is to tell the system where to "augment". Once the system has determined the content and the location to be augmented, it can perform the virtual-real fusion, which is done by the rendering module. Finally, the synthesized image is presented to the user's visual system, achieving the augmented reality effect.
The three-dimensional model in this specification may be a model obtained by combining a virtual object with the scene scanned by the scene camera module; it is obtained based on scene modeling and virtual object superposition. One way of constructing the three-dimensional model is listed below. In this embodiment, the construction of the three-dimensional model may include:
performing three-dimensional reconstruction on a real scene by using a real scene image acquired by a scene camera module to obtain a scene model;
and based on a preset superposition strategy, superposing the virtual object to the scene model to obtain a three-dimensional model.
The scene model is also referred to as a spatial model, including but not limited to an initialization scene model for realizing augmented reality. In the embodiment of the present description, a scene model may be obtained by performing three-dimensional reconstruction on a real scene. Three-dimensional Reconstruction (3D Reconstruction) is the building of a 3D model of an object in a real scene from input data. The vision-based three-dimensional reconstruction can be realized by acquiring a data image of a scene object through a camera, analyzing and processing the image, and deducing three-dimensional information of the object in a real environment by combining computer vision knowledge.
In one embodiment, a three-dimensional scene model in a scene may be reconstructed using a two-dimensional image as an input. Three-dimensional models of objects can be reconstructed by using relevant computer graphics and vision techniques on RGB images shot at different angles of the objects.
With the advent of depth cameras, in another embodiment, the scene camera module may be a depth camera. For points in a real scene, each frame of data scanned by the depth camera not only includes color RGB images of the points in the scene, but also includes a distance value from each point to a vertical plane in which the depth camera is located. The distance values may be referred to as depth values (depth), which together make up the depth image of the frame. A depth image may be understood as a grayscale image, wherein the grayscale value of each point in the image represents the depth value of the point, i.e. the real distance from the position of the point in reality to the vertical plane in which the camera is located. Thus, a three-dimensional scene model in a scene may be reconstructed with the RGB image and the depth image captured by the depth camera as input.
The three-dimensional reconstruction process may involve image acquisition, camera calibration, feature extraction, stereo matching, and the like. Since three-dimensional reconstruction is a mature technique, it is not described in detail here. For example, a method such as SLAM (simultaneous localization and mapping) may be used to realize the three-dimensional reconstruction of the real scene.
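As an illustration of the input data when the scene camera module is a depth camera, the sketch below back-projects one depth frame into a point cloud using a pinhole model; the intrinsics fx, fy, cx, cy are assumed to come from camera calibration, and depth values are assumed to be in meters. It is a minimal sketch of one reconstruction input, not a full reconstruction pipeline.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth image into an (N, 3) point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth reading
```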
After the scene model is obtained, the virtual object to be superimposed can be selected based on a preset superposition strategy, and the position where it needs to be superimposed can be located, so that the virtual object is superimposed onto the scene model to obtain the three-dimensional model. The preset superposition strategy may be a strategy for determining the content and the location to be augmented, and is not limited here.
It can be seen from the above embodiments that when the scene camera module is used for an augmented reality application, the person camera module is used to locate the human eyes, so that the real scene and the virtual object are dynamically rendered at the human eye viewing angle; when the relative position of the human eyes and the camera module changes, the displayed projection image responds adaptively, which enhances the stereoscopic effect and the sense of immersion.
In one embodiment, before the human eye position is determined, it is determined whether the relative position of the human eyes and the person camera module has changed by comparing the currently obtained relative position with the previously obtained relative position. Steps 204 and 206 are executed only when the relative position of the human eyes and the person camera module has changed; when it has not changed, steps 204 and 206 are not executed, thereby avoiding the waste of resources caused by continuous real-time calculation.
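A minimal sketch of this change check follows; the threshold value is an assumption and would be tuned to the device and use case in practice.

```python
import numpy as np

POSITION_CHANGE_THRESHOLD_M = 0.005   # assumed 5 mm threshold

def relative_position_changed(current, previous, eps=POSITION_CHANGE_THRESHOLD_M):
    """Return True if the eye/person-camera relative position moved more than eps."""
    if previous is None:
        return True
    return np.linalg.norm(np.asarray(current) - np.asarray(previous)) > eps

# Only when this returns True are the eye position and the projection image
# recomputed (steps 204 and 206); otherwise the previous frame is kept.
```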
The technical features in the above embodiments can be combined arbitrarily, as long as there is no conflict or contradiction between them; limited by space, the combinations are not described one by one. Therefore, any combination of the technical features in the above embodiments also falls within the scope disclosed in this specification.
One of the combinations is exemplified below.
Fig. 3A is a flowchart illustrating another augmented reality image display method according to an exemplary embodiment. The method can be applied to a mobile device, changing the augmented reality display content of the mobile device from a composite image at the camera viewing angle to a composite image at the human eye viewing angle. The method may include:
In step 302, the scene behind the device is three-dimensionally reconstructed from images acquired by the rear camera, and a virtual object is superimposed to obtain a three-dimensional model.
In step 304, the position of the user's eyes is detected from the image collected by the front camera by using a face detection algorithm.
In step 306, the projection of the reconstructed three-dimensional scene on the device screen and the projection of the virtual object at the device screen position are recalculated according to the position of the human eyes, and a projection image is obtained.
In step 308, the projected image is presented on the device screen.
The details of fig. 3A that are similar to those described above for fig. 2 are not repeated here.
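For illustration only, the sketch below ties the steps of fig. 3A together in one loop, reusing the helpers sketched earlier; all module objects (rear_camera, front_camera, reconstructor, renderer, screen) are hypothetical placeholders rather than an actual API.

```python
def ar_frame_loop(rear_camera, front_camera, reconstructor, renderer, screen,
                  intrinsics, device_extrinsics):
    prev_eye = None
    while True:
        # Step 302: reconstruct the scene behind the device and add the virtual object.
        scene_model = reconstructor.reconstruct(rear_camera.capture())
        model_3d = reconstructor.overlay_virtual_object(scene_model)

        # Step 304: locate the user's eyes from the front-camera image.
        eye_rel = estimate_eye_position(front_camera.capture(), *intrinsics)
        if eye_rel is None or not relative_position_changed(eye_rel, prev_eye):
            continue                      # keep the previous projection image
        prev_eye = eye_rel
        eye_world = eye_position_in_world(eye_rel, *device_extrinsics)

        # Steps 306 and 308: re-project the model from the eye position and display it.
        view = look_at(eye_world, screen.center_world())
        screen.show(renderer.render(model_3d, view, screen.projection()))
```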
For ease of understanding, fig. 3B compares the display position of the virtual object in this embodiment with that in the prior art. The field of view of the rear camera is often larger than the field of view of the scenery that human eyes can see through the screen frame (the human eye viewing angle for short), so the area occluded by the virtual object at the camera viewing angle is larger than the area occluded at the human eye viewing angle. In the figure, 32 represents the display position of the virtual object on the screen with the solution of this embodiment, and 34 represents the display position of the virtual object on the screen with the prior-art solution.
In this embodiment, the display content is adjusted according to the human eye position determined from the front camera, so that the displayed scene is closer to the human eye viewing angle, and the sense of immersion and the stereoscopic effect are stronger. By modeling the background with three-dimensional scene reconstruction, backgrounds at different angles can be better displayed in response to changes in the human eye position. Meanwhile, the three-dimensional scene reconstruction approach can respond more appropriately to a scene in which the device is stationary while the background moves.
Corresponding to the embodiment of the augmented reality image display method, the present specification further provides embodiments of an augmented reality image display apparatus and an electronic device applied thereto.
The embodiments of the augmented reality image display apparatus of this specification can be applied to computer devices. The apparatus embodiments may be implemented by software, by hardware, or by a combination of hardware and software. Taking the software implementation as an example, the apparatus, as a logical apparatus, is formed by the processor of the computer device on which it is located reading the corresponding computer program instructions from the non-volatile memory into the memory and running them. In terms of hardware, fig. 4 is a hardware structure diagram of the computer device on which the augmented reality image display apparatus of this specification is located. In addition to the processor 410, the network interface 420, the memory 430, and the non-volatile memory 440 shown in fig. 4, the computer device on which the augmented reality image display apparatus 431 is located may also include other hardware according to the actual functions of the device, which is not described again.
Fig. 5 is a block diagram of an augmented reality image display apparatus according to an exemplary embodiment of this specification. The apparatus includes:
a relative position determination module 52 configured to: acquire a person image captured by a person camera module, and determine the relative position of the human eyes and the person camera module based on the relationship between the human eye region in the person image and the person image as a whole;
a human eye position determination module 54 configured to: determine human eye position information based on at least the relative position of the human eyes and the person camera module and the position information of the scene camera module; and
an image rendering module 56 configured to: render a three-dimensional model by using the human eye position information as the position information of the scene camera module in the rendering parameters, to obtain a projection image projected on the display screen, wherein the three-dimensional model is obtained by combining the virtual object with the real scene scanned by the scene camera module.
In one embodiment, the apparatus is provided on an electronic device, the person camera module comprises a front-facing camera of the electronic device, and the scene camera module comprises a rear-facing camera of the electronic device.
In one embodiment, the apparatus further comprises a three-dimensional model building module (not shown in FIG. 5) for:
performing three-dimensional reconstruction on a real scene by using a real scene image acquired by a scene camera module to obtain a scene model;
and based on a preset superposition strategy, superposing the virtual object to the scene model to obtain a three-dimensional model.
In one embodiment, the human eye position determining module is specifically configured to:
acquiring the relative position of the person camera module and the scene camera module;
converting the relative position of the human eyes and the person camera module into the relative position of the human eyes and the scene camera module by using the relative position of the person camera module and the scene camera module; and
calculating the human eye position information by combining the relative position of the human eyes and the scene camera module with the position information of the scene camera module.
In one embodiment, the apparatus further comprises a location determination module (not shown in fig. 5) configured to:
before the human eye position is determined, determining that the relative position of the human eyes and the person camera module has changed, by comparing the currently obtained relative position of the human eyes and the person camera module with the previously obtained relative position of the human eyes and the person camera module.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement it without inventive effort.
Accordingly, embodiments of the present specification further provide a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the following method when executing the program:
acquiring a person image captured by a person camera module, and determining the relative position of the human eyes and the person camera module based on the relationship between the human eye region in the person image and the person image as a whole;
determining human eye position information based on at least the relative position of the human eyes and the person camera module and the position information of the scene camera module; and
rendering a three-dimensional model by using the human eye position information as the position information of the scene camera module in the rendering parameters, to obtain a projection image projected on the display screen, wherein the three-dimensional model is obtained by combining the virtual object with the real scene scanned by the scene camera module.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
A computer storage medium having stored therein program instructions, the program instructions comprising:
acquiring a person image captured by a person camera module, and determining the relative position of the human eyes and the person camera module based on the relationship between the human eye region in the person image and the person image as a whole;
determining human eye position information based on at least the relative position of the human eyes and the person camera module and the position information of the scene camera module; and
rendering a three-dimensional model by using the human eye position information as the position information of the scene camera module in the rendering parameters, to obtain a projection image projected on the display screen, wherein the three-dimensional model is obtained by combining the virtual object with the real scene scanned by the scene camera module.
Embodiments of the present description may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of the storage medium of the computer include, but are not limited to: phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technologies, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic tape storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by a computing device.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.