WO2018086295A1 - Application interface display method and apparatus - Google Patents
- Publication number: WO2018086295A1 (international application PCT/CN2017/078027)
- Authority: WIPO (PCT)
Classifications
- G06T 15/00: 3D [Three Dimensional] image rendering (G: Physics; G06: Computing; Calculating or Counting; G06T: Image data processing or generation, in general)
- The present application relates to the field of computer applications, and in particular, to an application interface display method and apparatus.
- Virtual Reality (VR) technology is a computer simulation technology that creates and lets users experience virtual worlds. It uses a computer to generate a simulated environment that fuses multi-source information into an interactive, three-dimensional dynamic view with simulation of entity behavior, immersing the user in that environment. With rising demands on quality of life, the development of virtual reality display technology has become a focus of public attention.
- A virtual reality device requires that a left eye image and a right eye image be rendered separately to produce a stereoscopic effect, whereas most existing application interfaces are two-dimensional (2D) and cannot meet this requirement. As a result, a large number of applications cannot be used in virtual reality systems.
- In the prior art, a virtual reality scene is written into a frame buffer of the Android system in a left-right split-screen manner by using Open Graphics Library (OpenGL) functions, and the Android system reads the content of the frame buffer to perform rendering, so that the two-dimensional application interface is rendered into left eye and right eye images simultaneously and acquires a three-dimensional effect.
- However, rendering a two-dimensional application interface into an image with a three-dimensional visual effect takes a relatively long time; the rendered result lags behind and cannot stay attached to the user's field of view, which easily causes a visual misalignment for the user, leading to dizziness and a poor experience.
- The embodiments of the present application provide an application interface display method that can avoid the dizziness caused by misalignment between the image position and the user's field of view when the user's head posture changes while a two-dimensional application interface is being rendered into an image having a three-dimensional visual effect, thereby enhancing the user experience.
- the first aspect of the present application provides an application interface display method, which is used by an application interface display device to display an interface of a two-dimensional application on a VR device, and the method includes:
- The application interface display device obtains an interface to be displayed, where the interface to be displayed is an interface of the two-dimensional application. The interface to be displayed is subjected to dimension conversion processing to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect. The application interface display device then acquires the current head posture of the user, that is, a first head posture, and adjusts the first left eye image and the first right eye image according to the first head posture to obtain a second left eye image and a second right eye image. Finally, the second left eye image is displayed in the left eye view area of the VR device, and the second right eye image is displayed in the right eye view area of the VR device.
- The dimension conversion processing refers to converting the interface of a two-dimensional application into an interface having a three-dimensional visual effect. The left eye image refers to an image generated for the user's left eye field of view, and the right eye image refers to an image generated for the user's right eye field of view.
- The VR device is divided into a left eye view area and a right eye view area according to the user's left and right eyes. The left eye view area is the screen area or optical lens group in the VR device that is aligned with the user's left eye field of view, and the right eye view area is the screen area or optical lens group in the VR device that is aligned with the user's right eye field of view.
- In the embodiment of the present application, the current head posture of the user is obtained, the images corresponding to the left and right eyes are adjusted according to the head posture, and the adjusted images are then displayed in the left eye view area and the right eye view area of the VR device, respectively. That is to say, after dimension conversion is performed on the interface to be displayed to obtain an image having a three-dimensional visual effect, the converted result is further adjusted according to the latest head posture of the user, so that the position of the finally displayed image more closely matches the user's field of view. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture during the rendering of the two-dimensional application interface into an image with a three-dimensional visual effect, thereby improving the user experience.
- The process of performing the dimension conversion processing by the application interface display device may specifically include: the application interface display device performs binocular rendering on the interface to be displayed to obtain a third left eye image and a third right eye image of the interface to be displayed, and then performs barrel distortion processing on the third left eye image and the third right eye image to obtain the first left eye image and the first right eye image of the interface to be displayed.
- When the magnification in regions away from the optical axis is lower than that near the optical axis, the image appears to bulge outward in the image plane; this effect is called barrel distortion. The barrel distortion processing in the present application is used to counteract the distortion produced by the optical lenses in the VR device.
- In the embodiment of the present application, after an image with a three-dimensional visual effect is obtained by binocular rendering, barrel distortion processing is further performed on the image to offset the distortion generated by the optical lenses in the VR device, thereby improving image quality and enhancing the user experience.
- In a possible implementation, the process in which the application interface display device performs binocular rendering on the interface to be displayed to obtain the third left eye image and the third right eye image specifically includes:
- After acquiring the interface to be displayed, the application interface display device acquires the current head posture of the user, that is, a second head posture, determines a first area and a second area according to the second head posture, draws the interface to be displayed in the first area to obtain the third left eye image, and draws the interface to be displayed in the second area to obtain the third right eye image, where the first area is an area used for displaying the interface to be displayed in a left eye image of a preset three-dimensional scene, and the second area is an area used for displaying the interface to be displayed in a right eye image of the preset three-dimensional scene.
- In the embodiment of the present application, binocular rendering is performed by drawing the interface to be displayed into the preset three-dimensional scene, so that the user can browse the interface to be displayed within the preset three-dimensional scene, which improves the flexibility of the solution and further enhances the user experience.
- In a possible implementation, the process in which the application interface display device adjusts the first left eye image and the first right eye image according to the first head posture to obtain the second left eye image and the second right eye image specifically includes: the application interface display device performs asynchronous time warping on the first left eye image according to the first head posture to obtain the second left eye image, and performs asynchronous time warping on the first right eye image to obtain the second right eye image.
- Asynchronous time warping (ATW) is an image correction technique. When the head moves too fast for scene rendering to keep up, that is, the head has already turned but the new image has not yet been rendered, a rendering delay occurs. Asynchronous time warping mitigates this delay by warping the already rendered image according to the latest head posture before it is sent to the display device. The embodiment of the present application thus provides a specific manner of adjusting an image, which improves the feasibility of the solution.
- In a possible implementation, the application interface display device obtains the interface to be displayed by acquiring it from a mobile terminal. Correspondingly, the application interface display device displays the second left eye image in the left eye view area of the VR device and the second right eye image in the right eye view area of the VR device by sending the second left eye image and the second right eye image to the mobile terminal, where the screen of the mobile terminal includes a third area and a fourth area, the third area corresponds to the left eye view area of the VR device, and the fourth area corresponds to the right eye view area of the VR device, so that the mobile terminal displays the second left eye image in the third area of the screen and the second right eye image in the fourth area of the screen. The embodiment of the present application thus provides a specific manner of obtaining and displaying the interface to be displayed, which improves the feasibility of the solution.
- In a possible implementation, the mobile terminal includes a SurfaceFlinger module, where SurfaceFlinger is the module responsible for display composition in the Android system: it calculates the position of each layer in the final composited image, generates the final display buffer, and displays it on a particular display device.
- The application interface display device may obtain the interface to be displayed from the mobile terminal by acquiring it from the SurfaceFlinger module, and may send the second left eye image and the second right eye image to the mobile terminal by sending them to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left eye image in the third area of the screen of the mobile terminal and the second right eye image in the fourth area of the screen.
- The application interface display device in the embodiment of the present application may be a device that runs independently of the Android system; that is, the application interface display method in the embodiment of the present application does not depend on the Android system and can reduce the computing load of the mobile terminal. When the algorithm used in the method needs to be updated, the update can be performed independently of the Android system, and when the internal architecture of the Android system is updated, the algorithm used in the method does not need to be modified accordingly, so the solution has higher flexibility and versatility.
- the second aspect of the present application provides an application interface display device for displaying an interface of a two-dimensional application on a VR device, the device comprising:
- a first acquiring module, configured to acquire an interface to be displayed, where the interface to be displayed is an interface of a two-dimensional application;
- a processing module, configured to perform dimension conversion processing on the interface to be displayed acquired by the first acquiring module, to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect;
- a second acquiring module configured to acquire a first head posture of the user
- the adjusting module is configured to adjust the first left eye image to obtain the second left eye image according to the first head posture acquired by the second acquiring module, and adjust the first right eye image to obtain the second right eye image;
- a display module configured to display a second left eye image in a left eye view area of the VR device, and display a second right eye image in a right eye view area of the VR device.
- In the embodiment of the present application, the current head posture of the user is obtained, the images corresponding to the left and right eyes are adjusted according to the head posture, and the adjusted images are then displayed in the left eye view area and the right eye view area of the VR device, respectively. That is to say, after dimension conversion is performed on the interface to be displayed to obtain an image having a three-dimensional visual effect, the converted result is further adjusted according to the latest head posture of the user, so that the position of the finally displayed image more closely matches the user's field of view. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture during the rendering of the two-dimensional application interface into an image with a three-dimensional visual effect, thereby improving the user experience.
- the processing module specifically includes:
- a rendering unit, configured to perform binocular rendering on the interface to be displayed, to obtain a third left eye image and a third right eye image of the interface to be displayed;
- the processing unit is configured to perform barrel distortion processing on the third left eye image and the third right eye image to obtain a first left eye image of the interface to be displayed and a first right eye image of the interface to be displayed.
- the processing unit of the embodiment of the present application can perform distortion processing on the image to offset the distortion generated by the optical lens in the VR device, improve image quality, and enhance user experience.
- the rendering unit may specifically include:
- a first acquiring subunit, configured to acquire a second head posture of the user;
- a determining subunit, configured to determine a first area and a second area respectively according to the second head posture, where the first area is an area used for displaying the interface to be displayed in a left eye image of a preset three-dimensional scene, and the second area is an area used for displaying the interface to be displayed in a right eye image of the preset three-dimensional scene;
- a drawing subunit, configured to draw the interface to be displayed in the first area to obtain the third left eye image, and draw the interface to be displayed in the second area to obtain the third right eye image.
- the rendering unit of the embodiment of the present application draws the interface to be displayed into the preset three-dimensional scene by using the drawing sub-unit, so that the user can browse the interface to be displayed in the preset three-dimensional scene, thereby improving the flexibility of the solution and further enhancing the user experience.
- the adjustment module may specifically include:
- the time warping unit is configured to perform asynchronous time warping on the first left eye image according to the first head posture to obtain a second left eye image, and perform asynchronous time warping on the first right eye image to obtain a second right eye image.
- The embodiment of the present application thus provides a specific manner in which the adjustment module adjusts an image, which improves the feasibility of the solution.
- the first obtaining module may specifically include:
- An obtaining unit configured to acquire the to-be-displayed interface from the mobile terminal
- the display module may specifically include:
- a sending unit configured to send the second left eye image and the second right eye image to the mobile terminal, so that the mobile terminal displays the second left eye image in the third area of the screen, and displays the second right eye image in the fourth area of the screen
- the screen of the mobile terminal includes a third area and a fourth area, the third area corresponds to a left eye view area of the VR device, and the fourth area corresponds to a right eye view area of the VR device.
- The embodiment of the present application thus provides a specific manner in which the acquiring unit obtains the interface to be displayed and the display module displays the interface to be displayed, which improves the feasibility of the solution.
- the mobile terminal includes a SurfaceFlinger module
- the acquiring unit may specifically include:
- a second obtaining subunit configured to obtain an interface to be displayed from the SurfaceFlinger module
- the sending unit may specifically include:
- a sending subunit, configured to send the second left eye image and the second right eye image to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left eye image in the third area of the screen of the mobile terminal and displays the second right eye image in the fourth area of the screen.
- The application interface display device in the embodiment of the present application may be a device independent of the Android system; that is, the method in the embodiment of the present application may be executed independently of the Android system, which can reduce the computing load of the mobile terminal. When the algorithm used in the method needs to be updated, the update can be performed independently of the Android system, and when the internal architecture of the Android system is updated, the algorithm used in the method does not need to be modified accordingly, so the solution has higher flexibility and versatility.
- A third aspect of the present application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method according to the first aspect or any one of the first to fifth implementations of the first aspect.
- A fourth aspect of the present application provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to the first aspect or any one of the first to fifth implementations of the first aspect.
- the embodiments of the present application have the following advantages:
- In the embodiments of the present application, the current head posture of the user is obtained, the images corresponding to the left and right eyes are adjusted according to the head posture, and the adjusted images are then displayed in the left eye view area and the right eye view area of the VR device, respectively. That is to say, after dimension conversion is performed on the interface to be displayed to obtain an image having a three-dimensional visual effect, the converted result is further adjusted according to the latest head posture of the user, so that the position of the finally displayed image more closely matches the user's field of view. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture during the rendering of the two-dimensional application interface into an image with a three-dimensional visual effect, thereby improving the user experience.
- FIG. 1 is a flow chart of an embodiment of an application interface display method in an embodiment of the present application
- FIG. 2 is a flow chart of another embodiment of an application interface display method in an embodiment of the present application.
- FIG. 3 is a schematic diagram of a third left eye image and a third right eye image in the embodiment of the present application;
- FIG. 4 is a schematic diagram of an embodiment of an application interface display system in an embodiment of the present application.
- FIG. 5 is a flowchart of another embodiment of an application interface display method in an embodiment of the present application.
- FIG. 6 is a schematic diagram of an embodiment of an application interface display device in an embodiment of the present application.
- FIG. 7 is a schematic diagram of another embodiment of an application interface display device in an embodiment of the present application.
- FIG. 8 is a schematic diagram of another embodiment of an application interface display device in an embodiment of the present application.
- FIG. 9 is a schematic diagram of another embodiment of an application interface display device in an embodiment of the present application.
- The embodiments of the present application provide an application interface display method and device for avoiding the dizziness caused by misalignment between the image position and the user's field of view due to a change in the user's head posture during the rendering of a two-dimensional application interface into an image having a three-dimensional visual effect, thereby enhancing the user experience.
- VR devices refer to hardware products related to the field of virtual reality technology and are hardware devices used in virtual reality solutions. At present, the hardware devices commonly used in virtual reality can be roughly divided into four types: modeling devices, three-dimensional visual display devices, sound devices, and interactive devices.
- the VR device in the embodiment of the present application refers to a three-dimensional visual display device, such as a three-dimensional display system, a large projection system (such as CAVE), a head-mounted display device, and the like.
- A VR head-mounted display device uses a head-mounted display to shut the user's vision and hearing off from the outside world and guide the user to generate the feeling of being in a virtual environment.
- The VR device in the embodiment of the present application includes a left eye view area for displaying a left eye image to the user's left eye and a right eye view area for displaying a right eye image to the user's right eye. After the left eye image and the right eye image, which differ from each other, are respectively presented to the user's left and right eyes, a stereoscopic effect is formed in the user's mind.
- A VR head display can be subdivided into three categories: the external head display, the integrated head display, and the mobile head display. The external head display and the integrated head display each have an independent screen. The external head display presents the left eye image and the right eye image on its own screen by using data input from an external device, so that the user is immersed in the virtual environment, while the integrated head display allows the user to be immersed in the virtual environment without any external input/output device. The mobile head display, also called a VR glasses box, requires the mobile terminal to be placed into the VR glasses box; the left eye image and the right eye image are displayed on the screen of the mobile terminal, and the user views the left eye image and the right eye image of the mobile terminal through the VR glasses box, thereby generating a sense of three-dimensionality and immersion in the mind.
- The application interface display method in the embodiment of the present application is used by an application interface display device to display the interface of a two-dimensional application on a VR device.
- The application interface display device may be the external head display described above, or an input device that can be connected to the external head display, such as a personal computer (PC) or a mobile phone; the application interface display device may be the integrated head display, or a component in the integrated head display used for rendering images; the application interface display device may be the mobile head display, or a mobile terminal that can be placed into the mobile head display and used for displaying the left eye image and the right eye image. The application interface display device may also be another device capable of communicating with the above three types of head displays, their input devices, or the mobile terminal, such as a cloud server.
- an embodiment of the application interface display method in the embodiment of the present application includes:
- 101. The application interface display device obtains an interface to be displayed.
- The interface to be displayed is an interface that needs to be displayed on the screen of the display device. The interface to be displayed may be the interface of any single two-dimensional application, or may be an interface synthesized from the interfaces of a plurality of two-dimensional applications, which is not limited herein. It should be understood that a two-dimensional application refers to an application developed based on two-dimensional display.
- 102. The application interface display device performs dimension conversion processing on the interface to be displayed, to obtain a first left eye image and a first right eye image of the interface to be displayed.
- After obtaining the interface to be displayed, the application interface display device performs dimension conversion processing on the interface to be displayed to obtain the first left eye image and the first right eye image of the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect. The dimension conversion processing refers to converting the interface of a two-dimensional application into an interface having a three-dimensional visual effect.
- The left eye image in the embodiment of the present application refers to an image generated for the user's left eye field of view, and the right eye image refers to an image generated for the user's right eye field of view.
- 103. The application interface display device acquires a first head posture of the user.
- Because the posture of the user's head may change, after the application interface display device performs step 102 and obtains the first left eye image and the first right eye image, it acquires the user's latest head posture, that is, the first head posture. The head posture may specifically include the deflection direction of the user's head, the deflection angle of the user's head, or the motion mode of the user's head, and may also include other posture information, which is not limited herein.
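- Purely as an illustration (the field names, units, and layout below are assumptions of this sketch, not a format defined by the embodiments), the head posture information described above could be represented on the application interface display device as a small structure:

```cpp
// Hypothetical representation of one head-posture sample; field names and
// units are illustrative assumptions, not part of the embodiments.
struct HeadPosture {
    float yaw;                 // deflection around the vertical axis, in radians
    float pitch;               // up-down deflection, in radians
    float roll;                // tilt around the viewing axis, in radians
    float angularVelocity[3];  // optional motion information (e.g. swing), rad/s
    double timestampSeconds;   // time at which the posture was sampled
};
```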
- 104. The application interface display device adjusts the first left eye image according to the first head posture to obtain a second left eye image, and adjusts the first right eye image to obtain a second right eye image.
- After acquiring the first head posture, the application interface display device adjusts the first left eye image according to the first head posture to obtain the second left eye image, and adjusts the first right eye image to obtain the second right eye image.
- 105. The application interface display device displays the second left eye image in the left eye view area of the VR device, and displays the second right eye image in the right eye view area of the VR device.
- The VR device is divided into a left eye view area and a right eye view area according to the user's left and right eyes. Specifically, if the VR device has an independent screen, the left eye view area is the area of the screen seen by the user's left eye and the right eye view area is the area of the screen seen by the user's right eye, and the application interface display device displays the second left eye image and the second right eye image in those areas. If the VR device does not have an independent screen, the left eye view area is the optical lens group on the VR device aligned with the user's left eye and the right eye view area is the optical lens group aligned with the user's right eye; the application interface display device displays the second left eye image and the second right eye image in the areas of the external screen aligned with the corresponding optical lens groups, and the two images finally reach the user's left and right eyes through the optical path. In this way, the left eye view area and the right eye view area of the VR device respectively present the second left eye image and the second right eye image to the user's left and right eyes, and the user can synthesize a stereoscopic image in the brain.
- In the embodiment of the present application, the current head posture of the user is acquired, the images corresponding to the left and right eyes are adjusted according to the head posture, and the adjusted images are then displayed in the left eye view area and the right eye view area of the VR device, respectively. That is to say, after dimension conversion is performed on the interface to be displayed to obtain an image having a three-dimensional visual effect, the converted result is further adjusted according to the latest head posture of the user, so that the position of the finally displayed image more closely matches the user's field of view. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture during the rendering of the two-dimensional application interface into an image with a three-dimensional visual effect, thereby improving the user experience.
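- For illustration only, the flow of steps 101 to 105 above can be outlined as follows; every type and helper function in this sketch is a hypothetical placeholder rather than an interface defined by the embodiments:

```cpp
#include <cstdint>
#include <vector>

// Placeholder types and helpers; names are assumptions made for this sketch.
struct Image2D    { int width = 0, height = 0; std::vector<uint32_t> rgba; };
struct HeadPose   { float yaw = 0, pitch = 0, roll = 0; };
struct StereoPair { Image2D left, right; };

Image2D    captureInterface();                              // 101: interface to be displayed
StereoPair dimensionConvert(const Image2D& ui);             // 102: binocular rendering + barrel distortion
HeadPose   getLatestHeadPose();                             // 103: first head posture
Image2D    timeWarp(const Image2D& img, const HeadPose& p); // 104: asynchronous time warping
void       displayInLeftViewArea(const Image2D& img);       // 105: left eye view area of the VR device
void       displayInRightViewArea(const Image2D& img);      // 105: right eye view area of the VR device

void showApplicationInterfaceOnVrDevice() {
    Image2D ui = captureInterface();                   // step 101
    StereoPair first = dimensionConvert(ui);           // step 102: first left/right eye images
    HeadPose pose = getLatestHeadPose();                // step 103: first head posture
    Image2D secondLeft  = timeWarp(first.left,  pose);  // step 104: second left eye image
    Image2D secondRight = timeWarp(first.right, pose);  // step 104: second right eye image
    displayInLeftViewArea(secondLeft);                  // step 105
    displayInRightViewArea(secondRight);
}
```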
- The application interface display device can convert the interface of the two-dimensional application into an interface with a three-dimensional visual effect in various manners. One of these manners is taken as an example below to describe the application interface display method in the embodiment of the present application in detail.
- another embodiment of the application interface display method in the embodiment of the present application includes:
- 201. The application interface display device obtains an interface to be displayed.
- The interface to be displayed is an interface that needs to be displayed on the screen of the display device. The interface to be displayed may be the interface of any single two-dimensional application, or may be an interface synthesized from the interfaces of a plurality of two-dimensional applications, which is not limited herein. A two-dimensional application refers to an application developed based on two-dimensional display.
- SurfaceFlinger is the module responsible for display composition in the Android system. It receives windows and layers as input, calculates the position of each layer in the final composited image according to parameters such as the depth, transparency, size, and position of each layer, generates the final display buffer (Buffer), and displays it on a specific display device.
- 202. The application interface display device performs binocular rendering on the interface to be displayed, to obtain a third left eye image and a third right eye image of the interface to be displayed.
- The user's left eye and right eye can each observe an object independently, and there is a certain distance between the two eyes, so for the same target, the image seen by the user's left eye differs from the image seen by the user's right eye. The difference produced by observing one target from two points separated by a certain distance is called parallax. The user's brain can fuse a left eye image and a right eye image that have parallax to produce a stereoscopic visual effect, so that the user sees a stereoscopic object.
- Drawing a left eye image and a right eye image of the interface to be displayed for the user's left eye and right eye respectively is referred to as binocular rendering (Stereoscopic Rendering) of the interface to be displayed, and the resulting left eye image and right eye image are referred to as the third left eye image and the third right eye image. FIG. 3 shows an example of a third left eye image and a third right eye image. When the user's left and right eyes respectively acquire the third left eye image and the third right eye image through the VR device, the user's brain can fuse the two images to produce a stereoscopic visual effect, so that the user sees the interface to be displayed with a three-dimensional effect.
- Specifically, the application interface display device may draw the third left eye image and the third right eye image of the interface to be displayed for the user's left eye and right eye in the following manner: acquiring a second head posture of the user, determining a first area and a second area respectively according to the second head posture, drawing the interface to be displayed in the first area to obtain the third left eye image, and drawing the interface to be displayed in the second area to obtain the third right eye image, where the first area is an area used for displaying the interface to be displayed in a left eye image of a preset three-dimensional scene, and the second area is an area used for displaying the interface to be displayed in a right eye image of the preset three-dimensional scene.
- The user or the system may preset one or more three-dimensional scenes, and a left eye image and a right eye image of the three-dimensional scene are drawn for the user's left and right eyes. When the user aligns the left eye with the left eye view area of the VR device and the right eye with the right eye view area of the VR device to obtain the left eye image and the right eye image, the brain synthesizes them to generate a sense of three-dimensionality and immersion, so that the user feels placed in the preset three-dimensional scene. The preset three-dimensional scene includes a display area for displaying the interface to be displayed; the area corresponding to this display area in the left eye image of the three-dimensional scene is the first area, and the corresponding area in the right eye image of the three-dimensional scene is the second area.
- The application interface display device draws the interface to be displayed in the first area and the second area, so that when the user is in the preset three-dimensional scene, the interface to be displayed is seen in the display area. The preset three-dimensional scene may be a movie theater, a shopping mall, a classroom, and so on, which are not enumerated here; the corresponding display area may be the screen in the movie theater, an advertising screen in the shopping mall, the blackboard in the classroom, and the like.
- The purpose of VR technology is to immerse the user in a simulated environment, so the three-dimensional scene that the user sees with the VR device simulates a real situation. When the user's head rotates, the three-dimensional scene seen by the user also rotates, and the elements in the scene change accordingly. Taking the classroom scene as an example, suppose the user's initial field of view is set at the center of the classroom: the user can see the desks and chairs in front, the podium, and the entire blackboard, whereas when the user's head is lifted up, the user can only see the upper half of the blackboard and the ceiling. Therefore, as the user's head moves, the position of the display area within the user's field of view changes, and the display area may even move out of the user's field of view.
- In the embodiment of the present application, the application interface display device acquires the current head posture of the user, that is, the second head posture, and then determines the position of the display area in the user's field of view according to the second head posture, that is, determines first position information of the first area and second position information of the second area. The position information may be the coordinate information of the vertices of the area on the screen, or may be other information that can determine the position, which is not limited herein. The application interface display device then draws the interface to be displayed into the first area according to the first position information to obtain the third left eye image, and draws the interface to be displayed into the second area according to the second position information to obtain the third right eye image.
- It should be noted that the second head posture refers to the head posture acquired when the application interface display device performs binocular rendering on the interface to be displayed in the above manner, while the first head posture in the following step 204 refers to the head posture acquired after the barrel distortion processing and before the image adjustment is performed. That is, the first head posture and the second head posture are the user's head postures acquired by the application interface display device at different times: the second head posture is used for binocular rendering, and the first head posture is used for image adjustment. The first head posture and the second head posture in the embodiment of the present application are determined by a sensor, which may be a sensor in the VR device, a sensor in the application interface display device, or a sensor in another external device, and is not limited herein.
- It should be understood that the application interface display device may also obtain the third left eye image and the third right eye image of the interface to be displayed by other means, which is not limited herein. A rendering sketch is given below.
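- The following OpenGL ES style sketch illustrates one possible way to perform the binocular rendering described above: the texture of the interface to be displayed is drawn onto the display-area quad twice, once with a left eye view matrix and once with a right eye view matrix derived from the second head posture. The helper functions, the eye separation value, and the shader attribute/uniform names are assumptions of this sketch, not requirements of the embodiments:

```cpp
#include <GLES2/gl2.h>

// Simple math placeholders; the actual matrix construction is assumed.
struct Mat4     { float m[16]; };                    // column-major 4x4 matrix
struct HeadPose { float yaw = 0, pitch = 0, roll = 0; };

Mat4 viewMatrixForEye(const HeadPose& secondPose, float eyeOffsetMetres); // assumed helper
Mat4 projectionMatrix(float fovYRadians, float aspect);                   // assumed helper
Mat4 multiply(const Mat4& a, const Mat4& b);                              // assumed helper

// Draw the display-area quad, textured with the interface, into one eye's target.
void drawEye(GLuint program, GLuint interfaceTexture, const float quadVertices[12],
             const Mat4& viewProj, GLuint targetFbo, int width, int height) {
    glBindFramebuffer(GL_FRAMEBUFFER, targetFbo);    // this eye's render target
    glViewport(0, 0, width, height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "uMvp"), 1, GL_FALSE, viewProj.m);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, interfaceTexture);  // texture of the interface to be displayed

    GLint pos = glGetAttribLocation(program, "aPosition");
    glEnableVertexAttribArray(static_cast<GLuint>(pos));
    glVertexAttribPointer(static_cast<GLuint>(pos), 3, GL_FLOAT, GL_FALSE, 0, quadVertices);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);             // the four vertices of the display area
}

// Binocular rendering: produce the third left eye image and third right eye image.
void binocularRender(GLuint program, GLuint interfaceTexture, const float quad[12],
                     const HeadPose& secondPose, GLuint leftFbo, GLuint rightFbo,
                     int width, int height) {
    const float kHalfEyeSeparation = 0.032f;         // assumed half inter-pupillary distance, metres
    Mat4 proj = projectionMatrix(1.6f, float(width) / float(height));
    Mat4 leftVp  = multiply(proj, viewMatrixForEye(secondPose, -kHalfEyeSeparation));
    Mat4 rightVp = multiply(proj, viewMatrixForEye(secondPose, +kHalfEyeSeparation));
    drawEye(program, interfaceTexture, quad, leftVp,  leftFbo,  width, height);  // third left eye image
    drawEye(program, interfaceTexture, quad, rightVp, rightFbo, width, height);  // third right eye image
}
```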
- 203. The application interface display device performs barrel distortion processing on the third left eye image and the third right eye image, to obtain a first left eye image and a first right eye image of the interface to be displayed.
- Because the VR device includes several groups of optical lenses, the edges of an image viewed through the lenses may be distorted to different degrees. Therefore, after drawing the third left eye image and the third right eye image for the user's left and right eyes, the application interface display device performs barrel distortion processing on the third left eye image to obtain the first left eye image and on the third right eye image to obtain the first right eye image, so that the distortion generated by the optical lenses is offset.
- Specifically, the application interface display device may use a shader with a set of preset parameters to barrel-distort the third left eye image and the third right eye image into the first left eye image and the first right eye image, where the preset parameters are set according to the lens parameters of the VR device, such as thickness, refractive index, and pitch. It should be understood that the application interface display device may also perform the barrel distortion processing in other manners, which is not limited herein; a shader sketch is given below.
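- For illustration only, the following fragment shader (GLSL ES, embedded as a C++ string) applies a simple radial barrel distortion; the coefficients uK1 and uK2 stand in for the "preset parameters" and are assumptions rather than values given by the embodiments:

```cpp
// Minimal radial barrel-distortion fragment shader, shown as a C++ string.
// uK1/uK2 are assumed distortion coefficients derived from the lens parameters.
static const char* kBarrelDistortFragmentShader = R"(
precision mediump float;
varying vec2 vTexCoord;          // texture coordinate of the third eye image, in [0,1]
uniform sampler2D uEyeImage;     // third left eye image or third right eye image
uniform vec2 uCenter;            // optical-axis centre of this eye, e.g. (0.5, 0.5)
uniform float uK1;               // first radial distortion coefficient (assumed)
uniform float uK2;               // second radial distortion coefficient (assumed)

void main() {
    vec2 d = vTexCoord - uCenter;            // offset from the optical axis
    float r2 = dot(d, d);                    // squared radial distance
    float scale = 1.0 + uK1 * r2 + uK2 * r2 * r2;
    vec2 src = uCenter + d * scale;          // sample further out near the edges
    if (src.x < 0.0 || src.x > 1.0 || src.y < 0.0 || src.y > 1.0) {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);   // outside the image: black
    } else {
        gl_FragColor = texture2D(uEyeImage, src);  // first left/right eye image pixel
    }
}
)";
```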
- 204. The application interface display device acquires a first head posture of the user.
- Because the posture of the user's head may change, after performing step 203 to obtain the first left eye image and the first right eye image, the application interface display device acquires the user's latest head posture, that is, the first head posture. The head posture may specifically include the deflection direction of the user's head, the deflection angle of the user's head, or the motion mode of the user's head, where the motion mode may be, for example, a left-right swing or an up-down swing, which is not limited herein. The head posture may also include other posture information, which is not limited herein.
- 205. The application interface display device adjusts the first left eye image according to the first head posture to obtain a second left eye image, and adjusts the first right eye image to obtain a second right eye image.
- After acquiring the first head posture, the application interface display device adjusts the first left eye image according to the first head posture to obtain the second left eye image, and adjusts the first right eye image to obtain the second right eye image. Specifically, the application interface display device may calculate a transformation matrix according to the first head posture, transform the first left eye image according to the transformation matrix to obtain the second left eye image, and transform the first right eye image to obtain the second right eye image; that is, the first left eye image is asynchronously time warped to obtain the second left eye image, and the first right eye image is asynchronously time warped to obtain the second right eye image. The application interface display device may use a shader with a set of preset parameters to perform the asynchronous time warping operation on the texture data of the first left eye image and the first right eye image to obtain the second left eye image and the second right eye image, or may obtain the second left eye image and the second right eye image by performing asynchronous time warping in other manners, which is not limited herein.
- Asynchronous time warping (Asynchronous Time Warp) is an image correction technique. When the head moves too fast for scene rendering to keep up, that is, the head has already turned but the new image has not yet been rendered (or the image of the previous frame is still being displayed), a rendering delay occurs. Asynchronous time warping mitigates this delay by warping the image before it is sent to the display device.
- In the embodiment of the present application, the asynchronous time warping refers to an operation of stretching and shifting an image. For example, when the acquired first head posture indicates a rotation to the left, the application interface display device stretches and translates the first left eye image and the first right eye image to the left according to the first head posture to obtain the second left eye image and the second right eye image; when the acquired first head posture indicates a downward rotation, the application interface display device stretches and translates the first left eye image and the first right eye image downward according to the first head posture to obtain the second left eye image and the second right eye image. The manner of adjustment differs depending on the acquired first head posture information, and the cases are not enumerated here one by one; a minimal sketch follows.
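- The sketch below illustrates one simple realization of step 205 under stated assumptions: a translation-style transformation matrix is computed from the change in head posture, and the first eye image is re-sampled with it. The field-of-view values, the nearest-neighbour sampling, and the pose layout are all assumptions made for this example, not requirements of the embodiments:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct HeadPose { float yaw = 0.0f, pitch = 0.0f; };   // radians (simplified)
struct Image {
    int width = 0, height = 0;
    std::vector<uint32_t> rgba;                         // row-major pixels
    uint32_t at(int x, int y) const { return rgba[size_t(y) * width + x]; }
};

// 3x3 matrix acting on normalised texture coordinates (u, v, 1), row-major.
struct Mat3 { float m[9]; };

// Build the warp matrix: a shift proportional to how far the head turned
// between the pose used for rendering and the newest (first) head posture.
Mat3 timeWarpMatrix(const HeadPose& renderPose, const HeadPose& latestPose,
                    float horizontalFov, float verticalFov) {
    float du = (latestPose.yaw   - renderPose.yaw)   / horizontalFov;
    float dv = (latestPose.pitch - renderPose.pitch) / verticalFov;
    return Mat3{{1.0f, 0.0f, du,
                 0.0f, 1.0f, dv,
                 0.0f, 0.0f, 1.0f}};
}

// Apply the matrix by re-sampling the source image (nearest neighbour).
Image applyTimeWarp(const Image& src, const Mat3& w) {
    Image dst{src.width, src.height,
              std::vector<uint32_t>(size_t(src.width) * src.height, 0xFF000000u)};
    for (int y = 0; y < dst.height; ++y) {
        for (int x = 0; x < dst.width; ++x) {
            float u = (x + 0.5f) / dst.width, v = (y + 0.5f) / dst.height;
            float su = w.m[0] * u + w.m[1] * v + w.m[2];   // warped texture coordinates
            float sv = w.m[3] * u + w.m[4] * v + w.m[5];
            int sx = int(su * src.width), sy = int(sv * src.height);
            if (sx >= 0 && sx < src.width && sy >= 0 && sy < src.height)
                dst.rgba[size_t(y) * dst.width + x] = src.at(sx, sy);
        }
    }
    return dst;
}
```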
- 206. The application interface display device displays the second left eye image in the left eye view area of the VR device, and displays the second right eye image in the right eye view area of the VR device.
- The VR device is divided into a left eye view area and a right eye view area according to the user's left and right eyes. Specifically, if the VR device has an independent screen, the left eye view area is the area of the screen seen by the user's left eye and the right eye view area is the area of the screen seen by the user's right eye; the application interface display device displays the second left eye image and the second right eye image in these two areas, and the images are presented to the user's left and right eyes through the corresponding optical lens groups. If the VR device does not have an independent screen, the left eye view area is the optical lens group on the VR device aligned with the user's left eye and the right eye view area is the optical lens group aligned with the user's right eye; the application interface display device displays the second left eye image and the second right eye image in the areas of the external screen aligned with the corresponding optical lens groups, and the second left eye image and the second right eye image finally reach the user's left and right eyes through the optical path.
- The left eye view area and the right eye view area of the VR device thus respectively present the second left eye image and the second right eye image to the user's left and right eyes, and the user can synthesize a stereoscopic image in the brain, so that the interface to be displayed is presented to the user with a three-dimensional effect.
- In the embodiment of the present application, the current head posture of the user is acquired, the images corresponding to the left and right eyes are adjusted according to the head posture, and the adjusted images are then displayed in the left eye view area and the right eye view area of the VR device, respectively. That is to say, after dimension conversion is performed on the interface to be displayed to obtain an image having a three-dimensional visual effect, the converted result is further adjusted according to the latest head posture of the user, so that the position of the finally displayed image more closely matches the user's field of view. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture during the rendering of the two-dimensional application interface into an image with a three-dimensional visual effect, thereby improving the user experience.
- In addition, the embodiment of the present application performs binocular rendering on the interface to be displayed, and after the two-dimensional application interface is rendered into an image having a three-dimensional visual effect, barrel distortion processing is further performed on the image to eliminate the distortion generated by the optical lenses in the VR device, which improves image quality and enhances the user experience. Moreover, the embodiment of the present application provides multiple manners of adjusting the image according to the head posture, which improves the flexibility of the solution.
- FIG. 4 is a schematic diagram of a system structure to which the application interface display method and apparatus provided by the embodiments of the present application can be applied. The system may include a mobile head display 401 and a mobile terminal 402.
- The mobile terminal 402 includes a screen, and the screen includes a third area and a fourth area. The user first needs to place the mobile terminal 402 into the mobile head display 401 so that the third area is aligned with the left eye view area of the mobile head display 401 and the fourth area is aligned with the right eye view area of the mobile head display 401, and then wear the mobile head display 401 with the left eye aligned with its left eye view area and the right eye aligned with its right eye view area. The left eye view area and the right eye view area of the mobile head display 401 each include at least one group of optical lenses, which are used for optically processing the images displayed by the mobile terminal 402 and presenting the processed images to the user's retinas, thereby creating a sense of three-dimensionality and immersion in the user's mind. The mobile head display 401 may further include a sensor for tracking the posture of the user's head, a CPU for processing data, and the like.
- another embodiment of the application interface display method in the embodiment of the present application includes:
- The application interface display device obtains an interface to be displayed from the mobile terminal.
- The mobile terminal determines, according to a user operation, the interface of the two-dimensional application that needs to be displayed, and the application interface display device obtains the interface to be displayed from the mobile terminal.
- The mobile terminal may include a SurfaceFlinger module. SurfaceFlinger is the module responsible for display composition in Android: it receives windows and layers as input, calculates the position of each layer in the final composited image according to parameters such as the depth, transparency, size, and position of each layer, generates the final display buffer (Buffer), and displays it on a specific display device. The mobile terminal can generate the interface to be displayed by using the SurfaceFlinger module, and the application interface display device acquires the interface to be displayed from the SurfaceFlinger module.
- The application interface display device may be the mobile terminal itself: the mobile terminal synthesizes the interface to be displayed through the SurfaceFlinger module, then transmits the interface to be displayed through a cross-process communication interface to another process that is independent of the Android system, and the subsequent dimension conversion and adjustment steps are performed in that process.
- The application interface display device in the embodiment of the present application may also be another user equipment independent of the Android system, such as a PC. The user equipment can establish a connection with the mobile terminal by using a data line, a wireless network, Bluetooth, or other means; after the mobile terminal synthesizes the interface to be displayed through the SurfaceFlinger module, the user equipment obtains the interface to be displayed from the mobile terminal through the connection and performs the subsequent steps.
- The application interface display device in the embodiment of the present application may further be a cloud server independent of the Android system: the mobile terminal communicates with the cloud server through a wireless network and transmits the interface to be displayed synthesized by the SurfaceFlinger module to the cloud server, and the cloud server receives the interface to be displayed and performs the subsequent steps, for example over a connection like the one sketched below.
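- As a purely illustrative sketch (the framing protocol, the length-prefixed format, and the helper names are assumptions and not part of the embodiments), a device that performs the conversion outside the Android system could receive interface frames from the mobile terminal over a plain TCP connection as follows:

```cpp
// Hypothetical length-prefixed frame receiver on the conversion-device side.
// Protocol (4-byte big-endian length + raw RGBA payload) is an assumption.
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

// Read exactly n bytes from the connection, or return false on error/EOF.
static bool readAll(int fd, void* buf, size_t n) {
    auto* p = static_cast<uint8_t*>(buf);
    while (n > 0) {
        ssize_t r = read(fd, p, n);
        if (r <= 0) return false;
        p += r;
        n -= static_cast<size_t>(r);
    }
    return true;
}

// Receive one frame of the interface to be displayed from the mobile terminal.
bool receiveInterfaceFrame(int clientFd, std::vector<uint8_t>& rgbaOut) {
    uint32_t lenBe = 0;
    if (!readAll(clientFd, &lenBe, sizeof(lenBe))) return false;
    uint32_t len = ntohl(lenBe);              // payload length in bytes
    rgbaOut.resize(len);
    return readAll(clientFd, rgbaOut.data(), len);
}
```

- The embodiments leave the transport open (data line, wireless network, Bluetooth, or a cross-process interface), so this is only one possible realization.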
- The application interface display device performs dimension conversion processing on the interface to be displayed, to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed.
- The first left eye image and the first right eye image may be obtained by performing dimension conversion processing on the interface to be displayed in the manner described in steps 202 to 203 of the embodiment corresponding to FIG. 2 above, or may be obtained by other methods, which is not limited herein.
- The application interface display device acquires a first head posture of the user.
- The sensor in the mobile head display can track the user's head posture in real time. After the application interface display device obtains the first left eye image and the first right eye image, it acquires the user's current head posture, that is, the first head posture, from the sensor in the mobile head display.
- The application interface display device adjusts the first left eye image according to the first head posture to obtain a second left eye image, and adjusts the first right eye image to obtain a second right eye image.
- The first left eye image and the first right eye image may be adjusted to obtain the second left eye image and the second right eye image in the manner described in step 205 of the embodiment corresponding to FIG. 2, or may be adjusted by other methods, which is not limited herein.
- The application interface display device sends the second left eye image and the second right eye image to the mobile terminal.
- After obtaining the second left eye image and the second right eye image, the application interface display device sends them to the mobile terminal, so that the mobile terminal displays the second left eye image in the third area of the screen and the second right eye image in the fourth area of the screen. The user's left eye acquires the second left eye image in the third area through the left eye view area of the mobile head display, and the user's right eye acquires the second right eye image in the fourth area through the right eye view area of the mobile head display, so that the second left eye image and the second right eye image are combined into a stereoscopic image and the interface to be displayed is presented with a three-dimensional effect.
- When the application interface display device is the mobile terminal itself and the mobile terminal performs the foregoing dimension conversion and adjustment steps in another process independent of the Android system to obtain the second left eye image and the second right eye image, the second left eye image and the second right eye image are sent to the SurfaceFlinger module through the cross-process communication interface, the display buffer is generated by the SurfaceFlinger module, and the second left eye image and the second right eye image are displayed on the screen.
- When the application interface display device in the embodiment of the present application is another user equipment or a cloud server independent of the Android system, after the user equipment or the cloud server performs the dimension conversion and adjustment steps to obtain the second left eye image and the second right eye image, the second left eye image and the second right eye image may be sent to the SurfaceFlinger module through the wireless network or in other manners, the display buffer is generated by the SurfaceFlinger module, and the second left eye image and the second right eye image are displayed on the screen.
- In the embodiment of the present application, the current head posture of the user is acquired, the images corresponding to the left and right eyes are adjusted according to the head posture, and the adjusted images are then displayed in the left eye view area and the right eye view area of the VR device, respectively. That is to say, after dimension conversion is performed on the interface to be displayed to obtain an image having a three-dimensional visual effect, the converted result is further adjusted according to the latest head posture of the user, so that the position of the finally displayed image more closely matches the user's field of view. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture during the rendering of the two-dimensional application interface into an image with a three-dimensional visual effect, thereby improving the user experience.
- In addition, the application interface display method in the embodiment of the present application may run in another process independent of the Android system within the mobile terminal, or may run on a user equipment or a cloud server independent of the Android system. That is, the application interface display method in the embodiment of the present application does not depend on the Android system and can reduce the computational burden of the mobile terminal. When the algorithm used in the method needs to be updated, the update can be performed independently of the Android system, and when the internal architecture of the Android system is updated, the algorithm used in the method does not need to be modified accordingly, so the solution has higher flexibility and versatility.
- In the following specific example, the mobile terminal is an Android-based mobile phone that contains a SurfaceFlinger process and another 3DConverter process independent of the Android system.
- When the user clicks the icon of a two-dimensional application on the Android mobile phone, the phone starts the process corresponding to the two-dimensional application (Process100), and SurfaceFlinger creates a layer (Surface) for Process100 and a graphic buffer (GraphicBuffer) corresponding to the Surface; the graphic buffer corresponding to this Surface is referred to here as a first graphic buffer (gb100). SurfaceFlinger passes the data in the first graphic buffer to Process100 through the Binder mechanism, and Process100 maps the data in gb100 into its process space. Process100 then performs drawing operations through OpenGL functions according to the drawing logic of the application, writes the drawing result into the process space, and notifies SurfaceFlinger through the Binder mechanism that drawing is complete.
- SurfaceFlinger checks at a fixed period whether the data in gb100 has been updated, and if so, marks gb100. The content of the mark is mainly SurfaceFlinger's composition strategy for gb100, that is, whether SurfaceFlinger processes gb100 through the graphics processing unit (GPU) or through the hardware composer (HWC), where processing refers to compositing the graphic buffers of multiple applications and sending them to the frame buffer for display. SurfaceFlinger traverses the data to be displayed in gb100 and, by calling the glDrawArrays function, draws the data into the graphic buffer (gb200) corresponding to the framebufferSurface.
- After the mobile phone starts the 3DConverter process, the 3DConverter obtains the data in gb200, that is, the texture data of the interface to be displayed, from SurfaceFlinger through a cross-process communication interface (Interfacer100). It then updates this texture data into the first texture block (P200_texture100), uses the first texture block as an input of the OpenGL functions to perform the first rendering pass on the interface to be displayed, and stores the result of the first rendering pass in the second texture block (P200_texture200).
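- As a rough illustration of the texture update step, the following sketch uploads the pixel data obtained over the cross-process interface into the first texture block with glTexSubImage2D. The function name, the RGBA layout, and the assumption that the texture was previously allocated with glTexImage2D are illustrative, not part of the original disclosure.

```cpp
#include <GLES2/gl2.h>

// Hypothetical helper: overwrite the first texture block (P200_texture100) with
// the latest interface pixels read from the composited buffer (gb200).
void UpdateInterfaceTexture(GLuint tex100, const void* gb200Pixels, int width, int height) {
    glBindTexture(GL_TEXTURE_2D, tex100);
    // Assumes tex100 was allocated earlier with glTexImage2D at the same size.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, gb200Pixels);
}
```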
- The specific process of the first rendering pass is as follows:
- After updating the data from gb200 into the first texture block, the 3DConverter determines the preset three-dimensional scene, here a cinema scene that includes a screen (display area), and acquires the user's current head pose (the second head posture) through the sensor in the VR device. Taking the model data of the cinema scene (including the vertices, geometry, and color of the cinema model), the texture data of the interface to be displayed stored in the first texture block, and the acquired head pose as inputs, the process calls glDrawArray twice: it calculates the position of the screen in the virtual scene to obtain the coordinates of its four vertices, and then draws the interface to be displayed into the three-dimensional scene according to these vertex coordinates and the texture data, obtaining a third left eye image and a third right eye image corresponding to the three-dimensional scene.
- The third left eye image and the third right eye image are then barrel-distorted into the first left eye image and the first right eye image using a set of preset parameters, and the first left eye image and the first right eye image are stored in texture form in the second texture block.
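- A minimal sketch of the per-eye draw of this first rendering pass is given below: the interface texture is drawn onto the four screen vertices of the cinema scene, once into the left half and once into the right half of the render target. The shader program `prog` (with `aPos` and `aUv` attributes and a `uMvp` uniform), the per-eye matrices derived from the head pose, and the vertex layout are assumptions made for illustration only.

```cpp
#include <GLES2/gl2.h>

void DrawScreenForBothEyes(GLuint prog, GLuint interfaceTex,
                           const GLfloat screenVerts[20],       // 4 vertices: x, y, z, u, v
                           const GLfloat mvpLeft[16], const GLfloat mvpRight[16],
                           int targetW, int targetH) {
    glUseProgram(prog);
    glBindTexture(GL_TEXTURE_2D, interfaceTex);      // texture data of the interface to be displayed

    GLint pos = glGetAttribLocation(prog, "aPos");
    GLint uv  = glGetAttribLocation(prog, "aUv");
    GLint mvp = glGetUniformLocation(prog, "uMvp");
    glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), screenVerts);
    glVertexAttribPointer(uv,  2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), screenVerts + 3);
    glEnableVertexAttribArray(pos);
    glEnableVertexAttribArray(uv);

    for (int eye = 0; eye < 2; ++eye) {
        // Left eye draws into the left half of the target, right eye into the right half.
        glViewport(eye * targetW / 2, 0, targetW / 2, targetH);
        glUniformMatrix4fv(mvp, 1, GL_FALSE, eye == 0 ? mvpLeft : mvpRight);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);        // one draw call per eye, as described above
    }
}
```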
- After the 3DConverter stores the first left eye image and the first right eye image in the second texture block, the second texture block is used as an input of the OpenGL functions and the result of the secondary rendering pass is stored in the first texture block.
- The specific process of the secondary rendering pass is as follows:
- The 3DConverter acquires the user's current head posture (the first head posture) through the sensor in the VR device, calculates a transformation matrix according to this head posture, transforms the image stored in the second texture block with the transformation matrix, and draws the transformed image.
- Specifically, the OpenGL shader performs an asynchronous time warping operation on the texture data in the second texture block using another set of preset parameters to obtain the second left eye image and the second right eye image, and the second left eye image and the second right eye image are stored in texture form in the first texture block.
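- One way the corrective transform for such a time warp is often derived, shown here only as a hedged sketch, is from the rotation between the head pose used for the first rendering pass and the newest pose read from the sensor; the quaternion type and convention below are placeholders, not part of the original disclosure.

```cpp
// Minimal quaternion delta used to build the time-warp transformation matrix.
struct Quat { float w, x, y, z; };

// Rotation that takes the pose used for rendering to the latest pose: delta = latest * render^-1
// (assuming unit quaternions, whose inverse is the conjugate).
Quat DeltaRotation(const Quat& renderPose, const Quat& latestPose) {
    Quat inv{renderPose.w, -renderPose.x, -renderPose.y, -renderPose.z};
    return Quat{
        latestPose.w * inv.w - latestPose.x * inv.x - latestPose.y * inv.y - latestPose.z * inv.z,
        latestPose.w * inv.x + latestPose.x * inv.w + latestPose.y * inv.z - latestPose.z * inv.y,
        latestPose.w * inv.y - latestPose.x * inv.z + latestPose.y * inv.w + latestPose.z * inv.x,
        latestPose.w * inv.z + latestPose.x * inv.y - latestPose.y * inv.x + latestPose.z * inv.w,
    };
}
// The resulting delta would be converted to a 4x4 matrix and passed to the shader
// that redraws the second texture block into the first texture block.
```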
- It should be noted that a texture block serves as an input of OpenGL drawing, while a frame buffer serves as an output of the drawing. In order to output the drawing result into a texture block, in the embodiment of the present application the first texture block is associated with the color attachment point of the first frame buffer (p200_framebuffer100), and the second texture block is associated with the color attachment point of the second frame buffer (p200_framebuffer200). In this way, the result of the first rendering pass can be stored in the second texture block by drawing into the second frame buffer, and the result of the secondary rendering pass can be stored in the first texture block by drawing into the first frame buffer.
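- The texture/frame-buffer pairing described above might be set up as in the following sketch, where each texture block is attached to the color attachment point of a frame buffer object so that drawing into that frame buffer fills the paired texture; sizes and names are illustrative only.

```cpp
#include <GLES2/gl2.h>

// Creates a frame buffer whose color attachment point is backed by a new texture.
GLuint CreateTextureBackedFramebuffer(GLuint* outTex, int w, int h) {
    glGenTextures(1, outTex);
    glBindTexture(GL_TEXTURE_2D, *outTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    // Attach the texture to the color attachment point: drawing into `fbo` now writes `*outTex`.
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, *outTex, 0);
    return fbo;
}

// Possible usage following the pairing above (illustrative names):
//   GLuint tex100, tex200;
//   GLuint fb100 = CreateTextureBackedFramebuffer(&tex100, w, h);  // p200_framebuffer100 <-> P200_texture100
//   GLuint fb200 = CreateTextureBackedFramebuffer(&tex200, w, h);  // p200_framebuffer200 <-> P200_texture200
// Pass 1 binds fb200 and writes P200_texture200; pass 2 binds fb100 and writes P200_texture100.
```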
- The 3DConverter then notifies SurfaceFlinger that rendering has finished through another cross-process communication interface (Interfacer200) and sends the texture data in the first texture block to SurfaceFlinger. SurfaceFlinger displays the second left eye image in the third area of the mobile phone's screen and the second right eye image in the fourth area of the screen.
- When the user's left eye is aimed at the left screen of the mobile phone and the right eye at the right screen, the user feels placed in the preset cinema scene and sees the interface to be displayed on the cinema screen.
- The application interface display method in the embodiment of the present application is described above; the following describes the application interface display device in the embodiment of the present application. It should be understood that the application interface display device in the embodiment of the present application is used to display the interface of a 2D application program on a VR device. The application interface display device may be the VR device itself, or a communication device capable of connecting with the VR device, such as a PC, a mobile terminal, or a cloud server, or a component in the VR device or the communication device; this is not specifically limited here.
- an embodiment of the application interface display device in the embodiment of the present application includes:
- the first obtaining module 601 is configured to obtain an interface to be displayed, where the interface to be displayed is an interface of a 2D application;
- the processing module 602 is configured to perform dimension conversion processing on the interface to be displayed acquired by the first obtaining module 601 to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect;
- a second acquiring module 603, configured to acquire a first head posture of the user
- the adjusting module 604 is configured to adjust the first left eye image to obtain the second left eye image according to the first head posture acquired by the second acquiring module, and adjust the first right eye image to obtain the second right eye image;
- the display module 605 is configured to display a second left eye image in a left eye view area of the VR device, and display a second right eye image in a right eye view area of the VR device.
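- Purely as an illustration of how the five modules above could be organized, the following hedged sketch expresses them as an abstract C++ interface; the class, type, and method names are not part of the original disclosure.

```cpp
struct Image {};      // placeholder for a left or right eye image
struct HeadPose {};   // placeholder for the pose read from the VR device's sensor

class AppInterfaceDisplayDevice {
public:
    virtual ~AppInterfaceDisplayDevice() = default;
    virtual Image    ObtainInterfaceToDisplay() = 0;                            // first obtaining module 601
    virtual void     DimensionConvert(const Image& ui,
                                      Image* firstLeft, Image* firstRight) = 0; // processing module 602
    virtual HeadPose AcquireFirstHeadPose() = 0;                                // second acquiring module 603
    virtual void     Adjust(const HeadPose& pose,
                            const Image& firstLeft, const Image& firstRight,
                            Image* secondLeft, Image* secondRight) = 0;         // adjusting module 604
    virtual void     Display(const Image& secondLeft,
                             const Image& secondRight) = 0;                     // display module 605
};
```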
- In the embodiment of the present application, the processing module 602 performs dimension conversion processing on the interface to be displayed to obtain the images corresponding to the left and right eyes, the second obtaining module 603 acquires the user's current head posture, the adjusting module 604 adjusts the images corresponding to the left and right eyes according to that head posture, and the display module then displays the adjusted images in the left and right eye view areas of the VR device. That is to say, after the dimension conversion of the interface to be displayed produces an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image more closely matches the user's field of vision. This avoids the dizziness caused by misalignment between the image position and the user's field of vision due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
- The processing module can convert the interface of the two-dimensional application into an interface with a three-dimensional visual effect in a plurality of manners; one of them is described below through another embodiment of the application interface display device in the embodiment of the present application.
- Referring to FIG. 7, another embodiment of the application interface display device in the embodiment of the present application includes:
- the first obtaining module 701 is configured to obtain an interface to be displayed, where the interface to be displayed is an interface of a 2D application.
- the processing module 702 is configured to perform dimension conversion processing on the interface to be displayed acquired by the first obtaining module 701 to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect;
- a second acquiring module 703, configured to acquire a first head posture of the user
- the adjusting module 704 is configured to adjust the first left eye image according to the first head posture acquired by the second acquiring module to obtain a second left eye image, and adjust the first right eye image to obtain a second right eye image;
- the display module 705 is configured to display a second left eye image in a left eye view area of the VR device, and display a second right eye image in a right eye view area of the VR device;
- the processing module 702 includes:
- a rendering unit 7021 configured to perform binocular rendering on the display interface, and obtain a third left eye image and a third right eye image of the interface to be displayed;
- the processing unit 7022 is configured to perform barrel distortion processing on the third left eye image and the third right eye image to obtain a first left eye image of the interface to be displayed and a first right eye image of the interface to be displayed.
- the rendering unit 7021 may include:
- a first obtaining subunit 70211 configured to acquire a second head posture of the user
- the determining subunit 70212 is configured to respectively determine the first area and the second area according to the second head posture, where the first area is an area for displaying an interface to be displayed in a left eye image of the preset three-dimensional scene, and the second area is An area in the right eye image of the preset three-dimensional scene for displaying the interface to be displayed;
- the drawing sub-unit 70213 is configured to draw an interface to be displayed in the first area to obtain a third left-eye image, and draw an interface to be displayed in the second area to obtain a third right-eye image.
- the adjusting module 704 may include:
- the time warping unit 7041 is configured to perform asynchronous time warping on the first left eye image according to the first head posture to obtain a second left eye image, and perform asynchronous time warping on the first right eye image to obtain a second right eye image.
- In the embodiment of the present application, the processing module 702 performs dimension conversion processing on the interface to be displayed to obtain the images corresponding to the left and right eyes, the second obtaining module 703 acquires the user's current head posture, the adjusting module 704 adjusts the images corresponding to the left and right eyes according to that head posture, and the display module displays the adjusted images in the left and right eye view areas of the VR device. That is to say, after the dimension conversion of the interface to be displayed produces an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image more closely matches the user's field of vision. This avoids the dizziness caused by misalignment between the image position and the user's field of vision due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
- Furthermore, in the embodiment of the present application, after the interface to be displayed is binocularly rendered into an image with a three-dimensional visual effect, the image is also subjected to barrel distortion to counteract the distortion produced by the optical lenses in the VR device, which improves image quality and enhances the user experience.
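- A minimal sketch of the kind of radial mapping such a barrel pre-distortion step might use is shown below; the polynomial form and the coefficients k1, k2 are illustrative stand-ins for the set of preset lens parameters mentioned above, not values from the original disclosure.

```cpp
// Maps a pixel's coordinates (u, v), taken relative to the image center, to their
// pre-distorted position so that the headset lens cancels the distortion out.
void BarrelDistort(float u, float v, float k1, float k2, float* du, float* dv) {
    float r2 = u * u + v * v;                       // squared radial distance
    float scale = 1.0f + k1 * r2 + k2 * r2 * r2;    // radial polynomial
    *du = u * scale;
    *dv = v * scale;
}
```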
- the embodiment of the present application provides a method for performing image adjustment according to the head posture, which improves the achievability of the solution.
- another embodiment of the application interface display device in the embodiment of the present application includes:
- the first obtaining module 801 is configured to obtain an interface to be displayed, where the interface to be displayed is an interface of a 2D application;
- the processing module 802 is configured to perform dimension conversion processing on the interface to be displayed acquired by the first obtaining module 801 to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect;
- a second acquiring module 803, configured to acquire a first head posture of the user
- the adjusting module 804 is configured to adjust the first left eye image to obtain a second left eye image according to the first head posture acquired by the second acquiring module, and adjust the first right eye image to obtain a second right eye image;
- the display module 805 is configured to display a second left eye image in a left eye view area of the VR device, and display a second right eye image in a right eye view area of the VR device;
- the first obtaining module 801 includes:
- the obtaining unit 8011 is configured to acquire the to-be-displayed interface from the mobile terminal;
- the display module 805 includes:
- the sending unit 8051 is configured to send the second left eye image and the second right eye image to the mobile terminal, so that the mobile terminal displays the second left eye image in the third area of its screen and the second right eye image in the fourth area of its screen, where the screen of the mobile terminal includes the third area and the fourth area, the third area corresponds to the left eye view area of the VR device, and the fourth area corresponds to the right eye view area of the VR device;
- the obtaining unit 8011 may include:
- a second obtaining subunit 80111 configured to obtain an interface to be displayed from the SurfaceFlinger module
- the sending unit 8051 can include:
- the sending subunit 80511 is configured to send the second left eye image and the second right eye image to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left eye image in the third area of the screen of the mobile terminal and the second right eye image in the fourth area of the screen;
- The application interface display device may be the mobile terminal shown in FIG. 4, another user device independent of the Android system, such as a PC, a cloud server independent of the Android system, or another device; it is not limited herein.
- In the embodiment of the present application, the processing module 802 performs dimension conversion processing on the interface to be displayed to obtain the images corresponding to the left and right eyes, the second obtaining module 803 acquires the user's current head posture, the adjusting module 804 adjusts the images corresponding to the left and right eyes according to that head posture, and the display module then displays the adjusted images in the left and right eye view areas of the VR device. That is to say, after the dimension conversion of the interface to be displayed produces an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image more closely matches the user's field of vision. This avoids the dizziness caused by misalignment between the image position and the user's field of vision due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
- In addition, the application interface display device in the embodiment of the present application may be a user device or a cloud server independent of the Android system; that is, the application interface display method in the embodiment of the present application does not depend on the Android system, which can reduce the computational burden of the mobile terminal. When the algorithm used in the method needs to be updated, the update can be performed independently of the Android system, and when the internal architecture of the Android system is updated, the algorithm used in the method does not need to be modified accordingly, which gives higher flexibility and versatility.
- The application interface display device in the embodiment of the present application is described above from the perspective of its functional modules; the following describes it from the perspective of physical hardware. Referring to FIG. 9, the application interface display device 90 in the embodiment of the present application can include an input device 910, an output device 920, a processor 930, and a memory 940.
- Memory 940 can include read only memory and random access memory and provides instructions and data to processor 930. A portion of the memory 940 may also include a Non-Volatile Random Access Memory (NVRAM).
- Memory 940 stores the following elements, executable modules or data structures, or subsets thereof, or their extended sets:
- Operation instructions include various operation instructions for implementing various operations.
- Operating system Includes a variety of system programs for implementing various basic services and handling hardware-based tasks.
- The application interface display device or the VR device includes at least one display, and the processor 930 in the application interface display device is specifically configured to control the display to display the second left eye image in the left eye view area of the VR device and the second right eye image in the right eye view area of the VR device.
- The processor 930, which may also be referred to as a Central Processing Unit (CPU), controls the operation of the application interface display device 90.
- Memory 940 can include read only memory and random access memory and provides instructions and data to processor 930. A portion of the memory 940 can also include an NVRAM.
- the components of the application interface display device 90 are coupled together by a bus system 950.
- the bus system 950 may include a power bus, a control bus, a status signal bus, and the like in addition to the data bus. However, for clarity of description, various buses are labeled as bus system 950 in the figure.
- Processor 930 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 930 or an instruction in a form of software.
- The processor 930 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- The processor 930 can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
- the steps of the method disclosed in the embodiments of the present application may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
- the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
- the storage medium is located in memory 940, and processor 930 reads the information in memory 940 and, in conjunction with its hardware, performs the steps of the above method.
- The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are generated in whole or in part.
- the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
- The computer instructions can be stored in a computer readable storage medium or transferred from one computer readable storage medium to another computer readable storage medium; for example, the computer instructions can be transferred from a website, computer, server, or data center to another website, computer, server, or data center by wired means (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (for example, infrared, radio, or microwave).
- The computer readable storage medium can be any available medium that a computer can access, or a data storage device such as a server or data center that integrates one or more available media.
- the usable medium may be a magnetic medium (eg, a floppy disk, a hard disk, a magnetic tape), an optical medium (eg, a DVD), or a semiconductor medium (such as a Solid State Disk (SSD)) or the like.
- the disclosed system, apparatus, and method may be implemented in other manners.
- The device embodiments described above are merely illustrative. For example, the division into units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
- the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
- The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium. The stored software includes a number of instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
- The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
- Controls And Circuits For Display Device (AREA)
- Processing Or Creating Images (AREA)
Abstract
Disclosed are an application interface display method and apparatus. The method in the embodiments of the present application is used for displaying an interface of a 2D application program on a VR device by an application interface display apparatus. The method comprises: acquiring an interface to be displayed, wherein the interface to be displayed is an interface of a 2D application program; performing dimension conversion processing on the interface to be displayed to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, wherein the first left eye image and the first right eye image are used for presenting the interface to be displayed having a three-dimensional visual effect; acquiring a first head posture of a user; adjusting, according to the first head posture, the first left eye image to obtain a second left eye image, and adjusting the first right eye image to obtain a second right eye image; and displaying the second left eye image in a left eye view field of the VR device, and displaying the second right eye image in a right eye view field of the VR device.
Description
This application claims priority to Chinese Patent Application No. 201610980760.2, entitled "A Method and Apparatus for Low-Delay Stereoscopic Display of Two-Dimensional Images", filed with the Chinese Patent Office on November 8, 2016, the entire contents of which are incorporated herein by reference.
The present application relates to the field of computer applications, and in particular, to an application interface display method and apparatus.
Virtual Reality (VR) technology is a computer simulation system that can create and let users experience virtual worlds. It uses a computer to generate a simulated environment and is an interactive, multi-source-information-fusion system simulation of three-dimensional dynamic vision and entity behavior that immerses the user in that environment. With the growing demand for quality of life, the development of virtual reality display technology has become a focus of social attention.
A virtual reality device requires that a left eye image and a right eye image be rendered separately to produce a stereoscopic effect, whereas the interfaces of most existing applications are two-dimensional (2D) and cannot meet the requirements of virtual reality devices. This makes a large number of applications impossible to use in virtual reality systems.
The prior art writes a virtual reality scene into the frame buffer of the Android system in a left-right split-screen manner by using Open Graphics Library (OpenGL) functions, and uses the Android system to read the content in the frame buffer and draw it, so as to display the virtual reality scene on the left and right screens of the virtual device and form a virtual screen in the virtual reality scene. The acquired texture of the two-dimensional application interface to be displayed is then drawn directly onto the virtual screen in the virtual reality scene of the left and right screens, so that the left and right eye images of the two-dimensional application interface are rendered simultaneously and a stereoscopic effect is produced.
However, rendering a two-dimensional application interface into an image with a three-dimensional visual effect takes a long time, so the rendered result lags behind and cannot fit the user's field of view, which easily causes visual misalignment for the user and thus dizziness and a poor experience.
Summary of the invention
The embodiment of the present application provides an application interface display method, which can avoid the dizziness caused by misalignment between the image position and the user's field of vision due to changes in the user's head posture during the process of rendering a two-dimensional application interface into an image with a three-dimensional visual effect, thereby improving the user experience.
In view of this, the first aspect of the present application provides an application interface display method, which is used by an application interface display device to display the interface of a two-dimensional application program on a VR device, and the method includes:
The application interface display device obtains an interface to be displayed, where the interface to be displayed is an interface of a two-dimensional application program. After obtaining the interface to be displayed, the application interface display device performs dimension conversion processing on it to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect. The application interface display device then acquires the user's current head posture, that is, the first head posture, adjusts the first left eye image and the first right eye image according to the first head posture to obtain a second left eye image and a second right eye image, and finally displays the second left eye image in the left eye view area of the VR device and the second right eye image in the right eye view area of the VR device.
It should be noted that the dimension conversion processing refers to converting the interface of a two-dimensional application program into an interface with a three-dimensional visual effect. A left eye image refers to an image generated for the user's left eye field of view, and a right eye image refers to an image generated for the user's right eye field of view. The VR device is divided, according to the user's left and right eyes, into a left eye view area and a right eye view area, where the left eye view area is the screen area or optical lens group in the VR device that is aligned with the user's left eye field of view, and the right eye view area is the screen area or optical lens group in the VR device that is aligned with the user's right eye field of view.
In the embodiment of the present application, after dimension conversion processing is performed on the interface to be displayed to obtain the images corresponding to the left screen and the right screen, the user's current head posture is acquired, the images corresponding to the left screen and the right screen are adjusted according to that head posture, and the adjusted images are then displayed on the left screen and the right screen respectively. That is to say, after the dimension conversion of the interface to be displayed produces an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image more closely matches the user's field of vision. This avoids the dizziness caused by misalignment between the image position and the user's field of vision due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
With reference to the first aspect of the present application, in a first implementation manner of the first aspect of the present application, the process of performing the dimension conversion processing by the application interface display device may specifically include:
The application interface display device performs binocular rendering on the interface to be displayed to obtain a third left eye image and a third right eye image of the interface to be displayed, and then performs barrel distortion processing on the third left eye image and the third right eye image to obtain the first left eye image of the interface to be displayed and the first right eye image of the interface to be displayed.
It should be noted that when optical processing makes the magnification in regions far from the optical axis lower than that near the optical axis, the outward-bulging appearance shown in the figure appears in the image plane; this is called barrel distortion. The barrel distortion processing in the present application is used to counteract the distortion produced by the optical lenses in the VR device.
In the embodiment of the present application, after an image with a three-dimensional visual effect is obtained by binocular rendering, the image is further subjected to barrel distortion processing to counteract the distortion produced by the optical lenses in the VR device, which improves image quality and enhances the user experience.
With reference to the first implementation manner of the first aspect of the present application, in a second implementation manner of the first aspect of the present application, the process in which the application interface display device performs binocular rendering on the interface to be displayed to obtain the third left eye image and the third right eye image specifically includes:
After acquiring the interface to be displayed, the application interface display device acquires the user's current head posture, that is, the second head posture, then determines a first area and a second area respectively according to the second head posture, draws the interface to be displayed in the first area to obtain the third left eye image, and draws the interface to be displayed in the second area to obtain the third right eye image, where the first area is the area used for showing the interface to be displayed in the left eye image of a preset three-dimensional scene, and the second area is the area used for showing the interface to be displayed in the right eye image of the preset three-dimensional scene.
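As a hedged illustration of how the per-eye views behind the first and second areas could be derived from the second head posture, the sketch below offsets a common head view matrix by half of an interpupillary distance for each eye; the column-major 4x4 convention, the parameter names, and the function itself are assumptions for illustration, not part of the original disclosure.

```cpp
#include <cstring>

// Derives a per-eye view matrix from the head pose's view matrix by shifting the
// camera along its local x axis: the left eye sits at -ipd/2, the right at +ipd/2.
void EyeViewFromHead(const float headView[16], float ipd, int eye /* 0 = left, 1 = right */,
                     float outView[16]) {
    std::memcpy(outView, headView, 16 * sizeof(float));
    float d = (eye == 0 ? -0.5f : 0.5f) * ipd;
    // Moving the camera by +d along its local x axis subtracts d from the view
    // matrix's x translation (element 12 in column-major layout).
    outView[12] -= d;
}
```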
In the embodiment of the present application, binocular rendering is performed by drawing the interface to be displayed into a preset three-dimensional scene, so that the user can browse the interface to be displayed while placed in the preset three-dimensional scene, which improves the flexibility of the solution and further enhances the user experience.
With reference to the first aspect of the present application or the first or second implementation manner of the first aspect, in a third implementation manner of the first aspect of the present application, the process in which the application interface display device adjusts the first left eye image and the first right eye image according to the first head posture to obtain the second left eye image and the second right eye image specifically includes:
The application interface display device performs asynchronous time warping on the first left eye image according to the first head posture to obtain the second left eye image, and performs asynchronous time warping on the first right eye image to obtain the second right eye image. It should be noted that asynchronous time warping is an image correction technique: when a virtual reality device is used, fast head movement causes a delay in scene rendering, that is, the head has already turned but the image has not yet been rendered, or the image of the previous frame is rendered; asynchronous time warping solves this delay problem by warping the image before it is sent to the display device.
The embodiment of the present application provides a specific manner of adjusting an image, which improves the achievability of the solution.
With reference to the first aspect of the present application or any one of the first to third implementation manners of the first aspect, in a fourth implementation manner of the first aspect of the present application:
The application interface display device may obtain the interface to be displayed in the following manner: the application interface display device obtains the interface to be displayed from a mobile terminal.
Correspondingly, the application interface display device may display the second left eye image in the left eye view area of the VR device and the second right eye image in the right eye view area of the VR device in the following manner: the application interface display device sends the second left eye image and the second right eye image to the mobile terminal, where the screen of the mobile terminal includes a third area and a fourth area, the third area corresponds to the left eye view area of the VR device, and the fourth area corresponds to the right eye view area of the VR device; after receiving the second left eye image and the second right eye image sent by the application interface display device, the mobile terminal displays the second left eye image in the third area of the screen and the second right eye image in the fourth area of the screen.
The embodiment of the present application provides a specific manner of obtaining the interface to be displayed and displaying the interface to be displayed, which improves the achievability of the solution.
With reference to the fourth implementation manner of the first aspect of the embodiment of the present application, in a fifth implementation manner of the first aspect of the embodiment of the present application, the mobile terminal includes a SurfaceFlinger module. SurfaceFlinger is the module responsible for display composition in the Android system; it can calculate the position of each layer in the final composite image, generate the final display buffer, and then display it on a specific display device.
The application interface display device may specifically obtain the interface to be displayed from the mobile terminal in the following manner: the application interface display device obtains the interface to be displayed from the SurfaceFlinger module.
Correspondingly, the application interface display device may send the second left eye image and the second right eye image to the mobile terminal in the following manner: the application interface display device sends the second left eye image and the second right eye image to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left eye image in the third area of the screen of the mobile terminal and the second right eye image in the fourth area of the screen.
The application interface display device in the embodiment of the present application may be a device independent of the Android system; that is, the application interface display method in the embodiment of the present application may not depend on the Android system, which can reduce the computing burden of the mobile terminal. In addition, when the algorithm used in the method needs to be updated, the update can be performed independently of the Android system, and when the internal architecture of the Android system is updated, the algorithm used in the method does not need to be modified accordingly, which gives higher flexibility and versatility.
The second aspect of the present application provides an application interface display device, which is used to display the interface of a two-dimensional application program on a VR device, and the device includes:
a first acquiring module, configured to acquire an interface to be displayed, where the interface to be displayed is an interface of a 2D application program;
a processing module, configured to perform dimension conversion processing on the interface to be displayed acquired by the first acquiring module to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect;
a second acquiring module, configured to acquire a first head posture of the user;
an adjusting module, configured to adjust, according to the first head posture acquired by the second acquiring module, the first left eye image to obtain a second left eye image, and adjust the first right eye image to obtain a second right eye image;
a display module, configured to display the second left eye image in a left eye view area of the VR device and display the second right eye image in a right eye view area of the VR device.
In the embodiment of the present application, after dimension conversion processing is performed on the interface to be displayed to obtain the images corresponding to the left screen and the right screen, the user's current head posture is acquired, the images corresponding to the left screen and the right screen are adjusted according to that head posture, and the adjusted images are then displayed on the left screen and the right screen respectively. That is to say, after the dimension conversion of the interface to be displayed produces an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image more closely matches the user's field of vision. This avoids the dizziness caused by misalignment between the image position and the user's field of vision due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
With reference to the second aspect of the present application, in a first implementation manner of the second aspect of the present application, the processing module specifically includes:
a rendering unit, configured to perform binocular rendering on the interface to be displayed to obtain a third left eye image and a third right eye image of the interface to be displayed;
a processing unit, configured to perform barrel distortion processing on the third left eye image and the third right eye image to obtain the first left eye image of the interface to be displayed and the first right eye image of the interface to be displayed.
In the embodiment of the present application, the processing unit can perform distortion processing on the images to counteract the distortion produced by the optical lenses in the VR device, which improves image quality and enhances the user experience.
With reference to the second aspect of the present application, in the first implementation manner of the second aspect of the present application, the rendering unit may specifically include:
a first acquiring subunit, configured to acquire a second head posture of the user;
a determining subunit, configured to respectively determine a first area and a second area according to the second head posture, where the first area is the area used for showing the interface to be displayed in the left eye image of a preset three-dimensional scene, and the second area is the area used for showing the interface to be displayed in the right eye image of the preset three-dimensional scene;
a drawing subunit, configured to draw the interface to be displayed in the first area to obtain the third left eye image, and draw the interface to be displayed in the second area to obtain the third right eye image.
In the embodiment of the present application, the rendering unit draws the interface to be displayed into the preset three-dimensional scene through the drawing subunit, so that the user can browse the interface to be displayed while placed in the preset three-dimensional scene, which improves the flexibility of the solution and further enhances the user experience.
With reference to the second aspect of the present application or the first or second implementation manner of the second aspect, in a third implementation manner of the second aspect of the present application, the adjusting module may specifically include:
a time warping unit, configured to perform asynchronous time warping on the first left eye image according to the first head posture to obtain the second left eye image, and perform asynchronous time warping on the first right eye image to obtain the second right eye image.
The embodiment of the present application provides a specific manner in which the adjusting module adjusts an image, which improves the achievability of the solution.
With reference to the second aspect of the present application or the first or second implementation manner of the second aspect, in the third implementation manner of the second aspect of the present application, the first acquiring module may specifically include:
an obtaining unit, configured to obtain the interface to be displayed from a mobile terminal.
Correspondingly, the display module may specifically include:
a sending unit, configured to send the second left eye image and the second right eye image to the mobile terminal, so that the mobile terminal displays the second left eye image in a third area of its screen and the second right eye image in a fourth area of its screen, where the screen of the mobile terminal includes the third area and the fourth area, the third area corresponds to the left eye view area of the VR device, and the fourth area corresponds to the right eye view area of the VR device.
The embodiment of the present application provides a specific manner in which the obtaining unit obtains the interface to be displayed and the display module displays the interface to be displayed, which improves the achievability of the solution.
With reference to the third implementation manner of the second aspect of the present application, in a fourth implementation manner of the second aspect of the present application, the mobile terminal includes a SurfaceFlinger module, and the obtaining unit may specifically include:
a second obtaining subunit, configured to obtain the interface to be displayed from the SurfaceFlinger module.
Correspondingly, the sending unit may specifically include:
a sending subunit, configured to send the second left eye image and the second right eye image to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left eye image in the third area of the screen of the mobile terminal and the second right eye image in the fourth area of the screen.
The application interface display device in the embodiment of the present application may be a device independent of the Android system; that is, the method in the embodiment of the present application can be executed without depending on the Android system, which can reduce the computing burden of the mobile terminal. In addition, when the algorithm used in the method needs to be updated, the update can be performed independently of the Android system, and when the internal architecture of the Android system is updated, the algorithm used in the method does not need to be modified accordingly, which gives higher flexibility and versatility.
A third aspect of the present application provides a computer readable storage medium, where the computer readable storage medium stores instructions that, when run on a computer, cause the computer to perform the method described in the first aspect or any one of the first to fifth implementation manners of the first aspect.
A fourth aspect of the present application provides a computer program product containing instructions that, when run on a computer, cause the computer to perform the method described in the first aspect or any one of the first to fifth implementation manners of the first aspect.
As can be seen from the above technical solutions, the embodiments of the present application have the following advantages:
In the embodiment of the present application, after dimension conversion processing is performed on the interface to be displayed to obtain the images corresponding to the left screen and the right screen, the user's current head posture is acquired, the images corresponding to the left screen and the right screen are adjusted according to that head posture, and the adjusted images are then displayed on the left screen and the right screen respectively. That is to say, after the dimension conversion of the interface to be displayed produces an image with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image more closely matches the user's field of vision. This avoids the dizziness caused by misalignment between the image position and the user's field of vision due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
In order to describe the technical solutions in the embodiments of the present application more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show merely some embodiments of the present application.
FIG. 1 is a flowchart of an embodiment of the application interface display method in an embodiment of the present application;
FIG. 2 is a flowchart of another embodiment of the application interface display method in an embodiment of the present application;
FIG. 3 is a schematic diagram of a third left eye image and a fourth left eye image in an embodiment of the present application;
FIG. 4 is a schematic diagram of an embodiment of the application interface display system in an embodiment of the present application;
FIG. 5 is a flowchart of another embodiment of the application interface display method in an embodiment of the present application;
FIG. 6 is a schematic diagram of an embodiment of the application interface display device in an embodiment of the present application;
FIG. 7 is a schematic diagram of another embodiment of the application interface display device in an embodiment of the present application;
FIG. 8 is a schematic diagram of another embodiment of the application interface display device in an embodiment of the present application;
FIG. 9 is a schematic diagram of another embodiment of the application interface display device in an embodiment of the present application.
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。The technical solutions in the embodiments of the present application are clearly and completely described in the following with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例例如能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。The terms "first", "second", "third", "fourth", etc. (if present) in the specification and claims of the present application and the above figures are used to distinguish similar objects, and are not necessarily used for Describe a specific order or order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments of the present application described herein can be implemented, for example, in a sequence other than those illustrated or described herein. In addition, the terms "comprises" and "comprises" and "the" and "the" are intended to cover a non-exclusive inclusion, for example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to Those steps or units may include other steps or units not explicitly listed or inherent to such processes, methods, products or devices.
The embodiments of the present application provide an application interface display method and device, which are used to avoid the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while a two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
To facilitate understanding of the embodiments of the present application, the technical background of the embodiments of the present application is briefly introduced below.
A VR device refers to a hardware product related to the field of virtual reality technology, that is, a hardware device used in a virtual reality solution. The hardware devices commonly used in virtual reality at present can be roughly divided into four categories: modeling devices, three-dimensional visual display devices, sound devices, and interaction devices. The VR device in the embodiments of the present application refers to a three-dimensional visual display device, such as a three-dimensional display system, a large projection system (for example, CAVE), or a head-mounted display device.
A VR head-mounted display device, or VR headset for short, uses a head-mounted display to shut off the user's visual and auditory perception of the outside world and guide the user to feel present in a virtual environment. In the embodiments of the present application, the VR device includes a left eye field-of-view area and a right eye field-of-view area, where the left eye field-of-view area is used to display a left eye image to the user's left eye, and the right eye field-of-view area is used to display a right eye image to the user's right eye. After the left eye image and the right eye image, which differ from each other, are displayed to the user's left and right eyes respectively, the user perceives a stereoscopic effect in the mind. VR headsets can be subdivided into three categories: externally connected headsets, all-in-one headsets, and mobile headsets. An externally connected headset and an all-in-one headset each have an independent screen. An externally connected headset displays the left eye image and the right eye image on its own screen based on data input from an external device, so that the user is immersed in the virtual environment, whereas an all-in-one headset immerses the user in the virtual environment without relying on any external input or output device. A mobile headset, also called a VR glasses box, requires a mobile terminal to be placed into the glasses box; the left eye image and the right eye image are displayed on the screen of the mobile terminal, and the user views them through the glasses box so that a sense of depth and immersion is produced in the mind.
The application interface display method in the embodiments of the present application is used by an application interface display device to display an interface of a 2D application on a VR device. Specifically, for an externally connected headset, the application interface display device may be the headset itself, or an input device that can be connected to the headset, such as a personal computer (PC) or a mobile phone; for an all-in-one headset, the application interface display device may be the headset itself, or a component inside the headset used for rendering images; for a mobile headset, the application interface display device may be the headset itself, or a mobile terminal that can be placed in the headset to display the left eye image and the right eye image. The application interface display device may also be another device, such as a cloud server, that can communicate with any of the above three types of headsets, with the input device, or with the mobile terminal.
The application interface display method in the embodiments of the present application is described first. Referring to FIG. 1, an embodiment of the application interface display method in the embodiments of the present application includes the following steps.
101. The application interface display device obtains an interface to be displayed.
The application interface display device obtains an interface to be displayed, where the interface to be displayed is an interface that needs to be displayed on the screen of a display device. The interface to be displayed may be the interface of any single two-dimensional application, or an interface composited from the interfaces of multiple two-dimensional applications, which is not limited herein. It should be understood that a two-dimensional application refers to an application developed for two-dimensional display.
102. The application interface display device performs dimension conversion processing on the interface to be displayed to obtain a first left eye image and a first right eye image of the interface to be displayed.
After obtaining the interface to be displayed, the application interface display device performs dimension conversion processing on the interface to be displayed to obtain a first left eye image and a first right eye image of the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect.
It should be understood that, in the embodiments of the present application, dimension conversion processing refers to converting the interface of a two-dimensional application into an interface having a three-dimensional visual effect. It should also be understood that a left eye image in the embodiments of the present application refers to an image generated for the user's left eye field of view, and a right eye image refers to an image generated for the user's right eye field of view.
103. The application interface display device obtains a first head posture of the user.
While the application interface display device performs steps 101 and 102, the user's head posture may change. Therefore, after the application interface display device completes step 102 and obtains the first left eye image and the first right eye image, it obtains the user's latest head posture, that is, the first head posture. It should be understood that a head posture may specifically include the deflection direction of the user's head, the deflection angle of the user's head, or the motion mode of the user's head, and may also include other posture information, which is not limited herein.
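As an illustrative sketch only, the head posture information described above could be carried in a small structure such as the following C++ type; the field names, the quaternion representation, and the motion modes listed are assumptions made for illustration rather than a structure defined by this embodiment.

```cpp
#include <cstdint>

// Hypothetical container for the head-posture information mentioned above:
// a deflection direction/angle and a motion mode, plus an orientation
// quaternion that a sensor-fusion module might report. Names are illustrative.
enum class MotionMode : std::uint8_t { None, SwingLeftRight, SwingUpDown, Other };

struct HeadPose {
    // Orientation of the head as a unit quaternion (w, x, y, z).
    float qw = 1.0f, qx = 0.0f, qy = 0.0f, qz = 0.0f;
    // Deflection angles in radians (yaw around the vertical axis, pitch, roll).
    float yaw = 0.0f, pitch = 0.0f, roll = 0.0f;
    MotionMode mode = MotionMode::None;  // e.g. left-right or up-down swing
    std::int64_t timestampNs = 0;        // when the sensor sample was taken
};
```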
104. The application interface display device adjusts the first left eye image according to the first head posture to obtain a second left eye image, and adjusts the first right eye image to obtain a second right eye image.
After obtaining the first head posture, the application interface display device adjusts the first left eye image according to the first head posture to obtain a second left eye image, and adjusts the first right eye image to obtain a second right eye image.
105. The application interface display device displays the second left eye image in the left eye field-of-view area of the VR device, and displays the second right eye image in the right eye field-of-view area of the VR device.
It should be understood that, in the embodiments of the present application, the VR device is divided into a left eye field-of-view area and a right eye field-of-view area according to the user's left and right eye fields of view. Specifically, if the VR device has an independent screen, the left eye field-of-view area is the area of that screen seen by the user's left eye, and the application interface display device displays the second left eye image in that area; the right eye field-of-view area is the area of that screen seen by the user's right eye, and the application interface display device displays the second right eye image in that area; the second left eye image and the second right eye image are presented to the user's left and right eyes through the corresponding optical lens groups. If the VR device does not have an independent screen, the left eye field-of-view area is the optical lens group on the VR device that the user's left eye is aligned with, and the application interface display device displays the second left eye image in the area of the external screen that this lens group is aligned with; the right eye field-of-view area is the optical lens group on the VR device that the user's right eye is aligned with, and the application interface display device displays the second right eye image in the area of the external screen that this lens group is aligned with; the second left eye image and the second right eye image then reach the user's left and right eyes through the deformation of the optical path. In this way, the second left eye image and the second right eye image are displayed to the user's left and right eyes through the left and right eye field-of-view areas of the VR device, and the user's brain fuses them into a stereoscopic image, presenting the interface to be displayed with a three-dimensional effect. In this embodiment of the present application, after the dimension conversion processing of the interface to be displayed yields the images corresponding to the left and right eyes, the current head posture of the user is obtained, the images corresponding to the left and right eyes are adjusted according to that head posture, and the adjusted images are then displayed in the left and right eye field-of-view areas of the VR device. In other words, after the dimension conversion of the interface to be displayed yields images having a three-dimensional visual effect, the present application further adjusts the converted result according to the user's latest head posture, so that the position of the finally displayed image fits the user's field of view more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
As can be seen from the embodiment corresponding to FIG. 1, the application interface display device can convert the interface of a two-dimensional application into an interface having a three-dimensional visual effect in multiple ways. One of these ways is used below as an example to describe the application interface display method in the embodiments of the present application in detail. Referring to FIG. 2, another embodiment of the application interface display method in the embodiments of the present application includes the following steps.
201. The application interface display device obtains an interface to be displayed.
The application interface display device obtains an interface to be displayed, where the interface to be displayed is an interface that needs to be displayed on the screen of a display device. The interface to be displayed may be the interface of any single two-dimensional application, or an interface composited from the interfaces of multiple two-dimensional applications, which is not limited herein. It should be understood that a two-dimensional application refers to an application developed for two-dimensional display. It should also be understood that SurfaceFlinger is the module responsible for display composition in the Android system; it receives windows and layers as input, calculates the position of each layer (Surface) in the final composited image according to parameters such as the depth, transparency, size, and position of each layer, generates the final display buffer (Buffer), and then outputs it to a specific display device.
202. The application interface display device performs binocular rendering on the interface to be displayed to obtain a third left eye image and a third right eye image of the interface to be displayed.
It should be understood that the user's left eye and right eye each view an object independently, and there is a certain distance between the two eyes, so for the same target the image in the user's left eye differs from the image in the user's right eye. The difference produced by observing the same target from two points separated by a certain distance is called parallax. The user's brain can fuse a left eye image and a right eye image that have parallax into a stereoscopic visual effect with a sense of space, so that the user can see a three-dimensional object.
Based on the above principle, after obtaining the interface to be displayed, the application interface display device draws a left eye image and a right eye image of the interface to be displayed for the user's left eye and right eye, that is, it performs binocular (stereoscopic) rendering on the interface to be displayed to obtain its left eye image and right eye image. For ease of description, the left eye image and the right eye image obtained by binocular rendering are referred to here as a third left eye image and a third right eye image. FIG. 3 shows an example of the third left eye image and the third right eye image. When the third left eye image and the third right eye image are viewed through the VR device, the user's brain fuses the two images to produce a stereoscopic visual effect, so that the user sees the interface to be displayed in three dimensions.
Specifically, the application interface display device may draw the third left eye image and the third right eye image of the interface to be displayed for the user's left eye and right eye as follows: obtain a second head posture of the user; determine a first area and a second area according to the second head posture; draw the interface to be displayed in the first area to obtain the third left eye image; and draw the interface to be displayed in the second area to obtain the third right eye image, where the first area is the area used for presenting the interface to be displayed in the left eye image of a preset three-dimensional scene, and the second area is the area used for presenting the interface to be displayed in the right eye image of the preset three-dimensional scene.
In the embodiments of the present application, the user or the system may preset one or more three-dimensional scenes, and a left eye image and a right eye image of such a three-dimensional scene are drawn for the user's left eye and right eye. The user aligns the left eye with the left eye field-of-view area of the VR device and the right eye with the right eye field-of-view area of the VR device to view the left eye image and the right eye image; the brain fuses them to produce a sense of depth and immersion, placing the user inside the preset three-dimensional scene. Each of these preset three-dimensional scenes contains a display zone for showing the interface to be displayed; the area corresponding to this display zone in the left eye image of the three-dimensional scene is the first area, and the area corresponding to it in the right eye image of the three-dimensional scene is the second area. After the application interface display device draws the interface to be displayed in the first area and the second area respectively, the user, when placed in the preset three-dimensional scene, sees the interface to be displayed in the display zone. Specifically, the preset three-dimensional scene may be a cinema, a shopping mall, a classroom, and so on, which are not enumerated one by one here; correspondingly, the display zone may be the screen in the cinema, an advertising display in the shopping mall, the blackboard in the classroom, and the like.
It should be understood that VR technology immerses the user in a simulated environment, so the three-dimensional scene the user sees through the VR device mimics a real situation: when the user's head turns, the three-dimensional scene the user sees also turns, and the elements in the scene change accordingly. Taking the classroom scene as an example, the user's initial field of view is set at the center of the classroom; at this point the user can see the desks and chairs in front, the podium, and the entire blackboard, but when the user raises the head, the user can only see the upper half of the blackboard and the ceiling. Therefore, as the user's head moves, the position of the display zone in the user's field of view changes, and the display zone may even leave the user's field of view. Hence the application interface display device obtains the user's current head posture, that is, the second head posture, and then determines the position of the display zone in the user's field of view according to the second head posture, that is, determines first position information of the first area and second position information of the second area. The position information may specifically be the coordinate information of each vertex of the area on the screen, or other information that can determine the position, which is not limited herein. The application interface display device may then draw the interface to be displayed into the first area according to the first position information to obtain the third left eye image, and draw the interface to be displayed into the second area according to the second position information to obtain the third right eye image.
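As an illustrative sketch only, the following C++ code (assuming the GLM math library is available) shows one way the first area and second area could be determined from the second head posture: the four world-space corners of the display zone are projected into each eye's image using a per-eye view matrix offset by half of an assumed interpupillary distance. The function and parameter names, the field of view, and the interpupillary distance are illustrative assumptions, not values defined by this embodiment.

```cpp
#include <array>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct EyeArea { std::array<glm::vec2, 4> screenCorners; };

// Computes where the display-zone quad lands in one eye's image.
EyeArea projectDisplayZone(const glm::quat& headOrientation,             // second head posture
                           float eyeSign,                                // -1 = left eye, +1 = right eye
                           const std::array<glm::vec3, 4>& zoneCornersWorld,
                           const glm::vec4& viewport)                    // x, y, width, height in pixels
{
    const float halfIpd = 0.032f;                                        // assumed half interpupillary distance (m)
    // Head at the origin; eyes offset sideways along the head's local x axis.
    glm::vec3 eyePos  = headOrientation * glm::vec3(eyeSign * halfIpd, 0.0f, 0.0f);
    glm::vec3 forward = headOrientation * glm::vec3(0.0f, 0.0f, -1.0f);
    glm::vec3 up      = headOrientation * glm::vec3(0.0f, 1.0f,  0.0f);

    glm::mat4 view = glm::lookAt(eyePos, eyePos + forward, up);
    glm::mat4 proj = glm::perspective(glm::radians(90.0f), viewport.z / viewport.w, 0.1f, 100.0f);

    EyeArea area{};
    for (int i = 0; i < 4; ++i) {
        // glm::project returns window coordinates; the depth component is dropped.
        glm::vec3 win = glm::project(zoneCornersWorld[i], view, proj, viewport);
        area.screenCorners[i] = glm::vec2(win.x, win.y);
    }
    return area;
}
```

The texture of the interface to be displayed would then be drawn onto the quad bounded by these corners, once for the third left eye image and once for the third right eye image.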
It should be noted that, in this embodiment of the present application, the second head posture refers to the head posture obtained when the application interface display device performs binocular rendering on the interface to be displayed in the manner described above, whereas the first head posture in step 204 below refers to the head posture obtained after the barrel distortion processing and before the image adjustment. The first head posture and the second head posture are the user's head postures obtained by the application interface display device at different times: the second head posture is used for the binocular rendering, and the first head posture is used for the image adjustment. The first head posture or the second head posture in this embodiment of the present application is determined by a sensor, which may specifically be a sensor in the VR device, a sensor in the application interface display device, or a sensor of another external device, which is not limited herein.
It should also be noted that the application interface display device may obtain the third left eye image and the third right eye image of the interface to be displayed in other ways, which are not limited herein.
203. The application interface display device performs barrel distortion processing on the third left eye image and the third right eye image to obtain the first left eye image and the first right eye image of the interface to be displayed.
Because the VR device contains multiple sets of optical lenses, when the user views an image through the optical lenses, the edges of the image are distorted to varying degrees. In this embodiment of the present application, after drawing the third left eye image and the third right eye image for the user's left and right eyes, the application interface display device performs barrel distortion processing on the third left eye image to obtain the first left eye image, and performs barrel distortion processing on the third right eye image to obtain the first right eye image, so as to cancel out the distortion introduced by the optical lenses.
Specifically, the application interface display device may use a shader, with a set of preset parameters, to apply barrel distortion to the elements of the third left eye image and the third right eye image to obtain the first left eye image and the first right eye image. The preset parameters are set according to the lens parameters of the VR device, such as thickness, refractive index, and interpupillary distance. The application interface display device may also perform the barrel distortion processing in other ways, which is not limited herein.
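As an illustrative sketch only, the following C++ function (again assuming GLM) shows the kind of radial warp such a shader could apply to each texture coordinate; the coefficients k1 and k2 stand in for the lens-dependent preset parameters mentioned above, and their values here are illustrative rather than lens data from this embodiment.

```cpp
#include <glm/glm.hpp>

// Returns the coordinate at which the source image would be sampled for a
// fragment at uv, producing a barrel-shaped output that the headset lens's
// own pincushion distortion later cancels.
glm::vec2 barrelDistort(glm::vec2 uv,              // texture coordinate in [0,1]^2
                        glm::vec2 lensCenter,      // optical center of this eye's half
                        float k1 = 0.22f, float k2 = 0.24f)
{
    glm::vec2 d = uv - lensCenter;                 // offset from the lens center
    float r2 = glm::dot(d, d);                     // squared radial distance
    // Polynomial radial scale: fragments near the edge sample farther from the
    // center, compressing the rendered image toward the middle (barrel shape).
    float scale = 1.0f + k1 * r2 + k2 * r2 * r2;
    return lensCenter + d * scale;
}
```

In a GLSL implementation this logic would typically live in the fragment shader, which samples the third left eye image or the third right eye image at the warped coordinate.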
204. The application interface display device obtains the first head posture of the user.
While the application interface display device performs steps 201 to 203, the user's head posture may change. Therefore, after the application interface display device completes step 203 and obtains the first left eye image and the first right eye image, it obtains the user's latest head posture, that is, the first head posture. It should be understood that a head posture may specifically include the deflection direction of the user's head, the deflection angle of the user's head, or the motion mode of the user's head; the motion mode may specifically be a left-right swing, an up-down swing, or another motion, which is not limited herein; the head posture may also include other posture information, which is not limited herein.
205. The application interface display device adjusts the first left eye image according to the first head posture to obtain the second left eye image, and adjusts the first right eye image to obtain the second right eye image.
After obtaining the first head posture, the application interface display device adjusts the first left eye image according to the first head posture to obtain the second left eye image, and adjusts the first right eye image to obtain the second right eye image. Specifically, the application interface display device may calculate a transformation matrix according to the first head posture, transform the first left eye image according to the transformation matrix to obtain the second left eye image, and transform the first right eye image to obtain the second right eye image; in other words, it applies asynchronous timewarp to the first left eye image to obtain the second left eye image, and applies asynchronous timewarp to the first right eye image to obtain the second right eye image. Specifically, the application interface display device may use a shader, with a set of preset parameters, to perform the asynchronous timewarp operation on the texture data of the first left eye image and the first right eye image to obtain the second left eye image and the second right eye image. The asynchronous timewarp may also be performed in other ways to obtain the second left eye image and the second right eye image, which is not limited herein.
It should be understood that asynchronous timewarp (ATW) is an image correction technique. When a virtual reality device is used, rendering of the scene may lag because the head moves too fast; that is, the head has already turned, but the new image has not yet been rendered, or the image of the previous frame is still being displayed. Asynchronous timewarp alleviates this latency by warping the image immediately before it is sent to the display device. Specifically, asynchronous timewarp refers to operations such as stretching and shifting an image. For example, when the obtained first head posture indicates a turn to the left, the application interface display device stretches and translates the first left eye image and the first right eye image to the left according to the first head posture to obtain the second left eye image and the second right eye image; when the obtained first head posture indicates a downward turn, the application interface display device stretches and translates the first left eye image and the first right eye image downward according to the first head posture to obtain the second left eye image and the second right eye image. The adjustment differs depending on the obtained first head posture information, and the cases are not enumerated one by one here.
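As an illustrative sketch only, the following C++ code (again assuming GLM) shows one plausible way to build the transformation matrix mentioned in step 205: the rotation between the head orientation used during rendering and the newly obtained first head posture is re-projected through the rendering projection, giving a matrix that a shader could apply to the texture coordinates of the first left eye image and the first right eye image. The 90-degree field of view and the helper name are assumptions, not parameters defined by this embodiment.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Builds a rotation-only timewarp matrix from the pose used for rendering and
// the newest head-pose sample (the first head posture).
glm::mat4 timewarpMatrix(const glm::quat& renderPose,   // orientation used when rendering
                         const glm::quat& latestPose)   // first head posture (newest sample)
{
    // Rotation-only delta between the two orientations.
    glm::quat delta   = glm::inverse(latestPose) * renderPose;
    glm::mat4 deltaRot = glm::mat4_cast(delta);

    // Re-project through the same projection assumed for rendering (90° FOV).
    glm::mat4 proj = glm::perspective(glm::radians(90.0f), 1.0f, 0.1f, 100.0f);
    return proj * deltaRot * glm::inverse(proj);
}
```

A shader would then apply this matrix to the quad's texture coordinates (or to each fragment's view direction) when copying the first left eye image and the first right eye image into the second left eye image and the second right eye image.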
206. The application interface display device displays the second left eye image in the left eye field-of-view area of the VR device, and displays the second right eye image in the right eye field-of-view area of the VR device.
It should be understood that, in the embodiments of the present application, the VR device is divided into a left eye field-of-view area and a right eye field-of-view area according to the user's left and right eye fields of view. Specifically, if the VR device has an independent screen, the left eye field-of-view area is the area of that screen seen by the user's left eye, and the application interface display device displays the second left eye image in that area; the right eye field-of-view area is the area of that screen seen by the user's right eye, and the application interface display device displays the second right eye image in that area; the second left eye image and the second right eye image are presented to the user's left and right eyes through the corresponding optical lens groups. If the VR device does not have an independent screen, the left eye field-of-view area is the optical lens group on the VR device that the user's left eye is aligned with, and the application interface display device displays the second left eye image in the area of the external screen that this lens group is aligned with; the right eye field-of-view area is the optical lens group on the VR device that the user's right eye is aligned with, and the application interface display device displays the second right eye image in the area of the external screen that this lens group is aligned with; the second left eye image and the second right eye image then reach the user's left and right eyes through the deformation of the optical path. In this way, the second left eye image and the second right eye image are displayed to the user's left and right eyes through the left and right eye field-of-view areas of the VR device, and the user's brain fuses them into a stereoscopic image, presenting the interface to be displayed with a three-dimensional effect.
In this embodiment of the present application, after the dimension conversion processing of the interface to be displayed yields the images corresponding to the left and right eyes, the current head posture of the user is obtained, the images corresponding to the left and right eyes are adjusted according to that head posture, and the adjusted images are then displayed in the left and right eye field-of-view areas of the VR device. In other words, after the dimension conversion of the interface to be displayed yields images having a three-dimensional visual effect, the present application further adjusts the converted result according to the user's latest head posture, so that the position of the finally displayed image fits the user's field of view more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
Second, this embodiment of the present application performs binocular rendering on the interface to be displayed and, after the two-dimensional application interface has been rendered into images having a three-dimensional visual effect, further applies barrel distortion to those images to cancel the distortion introduced by the optical lenses of the VR device, which improves image quality and enhances the user experience.
Third, this embodiment of the present application provides multiple ways of adjusting images according to the head posture, which improves the flexibility of the solution.
To facilitate understanding of the embodiments of the present application, an application scenario to which the embodiments of the present application are applicable is briefly introduced below. FIG. 4 is a schematic structural diagram of a system to which the application interface display method and device provided in the embodiments of the present application are applicable. The system may include a mobile headset 401 and a mobile terminal 402. The mobile terminal 402 has a screen that includes a third area and a fourth area. When using the system, the user first places the mobile terminal inside the mobile headset 401, aligns the third area with the left eye field-of-view area of the mobile headset 401 and the fourth area with the right eye field-of-view area of the mobile headset 401, and then wears the mobile headset 401 with the left eye aligned with its left eye field-of-view area and the right eye aligned with its right eye field-of-view area.
The left eye field-of-view area and the right eye field-of-view area of the mobile headset each contain at least one set of optical lenses, which optically process the images displayed by the mobile terminal 402 and present the processed images on the user's retinas, so that a sense of depth and immersion is produced in the user's mind. The mobile headset 401 may further include a sensor for tracking the user's head posture, a CPU for processing data, and the like.
Based on the scenario corresponding to FIG. 4, and referring to FIG. 5, another embodiment of the application interface display method in the embodiments of the present application includes the following steps.
501. The application interface display device obtains an interface to be displayed from the mobile terminal.
In this embodiment of the present application, after the user wears the mobile headset, when the user needs an interface of a two-dimensional application to be displayed, the mobile terminal determines, according to the user's operation, the interface of the two-dimensional application that needs to be displayed, and the application interface display device obtains the interface to be displayed from the mobile terminal.
Specifically, in this embodiment of the present application, the mobile terminal may include a SurfaceFlinger module. SurfaceFlinger is the module responsible for display composition in the Android system; it receives windows and layers as input, calculates the position of each layer (Surface) in the final composited image according to parameters such as the depth, transparency, size, and position of each layer, generates the final display buffer (Buffer), and then outputs it to a specific display device. The mobile terminal may therefore generate the interface to be displayed through the SurfaceFlinger module, and the application interface display device obtains the interface to be displayed from the SurfaceFlinger module.
Optionally, in this embodiment of the present application, the application interface display device may be the mobile terminal itself: the mobile terminal composites the interface to be displayed through the SurfaceFlinger module, then transmits the interface to be displayed through a cross-process communication interface to another process that is independent of the Android system, and performs the following steps 502 to 503 in that process.
Optionally, the application interface display device in this embodiment of the present application may also be another user device independent of the Android system, such as a PC. The user device may establish a connection with the mobile terminal through a data cable, a wireless network, Bluetooth, or other means. After the mobile terminal composites the interface to be displayed through the SurfaceFlinger module, the user device obtains the interface to be displayed from the mobile terminal through that connection and performs the following steps 502 to 503.
Optionally, the application interface display device in this embodiment of the present application may also be a cloud server independent of the Android system. The mobile terminal communicates with the cloud server through a wireless network and transmits the interface to be displayed, composited by the SurfaceFlinger module, to the cloud server; the cloud server receives the interface to be displayed and performs the following steps 502 to 503.
502. The application interface display device performs dimension conversion processing on the interface to be displayed to obtain the first left eye image and the first right eye image corresponding to the interface to be displayed.
After obtaining the interface to be displayed, the application interface display device may perform dimension conversion processing on the interface to be displayed in the manner described in steps 202 and 203 of the embodiment corresponding to FIG. 2 to obtain the first left eye image and the first right eye image, or may obtain the first left eye image and the first right eye image of the interface to be displayed in other ways, which is not limited herein.
503. The application interface display device obtains the first head posture of the user.
In this embodiment of the present application, the sensor in the mobile headset can track the user's head posture in real time. After the application interface display device obtains the first left eye image and the first right eye image, it obtains the user's current head posture, that is, the first head posture, from the sensor in the mobile headset.
504. The application interface display device adjusts the first left eye image according to the first head posture to obtain the second left eye image, and adjusts the first right eye image to obtain the second right eye image.
After obtaining the first head posture, the application interface display device may adjust the first left eye image and the first right eye image in the manner described in step 205 of the embodiment corresponding to FIG. 2 to obtain the second left eye image and the second right eye image, or may adjust the first left eye image and the first right eye image in other ways, which is not limited herein.
505. The application interface display device sends the second left eye image and the second right eye image to the mobile terminal.
After obtaining the second left eye image and the second right eye image, the application interface display device sends them to the mobile terminal, so that the mobile terminal displays the second left eye image in the third area of the screen and the second right eye image in the fourth area of the screen. The user's left eye then sees the second left eye image in the third area through the left eye field-of-view area of the mobile headset, and the user's right eye sees the second right eye image in the fourth area through the right eye field-of-view area of the mobile headset; the brain fuses the second left eye image and the second right eye image into a stereoscopic image, presenting the interface to be displayed with a three-dimensional effect.
Optionally, in this embodiment of the present application, the application interface display device may be the mobile terminal itself. After the mobile terminal completes steps 502 and 503 in the other process independent of the Android system and obtains the second left eye image and the second right eye image, it sends the second left eye image and the second right eye image to the SurfaceFlinger module through the cross-process communication interface; the SurfaceFlinger module generates the display buffer and displays the second left eye image and the second right eye image on the screen.
Optionally, the application interface display device in this embodiment of the present application may also be another user device or a cloud server independent of the Android system. After the user device or the cloud server performs steps 502 and 503 and obtains the second left eye image and the second right eye image, it may send the second left eye image and the second right eye image to the SurfaceFlinger module through a wireless network or other means; the SurfaceFlinger module generates the display buffer and displays the second left eye image and the second right eye image on the screen.
In this embodiment of the present application, after the dimension conversion processing of the interface to be displayed yields the images corresponding to the left and right eyes, the current head posture of the user is obtained, the images corresponding to the left and right eyes are adjusted according to that head posture, and the adjusted images are then displayed in the left and right eye field-of-view areas of the VR device. In other words, after the dimension conversion of the interface to be displayed yields images having a three-dimensional visual effect, the present application further adjusts the converted result according to the user's latest head posture, so that the position of the finally displayed image fits the user's field of view more closely. This avoids the dizziness caused by misalignment between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, thereby improving the user experience.
Second, the application interface display method in this embodiment of the present application can run in the mobile terminal in another process independent of the Android system, or run on a user device or a cloud server independent of the Android system. That is, the application interface display method in this embodiment of the present application does not depend on the Android system, which can reduce the computational burden on the mobile terminal. In addition, when the algorithms used in the method need to be updated, the update can be made independently of the Android system, and when the internal architecture of the Android system is updated, the algorithms used in the method do not need to be modified accordingly, so the method is more flexible and more generally applicable.
Based on the embodiment corresponding to FIG. 5, in another embodiment of the application interface display method provided in the embodiments of the present application, the mobile terminal may be an Android-based mobile phone that contains a SurfaceFlinger process and another 3DConverter process that is independent of the Android system.
In this embodiment of the present application, when the user taps the icon of a two-dimensional application on the Android phone, the phone starts the process corresponding to the two-dimensional application (Process100). SurfaceFlinger creates a layer (Surface) for Process100 and also creates the graphics buffer (GraphicBuffer) corresponding to that Surface; for ease of description, the graphics buffer corresponding to this Surface is referred to here as the first graphics buffer (gb100). SurfaceFlinger passes the data of the first graphics buffer to Process100 through the Binder mechanism, and Process100 maps the data of gb100 into its process space. Process100 then performs drawing operations through OpenGL functions according to the application's drawing logic, writes the drawing result into that process space, and notifies SurfaceFlinger through the Binder mechanism that drawing is complete.
Driven by a timer signal, SurfaceFlinger checks at a fixed period whether the data in gb100 has been updated. If there is an update, SurfaceFlinger marks gb100; the mark mainly records SurfaceFlinger's composition strategy for gb100, for example whether SurfaceFlinger processes gb100 through the graphics processing unit (GPU) or the hardware composer (HWC), where processing here refers to compositing the graphics buffers of multiple applications and sending the result to the framebuffer for display. By traversing the data in gb100 that is to be displayed and calling the glDrawArrays function, SurfaceFlinger draws that data, in the form of a texture, into the graphics buffer (gb200) corresponding to the FramebufferSurface layer.
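As an illustrative sketch only, the following OpenGL ES 2.0 code shows how buffered layer data, bound as a texture, can be drawn as a full-viewport quad with glDrawArrays, in the spirit of the composition step described above. The shader program and its attribute and uniform locations are assumed to have been created elsewhere, and the function name is hypothetical.

```cpp
#include <GLES2/gl2.h>

// Draws one layer's texture as a full-viewport quad using glDrawArrays.
void drawLayerTexture(GLuint program, GLuint layerTexture,
                      GLint aPosition, GLint aTexCoord, GLint uSampler)
{
    // Interleaved x, y, u, v for a full-viewport triangle strip.
    static const GLfloat quad[] = {
        -1.f, -1.f, 0.f, 0.f,
         1.f, -1.f, 1.f, 0.f,
        -1.f,  1.f, 0.f, 1.f,
         1.f,  1.f, 1.f, 1.f,
    };

    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, layerTexture);
    glUniform1i(uSampler, 0);                       // sample from texture unit 0

    glVertexAttribPointer(aPosition, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad);
    glVertexAttribPointer(aTexCoord, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad + 2);
    glEnableVertexAttribArray(aPosition);
    glEnableVertexAttribArray(aTexCoord);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);          // four vertices, two triangles
}
```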
After the phone starts the 3DConverter process, 3DConverter obtains the data in gb200, that is, the texture data of the interface to be displayed, from SurfaceFlinger through a cross-process communication interface (Interfacer100), and updates that texture data into a first texture block (P200_texture100). It then uses the first texture block as the input of OpenGL functions, performs a first rendering pass on the interface to be displayed, and stores the result of the first rendering pass into a second texture block (P200_texture200). The specific procedure of the first rendering pass is as follows.
After updating the data in gb200 into the first texture block, 3DConverter determines that the preset three-dimensional scene is a cinema scene, which contains a cinema screen (the display zone). 3DConverter obtains the user's current head posture (the second head posture) through the sensor in the VR device, and then takes as input the model data of the cinema scene (including the vertices, geometry, color, and so on of the cinema model), the texture data of the interface to be displayed stored in the first texture block, and the obtained head posture. The process calls glDrawArrays twice, computes the position of the cinema screen in the virtual scene to obtain its four vertex coordinates, and then draws the interface to be displayed into the three-dimensional scene according to those vertex coordinates and the texture data of the interface to be displayed, obtaining the third left eye image and the third right eye image corresponding to the three-dimensional scene (including the interface to be displayed). Then, in the OpenGL shader, barrel distortion is applied to the third left eye image and the third right eye image using a set of preset parameters to obtain the first left eye image and the first right eye image, and the first left eye image and the first right eye image are stored, in the form of textures, into the second texture block.
After storing the first left eye image and the first right eye image into the second texture block, 3DConverter uses the second texture block as the input of OpenGL functions, performs a second rendering pass, and stores the result of the second rendering pass into the first texture block. The specific procedure of the second rendering pass is as follows.
3DConverter obtains the user's current head posture (the first head posture) through the sensor in the VR device, calculates a transformation matrix according to that head posture, uses the transformation matrix to transform the images stored in the second texture block, and draws the transformed images. Specifically, when drawing with OpenGL, the OpenGL shader performs the asynchronous timewarp operation on the texture data in the second texture block using another set of preset parameters to obtain the second left eye image and the second right eye image, and the second left eye image and the second right eye image are stored, in the form of textures, into the first texture block.
It should be understood that, normally, texture blocks are the input of OpenGL drawing and the framebuffer is the output of drawing, but this embodiment of the present application outputs the drawing results into texture blocks. Specifically, the first texture block is attached to the color attachment point of a first frame buffer (p200_faramebuffer100), and the second texture block is attached to the color attachment point of a second frame buffer (p200_faramebuffer200). By drawing into the second frame buffer, the result of the first rendering pass is stored into the second texture block, and by drawing into the first frame buffer, the result of the second rendering pass is stored into the first texture block.
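As an illustrative sketch only, the following OpenGL ES 2.0 code shows how a texture can be attached to a framebuffer object's color attachment point so that subsequent draw calls are written into the texture, as described above for the first and second texture blocks; the function name and the RGBA format are illustrative choices, not details fixed by this embodiment.

```cpp
#include <GLES2/gl2.h>

// Attaches 'texture' to the color attachment point of 'fbo' so that draw calls
// issued while the FBO is bound render into the texture rather than the screen.
bool attachTextureToFbo(GLuint fbo, GLuint texture, GLsizei width, GLsizei height)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);       // allocate storage, no initial data
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);      // color attachment point
    bool complete = glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
    glBindFramebuffer(GL_FRAMEBUFFER, 0);                   // restore the default framebuffer
    return complete;
}
```

With the first texture block attached to one framebuffer object and the second texture block attached to the other, each rendering pass simply binds the framebuffer whose attached texture should receive its output before issuing its draw calls.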
Finally, 3DConverter notifies SurfaceFlinger through a cross-process communication interface (Interfacer200) that rendering has finished, and sends the texture data in the first texture block to SurfaceFlinger. Based on that texture data, SurfaceFlinger displays the second left eye image in the third area of the phone's screen and displays the second right eye image in the fourth area of the screen. When the user, through the VR device, aligns the left eye with the left screen area of the phone and the right eye with the right screen area of the phone, the user feels present in the preset cinema scene and sees the interface to be displayed on the cinema screen.
The application interface display method in the embodiments of the present application has been described above. The application interface display device in the embodiments of the present application is described below. It should be understood that the application interface display device in the embodiments of the present application is used to display the interface of a 2D application on a VR device. The application interface display device may be the VR device itself, a communication device that can be connected to the VR device, such as a PC, a mobile terminal, or a cloud server, or a component in the VR device or the communication device, which is not limited herein.
The application interface display device in the embodiments of the present application is first described from the perspective of functional modules. Referring to FIG. 6, an embodiment of the application interface display device in the embodiments of the present application includes:
a first obtaining module 601, configured to obtain an interface to be displayed, where the interface to be displayed is an interface of a 2D application;
a processing module 602, configured to perform dimension conversion processing on the interface to be displayed obtained by the first obtaining module 601 to obtain a first left eye image and a first right eye image corresponding to the interface to be displayed, where the first left eye image and the first right eye image are used to present the interface to be displayed with a three-dimensional visual effect;
a second obtaining module 603, configured to obtain a first head posture of the user;
an adjusting module 604, configured to adjust, according to the first head posture obtained by the second obtaining module, the first left eye image to obtain a second left eye image and the first right eye image to obtain a second right eye image; and
a display module 605, configured to display the second left eye image in the left eye field-of-view area of the VR device and display the second right eye image in the right eye field-of-view area of the VR device.
In this embodiment of the present application, after the processing module 602 performs dimension conversion processing on the interface to be displayed to obtain the images corresponding to the left and right eyes, the second obtaining module 603 obtains the user's current head posture, the adjusting module 604 adjusts the left-eye and right-eye images according to that head posture, and the display module then displays the adjusted images in the left-eye and right-eye field-of-view areas of the VR device. In other words, after the dimension conversion of the interface to be displayed produces images with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image better matches the user's field of view. This avoids the dizziness caused by a mismatch between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, and improves the user experience.
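The cooperation of the five modules can also be sketched as a single display cycle in code. All types and function names below are illustrative stand-ins under the assumptions stated in the comments; they are not names from the embodiment.

```cpp
#include <utility>

// Structural sketch of the module flow described above; all types and
// functions are hypothetical placeholders, not part of the embodiment.
struct Image { /* pixel data omitted */ };
struct HeadPose { float yaw = 0, pitch = 0, roll = 0; };

static Image acquireInterfaceToDisplay() { return {}; }            // first obtaining module
static std::pair<Image, Image> convertTo3D(const Image&) {          // processing module
    return {Image{}, Image{}};                                      // first left/right eye images
}
static HeadPose getLatestHeadPose() { return {}; }                  // second obtaining module
static Image adjustForPose(const Image& eye, const HeadPose&) {     // adjusting module
    return eye;
}
static void displayLeft(const Image&)  {}                           // display module, left view area
static void displayRight(const Image&) {}                           // display module, right view area

// One display cycle: convert the 2D interface first, then sample the latest
// head pose and adjust both eye images immediately before display.
void displayFrame() {
    Image ui = acquireInterfaceToDisplay();
    auto [firstLeft, firstRight] = convertTo3D(ui);
    HeadPose pose = getLatestHeadPose();
    Image secondLeft  = adjustForPose(firstLeft,  pose);
    Image secondRight = adjustForPose(firstRight, pose);
    displayLeft(secondLeft);
    displayRight(secondRight);
}
```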
Based on the embodiment corresponding to FIG. 6, the processing module can convert the interface of the two-dimensional application into an interface with a three-dimensional visual effect in multiple ways. Taking one of them as an example, the application interface display apparatus in the embodiments of the present application is described in detail below. Referring to FIG. 7, another embodiment of the application interface display apparatus in the embodiments of the present application includes:
a first obtaining module 701, configured to obtain an interface to be displayed, where the interface to be displayed is an interface of a 2D application;
a processing module 702, configured to perform dimension conversion processing on the interface to be displayed obtained by the first obtaining module 701, to obtain a first left-eye image and a first right-eye image corresponding to the interface to be displayed, where the first left-eye image and the first right-eye image are used to present the interface to be displayed with a three-dimensional visual effect;
a second obtaining module 703, configured to obtain a first head posture of the user;
an adjusting module 704, configured to adjust, according to the first head posture obtained by the second obtaining module, the first left-eye image to obtain a second left-eye image, and adjust the first right-eye image to obtain a second right-eye image; and
a display module 705, configured to display the second left-eye image in the left-eye field-of-view area of the VR device and display the second right-eye image in the right-eye field-of-view area of the VR device.
In this embodiment of the present application, the processing module 702 includes:
a rendering unit 7021, configured to perform binocular rendering on the interface to be displayed, to obtain a third left-eye image and a third right-eye image of the interface to be displayed; and
a processing unit 7022, configured to perform barrel distortion processing on the third left-eye image and the third right-eye image, to obtain the first left-eye image of the interface to be displayed and the first right-eye image of the interface to be displayed.
Optionally, in this embodiment of the present application, the rendering unit 7021 may include:
a first obtaining subunit 70211, configured to obtain a second head posture of the user;
a determining subunit 70212, configured to determine a first area and a second area according to the second head posture, where the first area is an area, in the left-eye image of a preset three-dimensional scene, used to present the interface to be displayed, and the second area is an area, in the right-eye image of the preset three-dimensional scene, used to present the interface to be displayed; and
a drawing subunit 70213, configured to draw the interface to be displayed in the first area to obtain the third left-eye image, and draw the interface to be displayed in the second area to obtain the third right-eye image.
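As a rough illustration of how the second head posture could determine the drawing regions, the sketch below shifts a fixed-size rectangle inside each eye image opposite to the head rotation, with a signed horizontal offset per eye to produce binocular parallax. This is a deliberately simplified, hypothetical small-angle approximation; a real implementation would project the virtual screen with full view and projection matrices, and the embodiment does not prescribe this particular mapping.

```cpp
// Hypothetical, simplified region selection for the 2D interface inside one
// eye image of the preset 3D scene, driven by the head pose sampled at
// render time (the "second head posture"). Sign conventions, field of view,
// and region size are all assumptions made for illustration only.
struct Region { int x, y, w, h; };

Region interfaceRegion(float yawRad, float pitchRad,
                       int eyeImageW, int eyeImageH,
                       float eyeOffsetPx /* signed per-eye offset for parallax */) {
    const float pixelsPerRadian = eyeImageW / 1.5f;  // assumes a horizontal FOV of ~1.5 rad
    const int regionW = eyeImageW / 2;               // assumed size of the virtual screen
    const int regionH = eyeImageH / 2;

    // A world-fixed virtual screen moves opposite to the head rotation.
    int centerX = static_cast<int>(eyeImageW / 2 - yawRad   * pixelsPerRadian + eyeOffsetPx);
    int centerY = static_cast<int>(eyeImageH / 2 + pitchRad * pixelsPerRadian);
    return Region{centerX - regionW / 2, centerY - regionH / 2, regionW, regionH};
}
```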
Optionally, in this embodiment of the present application, the adjusting module 704 may include:
a time warping unit 7041, configured to perform asynchronous time warping on the first left-eye image according to the first head posture to obtain the second left-eye image, and perform asynchronous time warping on the first right-eye image to obtain the second right-eye image.
In this embodiment of the present application, after the processing module 702 performs dimension conversion processing on the interface to be displayed to obtain the images corresponding to the left and right eyes, the second obtaining module 703 obtains the user's current head posture, the adjusting module 704 adjusts the left-eye and right-eye images according to that head posture, and the display module then displays the adjusted images in the left-eye and right-eye field-of-view areas of the VR device. In other words, after the dimension conversion of the interface to be displayed produces images with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image better matches the user's field of view. This avoids the dizziness caused by a mismatch between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, and improves the user experience.
Secondly, this embodiment of the present application performs binocular rendering on the interface to be displayed and, after the two-dimensional application interface has been rendered into an image with a three-dimensional visual effect, also applies barrel distortion to the image to cancel the distortion introduced by the optical lenses of the VR device, which improves image quality and enhances the user experience.
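The barrel (pre-)distortion step can be expressed with the usual polynomial radial model; the sketch below maps an output pixel back to the position it should sample from the undistorted eye image. The coefficients are illustrative assumptions and would in practice be calibrated to the specific lenses of the VR device.

```cpp
// Minimal sketch of barrel pre-distortion: for every pixel of the output
// image, compute which position of the undistorted eye image it should
// sample, using a polynomial radial model r' = r * (1 + k1*r^2 + k2*r^4).
// k1 and k2 are placeholder values, not calibration data.
struct Uv { float u, v; };   // normalized coordinates in [-1, 1], origin at image center

Uv barrelDistort(Uv out, float k1 = 0.22f, float k2 = 0.24f) {
    float r2 = out.u * out.u + out.v * out.v;        // squared distance from the center
    float scale = 1.0f + k1 * r2 + k2 * r2 * r2;     // radial scaling factor
    return Uv{out.u * scale, out.v * scale};          // sample position in the source image
}
```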
Thirdly, this embodiment of the present application provides a way of adjusting the image according to the head posture, which improves the implementability of the solution.
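The essence of asynchronous time warping is to re-project the already rendered eye image by the rotation between the pose used at render time and the latest sampled pose. A hedged sketch of computing that rotation delta with quaternions follows; applying the delta as a per-pixel or per-vertex reprojection of the eye image is omitted, and the types here are illustrative rather than taken from the embodiment.

```cpp
// Quaternion-based sketch of the core of asynchronous time warp: the image
// was rendered with one head orientation, a newer orientation is available
// at display time, so the image is rotated by the difference between them.
struct Quat { float w, x, y, z; };

Quat conjugate(Quat q) { return {q.w, -q.x, -q.y, -q.z}; }

// Hamilton product of two unit quaternions.
Quat multiply(Quat a, Quat b) {
    return {
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
    };
}

// Rotation that takes the pose used for rendering to the latest sampled pose.
// The warp pass rotates the rendered eye image by this delta so that the
// displayed image matches the user's current field of view.
Quat timeWarpDelta(Quat renderPose, Quat latestPose) {
    return multiply(latestPose, conjugate(renderPose));
}
```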
To facilitate understanding of the embodiments of the present application, refer to FIG. 4. Based on the system scenario corresponding to FIG. 4, and referring to FIG. 8, another embodiment of the application interface display apparatus in the embodiments of the present application includes:
a first obtaining module 801, configured to obtain an interface to be displayed, where the interface to be displayed is an interface of a 2D application;
a processing module 802, configured to perform dimension conversion processing on the interface to be displayed obtained by the first obtaining module 801, to obtain a first left-eye image and a first right-eye image corresponding to the interface to be displayed, where the first left-eye image and the first right-eye image are used to present the interface to be displayed with a three-dimensional visual effect;
a second obtaining module 803, configured to obtain a first head posture of the user;
an adjusting module 804, configured to adjust, according to the first head posture obtained by the second obtaining module, the first left-eye image to obtain a second left-eye image, and adjust the first right-eye image to obtain a second right-eye image; and
a display module 805, configured to display the second left-eye image in the left-eye field-of-view area of the VR device and display the second right-eye image in the right-eye field-of-view area of the VR device.
The first obtaining module 801 includes:
an obtaining unit 8011, configured to obtain the interface to be displayed from a mobile terminal.
Correspondingly, the display module 805 includes:
a sending unit 8051, configured to send the second left-eye image and the second right-eye image to the mobile terminal, so that the mobile terminal displays the second left-eye image in a third area of its screen and displays the second right-eye image in a fourth area of its screen, where the screen of the mobile terminal includes the third area and the fourth area, the third area corresponds to the left-eye field-of-view area of the VR device, and the fourth area corresponds to the right-eye field-of-view area of the VR device.
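A trivial sketch of how the third and fourth regions might be laid out on the phone screen is given below. It assumes the two regions are simply the left and right halves of a landscape screen; the embodiment does not fix the exact split, so the geometry here is purely illustrative.

```cpp
// Hypothetical split of a phone screen into the "third" region (facing the
// left lens of the VR headset) and the "fourth" region (facing the right
// lens), assuming a simple half-and-half layout in landscape orientation.
struct Rect { int x, y, w, h; };

struct ScreenSplit { Rect leftEyeArea; Rect rightEyeArea; };

ScreenSplit splitScreen(int screenW, int screenH) {
    Rect left {0,           0, screenW / 2, screenH};   // third region: left half
    Rect right{screenW / 2, 0, screenW / 2, screenH};   // fourth region: right half
    return {left, right};
}
```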
Optionally, in this embodiment of the present application, the obtaining unit 8011 may include:
a second obtaining subunit 80111, configured to obtain the interface to be displayed from the SurfaceFlinger module.
Correspondingly, the sending unit 8051 may include:
a sending subunit 80511, configured to send the second left-eye image and the second right-eye image to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left-eye image in the third area of the screen of the mobile terminal and displays the second right-eye image in the fourth area of the screen.
It should be understood that, in this embodiment of the present application, the application interface display apparatus may be the mobile terminal shown in FIG. 4, another user device independent of the Android system such as a PC, a cloud server independent of the Android system, or another device; this is not specifically limited here.
In this embodiment of the present application, after the processing module 802 performs dimension conversion processing on the interface to be displayed to obtain the images corresponding to the left and right eyes, the second obtaining module 803 obtains the user's current head posture, the adjusting module 804 adjusts the left-eye and right-eye images according to that head posture, and the display module then displays the adjusted images in the left-eye and right-eye field-of-view areas of the VR device. In other words, after the dimension conversion of the interface to be displayed produces images with a three-dimensional visual effect, the converted result is further adjusted according to the user's latest head posture, so that the position of the finally displayed image better matches the user's field of view. This avoids the dizziness caused by a mismatch between the image position and the user's field of view due to changes in the user's head posture while the two-dimensional application interface is being rendered into an image with a three-dimensional visual effect, and improves the user experience.
Secondly, the application interface display apparatus in this embodiment of the present application may be a user device or a cloud server independent of the Android system; that is, the application interface display method in this embodiment of the present application does not depend on the Android system, which can reduce the computational burden on the mobile terminal. In addition, when the algorithms used in the method need to be updated, the update can be carried out independently of the Android system, and when the internal architecture of the Android system is updated, the algorithms used in the method do not need to be modified accordingly, providing greater flexibility and generality.
The application interface display apparatus in the embodiments of the present application has been described above from the perspective of functional modules; it is described below from the perspective of physical hardware. Referring to FIG. 9, FIG. 9 is a schematic structural diagram of an application interface display apparatus 90 according to an embodiment of the present application. The application interface display apparatus 90 may include an input device 910, an output device 920, a processor 930, and a memory 940.
The memory 940 may include a read-only memory and a random access memory, and provides instructions and data to the processor 930. A portion of the memory 940 may further include a non-volatile random access memory (NVRAM).
The memory 940 stores the following elements, executable modules or data structures, or a subset thereof, or an extended set thereof:
operation instructions, including various operation instructions used to implement various operations; and
an operating system, including various system programs used to implement various basic services and to handle hardware-based tasks.
In this embodiment of the present application, the application interface display apparatus or the VR device includes at least one display, and the processor 930 in the application interface display apparatus is specifically configured to:
obtain an interface to be displayed, where the interface to be displayed is an interface of a 2D application;
perform dimension conversion processing on the interface to be displayed, to obtain a first left-eye image and a first right-eye image corresponding to the interface to be displayed, where the first left-eye image and the first right-eye image are used to present the interface to be displayed with a three-dimensional visual effect;
obtain a first head posture of the user;
adjust, according to the first head posture, the first left-eye image to obtain a second left-eye image, and adjust the first right-eye image to obtain a second right-eye image; and
control the display to display the second left-eye image in the left-eye field-of-view area of the VR device and display the second right-eye image in the right-eye field-of-view area of the VR device.
The processor 930 controls the operation of the application interface display apparatus 90 and may also be referred to as a central processing unit (CPU). The memory 940 may include a read-only memory and a random access memory, and provides instructions and data to the processor 930. A portion of the memory 940 may further include an NVRAM. In a specific application, the components of the application interface display apparatus 90 are coupled together through a bus system 950, where in addition to a data bus, the bus system 950 may further include a power bus, a control bus, a status signal bus, and the like. However, for clarity of description, the various buses are all labeled as the bus system 950 in the figure.
The method disclosed in the foregoing embodiments of the present application may be applied to the processor 930 or implemented by the processor 930. The processor 930 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 930 or by instructions in the form of software. The processor 930 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed with reference to the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 940, and the processor 930 reads the information in the memory 940 and completes the steps of the foregoing method in combination with its hardware. The foregoing embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When software is used for implementation, they may be implemented wholly or partly in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a dedicated computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, apparatus, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiments are merely examples; the unit division is merely a logical function division and may be another division in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing embodiments are merely intended to describe the technical solutions of the present application, rather than to limit them. Although the present application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some of the technical features thereof; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (15)
- An application interface display method, used by an application interface display apparatus to display an interface of a two-dimensional (2D) application on a virtual reality (VR) device, comprising: obtaining an interface to be displayed, wherein the interface to be displayed is an interface of a 2D application; performing dimension conversion processing on the interface to be displayed, to obtain a first left-eye image and a first right-eye image corresponding to the interface to be displayed, wherein the first left-eye image and the first right-eye image are used to present the interface to be displayed with a three-dimensional visual effect; obtaining a first head posture of a user; adjusting, according to the first head posture, the first left-eye image to obtain a second left-eye image, and adjusting the first right-eye image to obtain a second right-eye image; and displaying the second left-eye image in a left-eye field-of-view area of the VR device, and displaying the second right-eye image in a right-eye field-of-view area of the VR device.
- The method according to claim 1, wherein the performing dimension conversion processing on the interface to be displayed comprises: performing binocular rendering on the interface to be displayed, to obtain a third left-eye image and a third right-eye image of the interface to be displayed; and performing barrel distortion processing on the third left-eye image and the third right-eye image, to obtain the first left-eye image of the interface to be displayed and the first right-eye image of the interface to be displayed.
- The method according to claim 2, wherein the performing binocular rendering on the interface to be displayed, to obtain a third left-eye image and a third right-eye image of the interface to be displayed comprises: obtaining a second head posture of the user; determining a first area and a second area according to the second head posture, wherein the first area is an area, in a left-eye image of a preset three-dimensional scene, used to present the interface to be displayed, and the second area is an area, in a right-eye image of the preset three-dimensional scene, used to present the interface to be displayed; and drawing the interface to be displayed in the first area to obtain the third left-eye image, and drawing the interface to be displayed in the second area to obtain the third right-eye image.
- The method according to claim 1, wherein the adjusting, according to the first head posture, the first left-eye image to obtain a second left-eye image, and adjusting the first right-eye image to obtain a second right-eye image comprises: performing asynchronous time warping on the first left-eye image according to the first head posture to obtain the second left-eye image, and performing asynchronous time warping on the first right-eye image to obtain the second right-eye image.
- The method according to any one of claims 1 to 4, wherein the obtaining an interface to be displayed comprises: obtaining the interface to be displayed from a mobile terminal; and the displaying the second left-eye image in a left-eye field-of-view area of the VR device, and displaying the second right-eye image in a right-eye field-of-view area of the VR device comprises: sending the second left-eye image and the second right-eye image to the mobile terminal, so that the mobile terminal displays the second left-eye image in a third area of a screen and displays the second right-eye image in a fourth area of the screen, wherein the screen of the mobile terminal comprises the third area and the fourth area, the third area corresponds to the left-eye field-of-view area of the VR device, and the fourth area corresponds to the right-eye field-of-view area of the VR device.
- The method according to claim 5, wherein the mobile terminal comprises a SurfaceFlinger module, and the obtaining the interface to be displayed from the mobile terminal comprises: obtaining the interface to be displayed from the SurfaceFlinger module; and the sending the second left-eye image and the second right-eye image to the mobile terminal comprises: sending the second left-eye image and the second right-eye image to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left-eye image in the third area of the screen of the mobile terminal and displays the second right-eye image in the fourth area of the screen.
- An application interface display apparatus, configured to display an interface of a two-dimensional (2D) application on a virtual reality (VR) device, comprising: a first obtaining module, configured to obtain an interface to be displayed, wherein the interface to be displayed is an interface of a 2D application; a processing module, configured to perform dimension conversion processing on the interface to be displayed obtained by the first obtaining module, to obtain a first left-eye image and a first right-eye image corresponding to the interface to be displayed, wherein the first left-eye image and the first right-eye image are used to present the interface to be displayed with a three-dimensional visual effect; a second obtaining module, configured to obtain a first head posture of a user; an adjusting module, configured to adjust, according to the first head posture obtained by the second obtaining module, the first left-eye image to obtain a second left-eye image, and adjust the first right-eye image to obtain a second right-eye image; and a display module, configured to display the second left-eye image in a left-eye field-of-view area of the VR device, and display the second right-eye image in a right-eye field-of-view area of the VR device.
- The apparatus according to claim 7, wherein the processing module comprises: a rendering unit, configured to perform binocular rendering on the interface to be displayed, to obtain a third left-eye image and a third right-eye image of the interface to be displayed; and a processing unit, configured to perform barrel distortion processing on the third left-eye image and the third right-eye image, to obtain the first left-eye image of the interface to be displayed and the first right-eye image of the interface to be displayed.
- The apparatus according to claim 8, wherein the rendering unit comprises: a first obtaining subunit, configured to obtain a second head posture of the user; a determining subunit, configured to determine a first area and a second area according to the second head posture, wherein the first area is an area, in a left-eye image of a preset three-dimensional scene, used to present the interface to be displayed, and the second area is an area, in a right-eye image of the preset three-dimensional scene, used to present the interface to be displayed; and a drawing subunit, configured to draw the interface to be displayed in the first area to obtain the third left-eye image, and draw the interface to be displayed in the second area to obtain the third right-eye image.
- The apparatus according to claim 7, wherein the adjusting module comprises: a time warping unit, configured to perform asynchronous time warping on the first left-eye image according to the first head posture to obtain the second left-eye image, and perform asynchronous time warping on the first right-eye image to obtain the second right-eye image.
- The apparatus according to any one of claims 7 to 10, wherein the first obtaining module comprises: an obtaining unit, configured to obtain the interface to be displayed from a mobile terminal; and the display module comprises: a sending unit, configured to send the second left-eye image and the second right-eye image to the mobile terminal, so that the mobile terminal displays the second left-eye image in a third area of a screen and displays the second right-eye image in a fourth area of the screen, wherein the screen of the mobile terminal comprises the third area and the fourth area, the third area corresponds to the left-eye field-of-view area of the VR device, and the fourth area corresponds to the right-eye field-of-view area of the VR device.
- The apparatus according to claim 11, wherein the mobile terminal comprises a SurfaceFlinger module, and the obtaining unit comprises: a second obtaining subunit, configured to obtain the interface to be displayed from the SurfaceFlinger module; and the sending unit comprises: a sending subunit, configured to send the second left-eye image and the second right-eye image to the SurfaceFlinger module, so that the SurfaceFlinger module displays the second left-eye image in the third area of the screen of the mobile terminal and displays the second right-eye image in the fourth area of the screen.
- An application interface display apparatus, configured to display an interface of a two-dimensional (2D) application on a virtual reality (VR) device, wherein the application interface display apparatus or the VR device comprises at least one display, and the application interface display apparatus comprises an input device, an output device, a processor, and a memory, wherein the memory is configured to store a program, and the processor is configured to execute the program in the memory, specifically including the following steps: obtaining an interface to be displayed, wherein the interface to be displayed is an interface of a 2D application; performing dimension conversion processing on the interface to be displayed, to obtain a first left-eye image and a first right-eye image corresponding to the interface to be displayed, wherein the first left-eye image and the first right-eye image are used to present the interface to be displayed with a three-dimensional visual effect; obtaining a first head posture of a user; adjusting, according to the first head posture, the first left-eye image to obtain a second left-eye image, and adjusting the first right-eye image to obtain a second right-eye image; and controlling the display to display the second left-eye image in a left-eye field-of-view area of the VR device and display the second right-eye image in a right-eye field-of-view area of the VR device.
- A computer-readable storage medium comprising instructions that, when run on a computer, cause the computer to perform the method according to any one of claims 1 to 6.
- A computer program product comprising instructions that, when run on a computer, causes the computer to perform the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201780010154.0A CN108604385A (en) | 2016-11-08 | 2017-03-24 | A kind of application interface display methods and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610980760 | 2016-11-08 | ||
CN201610980760.2 | 2016-11-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018086295A1 true WO2018086295A1 (en) | 2018-05-17 |
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/078027 WO2018086295A1 (en) | 2016-11-08 | 2017-03-24 | Application interface display method and apparatus |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108604385A (en) |
WO (1) | WO2018086295A1 (en) |
Also Published As
Publication number | Publication date |
---|---|
CN108604385A (en) | 2018-09-28 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17869609; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 17869609; Country of ref document: EP; Kind code of ref document: A1 |