WO2008122838A1 - Improved image quality in stereoscopic multiview displays - Google Patents
Improved image quality in stereoscopic multiview displays Download PDFInfo
- Publication number
- WO2008122838A1 (PCT/IB2007/051208)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- sets
- image
- multiview display
- stemming
- Prior art date
Links
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/349—Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/172—Processing image signals comprising non-image signal components, e.g. headers or format information
- H04N13/183—On-screen display [OSD] information, e.g. subtitles or menus
Definitions
- This invention relates to a method, a computer program, a computer program product, a device and a system for providing sets of image data of a three-dimensional image for a multiview display.
- the horizontal distance or disparity in an object between the left- and right-eye images defines the amount of stereo effect in the image.
- the most common 3D displays have two views (a left-eye view and a right-eye view), and the user must be located exactly in front of the display to see a comfortable 3D image. That is, the freedom of movement in front of the display is very limited.
- One solution for the limited viewing freedom is to have more than two views in the display.
- the views are spread more widely and the transition from one view to another is typically smooth.
- the resulting motion parallax, i.e. the possibility to peek behind objects, makes the 3D experience more natural.
- In these spatially interlaced 3D displays, the resolution for each view is generally reduced compared to the resolution of a 2D display that uses a base panel (pixel matrix) of the same size.
- In a 2-view 3D display, the resolution for each view is still half of the base panel resolution, which is not yet critical, especially with high-resolution displays.
- in a multiview 3D display incorporating N views (e.g. 9), each view has only 1/N (e.g. 1/9) of the base panel resolution.
- the perceived resolution might be slightly higher than this because typically the adjacent views are partly overlapping. Still, the resolution is clearly lower than in 2-view displays.
- a method for providing image data related to a three-dimensional image for a multiview display wherein the image data comprises sets of image data, wherein the multiview display comprises a plurality of sets of one or more viewing sectors, and wherein the multiview display is configured to display each set of image data of the sets of image data in a different set of viewing sectors of the plurality of sets of viewing sectors to create a stereoscopic effect.
- the method comprises including, in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the three-dimensional image, so that the number of views displayed for the at least a part of the three-dimensional image is smaller than the number of sets of viewing sectors of the multiview display.
- a computer program for providing image data related to a three-dimensional image for a multiview display wherein the image data comprises sets of image data, wherein the multiview display comprises a plurality of sets of one or more viewing sectors, and wherein the multiview display is configured to display each set of image data of the sets of image data in a different set of viewing sectors of the plurality of sets of viewing sectors to create a stereoscopic effect.
- the computer program comprises instructions operable to cause a processor to include, in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the three-dimensional image, so that the number of views displayed for the at least a part of the three-dimensional image is smaller than the number of sets of viewing sectors of the multiview display.
- the computer program may for instance be stored on a computer-readable medium.
- the computer-readable medium may for instance be embodied as an electric, magnetic, electro-magnetic or optic storage medium, and may either be a removable medium or a medium that is fixedly installed in a device.
- a device for providing image data related to a three-dimensional image for a multiview display wherein the image data comprises sets of image data, wherein the multiview display comprises a plurality of sets of one or more viewing sectors, and wherein the multiview display is configured to display each set of image data of the sets of image data in a different set of viewing sectors of the plurality of sets of viewing sectors to create a stereoscopic effect.
- the device comprises a processing unit configured to include, in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the three-dimensional image, so that the number of views displayed for the at least a part of the three-dimensional image is smaller than the number of sets of viewing sectors of the multiview display.
- the device may for instance be embodied as a module.
- the device may also comprise the multiview display.
- the device may for instance be a mobile communication device or a part thereof, such as for instance a mobile phone, a personal digital assistant or a portable computer. Equally well, the device may be a computer.
- the device may for instance be a server in a communication network that produces image data to be transmitted to clients (e.g. terminals) of the network via wired and/or wireless connections.
- a system comprising a first device and a second device, wherein the first device is configured to provide image data related to a three-dimensional image for a multiview display, wherein the image data comprises sets of image data, wherein the multiview display comprises a plurality of sets of one or more viewing sectors, and wherein the multiview display is configured to display each set of image data of the sets of image data in a different set of viewing sectors of the plurality of sets of viewing sectors to create a stereoscopic effect.
- the first device comprises a processing unit configured to include, in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the three-dimensional image, so that the number of views displayed for the at least a part of the three-dimensional image is smaller than the number of sets of viewing sectors of the multiview display, and an interface configured to transmit the sets of image data.
- the second device comprises an interface configured to receive the sets of image data, and the multiview display for displaying the sets of image data.
- the first and second devices may for instance be components of a wired and/or wireless network, e.g. a server, or a part thereof, and a client or a part thereof.
- image data related to a 3D image is provided for a multiview display.
- the providing of said image data may for instance comprise producing said image data and/or processing of raw image data.
- Said producing of said image data may for instance be performed by a 3D engine that uses virtual cameras looking at a virtual scene or object from different viewpoints.
- Said processing of raw image data may for instance comprise combining raw image data obtained from one or more cameras picturing an object from one or more different viewpoints.
- the image data may be in any format that allows the image data to be displayed on a multiview display.
- the multiview display is configured to display each set of image data in a different set of viewing sectors to create a stereoscopic effect.
- the sets of image data may be displayed in the different sets of viewing sectors completely or partially concurrently, or in a temporally interlaced manner.
- each of the sets of image data may for instance be rendered spatially interlaced, i.e. by a specific plurality of pixels of a base panel (e.g. a Liquid Crystal Display (LCD) matrix) of the multiview display, and the specific plurality of pixels may be associated with optical means that accordingly block or bend the light from the specific plurality of pixels so that the image data rendered by the specific plurality of pixels is only displayed in a specific set of one or more viewing sectors of the multiview display.
- in case the sets of image data stem from different views of the 3D image (for instance a left-eye view and a right-eye view), providing the image data comprising the sets of image data to the multiview display causes the different views of the 3D image to be respectively displayed in different sets of viewing sectors of the multiview display, and thus creates a stereoscopic effect.
- each of the sets of image data may for instance be rendered by all pixels of the base panel (i.e. with full resolution), and the multiview display may be configured in a way to display the sets of image data in a temporally interlaced manner, wherein each set of image data is displayed in a different set of viewing sectors.
- the multiview display may for instance first display a first set of image data in a first set of viewing sectors, and then may display the second set of image data in a second set of viewing sectors.
- the perception inertia of a viewer's eye will cause the sets of image data to be perceived in a stereoscopic way.
- Changing the set of viewing sectors in which the respective set of image data is displayed may for instance be achieved by a bi-directional backlight structure, which is configured to direct light into two or more distinct directions (corresponding to viewing sectors) and is synchronized with the frequency of the change of the sets of image data displayed by the base panel.
- other optical and/or mechanical means that allow the sets of image data to be displayed in different sets of viewing sectors in a temporally interlaced manner may be applied.
- the multiview display may also be configured to display the sets of image data in the sets of viewing sectors in both a temporally and spatially interlaced way. For instance, in a first time slot, two sets of image data may be displayed spatially interlaced, i.e. by two different sets of pixels of the base panel that cooperate with one or more lenticular lenses to display the two sets of image data in two different sets of viewing sectors, and in a second time slot, two further sets of image data may be displayed spatially interlaced, so that, when integrating both time slots, four sets of image data are displayed in four different sets of viewing sectors (e.g. four different viewing sectors).
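For illustration, the following minimal sketch (all names and the two-by-two numbers are assumptions for this example, not taken from the patent) enumerates how such a combined temporally and spatially interlaced mode could serve four sets of viewing sectors from two pixel groups and two time slots:

```python
# Illustrative sketch only: schedule for a display that combines spatial interlacing
# (two pixel groups behind a lenticular lens) with temporal interlacing (two time
# slots with switchable optics), together addressing four sets of viewing sectors.

def build_schedule(num_time_slots=2, num_pixel_groups=2):
    """Return (time_slot, pixel_group, image_set, sector_set) tuples."""
    schedule = []
    for t in range(num_time_slots):
        for g in range(num_pixel_groups):
            image_set = t * num_pixel_groups + g   # which set of image data is rendered
            sector_set = image_set                 # ...and which set of viewing sectors receives it
            schedule.append((t, g, image_set, sector_set))
    return schedule

for t, g, s, v in build_schedule():
    print(f"time slot {t}: pixel group {g} renders image set {s} -> viewing-sector set {v}")
```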
- in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the 3D image is included. This including may for instance be performed during or after a producing of said image data. Equally well, this including may for instance be performed during or after processing of raw image data.
- the complexity of providing the image data for the multiview display may be significantly reduced.
- the sets of image data may be provided in a way that in the N sets of image data that are to be displayed by the multiview display, image data stemming from respective N different views of the background of the 3D image is included, i.e. image data stemming from a first view is included in a first set of image data, image data stemming from a second view is included in a second set of image data, and so forth.
- image data stemming from the same view of the foreground object is included into at least two of the sets of image data. For instance, only image data stemming from a left-eye view of the foreground object may be included into the first half of the sets of image data, and image data stemming from a right-eye view of the foreground object may be included into the second half of the sets of image data.
- This may slightly reduce the motion parallax (depending on the content) and/or the viewing freedom (depending on the 3D structure of the display) , but increases its sharpness. Furthermore, all sets of viewing sectors of the multiview display are still used.
- instead of including image data stemming from the same view of only a part of the 3D image into at least two sets of image data, image data stemming from the same view of the entire 3D image may equally well be included into at least two sets of image data. Then, in at least two sets of viewing sectors of the multiview display, the same view of the 3D image is displayed.
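As a back-of-the-envelope sketch of the complexity reduction mentioned above (the figures and the helper name are illustrative assumptions, not prescribed by the patent), counting the distinct views that have to be produced shows the saving when a part of the image reuses only two views:

```python
# Illustrative count of distinct views to render for a display with N sets of
# viewing sectors, when one image part reuses a smaller number of views.

def distinct_renders(num_sector_sets, views_for_part, views_for_rest=None):
    if views_for_rest is None:
        views_for_rest = num_sector_sets      # one distinct view per sector set
    return views_for_rest + views_for_part

print(distinct_renders(9, 9))   # 18 renders: every part treated as full multiview content
print(distinct_renders(9, 2))   # 11 renders: one part reuses the same two views
```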
- the present invention may for instance be applied to create 3D content for a User Interface (UI), for 3D games, for movie subtitles, but may equally well be applied in the context of natural images (for instance with post processing) .
- the sets of image data form groups, and in all sets of image data forming a group, the image data stemming from the same view of the at least a part of the three-dimensional image is included.
- the image data stemming from the same view of the at least a part of the 3D image may for instance be image data stemming from a left-eye or right-eye view of the entire 3D image or a part of the 3D image.
- the sets of image data forming a group are displayed in adjacent viewing sectors, when the sets of image data are displayed by the multiview display.
- the sets of image data forming a first group may for instance be displayed by the first viewing sectors in each set of viewing sectors of the multiview display, and the sets of image data forming a second group may for instance be displayed by the last viewing sectors in each set of viewing sectors of the multiview display.
- the sets of image data only form two groups.
- the two groups may for instance be related to a left-eye view and a right-eye view of the 3D image or of a part of the 3D image, respectively.
- the image data stemming from the same view of the at least a part of the three-dimensional image is included in at least three sets of image data of the sets of image data.
- the sets of image data form a first group and a second group, wherein, when the sets of image data are displayed by the multiview display, the sets of image data forming the first group are displayed in adjacent viewing sectors, and the sets of image data forming the second group are displayed in adjacent viewing sectors, wherein in all sets of image data forming the first group, image data stemming from a left-eye view of the at least a part of the three-dimensional image is included, and wherein in all sets of image data forming the second group, image data stemming from a right-eye view of the at least a part of the three-dimensional image is included.
- the image data included in the at least two sets of image data and stemming from the same view of the at least a part of the three-dimensional image is the same for each of the at least two sets of image data. This may contribute to a reduced computational complexity.
- the image data included in the at least two sets of image data and stemming from the same view of the at least a part of the three-dimensional image is different for each of the at least two sets of image data.
- This may contribute to an improved perceived resolution of the 3D image or of parts thereof.
- This may for instance be of advantage when the sets of image data are displayed in the different sets of viewing sectors at least partially by spatial interlacing, because each spatially interlaced set of image data may then only use a portion of the resolution of the base panel.
- the image data included in the at least two sets of image data and stemming from the same view of the at least a part of the three-dimensional image differs for each of the at least two sets of image data by the spatial sampling grid applied when sampling the same view of the at least a part of the three-dimensional image.
- the view of the at least a part of the 3D image may be sampled by a rectangular sampling grid with M pixels, and for each of the at least two sets of image data stemming from this view of the 3D image, only a subset of pixels from the M pixels (corresponding to a thinned-out sampling grid) is used as the image data to be included.
- M/3 pixels may be used as image data to be included.
- This may for instance be achieved by using, as image data for a first of the three sets of image data that are to be displayed in spatially interlaced manner by the same base panel, a sampling grid which samples the first, the fourth, seventh and so forth pixels, by using, for a second of the three sets of image data, a sampling grid which samples the second, the fifth, eighth and so forth pixels, and by using for a third of the three sets of image data, a sampling grid which samples the third, the sixth, ninth and so forth pixels.
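A minimal sketch of these offset sampling grids, assuming a single row of M = 12 pixels for brevity (the array contents and names are illustrative only):

```python
import numpy as np

# The same view, sampled with M pixels, feeds three sets of image data; each set
# takes every third pixel but starts at a different offset, so the sets differ
# although they all stem from the same view.

view = np.arange(12)        # stand-in for one row of a rendered view (M = 12 pixels)
num_sets = 3

thinned = [view[offset::num_sets] for offset in range(num_sets)]
for offset, samples in enumerate(thinned):
    print(f"set {offset + 1}: pixels {samples.tolist()}")
# set 1: pixels [0, 3, 6, 9]    (the 1st, 4th, 7th, ... pixels in 1-based counting)
# set 2: pixels [1, 4, 7, 10]
# set 3: pixels [2, 5, 8, 11]
```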
- the determining of whether at least a part of the three-dimensional image requires increased sharpness may for instance be based on information on the composition of the 3D image, for instance whether there is text comprised in the 3D image. Furthermore, the determining may be based on an analysis of the 3D image.
- the three-dimensional image comprises at least one of a natural image and an artificial element.
- the natural image may for instance be used as a background.
- the artificial element may for instance be text or icons, to name but a few possibilities .
- Fig. 1a: a schematic block diagram of a device according to an exemplary embodiment of the present invention.
- Fig. 1b: a schematic block diagram of a system according to an exemplary embodiment of the present invention.
- Fig. 2: a schematic illustration of a generation of viewing sectors in a multiview display
- Fig. 3: a diagram illustrating the viewing sectors generated by a multiview display
- Fig. 4: a flowchart of an exemplary method according to the present invention.
- Fig. 5: an exemplary illustration of five views of a 3D image.
- Fig. 1a is a schematic block diagram of a device 1 according to an exemplary embodiment of the present invention.
- Device 1 comprises a processor 10, a processor memory 11 and a multiview display 12.
- Multiview display 12 is an autostereoscopic direct-view display that is configured to display image data relating to a 3D image in 3D.
- the display panel of the multiview display 12 includes a spatially interlaced structure that blocks or bends the light from only certain pixels to each eye, as will be explained in more detail with reference to Fig. 2 below.
- Device 1 may for instance be an electronic device such as for instance a computer, a mobile phone or a personal digital assistant.
- 3D content may be provided and displayed, for instance in the context of a 3D User Interface (UI) .
- Fig. 1b is a schematic block diagram of a system 2 according to an exemplary embodiment of the present invention.
- System 2 comprises a first device 3 and a second device 4.
- Device 3 comprises a processor 30, a processor memory 31 and an interface 32.
- Processor 30 is configured to provide image data for displaying on a multiview display. The providing of this image data may for instance comprise producing the image data and/or processing of raw image data to obtain the image data for the multiview display. To provide the image data, the steps of flowchart 400 of Fig. 4 may be executed by processor 30, which steps may for instance be implemented in software code that is stored in processor memory 31 and can be accessed by processor 30.
- the image data provided by processor 30 is transmitted via an interface 32 to device 4. Therein, the image data provided by processor 30 may for instance also be at least partially produced by processor 30, or may at least partially be received by processor 30 and then be processed accordingly.
- Device 4 comprises an interface 42 for receiving image data from device 3, a processor 40 for controlling an overall operation of device 4, a processor memory for storing software code that is executed by processor 40, and a multiview display 43. Image data received from device 3 may then, under the control of processor 40, be forwarded to multiview display 43 for displaying.
- the image data provided by processor 10 of Fig. 1a and/or the processor 30 of Fig. 1b may be created or produced in many ways for a display with N views.
- Image data could also be created by a 3D engine that includes many virtual cameras looking at a scene or object from different viewpoints (for instance with fixed or adaptive camera separation) .
- the 3D engine could create a 2D image plus a depth map.
- Image data could also be created as layered data, which is a combination of the previous cases and flat objects located in the image with certain disparity (separation) between different views. Layered data may for instance be used for UI applications.
- Image data created by a 3D engine or layered data may for instance be at least partially obtained by processor 10 or processor 30 and then be further processed, or may at least partially be created by processor 10 or processor 30 itself.
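The following sketch illustrates one possible reading of such layered data (the class, the helper and the pixel figures are assumptions made for this example, not definitions from the patent): each flat layer carries a disparity, i.e. a horizontal separation between adjacent views, and its horizontal position in a given view is obtained by shifting it in proportion to the distance of that view from the centre view.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    x: int          # horizontal position in the centre view, in pixels
    disparity: int  # horizontal shift between adjacent views, in pixels

def layer_position(layer: Layer, view_index: int, centre_index: int) -> int:
    # Layers with larger disparity move more from view to view and so appear nearer.
    return layer.x + (view_index - centre_index) * layer.disparity

layers = [Layer("background", 0, 1), Layer("text", 40, 4)]
num_views = 5
for view in range(num_views):
    positions = {l.name: layer_position(l, view, num_views // 2) for l in layers}
    print(f"view {view}: {positions}")
```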
- Fig. 2 is a schematic illustration of a generation of viewing sectors 7 and 8 in a multiview display 5, such as for instance the multiview displays 12 and 43 of Figs. 1a and 1b, respectively.
- LCD pixel matrix 51 renders image data that is fed to device 5.
- Lenticular lens 52 bends the light emitted by the pixels of the LCD pixel matrix 51 and thus allows specific sets of image data to be displayed mainly in specific sets of viewing sectors. For instance, consider the pixels 510 and 511 in LCD pixel matrix 51. When rendering image data by LCD pixel matrix 51, pixels 510 and 511 can be understood to render a set of image data comprised in the overall image data rendered by LCD pixel matrix 51. Due to the presence of the lenticular lens 52, this set of image data is displayed in a specific set of one or more viewing sectors. In Fig. 2, only one viewing sector is represented by arrow 7.
- the set of image data rendered by pixels 510 and 511 is not only directed to one viewing sector 7, but to a set of viewing sectors, wherein the center angles of these viewing sectors may for instance be equidistant.
- An example of such a set of viewing sectors will be discussed with reference to Fig. 3 below.
- a viewer 6 with left eye 61 and right eye 62 is shown. Since the left eye 61 of viewer 6 is positioned in viewing sector 7, left eye 61 perceives the set of image data that is rendered by pixels 510 and 511 of LCD pixel matrix 51, whereas right eye 62 perceives the set of image data that is rendered by pixels 512 and 513.
- if the two sets of image data displayed towards the left eye 61 and right eye 62 stem from different views of a 3D image (for instance have been produced with different viewing angles with respect to a 3D image target), e.g. a left-eye view and a right-eye view, a stereoscopic effect is created for viewer 6.
- Fig. 2 also illustrates the displaying of three further sets of image data in three different sets of viewing sectors.
- Multiview display 5 of Fig. 2 is thus capable of displaying five sets of image data in five different sets of viewing sectors, so that actually five different views of a 3D image can be displayed.
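As a simplified sketch of such column-wise spatial interlacing (the panel dimensions and the strict one-column-per-view layout are assumptions for illustration, not the actual sub-pixel layout of multiview display 5), each base panel column can be thought of as carrying one of the five views, with the lenticular lens steering each group of columns into its own set of viewing sectors:

```python
import numpy as np

num_views, height, width = 5, 4, 10
views = [np.full((height, width), v) for v in range(num_views)]  # dummy views, filled with their index

panel = np.empty((height, width), dtype=int)
for col in range(width):
    panel[:, col] = views[col % num_views][:, col]   # every fifth column comes from the same view

print(panel[0])   # [0 1 2 3 4 0 1 2 3 4] -> which view each panel column carries
```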
- Fig. 5 exemplarily illustrates five different views 103-1, 103-2, 103-3, 103-4 and 103-5 of a 3D image 100 that is composed of a 3D object 101 and a text block 102.
- a 3D image 100 may for instance be used in a UI.
- each view can be understood as a 2D representation of the 3D image with respect to a specific viewing angle.
- the five views 103-1, 103-2, 103-3, 103-4 and 103-5 of Fig. 5 may for instance be described by five respective sets of image data, and these five sets of image data may then be displayed by the five viewing sectors of multiview display 5 of Fig. 2 to generate a 3D impression.
- a multiview display 5 that applies spatial interlacing to display the sets of image data in the different sets of viewing sectors was exemplarily considered.
- a multiview display that at least partially performs temporal interlacing to display the sets of image data in the different sets of viewing sectors could be deployed. For instance, instead of using only a subset of the pixels of the display panel 51 for the different sets of image data, all pixels of the display panel 51 could be used for each set of image data, and a bi-directional backlight structure could be used that allows the light generated by the backlight to be directed into two or more distinct directions. The sets of image data are then displayed by the display panel in fast succession, and the backlight is switched accordingly to direct the currently displayed set of image data towards a different set of viewing sectors.
- Such a temporally interlaced multiview display allows displaying the sets of image data with full resolution.
- temporal interlacing and spatial interlacing of sets of image data may be combined.
- Fig. 3 is a diagram illustrating in more detail the viewing sectors generated by a multiview display, such as for instance the multiview display 5 of Fig. 2.
- the diagram depicts the luminance distribution of five different sets of image data (e.g. the five sets of image data corresponding to the five views 103-1, 103-2, 103-3, 103-4 and 103-5 of 3D image 100 of Fig. 5) displayed by a multiview display (in candela per square meter, or "nits") as a function of the viewing angle with respect to the display (measured in the horizontal plane, with an angle of 0° defining a position straight in front of the display) .
- the peaks 91-1 and 91-2 illustrate two viewing sectors of a set of viewing sectors in which a first set of image data is displayed.
- peaks 92-1 and 92-2 illustrate two viewing sectors of a set of viewing sectors in which a second set of image data is displayed
- peak 93-1 illustrates one viewing sector of a set of viewing sectors in which a third set of image data is displayed
- peaks 94-1 and 94-2 illustrate two viewing sectors of a set of viewing sectors in which a fourth set of image data is displayed
- peaks 95-1 and 95-2 illustrate two viewing sectors of a set of viewing sectors in which a fifth set of image data is displayed.
- a viewing sector may for instance be defined based on the intersections between the curves of different sets of image data. This yields a sector width of approximately 3°. Equally well, other definitions of viewing sectors are possible.
- the viewing sectors at least partially overlap, which may be advantageous with respect to a smooth transition between the sets of image data displayed in the different viewing sectors .
- the multiview display not only allows the 3D image to be displayed in an angular range of approximately 15°, corresponding to the peaks 91-1, 92-1, 93-1, 94-1 and 95-1, but also in a much broader angular range, due to the presence of further peaks 91-2, 92-2, 94-2 and 95-2, i.e. due to the fact that sets of image data are not displayed in single viewing sectors only, but in sets of viewing sectors.
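A rough numerical sketch of the sector-width idea (the Gaussian shape, peak spacing and widths are assumed values, not measurements taken from Fig. 3): modelling the luminance of two adjacent sets of image data as lobes over the viewing angle, the boundary of a viewing sector can be taken as the angle where the two curves intersect.

```python
import numpy as np

angles = np.linspace(-10.0, 10.0, 4001)          # viewing angle in degrees

def lobe(centre, width=2.0, peak=100.0):
    # Assumed Gaussian luminance profile of one set of image data (in "nits").
    return peak * np.exp(-((angles - centre) / width) ** 2)

view_a, view_b = lobe(-1.5), lobe(1.5)           # two adjacent views, peaks 3 degrees apart
crossing = angles[np.argmin(np.abs(view_a - view_b))]
print(f"curves intersect near {crossing:.2f} deg")   # ~0.00 deg, halfway between the peaks
# With peaks spaced 3 degrees apart, successive intersections are also about
# 3 degrees apart, matching the sector width mentioned above.
```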
- Fig. 4 is a flowchart 400 of an exemplary method according to the present invention. The steps of this method may for instance be performed by processor 10 of device 1 (see Fig. Ia) or by processor 30 of device 3 of system 2 (see Fig. Ib) . This method is directed to increasing a sharpness and/or a resolution of at least a part of a 3D image.
- the loop defined by steps 401-404 causes image data of the five views 103-1, 103-2, 103-3, 103-4 and 103-5 of 3D image 100 of Fig. 5 to be included in five respective sets of image data. If these sets of image data were provided to a multiview display with five sets of viewing sectors (as described with reference to Figs. 2 and 3 above), a strong motion parallax and/or a large viewing freedom would be caused for 3D image 100 of Fig. 5. However, the text block 102 would be likely to be blurred to such a degree that it may no longer be properly readable.
- in a step 405 it is determined if an increased sharpness is required for a part of the 3D image 100. As discussed above, this may for instance be the case for text block 102 of 3D image 100, which text block 102 is understood as a part of the 3D image 100.
- if it is determined in step 405 that an increased sharpness is required for text block 102, the flowchart proceeds to step 406 and includes image data of a left-eye view of the text block 102 in a first half of the sets of image data, and includes image data of a right-eye view of text block 102 in a second half of the sets of image data.
- the left-eye view of text block 102 may for instance be view 103-2 of Fig. 5, and the right-eye view of text block 102 may be view 103-4 of Fig. 5.
- the steps 401-406 of the flowchart 400 of Fig. 4 may for instance be performed during the production of the image data of the different views of the 3D image. Equally well, the steps 401-406 may be performed as a pre-processing on already existing sets of image data.
- a multiview 3D display thus can be treated partly or completely as if it was a 2-view 3D display. This may especially be true if the viewing freedom of the multiview 3D display is close to that of a common 2-view 3D display.
- the multiview display may for instance work as a 2-view 3D display, if the first half of the views is treated as the left-eye view and the second half of the views as the right-eye view.
- the display hardware of the multiview display then may not need any changes, only content creation has to be adapted. Those parts of the 3D image that need to have more sharpness may then be sampled with only 2 views and the rest treated as a multiview image.
- a background image in a 3D display may be a multiview image, and a text in front of it (popping out of the screen) may only have two views.
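A hedged sketch of this content-creation step in the spirit of flowchart 400 (the renderer callables and their names are placeholders, not an API defined by the patent): the background is rendered from a different view for every set of image data, while the text is rendered from only a left-eye or a right-eye view depending on which half of the sets it goes into.

```python
def compose_image_sets(render_background, render_text, num_sets, text_needs_sharpness=True):
    image_sets = []
    for i in range(num_sets):                       # one set of image data per set of viewing sectors
        background = render_background(view=i)      # full multiview treatment of the background
        if text_needs_sharpness:                    # cf. the sharpness decision in step 405
            text_view = "left" if i < num_sets // 2 else "right"   # only two views for the text
        else:
            text_view = i
        image_sets.append((background, render_text(view=text_view)))
    return image_sets

# Dummy renderers standing in for a real 3D engine:
sets = compose_image_sets(lambda view: f"bg[{view}]", lambda view: f"text[{view}]", num_sets=5)
print(sets)
```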
- Including different image data stemming from the same view of the 3D image (or a part thereof) increases the perceived resolution of the 3D image (or the part thereof) , because the viewing sectors in which these sets of image data are displayed may at least partially overlap (see Fig. 3) . This causes the image data displayed in adjacent viewing sectors to be merged in a viewer's eye, resulting in increased perceived resolution.
- the computer software may be stored in a variety of storage media of electric, magnetic, electro-magnetic or optic type and may be read and executed by a processor, such as for instance a microprocessor.
- a processor such as for instance a microprocessor.
- the processor and the storage medium may be coupled to interchange information, or the storage medium may be included in the processor.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
This invention relates to a method, a computer program, a computer program product, a device and a system for providing image data related to a three-dimensional image for a multiview display. The image data comprises sets of image data. The multiview display comprises a plurality of sets of one or more viewing sectors. The multiview display is configured to display each set of image data of the sets of image data in a different set of viewing sectors of the plurality of sets of viewing sectors to create a stereoscopic effect. Therein, in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the three-dimensional image is included, so that the number of views displayed for the at least a part of the three-dimensional image is smaller than the number of sets of viewing sectors of the multiview display.
Description
Improved Image Quality in Stereoscopic Multiview Displays
FIELD OF THE INVENTION
This invention relates to a method, a computer program, a computer program product, a device and a system for providing sets of image data of a three-dimensional image for a multiview display.
BACKGROUND OF THE INVENTION
In autostereoscopic direct-view displays (i.e. three-dimensional (3D) displays), the image is seen in 3D.
The 3D display panel typically includes a spatially interlaced structure that blocks or bends the light from only certain pixels to each eye. So the left and the right eyes of the user see different pixels of the display, and if the display content is generated accordingly, different left- and right-eye images also.
The horizontal distance or disparity in an object between the left- and right-eye images defines the amount of stereo effect in the image. The most common 3D displays have two views (a left-eye view and a right-eye view), and the user must be located exactly in front of the display to see a comfortable 3D image. That is, the freedom of movement in front of the display is very limited.
One solution for the limited viewing freedom is to have more than two views in the display. In these multiview 3D displays, the views are spread more widely and the transition from one view to another is typically smooth. In addition to the larger viewing freedom, the resulting motion parallax (i.e. the possibility to peek behind objects) makes the 3D experience more natural.
In these spatially interlaced 3D displays, the resolution for each view is generally reduced compared to the resolution of a 2D display that uses a base panel (pixel matrix) of the same size. In a 2-view 3D display, the resolution for each view is still half of the base panel resolution, which is not yet critical, especially with high-resolution displays. However, in a multiview 3D display incorporating N views (e.g. 9), each view has only 1/N (e.g. 1/9) of the base panel resolution. The perceived resolution might be slightly higher than this because typically the adjacent views are partly overlapping. Still, the resolution is clearly lower than in 2-view displays.
Also, the perceived blurriness of objects increases when the stereo effect increases, which is not the case in 2-view displays.
This results in blurry images and e.g. in poor text readability in multiview 3D displays.
SUMMARY
The quality of a 3D image or of certain parts of it may be improved by minimizing the stereo effect of those parts. Keeping the disparity of these objects close to zero makes them look sharp. This approach may be used by 3D content creators both with 2-view and multiview stereo content. The downside is that the capabilities of the 3D display may be partly wasted.
According to a first aspect of the present invention, a method for providing image data related to a three-dimensional image for a multiview display is disclosed, wherein the image data comprises sets of image data, wherein the multiview display comprises a plurality of sets of one or more viewing sectors, and wherein the multiview display is configured to display each set of image data of the sets of image data in a different set of viewing sectors of the plurality of sets of viewing sectors to create a stereoscopic effect. The method comprises including, in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the three-dimensional image, so that the number of views displayed for the at least a part of the three-dimensional image is smaller than the number of sets of viewing sectors of the multiview display.
According to a second aspect of the present invention, a computer program for providing image data related to a three-dimensional image for a multiview display is disclosed, wherein the image data comprises sets of image data, wherein the multiview display comprises a plurality of sets of one or more viewing sectors, and wherein the multiview display is configured to display each set of image data of the sets of image data in a different set of viewing sectors of the plurality of sets of viewing sectors to create a stereoscopic effect. The computer program comprises instructions operable to cause a processor to include, in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the three-dimensional image, so that the number of views displayed for the at least a part of the three-dimensional image is smaller than the number of
sets of viewing sectors of the multiview display. The computer program may for instance be stored on a computer-readable medium. The computer-readable medium may for instance be embodied as an electric, magnetic, electro-magnetic or optic storage medium, and may either be a removable medium or a medium that is fixedly installed in a device.
According to a third aspect of the present invention, a device for providing image data related to a three-dimensional image for a multiview display is disclosed, wherein the image data comprises sets of image data, wherein the multiview display comprises a plurality of sets of one or more viewing sectors, and wherein the multiview display is configured to display each set of image data of the sets of image data in a different set of viewing sectors of the plurality of sets of viewing sectors to create a stereoscopic effect. The device comprises a processing unit configured to include, in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the three-dimensional image, so that the number of views displayed for the at least a part of the three-dimensional image is smaller than the number of sets of viewing sectors of the multiview display. The device may for instance be embodied as a module. The device may also comprise the multiview display. The device may for instance be a mobile communication device or a part thereof, such as for instance a mobile phone, a personal digital assistant or a portable computer. Equally well, the device may be a computer. The device may for instance be a server in a communication network that produces image data to be transmitted to clients (e.g. terminals) of the network via wired and/or wireless connections.
According to a fourth aspect of the present invention, a system is disclosed, comprising a first device and a second device, wherein the first device is configured to provide image data related to a three-dimensional image for a multiview display, wherein the image data comprises sets of image data, wherein the multiview display comprises a plurality of sets of one or more viewing sectors, and wherein the multiview display is configured to display each set of image data of the sets of image data in a different set of viewing sectors of the plurality of sets of viewing sectors to create a stereoscopic effect. The first device comprises a processing unit configured to include, in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the three-dimensional image, so that the number of views displayed for the at least a part of the three-dimensional image is smaller than the number of sets of viewing sectors of the multiview display, and an interface configured to transmit the sets of image data. The second device comprises an interface configured to receive the sets of image data, and the multiview display for displaying the sets of image data. The first and second devices may for instance be components of a wired and/or wireless network, e.g. a server, or a part thereof, and a client or a part thereof.
According to the present invention, image data related to a 3D image is provided for a multiview display. The providing of said image data may for instance comprise producing said image data and/or processing of raw image data. Said producing of said image data may for instance be performed by a 3D engine that uses virtual cameras looking at a virtual scene or object from different viewpoints. Said processing of raw image data
may for instance comprise combining raw image data obtained from one or more cameras picturing an object from one or more different viewpoints. The image data may be in any format that allows the image data to be displayed on a multiview display. The image data comprises sets of image data, and the multiview display, which comprises a plurality of sets of one or more viewing sectors, is configured to display each of the sets of image data in a different set of viewing sectors of the plurality of sets of viewing sectors to create a stereoscopic effect. The viewing sectors of the multiview display may for instance be sectors lying in a horizontal plane and/or in a vertical plane. Both of the planes may for instance be perpendicular to a plane defined by the screen of a display. A set of viewing sectors may for instance comprise only one viewing sector, or several viewing sectors which may for instance be periodically arranged, for instance with equidistant viewing sector center angles, in a plane. Therein, different sets of viewing sectors may be understood as sets of viewing sectors that do not have viewing sectors in common. Nevertheless, viewing sectors of different sets of viewing sectors may at least partially overlap.
The multiview display is configured to display each set of image data in a different set of viewing sectors to create a stereoscopic effect. Therein, the sets of image data may be displayed in the different sets of viewing sectors completely or partially concurrently, or in a temporally interlaced manner.
For instance, in the multiview display, each of the sets of image data may for instance be rendered spatially interlaced, i.e. by a specific plurality of pixels of a base panel (e.g.
a Liquid Crystal Display (LCD) matrix) of the multiview display, and the specific plurality of pixels may be associated with optical means that accordingly block or bend the light from the specific plurality of pixels so that the image data rendered by the specific plurality of pixels is only displayed in a specific set of one or more viewing sectors of the multiview display.
In case that the sets of image data stem from different views of the 3D image (for instance a left-eye view and a right-eye view) , providing the image data comprising the sets of image data to the multiview display causes the different views of the 3D image to be respectively displayed in different sets of viewing sectors of the multiview display, and thus creates a stereoscopic effect.
Equally well, in the multiview display, each of the sets of image data may for instance be rendered by all pixels of the base panel (i.e. with full resolution), and the multiview display may be configured in a way to display the sets of image data in a temporally interlaced manner, wherein each set of image data is displayed in a different set of viewing sectors. As a simple example, if only two sets of viewing sectors are present, the multiview display may for instance first display a first set of image data in a first set of viewing sectors, and then may display the second set of image data in a second set of viewing sectors. If the alternation between the displaying of both sets of image data is fast enough, the perception inertia of a viewer's eye will cause the sets of image data to be perceived in a stereoscopic way. Changing the set of viewing sectors in which the respective set of image data is displayed may for instance be achieved by a
bi-directional backlight structure, which is configured to direct light into two or more distinct directions (corresponding to viewing sectors) and is synchronized with the frequency of the change of the sets of image data displayed by the base panel. Equally well, other optical and/or mechanical means that allow the sets of image data to be displayed in different sets of viewing sectors in a temporally interlaced manner may be applied.
The multiview display may also be configured to display the sets of image data in the sets of viewing sectors in both a temporally and spatially interlaced way. For instance, in a first time slot, two sets of image data may be displayed spatially interlaced, i.e. by two different sets of pixels of the base panel that cooperate with one or more lenticular lenses to display the two sets of image data in two different sets of viewing sectors, and in a second time slot, two further sets of image data may be displayed spatially interlaced, so that, when integrating both time slots, four sets of image data are displayed in four different sets of viewing sectors (e.g. four different viewing sectors).
According to the present invention, in at least two sets of image data of the sets of image data, image data stemming from the same view of at least a part of the 3D image is included. This including may for instance be performed during or after a producing of said image data. Equally well, this including may for instance be performed during or after processing of raw image data. This including has the effect that the number of views displayed by the multiview display for the at least a part of the three-dimensional image is smaller than the number of sets of viewing sectors of the multiview display.
This causes the at least a part of the 3D image to appear less blurred when the sets of image data are displayed by the multiview display. Furthermore, the complexity of providing the image data for the multiview display may be significantly reduced.
According to the present invention, it is thus possible to treat a multiview display that is actually capable of displaying N sets of image data in respective N different sets of viewing sectors partly or completely as if it was a multiview display with less than N sets of viewing sectors.
For instance, if a 3D image that is to be displayed by the multiview display with N sets of viewing sectors is composed of a foreground object, which is desired to be sharp (e.g. a text object), and a background, for which a large motion parallax and/or viewing freedom is desired, the sets of image data may be provided in a way that in the N sets of image data that are to be displayed by the multiview display, image data stemming from respective N different views of the background of the 3D image is included, i.e. image data stemming from a first view is included in a first set of image data, image data stemming from a second view is included in a second set of image data, and so forth. This ensures a strong motion parallax and/or a large viewing freedom for the background. However, to increase sharpness of the foreground object, instead of including image data stemming from respective N different views of the foreground object of the 3D image in the N sets of image data, image data stemming from the same view of the foreground object is included into at least two of the sets of image data. For instance, only image data stemming from a left-eye view of the foreground object may
be included into the first half of the sets of image data, and image data stemming from a right-eye view of the foreground object may be included into the second half of the sets of image data. This may slightly reduce the motion parallax (depending on the content) and/or the viewing freedom (depending on the 3D structure of the display), but increases the sharpness of the foreground object. Furthermore, all sets of viewing sectors of the multiview display are still used.
Instead of including image data stemming from the same view of only a part of the 3D image into at least two sets of image data, equally well image data stemming from the same view of the entire 3D image may be included into at least two sets of image data. Then, in at least two sets of viewing sectors of the multiview display, the same view of the 3D image is displayed.
The present invention may for instance be applied to create 3D content for a User Interface (UI), for 3D games, for movie subtitles, but may equally well be applied in the context of natural images (for instance with post processing) .
According to an exemplary embodiment of the present invention, the sets of image data form groups, and in all sets of image data forming a group, the image data stemming from the same view of the at least a part of the three-dimensional image is included. The image data stemming from the same view of the at least a part of the 3D image may for instance be image data stemming from a left-eye or right-eye view of the entire 3D image or a part of the 3D image.
According to an exemplary embodiment of the present invention, the sets of image data forming a group are displayed in adjacent viewing sectors, when the sets of image data are displayed by the multiview display. For instance, in case of two groups, the sets of image data forming a first group may for instance be displayed by the first viewing sectors in each set of viewing sectors of the multiview display, and the sets of image data forming a second group may for instance be displayed by the last viewing sectors in each set of viewing sectors of the multiview display.
According to an exemplary embodiment of the present invention the sets of image data only form two groups. The two groups may for instance be related to a left-eye view and a right-eye view of the 3D image or of a part of the 3D image, respectively.
According to an exemplary embodiment of the present invention the image data stemming from the same view of the at least a part of the three-dimensional image is included in at least three sets of image data of the sets of image data.
According to an exemplary embodiment of the present invention, the sets of image data form a first group and a second group, wherein, when the sets of image data are displayed by the multiview display, the sets of image data forming the first group are displayed in adjacent viewing sectors, and the sets of image data forming the second group are displayed in adjacent viewing sectors, wherein in all sets of image data forming the first group, image data stemming from a left-eye view of the at least a part of the three-dimensional image is included, and wherein in all sets of image data forming the second group, image data stemming
from a right-eye view of the at least a part of the three-dimensional image is included.
According to an exemplary embodiment of the present invention, the image data included in the at least two sets of image data and stemming from the same view of the at least a part of the three-dimensional image is the same for each of the at least two sets of image data. This may contribute to a reduced computational complexity.
According to an exemplary embodiment of the present invention, the image data included in the at least two sets of image data and stemming from the same view of the at least a part of the three-dimensional image is different for each of the at least two sets of image data. This may contribute to an improved perceived resolution of the 3D image or of parts thereof. This may for instance be of advantage when the sets of image data are displayed in the different sets of viewing sectors at least partially by spatial interlacing, because each spatially interlaced set of image data may then only use a portion of the resolution of the base panel.
According to an exemplary embodiment of the present invention, the image data included in the at least two sets of image data and stemming from the same view of the at least a part of the three-dimensional image differs for each of the at least two sets of image data by the spatial sampling grid applied when sampling the same view of the at least a part of the three-dimensional image. For instance, the view of the at least a part of the 3D image may be sampled by a rectangular sampling grid with M pixels, and for each of the at least two sets of image data stemming from this view of the 3D image,
only a subset of pixels from the M pixels (corresponding to a thinned-out sampling grid) is used as the image data to be included. For instance, in case of three sets of image data, for each of the three sets of image data, M/3 pixels may be used as image data to be included. This may for instance be achieved by using, as image data for a first of the three sets of image data that are to be displayed in spatially interlaced manner by the same base panel, a sampling grid which samples the first, the fourth, seventh and so forth pixels, by using, for a second of the three sets of image data, a sampling grid which samples the second, the fifth, eighth and so forth pixels, and by using for a third of the three sets of image data, a sampling grid which samples the third, the sixth, ninth and so forth pixels.
Since viewing sectors (of different sets of viewing sectors) in which the sets of image data are displayed may overlap, a viewer may see with one eye more than only one set of image data at a time, so that the image data of the at least two sets of image data may at least partially be merged in the viewer's eye. Compared to the case when the same image data is included into each of the at least two sets of image data, thus an increased resolution may be achieved by including different image data into each of the at least two sets of image data.
According to an exemplary embodiment of the present invention, it is determined if at least a part of the three-dimensional image requires increased sharpness, wherein the including of the image data is only performed if it has been determined that at least a part of the three-dimensional image requires increased sharpness. The
determining if at least a part of the 3D image requires increased sharpness may for instance be based on information on the composition of the 3D image, for instance if there is text comprised in the 3D image. Furthermore, the determining may be based on an analysis of the 3D image.
According to an exemplary embodiment of the present invention, the three-dimensional image comprises at least one of a natural image and an artificial element. The natural image may for instance be used as a background. The artificial element may for instance be text or icons, to name but a few possibilities .
These and other aspects of the invention will be apparent from and elucidated with reference to the detailed description presented hereinafter. The features of the present invention and of its exemplary embodiments as presented above are understood to be disclosed also in all possible combinations with each other.
BRIEF DESCRIPTION OF THE FIGURES
The figures show:
Fig. 1a: a schematic block diagram of a device according to an exemplary embodiment of the present invention;
Fig. 1b: a schematic block diagram of a system according to an exemplary embodiment of the present invention;
Fig. 2: a schematic illustration of a generation of viewing sectors in a multiview display;
Fig. 3: a diagram illustrating the viewing sectors generated by a multiview display;
Fig. 4: a flowchart of an exemplary method according to the present invention; and
Fig. 5: an exemplary illustration of five views of a 3D image.
DETAILED DESCRIPTION OF THE INVENTION
In the following detailed description of the present invention, the present invention will be described by means of exemplary embodiments.
Fig. 1a is a schematic block diagram of a device 1 according to an exemplary embodiment of the present invention. Device 1 comprises a processor 10, a processor memory 11 and a multiview display 12.
Multiview display 12 is an autostereoscopic direct-view display that is configured to display image data relating to a 3D image in 3D. To this end, the display panel of the multiview display 12 includes a spatially interlaced structure that blocks or bends the light from only certain pixels to each eye, as will be explained in more detail with reference to Fig. 2 below.
Processor 10 is configured to provide image data for displaying by multiview display 12. The providing of this image data may for instance comprise producing the image data and/or processing of raw image data to obtain the image data for the multiview display. To provide the image data,
processor 10 may perform the steps of flowchart 400 of Fig. 4, which will be discussed below. These steps of flowchart 400 may for instance be implemented in software code, which is stored in processor memory 11 and can be accessed by processor 10. Therein, processor memory 11 may be fixedly installed in device 1, or may be a removable memory, such as a memory stick or card. The image data provided by processor 10 may for instance also be at least partially produced by processor 10, or may be at least partially received by processor 10 and then be processed accordingly.
Device 1 may for instance be an electronic device such as for instance a computer, a mobile phone or a personal digital assistant. On this electronic device, 3D content may be provided and displayed, for instance in the context of a 3D User Interface (UI).
Fig. 1b is a schematic block diagram of a system 2 according to an exemplary embodiment of the present invention. System 2 comprises a first device 3 and a second device 4.
Device 3 comprises a processor 30, a processor memory 31 and an interface 32. Processor 30 is configured to provide image data for displaying on a multiview display. The providing of this image data may for instance comprise producing the image data and/or processing of raw image data to obtain the image data for the multiview display. To provide the image data, the steps of flowchart 400 of Fig. 4 may be executed by processor 30, which steps may for instance be implemented in software code that is stored in processor memory 31 and can be accessed by processor 30. The image data provided by processor 30 is transmitted via an interface 32 to device 4.
Therein, the image data provided by processor 30 may for instance also be at least partially produced by processor 30, or may at least partially be received by processor 30 and then be processed accordingly.
Device 4 comprises an interface 42 for receiving image data from device 3, a processor 40 for controlling an overall operation of device 4, a processor memory for storing software code that is executed by processor 40, and a multiview display 43. Image data received from device 3 may then, under the control of processor 40, be forwarded to multiview display 43 for displaying.
System 2 may for instance represent a (mobile) communication system, where device 3 represents a server of the communication system, and wherein device 4 represents a client of the communication system. The client then may display image data provided by the server, for instance image data related to a 3D UI, or to 3D games, movie subtitles, or natural images (where applicable, after post-processing).
The image data provided by processor 10 of Fig. 1a and/or the processor 30 of Fig. 1b may be created or produced in many ways for a display with N views.
For instance, N cameras with fixed camera separations (1 camera per 1 view) could be used to take images of a scene or object. Equally well, one camera that is moved between shots for each view could be used, or two cameras with a large camera separation could be used, wherein views between the two views taken by the two cameras could for instance be interpolated and/or extrapolated or generated by similar
techniques. As a further alternative, one camera could be used together with a measurement unit (like a depth sensor) or algorithm for detecting the depths (the z-direction) in the 3D image, resulting in a 2D image plus a depth map. The aforementioned production techniques may for instance be applied for natural images and video capturing, to name but a few examples.
Such image data may for instance at least partially be obtained by processor 10 or processor 30 as raw image data and may be further processed according to the present invention.
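Purely as an illustration of how a set of views might be derived from a 2D image plus a depth map as mentioned above, the following sketch shifts pixels horizontally by a disparity that grows with depth; it is a simplification (no occlusion handling or hole filling), and the names synthesize_view and max_disparity are assumptions made for this example rather than part of the described production techniques:

```python
import numpy as np

def synthesize_view(image, depth, view_index, num_views, max_disparity=4):
    """Crude view synthesis from a 2D image plus a depth map normalized
    to [0, 1]: pixels are shifted horizontally in proportion to their
    depth and to the offset of the view from the central viewpoint."""
    height, width = image.shape
    out = np.zeros_like(image)
    offset = view_index - (num_views - 1) / 2.0   # signed camera offset
    for y in range(height):
        for x in range(width):
            shift = int(round(offset * max_disparity * depth[y, x]))
            if 0 <= x + shift < width:
                out[y, x + shift] = image[y, x]
    return out

image = np.random.rand(4, 8)
depth = np.random.rand(4, 8)
views = [synthesize_view(image, depth, i, num_views=5) for i in range(5)]
```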
Image data could also be created by a 3D engine that includes many virtual cameras looking at a scene or object from different viewpoints (for instance with fixed or adaptive camera separation). Alternatively, the 3D engine could create a 2D image plus a depth map. These production techniques may for instance be applied for gaming and other 3D computer graphics, to name but a few examples.
Image data could also be created as layered data, which is a combination of the previous cases and flat objects located in the image with certain disparity (separation) between different views. Layered data may for instance be used for UI applications.
Image data created by a 3D engine or layered data may for instance be at least partially obtained by processor 10 or processor 30 and then be further processed, or may at least partially be created by processor 10 or processor 30 itself.
Fig. 2 is a schematic illustration of a generation of viewing sectors 7 and 8 in a multiview display 5, such as for instance the multiview displays 12 and 43 of Figs. 1a and 1b, respectively.
Multiview display 5 of Fig. 2 comprises a backlight panel 50, a Liquid Crystal Display (LCD) pixel matrix 51, of which, due to the sectional presentation, only one row of pixels is visible, and a lenticular lens 52.
LCD pixel matrix 51 renders image data that is fed to device 5. Lenticular lens 52 bends the light emitted by the pixels of the LCD pixel matrix 51 and thus makes it possible to display specific sets of image data mainly in specific sets of viewing sectors. For instance, consider the pixels 510 and 511 in LCD pixel matrix 51. When rendering image data by LCD pixel matrix 51, pixels 510 and 511 can be understood to render a set of image data comprised in the overall image data rendered by LCD pixel matrix 51. Due to the presence of the lenticular lens 52, this set of image data is displayed in a specific set of one or more viewing sectors. In Fig. 2, only one viewing sector is represented by arrow 7. Depending on the structure of display 5, it is also possible that the set of image data rendered by pixels 510 and 511 is not only directed to one viewing sector 7, but to a set of viewing sectors, wherein the center angles of these viewing sectors may for instance be equidistant. An example of such a set of viewing sectors will be discussed with reference to Fig. 3 below.
Similarly, pixels 512 and 513 of the LCD pixel matrix can be understood to render a different set of image data, and this different set of image data, due to the spatial displacement
of the pixels 512 and 513 with respect to the pixels 510 and 511, is displayed in a different set of one or more viewing sectors, one of which viewing sectors is illustrated in Fig. 2 and bears reference numeral 8.
In Fig. 2, furthermore a viewer 6 with left eye 61 and right eye 62 is shown. Since the left eye 61 of viewer 6 is positioned in viewing sector 7, left eye 61 perceives the set of image data that is rendered by pixels 510 and 511 of LCD pixel matrix 51, whereas right eye 62 perceives the set of image data that is rendered by pixels 512 and 513. Now, if the two sets of image data displayed towards the left eye 61 and right eye 62, respectively, stem from different views of a 3D image (for instance have been produced with different viewing angles with respect to a 3D image target), e.g. a left-eye view and a right-eye view, a stereoscopic effect is created for viewer 6.
Fig. 2 also illustrates the displaying of three further sets of image data in three different sets of viewing sectors. Multiview display 5 of Fig. 2 is thus capable of displaying five sets of image data in five different sets of viewing sectors, so that actually five different views of a 3D image can be displayed.
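A simplified model of such spatial interlacing, in which every N-th pixel column of the base panel is assigned to the same view, could look as follows; the function weave_views and the toy view sizes are illustrative assumptions, not the actual construction of display 5:

```python
import numpy as np

def weave_views(views):
    """Interlace N equally sized views into one base-panel image by
    assigning every N-th pixel column to the same view."""
    num_views = len(views)
    height, width = views[0].shape
    panel = np.zeros((height, width * num_views))
    for v, view in enumerate(views):
        panel[:, v::num_views] = view   # columns v, v+N, v+2N, ... show view v
    return panel

views = [np.full((2, 3), v) for v in range(5)]   # five tiny dummy views
panel = weave_views(views)                       # base panel of width 15
```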
Fig. 5 exemplarily illustrates five different views 103-1, 103-2, 103-3, 103-4 and 103-5 of a 3D image 100 that is composed of a 3D object 101 and a text block 102. Such a 3D image 100 may for instance be used in a UI. Therein, each view can be understood as a 2D representation of the 3D image with respect to a specific viewing angle. The five views 103-1, 103-2, 103-3, 103-4 and 103-5 of Fig. 5 may for instance be
described by five respective sets of image data, and these five sets of image data may then be displayed in the five sets of viewing sectors of multiview display 5 of Fig. 2 to generate a 3D impression.
In Fig. 2, a multiview display 5 that applies spatial interlacing to display the sets of image data in the different sets of viewing sectors was exemplarily considered. Alternatively, a multiview display that at least partially performs temporal interlacing to display the sets of image data in the different sets of viewing sectors could be deployed. For instance, instead of using only a subset of the pixels of the display panel 51 for the different sets of image data, all pixels of the display panel 51 could be used for each set of image data, and a bi-directional backlight structure could be used that allows the light generated by the backlight to be directed into two or more distinct directions. The sets of image data are then displayed by the display panel in fast succession, and the backlight is switched accordingly to direct the currently displayed set of image data towards a different set of viewing sectors. Such a temporally interlaced multiview display allows displaying the sets of image data with full resolution. Of course, temporal interlacing and spatial interlacing of sets of image data may be combined.
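The temporal interlacing described above can be sketched as a simple loop; set_backlight_direction and show_frame are hypothetical stand-ins for the backlight steering and panel update of an actual display, not functions defined by this description:

```python
import time

def run_temporal_interlacing(views, set_backlight_direction, show_frame,
                             num_frames=600, frame_time=1.0 / 120):
    """Cycle through the full-resolution views in fast succession and
    switch the directional backlight so that each frame is steered
    towards its own set of viewing sectors."""
    for n in range(num_frames):
        v = n % len(views)
        set_backlight_direction(v)   # steer the light for view v
        show_frame(views[v])         # panel shows view v at full resolution
        time.sleep(frame_time)
```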
Fig. 3 is a diagram illustrating in more detail the viewing sectors generated by a multiview display, such as for instance the multiview display 5 of Fig. 2. The diagram depicts the luminance distribution of five different sets of image data (e.g. the five sets of image data corresponding to the five views 103-1, 103-2, 103-3, 103-4 and 103-5 of 3D image 100
of Fig. 5) displayed by a multiview display (in candela per square meter, or "nits") as a function of the viewing angle with respect to the display (measured in the horizontal plane, with an angle of 0° defining a position straight in front of the display).
The peaks 91-1 and 91-2 illustrate two viewing sectors of a set of viewing sectors in which a first set of image data is displayed. Similarly, peaks 92-1 and 92-2 illustrate two viewing sectors of a set of viewing sectors in which a second set of image data is displayed, peak 93-1 illustrates one viewing sector of a set of viewing sectors in which a third set of image data is displayed, peaks 94-1 and 94-2 illustrate two viewing sectors of a set of viewing sectors in which a fourth set of image data is displayed, and peaks 95-1 and 95-2 illustrate two viewing sectors of a set of viewing sectors in which a fifth set of image data is displayed. Therein, a viewing sector may for instance be defined based on the intersections between the curves of different sets of image data. This yields a sector width of approximately 3°. Equally well, other definitions of viewing sectors are possible. In Fig. 3, the viewing sectors at least partially overlap, which may be advantageous with respect to a smooth transition between the sets of image data displayed in the different viewing sectors.
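Merely to illustrate this intersection-based definition, a sector boundary between two adjacent sets of image data may be estimated from sampled luminance curves as in the following sketch; the Gaussian-shaped dummy curves and the function name sector_boundaries are assumptions for the sake of the example:

```python
import numpy as np

def sector_boundaries(angles, luminance_a, luminance_b):
    """Return the angles at which two luminance-versus-angle curves cross;
    such crossings are one possible way to delimit viewing sectors."""
    diff = np.asarray(luminance_a) - np.asarray(luminance_b)
    crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
    return [float(angles[i]) for i in crossings]

angles = np.linspace(-30.0, 30.0, 601)
lum_a = np.exp(-((angles + 1.5) ** 2) / 2.0)    # dummy curve of one set
lum_b = np.exp(-((angles - 1.5) ** 2) / 2.0)    # dummy curve of the adjacent set
print(sector_boundaries(angles, lum_a, lum_b))  # crossing close to 0 degrees
```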
In Fig. 3, when the five sets of image data displayed by the five sets of viewing sectors stem from five different views of a 3D image, for instance the outer left view 103-1 (see Fig. 5), the inner left view 103-2, the central view 103-3, the inner right view 103-4 and the outer right view 103-5, it is readily clear that a viewer gets a 3D impression of the
3D image when looking at the multiview display. Furthermore, by changing his angular position with respect to the display by a few degrees, he gains the possibility to peek behind objects in said 3D image. As can be further seen from Fig. 3, the multiview display allows the 3D image to be displayed not only in an angular range of approximately 15°, corresponding to the peaks 91-1, 92-1, 93-1, 94-1 and 95-1, but also in a much broader angular range, due to the presence of the further peaks 91-2, 92-2, 94-2 and 95-2, i.e. due to the fact that sets of image data are not displayed in single viewing sectors only, but in sets of viewing sectors.
Fig. 4 is a flowchart 400 of an exemplary method according to the present invention. The steps of this method may for instance be performed by processor 10 of device 1 (see Fig. 1a) or by processor 30 of device 3 of system 2 (see Fig. 1b). This method is directed to increasing a sharpness and/or a resolution of at least a part of a 3D image.
In a first step 401, a counter variable l is initialized to 1. Counter variable l identifies the set of image data that is currently processed.
In a step 402, image data of the l-th view of a 3D image is included in the l-th set of image data. For example, if sets of image data shall be provided for 3D image 100 of Fig. 5, for instance in case of l=1, image data of the first view 103-1 of the 3D image 100 is included in the first set of image data. This image data may be understood as a 2D image of 3D image 100 with respect to a specific viewing angle, which is illustrated by the arrow 103-1 in Fig. 5.
In a step 403, it is checked if the counter variable l has reached the number of sets of image data L, which is presently assumed to be 5. If the counter variable l has not reached L yet, the counter variable is incremented by 1 in a step 404, and the flowchart jumps back to step 402. Otherwise, the flowchart proceeds to step 405.
The loop defined by steps 401-404 causes image data of the five views 103-1, 103-2, 103-3, 103-4 and 103-5 of 3D image 100 of Fig. 5 to be included in five respective sets of image data. If these sets of image data were provided to a multiview display with five sets of viewing sectors (as described with reference to Figs. 2 and 3 above), a strong motion parallax and/or a large viewing freedom would result for 3D image 100 of Fig. 5. However, the text block 102 would be likely to be blurred to such a degree that it may no longer be properly readable.
To combat this, according to the present invention, in a step 405, it is determined if an increased sharpness is required for a part of the 3D image 100. As discussed above, this may for instance be the case for text block 102 of 3D image 100, which text block 102 is understood as a part of the 3D image 100.
If it is determined in step 405 that an increased sharpness is required for text block 102, the flowchart proceeds to step 406 and includes image data of a left-eye view of the text block 102 in a first half of sets of image data, and includes image data of a right-eye view of text block 102 in a second half of sets of image data.
For instance, the left-eye view of text block 102 may be view 103-2 of Fig. 5, and the right-eye view of text block 102 may be view 103-4 of Fig. 5. Since the views of Fig. 5 can be understood as 2D representations of the 3D image with respect to different viewing angles, the image data of left-eye view 103-2 of text block 102 then may be considered to pertain to a 2D image of text block 102 with respect to a viewing angle that is indicated by arrow 103-2 in Fig. 5. Similarly, the image data of right-eye view 103-4 of text block 102 may be considered to pertain to a 2D image of text block 102 with respect to a viewing angle that is indicated by arrow 103-4 in Fig. 5.
Including image data of the left-eye view 103-2 of text block 102 into the first half of the sets of image data may for instance be understood to mean that the image data of the left-eye view 103-2 of text block 102 is copied into the three sets of image data that respectively correspond to views 103-1, 103-2 and 103-3 of 3D image 100. Correspondingly, the image data of the right-eye view 103-4 of text block 102 may then be understood to be copied into the two sets of image data that respectively correspond to views 103-4 and 103-5 of 3D image 100. Therein, image data related to text block 102 that has been included into these sets of image data in step 402 may be overwritten. Including image data in the sets of image data may furthermore comprise image processing, for instance to smooth the transitions between image data of the 3D image (as included in step 402) and the image data related to the text block 102 (included in step 406).
This including of image data of the same view of a part of a 3D image into several sets of image data has the effect that
the motion parallax and/or the viewing freedom for this part of the 3D image (e.g. the text block 102) is traded against an increased sharpness. It is readily clear that, instead of only using two views of the part of the 3D image, equally well only one view could be used (i.e. image data of a single view, such as for instance the center view 103-3 of text block 102, may be included in all sets of image data), and equally well more than two views could be used (e.g. only views 103-2, 103-3 and 103-4 of text block 102 could be included in the five sets of image data).
Finally, in a step 407, all five sets of image data are provided to a multiview display for displaying, as already described in the context of Figs. 2 and 3 above, i.e. by spatially and/or temporally interlacing the sets of image data to achieve their displaying in the different sets of viewing sectors of the multiview display.
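The steps of flowchart 400 may be summarized by the following sketch for L=5 views; the helper provide_sets_of_image_data, the rectangular text_region and the choice of views 103-2 and 103-4 as indices 1 and 3 are assumptions made only for this illustration and do not reflect a particular implementation:

```python
import numpy as np

def provide_sets_of_image_data(views, text_region, left_view=1, right_view=3,
                               needs_sharpness=True):
    """Sketch of flowchart 400: build one set of image data per view
    (steps 401-404) and, if required, overwrite the text-block region so
    that the first half of the sets shows its left-eye view and the
    second half its right-eye view (steps 405-406)."""
    sets = [view.copy() for view in views]          # steps 401-404
    if needs_sharpness:                             # step 405
        top, bottom, left, right = text_region
        half = (len(sets) + 1) // 2
        for i, s in enumerate(sets):                # step 406
            source = views[left_view] if i < half else views[right_view]
            s[top:bottom, left:right] = source[top:bottom, left:right]
    return sets                                     # step 407: hand to display

views = [np.random.rand(6, 8) for _ in range(5)]
sets = provide_sets_of_image_data(views, text_region=(1, 3, 2, 6))
```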
The steps 401-406 of the flowchart 400 of Fig. 4 may for instance be performed during the production of the image data of the different views of the 3D image. Equally well, the steps 401-406 may be performed as a pre-processing on already existing sets of image data.
It should be noted that, instead of increasing the sharpness of parts of a 3D image only, equally well the sharpness of the entire 3D image may be increased. The steps 401 to 404 then may become obsolete, and the steps 405 to 407 could be performed for the entire 3D image instead of a part of the 3D image only. This may for instance result in five sets of image data, wherein the first three sets of image data only contain image data of a left-eye view 103-2 of 3D image 100
of Fig. 5, and the last two sets of image data only contain image data of a right-eye view 103-4 of 3D image 100 of Fig. 5.
Furthermore, it should be noted that the sequence of the steps of flowchart 400 is not binding. For instance, equally well, steps 405-407 may be performed before steps 401-404, wherein in step 402, then only image data of the l-th view of the rest of the 3D image would be included in the l-th set of image data, so that the image data included in step 406 would not be overwritten.
As described with reference to flowchart 400 of Fig. 4 above, a multiview 3D display thus can be treated partly or completely as if it were a 2-view 3D display. This may especially be true if the viewing freedom of the multiview 3D display is close to that of a common 2-view 3D display. The multiview display may for instance work as a 2-view 3D display if the first half of the views is treated as the left-eye view and the second half of the views as the right-eye view. The display hardware of the multiview display then may not need any changes; only the content creation has to be adapted. Those parts of the 3D image that need to have more sharpness may then be sampled with only two views and the rest treated as a multiview image. E.g. a background image in a 3D display may be a multiview image, and a text in front of it (popping out of the screen) may only have two views.
Including image data of the same view of a 3D image (or a part thereof) into several sets of image data allows not only trading the motion parallax and/or the viewing freedom against sharpness, but also allows increasing the perceived
resolution of the 3D image (or the part thereof). This can be achieved by obtaining different image data from the same view of the 3D image (or the part thereof), and including this different image data into several sets of image data.
For instance, to stay in the above example, where it was assumed that image data of a left-eye view 103-2 of text block 102 of 3D image 100 of Fig. 5 is included in the first three sets of image data, and that image data of a right-eye view 103-4 of text block 102 of 3D image 100 is included in the last two sets of image data, instead of including the same image data in the first three sets of image data on the one hand and including the same image data in the last two sets of image data on the other hand, the following approach may be applied:
The left-eye view 103-2 of text block 102 is sampled with three-fold resolution as compared to the resolution of a single set of image data to obtain a first 2D image, and the right-eye view 103-4 of text block 102 is sampled with two-fold resolution as compared to the resolution of a single set of image data to obtain a second 2D image. For the image data of the left-eye view of text block 102 to be included into the first set of image data, only one third of the pixels of the first 2D image is used. For the image data of the left-eye view of text block 102 to be included into the second set of image data, also only a third of the pixels of the first 2D image is used, wherein these pixels are different from the pixels used for the first set of image data. For the image data of the left-eye view of the text block 102 to be included into the third set of image data, also only a third of the pixels of the first 2D image is used, these pixels being different from both the pixels used for the first and the second set of image data. A similar processing is performed for the fourth and fifth set of image data, for which different pixels from the second 2D image are used.
The different pixels of the first/second 2D image used for the respective sets of image data may for instance be obtained by sub-sampling the first/second 2D image with a (regular) sampling grid that samples only every third value, wherein the sampling grids for different sets of image data are shifted by one or two pixels with respect to each other.
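The shifted sampling grids may be pictured with the following fragment, in which a row of the left-eye view sampled at three-fold resolution and a row of the right-eye view sampled at two-fold resolution are distributed over the five sets of image data; the function name and the tiny row lengths are illustrative assumptions only:

```python
import numpy as np

def split_over_sets(high_res_row, num_sets):
    """Distribute a row sampled at `num_sets`-fold resolution over
    `num_sets` sets of image data using sampling grids shifted by one
    pixel with respect to each other."""
    return [high_res_row[offset::num_sets] for offset in range(num_sets)]

left_row = np.arange(12)    # left-eye view row at three-fold resolution
right_row = np.arange(8)    # right-eye view row at two-fold resolution

sets_1_to_3 = split_over_sets(left_row, 3)    # for the first three sets
sets_4_to_5 = split_over_sets(right_row, 2)   # for the last two sets
# sets_1_to_3[0] -> [0, 3, 6, 9], sets_1_to_3[1] -> [1, 4, 7, 10], ...
```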
Including different image data stemming from the same view of the 3D image (or a part thereof) increases the perceived resolution of the 3D image (or the part thereof) , because the viewing sectors in which these sets of image data are displayed may at least partially overlap (see Fig. 3) . This causes the image data displayed in adjacent viewing sectors to be merged in a viewer's eye, resulting in increased perceived resolution.
The invention has been described above by means of exemplary embodiments. It should be noted that there are alternative ways and variations which are obvious to a skilled person in the art and can be implemented without deviating from the scope and spirit of the appended claims.
It is readily clear for a skilled person that the logical blocks in the schematic block diagrams as well as the flowchart and algorithm steps presented in the above description may at least partially be implemented in electronic hardware and/or computer software, wherein it
depends on the functionality of the logical block, flowchart step and algorithm step and on design constraints imposed on the respective devices to which degree a logical block, a flowchart step or algorithm step is implemented in hardware or software. The presented logical blocks, flowchart steps and algorithm steps may for instance be implemented in one or more digital signal processors, application specific integrated circuits, field programmable gate arrays or other programmable devices. The computer software may be stored in a variety of storage media of electric, magnetic, electro-magnetic or optic type and may be read and executed by a processor, such as for instance a microprocessor. To this end, the processor and the storage medium may be coupled to interchange information, or the storage medium may be included in the processor.
Claims
1. A method for providing image data related to a three-dimensional image for a multiview display, wherein said image data comprises sets of image data, wherein said multiview display comprises a plurality of sets of one or more viewing sectors, and wherein said multiview display is configured to display each set of image data of said sets of image data in a different set of viewing sectors of said plurality of sets of viewing sectors to create a stereoscopic effect, said method comprising : including, in at least two sets of image data of said sets of image data, image data stemming from the same view of at least a part of said three-dimensional image, so that the number of views displayed for said at least a part of said three-dimensional image is smaller than the number of sets of viewing sectors of said multiview display.
2. The method according to claim 1, wherein said sets of image data form groups, and wherein in all sets of image data forming a group, said image data stemming from said same view of said at least a part of said three-dimensional image is included.
3. The method according to claim 2, wherein said sets of image data forming a group are displayed in adjacent viewing sectors, when said sets of image data are displayed by said multiview display.
4. The method according to any of the claims 2-3, wherein said sets of image data only form two groups.
5. The method according to any of the claims 1-4, wherein said image data stemming from said same view of said at least a part of said three-dimensional image is included in at least three sets of image data of said sets of image data.
6. The method according to claim 1, wherein said sets of image data form a first group and a second group, wherein, when said sets of image data are displayed by said multiview display, said sets of image data forming said first group are displayed in adjacent viewing sectors, and said sets of image data forming said second group are displayed in adjacent viewing sectors, wherein in all sets of image data forming said first group, image data stemming from a left-eye view of said at least a part of said three-dimensional image is included, and wherein in all sets of image data forming said second group, image data stemming from a right-eye view of said at least a part of said three-dimensional image is included.
7. The method according to any of the claims 1-6, wherein said image data included in said at least two sets of image data and stemming from said same view of said at least a part of said three-dimensional image is the same for each of said at least two sets of image data.
8. The method according to any of the claims 1-6, wherein said image data included in said at least two sets of image data and stemming from said same view of said at least a part of said three-dimensional image is different for each of said at least two sets of image data.
9. The method according to any of the claims 1-6, wherein said image data included in said at least two sets of image data and stemming from said same view of said at least a part of said three-dimensional image differs for each of said at least two sets of image data by the spatial sampling grid applied when sampling said same view of said at least a part of said three-dimensional image.
10. The method according to any of the claims 1-9, further comprising: determining if at least a part of said three-dimensional image requires increased sharpness, wherein said including of said image data is only performed if it has been determined that at least a part of said three-dimensional image requires increased sharpness.
11. The method according to any of the claims 1-10, wherein said three-dimensional image comprises at least one of a natural image and an artificial element.
12. A computer program for providing image data related to a three-dimensional image for a multiview display, wherein said image data comprises sets of image data, wherein said multiview display comprises a plurality of sets of one or more viewing sectors, and wherein said multiview display is configured to display each set of image data of said sets of image data in a different set of viewing sectors of said plurality of sets of viewing sectors to create a stereoscopic effect, said computer program comprising: instructions operable to cause a processor to include, in at least two sets of image data of said sets of image data, image data stemming from the same view of at least a part of said three-dimensional image, so that the number of views displayed for said at least a part of said three-dimensional image is smaller than the number of sets of viewing sectors of said multiview display.
13. A computer-readable medium having a computer program according to claim 12 stored thereon.
14. A device for providing image data related to a three-dimensional image for a multiview display, wherein said image data comprises sets of image data, wherein said multiview display comprises a plurality of sets of one or more viewing sectors, and wherein said multiview display is configured to display each set of image data of said sets of image data in a different set of viewing sectors of said plurality of sets of viewing sectors to create a stereoscopic effect, said device comprising : a processing unit configured to include, in at least two sets of image data of said sets of image data, image data stemming from the same view of at least a part of said three-dimensional image, so that the number of views displayed for said at least a part of said three-dimensional image is smaller than the number of sets of viewing sectors of said multiview display.
15. The device according to claim 14, wherein said sets of image data form groups, and wherein in all sets of image data forming a group, said image data stemming from said same view of said at least a part of said three-dimensional image is included.
16. The device according to claim 15, wherein said sets of image data forming a group are displayed in adjacent viewing sectors, when said sets of image data are displayed by said multiview display.
17. The device according to any of the claims 15-16, wherein said sets of image data only form two groups.
18. The device according to any of the claims 14-17, wherein said image data stemming from said same view of said at least a part of said three-dimensional image is included in at least three sets of image data of said sets of image data.
19. The device according to claim 14, wherein said sets of image data form a first group and a second group, wherein, when said sets of image data are displayed by said multiview display, said sets of image data forming said first group are displayed in adjacent viewing sectors, and said sets of image data forming said second group are displayed in adjacent viewing sectors, wherein in all sets of image data forming said first group, image data stemming from a left-eye view of said at least a part of said three-dimensional image is included, and wherein in all sets of image data forming said second group, image data stemming from a right-eye view of said at least a part of said three-dimensional image is included.
20. The device according to any of the claims 14-19, wherein said image data included in said at least two sets of image data and stemming from said same view of said at least a part of said three-dimensional image is the same for each of said at least two sets of image data.
21. The device according to any of the claims 14-19, wherein said image data included in said at least two sets of image data and stemming from said same view of said at least a part of said three-dimensional image is different for each of said at least two sets of image data.
22. The device according to any of the claims 14-19, wherein said image data included in said at least two sets of image data and stemming from said same view of said at least a part of said three-dimensional image differs for each of said at least two sets of image data by the spatial sampling grid applied when sampling said same view of said at least a part of said three-dimensional image.
23. The device according to any of the claims 14-22, wherein said processing unit is further configured to determine if at least a part of said three-dimensional image requires increased sharpness, wherein said image data is only included if it has been determined that at least a part of said three-dimensional image requires increased sharpness.
24. The device according to any of the claims 14-23, wherein said three-dimensional image comprises at least one of a natural image and an artificial element.
25. The device according to any of the claims 14-24, wherein said device further comprises said multiview display.
26. A system, comprising a first device and a second device, wherein said first device is configured to provide image data related to a three-dimensional image for a multiview display, wherein said image data comprises sets of image data, wherein said multiview display comprises a plurality of sets of one or more viewing sectors, and wherein said multiview display is configured to display each set of image data of said sets of image data in a different set of viewing sectors of said plurality of sets of viewing sectors to create a stereoscopic effect, and wherein said first device comprises : a processing unit configured to include, in at least two sets of image data of said sets of image data, image data stemming from the same view of at least a part of said three-dimensional image, so that the number of views displayed for said at least a part of said three-dimensional image is smaller than the number of sets of viewing sectors of said multiview display, and an interface configured to transmit said sets of image data; and wherein said second device comprises: an interface configured to receive said sets of image data; and said multiview display for displaying said sets of image data .
27. The system according to claim 26, wherein said sets of image data form groups, and wherein in all sets of image data forming a group, said image data stemming from said same view of said at least a part of said three-dimensional image is included.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IB2007/051208 WO2008122838A1 (en) | 2007-04-04 | 2007-04-04 | Improved image quality in stereoscopic multiview displays |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/IB2007/051208 WO2008122838A1 (en) | 2007-04-04 | 2007-04-04 | Improved image quality in stereoscopic multiview displays |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008122838A1 true WO2008122838A1 (en) | 2008-10-16 |
Family
ID=38617437
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2007/051208 WO2008122838A1 (en) | 2007-04-04 | 2007-04-04 | Improved image quality in stereoscopic multiview displays |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2008122838A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102769769A (en) * | 2011-05-06 | 2012-11-07 | 株式会社东芝 | Medical image processing apparatus |
US9215436B2 (en) | 2009-06-24 | 2015-12-15 | Dolby Laboratories Licensing Corporation | Insertion of 3D objects in a stereoscopic image at relative depth |
US9215435B2 (en) | 2009-06-24 | 2015-12-15 | Dolby Laboratories Licensing Corp. | Method for embedding subtitles and/or graphic overlays in a 3D or multi-view video data |
US9225975B2 (en) | 2010-06-21 | 2015-12-29 | Microsoft Technology Licensing, Llc | Optimization of a multi-view display |
WO2015074807A3 (en) * | 2013-11-20 | 2016-01-21 | Koninklijke Philips N.V. | Generation of images for an autosteroscopic multi-view display |
US9426441B2 (en) | 2010-03-08 | 2016-08-23 | Dolby Laboratories Licensing Corporation | Methods for carrying and transmitting 3D z-norm attributes in digital TV closed captioning |
US9519994B2 (en) | 2011-04-15 | 2016-12-13 | Dolby Laboratories Licensing Corporation | Systems and methods for rendering 3D image independent of display size and viewing distance |
US10089937B2 (en) | 2010-06-21 | 2018-10-02 | Microsoft Technology Licensing, Llc | Spatial and temporal multiplexing display |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0540137A1 (en) * | 1991-10-28 | 1993-05-05 | Nippon Hoso Kyokai | Three-dimensional image display using electrically generated parallax barrier stripes |
DE102005013822A1 (en) * | 2005-03-24 | 2006-09-28 | X3D Technologies Gmbh | Method for generating image data for the stereoscopic display of an object |
WO2006111919A2 (en) * | 2005-04-22 | 2006-10-26 | Koninklijke Philips Electronics, N.V. | Auto-stereoscopic display with mixed mode for concurrent display of two- and three-dimensional images |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9215436B2 (en) | 2009-06-24 | 2015-12-15 | Dolby Laboratories Licensing Corporation | Insertion of 3D objects in a stereoscopic image at relative depth |
US9215435B2 (en) | 2009-06-24 | 2015-12-15 | Dolby Laboratories Licensing Corp. | Method for embedding subtitles and/or graphic overlays in a 3D or multi-view video data |
US9426441B2 (en) | 2010-03-08 | 2016-08-23 | Dolby Laboratories Licensing Corporation | Methods for carrying and transmitting 3D z-norm attributes in digital TV closed captioning |
US9225975B2 (en) | 2010-06-21 | 2015-12-29 | Microsoft Technology Licensing, Llc | Optimization of a multi-view display |
US10089937B2 (en) | 2010-06-21 | 2018-10-02 | Microsoft Technology Licensing, Llc | Spatial and temporal multiplexing display |
US10356399B2 (en) | 2010-06-21 | 2019-07-16 | Microsoft Technology Licensing, Llc | Optimization of a multi-view display |
US9519994B2 (en) | 2011-04-15 | 2016-12-13 | Dolby Laboratories Licensing Corporation | Systems and methods for rendering 3D image independent of display size and viewing distance |
US9020219B2 (en) | 2011-05-06 | 2015-04-28 | Kabushiki Kaisha Toshiba | Medical image processing apparatus |
EP2521362A3 (en) * | 2011-05-06 | 2013-09-18 | Kabushiki Kaisha Toshiba | Medical image processing apparatus |
CN102769769A (en) * | 2011-05-06 | 2012-11-07 | 株式会社东芝 | Medical image processing apparatus |
JP2012234447A (en) * | 2011-05-06 | 2012-11-29 | Toshiba Corp | Medical image processor |
WO2015074807A3 (en) * | 2013-11-20 | 2016-01-21 | Koninklijke Philips N.V. | Generation of images for an autosteroscopic multi-view display |
CN105723705A (en) * | 2013-11-20 | 2016-06-29 | 皇家飞利浦有限公司 | Generation of images for an autosteroscopic multi-view display |
CN105723705B (en) * | 2013-11-20 | 2019-07-26 | 皇家飞利浦有限公司 | The generation of image for automatic stereo multi-view display |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2357841B1 (en) | Method and apparatus for processing three-dimensional images | |
EP2332340B1 (en) | A method of processing parallax information comprised in a signal | |
US9083963B2 (en) | Method and device for the creation of pseudo-holographic images | |
EP2347597B1 (en) | Method and system for encoding a 3d image signal, encoded 3d image signal, method and system for decoding a 3d image signal | |
Hill et al. | 3-D liquid crystal displays and their applications | |
US20130033586A1 (en) | System, Method and Apparatus for Generation, Transmission and Display of 3D Content | |
US20100091012A1 (en) | 3 menu display | |
US20110193861A1 (en) | Method and apparatus for processing three-dimensional images | |
WO2008122838A1 (en) | Improved image quality in stereoscopic multiview displays | |
Gotchev et al. | Three-dimensional media for mobile devices | |
CN101636747A (en) | Two dimensional/three dimensional digital information obtains and display device | |
US8368690B1 (en) | Calibrator for autostereoscopic image display | |
JP2010237410A (en) | Image display apparatus and method, and program | |
JPH08205201A (en) | Pseudo stereoscopic vision method | |
WO2019041035A1 (en) | Viewer-adjusted stereoscopic image display | |
Berretty et al. | Real-time rendering for multiview autostereoscopic displays | |
CN102984483B (en) | A kind of three-dimensional user interface display system and method | |
JP4657066B2 (en) | 3D display device | |
TWI572899B (en) | Augmented reality imaging method and system | |
JP2012134885A (en) | Image processing system and image processing method | |
KR100823561B1 (en) | Display device for displaying two-three dimensional image | |
Zinger et al. | iGLANCE project: free-viewpoint 3D video | |
WO2013006731A1 (en) | Three-dimensional image display using a dynamically variable grid | |
CN118476213A (en) | Scaling three-dimensional content displayed on an autostereoscopic display device | |
WO2013061334A1 (en) | 3d stereoscopic imaging device with auto parallax |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07735382; Country of ref document: EP; Kind code of ref document: A1 |
| | DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 07735382; Country of ref document: EP; Kind code of ref document: A1 |