WO2022246844A1 - Screen detection method, device, equipment, computer program and readable medium - Google Patents
- Publication number: WO2022246844A1 (application PCT/CN2021/096964)
- Authority: WIPO (PCT)
- Prior art keywords: image, target, viewpoint, cylindrical lens, content
Classifications
- H04N13/117 — Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation, the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
- H04N13/305 — Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays, using lenticular lenses, e.g. arrangements of cylindrical lenses
- G02B5/18 — Diffraction gratings
- H04N13/167 — Synchronising or controlling image signals
- H04N13/189 — Recording image signals; Reproducing recorded image signals
- H04N13/327 — Calibration of image reproducers
Definitions
- the present disclosure belongs to the technical field of screens, and in particular relates to a screen detection method, device, equipment, computer program and readable medium.
- the ultra-multi-viewpoint display can realize continuous motion parallax and has a more realistic 3D display effect.
- ultra-multi-viewpoint display is mainly realized by displaying the images of multiple viewpoints on the screen according to a specific layout method, with the cylindrical lens array attached to the screen at a specific angle, so that the images of different viewpoints are projected in different directions after passing through the cylindrical lens array; the left and right eyes of the user thus see images from different viewpoints, generating parallax and creating a 3D display effect.
- the disclosure provides a screen detection method, device, equipment, computer program and readable medium.
- Some implementations of the present disclosure provide a screen detection method, the method comprising:
- the cylindrical lens detection instruction at least includes: a target viewpoint;
- the target screen is a screen with a cylindrical lens on the light-emitting side
- if the browsing image contains the target content, the browsing image is used as a viewpoint image;
- the detection parameters of the cylindrical lenses on the target screen are output.
- the acquisition of the browsing image taken of the target screen under the target viewpoint, where the target screen is a screen with a cylindrical lens on the light-emitting side, includes:
- the viewpoint of the image acquisition device is adjusted to the target viewpoint, so as to shoot the light-emitting side of the target screen to obtain a browsing image.
- the adjusting the viewpoint of the image acquisition device to the target viewpoint to shoot the light-emitting side of the target screen to obtain the browsing image includes:
- the adjusting the shooting position of the image acquisition device relative to the target screen to the target position includes:
- the shooting position parameters of the image acquisition device are adjusted so that the shooting position of the image acquisition device is at the target position, and the shooting position parameters include: at least one of shooting angle, shooting height and shooting distance.
- using the browsing image as a viewpoint image includes:
- the browsing image contains the target content
- the browsing image is used as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.
- the image parameters at least include: the placement height of the cylindrical lens;
- the outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image includes:
- the acquisition of the placement height of the cylindrical lens on the target screen based on the position of the viewpoint, the number of viewpoints, the first pixel distance, and the medium refractive index from the cylindrical lens to the pixel surface includes:
- T is the placement height
- N is the number of viewpoints
- n is the medium refractive index from the cylindrical lens to the pixel surface
- P sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens
- x N is the x-axis coordinate value of the Nth viewpoint image
- x 1 is the x-axis coordinate value of the first viewpoint image
- z is the z-axis coordinate value of each viewpoint image
- N is a positive integer with N ≥ 2.
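The placement-height formula itself appears only as an image in the source and is not reproduced in the extracted text. As a sketch consistent with the parameters listed above (an assumption, not the patent's verbatim formula): a sub-pixel offset P sub at optical depth T/n behind the lens projects, by similar triangles, to the viewpoint spacing (x N − x 1)/(N − 1) at viewing distance z.

```python
def placement_height(n, p_sub, x_coords, z):
    """Sketch of the placement height T of the cylindrical lens.

    Assumed similar-triangles relation (not the patent's verbatim formula):

        T = n * (N - 1) * p_sub * z / (x_N - x_1)

    where x_coords holds the x-axis coordinates of the N viewpoint images
    and z is their common z-axis coordinate (viewing distance).
    """
    N = len(x_coords)
    if N < 2:
        raise ValueError("at least two viewpoint images are required (N >= 2)")
    return n * (N - 1) * p_sub * z / (x_coords[-1] - x_coords[0])
```

For instance, three viewpoints 30 mm apart observed at z = 600 mm with p_sub = 0.1 mm and n = 1.5 would give T = 3 mm under this assumption.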
- the target content includes: target horizontal content;
- using the browsing image as a viewpoint image includes:
- the browsing image is used as a viewpoint image.
- the detection parameters at least include: the center distance between two adjacent cylindrical lenses;
- the outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image includes:
- the center distance between the two adjacent cylindrical lenses is obtained.
- the obtaining the center-to-center distance of the two adjacent cylindrical lenses based on the placement height of the cylindrical lenses and the medium refractive index from the cylindrical lenses to the pixel surface includes:
- the center distance between the two adjacent cylindrical lenses is output by the following formula:
- the P lens is the center distance between the two adjacent cylindrical lenses
- the T is the placement height of the cylindrical lenses
- the n is the medium refractive index from the cylindrical lenses to the pixel surface
- the obtaining the center-to-center distance of the two adjacent cylindrical lenses based on the placement height of the cylindrical lenses and the medium refractive index from the cylindrical lenses to the pixel surface includes:
- the center distance between the two adjacent cylindrical lenses is output by the following formula:
- the P lens is the center distance between the two adjacent cylindrical lenses
- the L is the viewing distance of the viewpoint image
- the P pixel is the corresponding pixel point of the viewpoint image on the two adjacent cylindrical lenses
- the T is the placement height of the cylindrical lens
- the n is the medium refractive index from the cylindrical lens to the pixel plane.
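The center-distance formula is likewise an image in the source. One common lenticular relation consistent with the parameters above (a sketch, not the patent's verbatim formula) is that, for the viewpoint images to converge at viewing distance L, the lens pitch equals the covered pixel span P pixel scaled by L / (L + T/n):

```python
def lens_pitch(p_pixel, L, T, n):
    """Sketch of the center distance P_lens between adjacent cylindrical lenses.

    Assumed convergence relation (hypothetical reconstruction):

        P_lens = P_pixel * L / (L + T / n)

    so the lens pitch is slightly smaller than the pixel span it covers,
    which makes all viewpoint images converge at viewing distance L.
    """
    return p_pixel * L / (L + T / n)
```

With P_pixel = 0.3 mm, L = 600 mm, T = 3 mm, n = 1.5, the pitch comes out just under 0.3 mm, as expected for a converging design.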
- the target content includes: multiple target vertical content;
- using the browsing image as a viewpoint image includes:
- the browsing image is used as a viewpoint image.
- the detection parameters at least include: an alignment angle deviation of the cylindrical lens
- the outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image includes:
- the alignment angle deviation of the cylindrical lens is acquired.
- the acquisition of the alignment angle deviation of the cylindrical lens based on the quantity of the target vertical content, the first pixel distance, and the content width includes:
- the alignment angle deviation of the cylindrical lens is output by the following formula:
- the θ is the alignment angle deviation of the cylindrical lens
- the N is the number of the target longitudinal content
- the P sub is the first pixel distance between the corresponding pixel positions of two adjacent viewpoint images on the same cylindrical lens
- the W is the content width of the target vertical content on the viewpoint image.
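The angle-deviation formula is also missing from the extracted text. A hedged reconstruction from the listed parameters: if the lens axis drifts across N target vertical contents of first pixel distance P sub over a content width W, the deviation angle is arctan(N · P sub / W):

```python
import math

def alignment_angle_deviation(num_contents, p_sub, width):
    """Sketch of the alignment angle deviation theta (radians).

    Assumed relation (hypothetical reconstruction, not verbatim):

        theta = arctan(N * P_sub / W)

    where N is the number of target vertical contents, P_sub the first
    pixel distance, and W the content width on the viewpoint image.
    """
    return math.atan(num_contents * p_sub / width)
```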
- using the browsing image as a viewpoint image includes:
- if the browsing image is obtained by shooting the target screen in a front view, and the central content at the central position in the browsing image is not the target content, the browsing image is used as a viewpoint image.
- the detection parameters at least include: alignment position deviation of the cylindrical lens;
- the outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image includes:
- the alignment position deviation of the cylindrical lens is acquired.
- the obtaining the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image includes:
- the alignment position deviation of the cylindrical lens is output by the following formula:
- ΔP is the alignment position deviation of the cylindrical lens
- M is the difference value between the center content and the target content
- P sub is the first pixel distance between the corresponding pixel positions of two adjacent viewpoint images on the same cylindrical lens.
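On one plausible reading of the parameters above (the formula itself is an image in the source), the first position-deviation expression reduces to the content-index difference M multiplied by the first pixel distance P sub:

```python
def alignment_position_deviation(m, p_sub):
    """Sketch of the alignment position deviation delta_P of the lens.

    Assumed relation (hypothetical reconstruction, not verbatim):

        delta_P = M * P_sub

    where M is the difference value between the observed center content and
    the target content, and P_sub the first pixel distance.
    """
    return m * p_sub
```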
- the obtaining the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image includes:
- the alignment position deviation of the cylindrical lens is output by the following formula:
- ΔP is the alignment position deviation of the cylindrical lens
- n is the medium refractive index from the cylindrical lens to the pixel surface
- θ 1 and θ 2 are the two viewing angles adjacent to 0 degrees in the angular distribution of the brightness of the viewpoint image relative to the target viewpoint, used respectively as the first target viewing angle and the second target viewing angle.
- using the browsing image as a viewpoint image includes:
- the browse image is used as a viewpoint image.
- the detection parameters include at least: a radius of curvature of the cylindrical lens;
- the outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image includes:
- by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the angle of view at which the sharpness of the optical simulation model is maximum is the angle of view of the viewpoint image, the radius of curvature is used as the radius of curvature of the cylindrical lens.
- the sharpness can be obtained through the following steps:
- the outputting detection parameters of cylindrical lenses on the target screen based on the image parameters of the viewpoint image includes:
- by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the similarity between the viewing-angle brightness distribution curves of the optical simulation model and of the cylindrical lens meets the similarity requirement, the radius of curvature is used as the radius of curvature of the cylindrical lens.
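The disclosure requires only that the similarity between the viewing-angle brightness distribution curves of the optical simulation model and the lens "meets the similarity requirement", without fixing a metric. One plausible choice is normalized cross-correlation; the radius of curvature of the simulation model would then be swept until this similarity clears the threshold:

```python
def curve_similarity(simulated, measured):
    """Normalized cross-correlation between two viewing-angle brightness
    distribution curves sampled at the same angles (one plausible metric;
    the disclosure does not specify which similarity measure is used).

    Returns a value in [-1, 1]; 1.0 means the curves have identical shape.
    """
    if len(simulated) != len(measured):
        raise ValueError("curves must be sampled at the same viewing angles")
    ms = sum(simulated) / len(simulated)
    mm = sum(measured) / len(measured)
    num = sum((s - ms) * (m - mm) for s, m in zip(simulated, measured))
    den = (sum((s - ms) ** 2 for s in simulated)
           * sum((m - mm) ** 2 for m in measured)) ** 0.5
    return num / den if den else 0.0
```

A caller would evaluate this for each candidate radius of the simulation model and keep the radius whose simulated curve maximizes (or first satisfies) the similarity requirement.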
- Some embodiments of the present disclosure provide a screen detection device, the device comprising:
- a receiving module configured to receive a lenticular lens detection instruction for the target screen, where the lenticular lens detection instruction at least includes: a target viewpoint;
- the detection module is configured to, in response to the detection instruction, acquire a browsing image taken for the target screen under the target viewpoint, and the target screen is a screen with a cylindrical lens on the light-emitting side;
- if the browsing image contains the target content, the browsing image is used as a viewpoint image;
- the output module is configured to: output the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image.
- the detection module is also configured to:
- the viewpoint of the image acquisition device is adjusted to the target viewpoint, so as to shoot the light-emitting side of the target screen to obtain a browsing image.
- the detection module is also configured to:
- the detection module is also configured to:
- the shooting position parameters of the image acquisition device are adjusted so that the shooting position of the image acquisition device is at the target position, and the shooting position parameters include: at least one of shooting angle, shooting height and shooting distance.
- the detection module is also configured to:
- the browsing image contains the target content
- the browsing image is used as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.
- the image parameters at least include: the placement height of the cylindrical lens;
- the output module is further configured to:
- the output module is also configured as:
- T is the placement height
- N is the number of viewpoints
- n is the medium refractive index from the cylindrical lens to the pixel surface
- P sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens
- x N is the x-axis coordinate value of the Nth viewpoint image
- x 1 is the x-axis coordinate value of the first viewpoint image
- z is the z-axis coordinate value of each viewpoint image
- N is a positive integer with N ≥ 2.
- the target content includes: target horizontal content;
- the detection module is also configured to:
- the browsing image is used as a viewpoint image.
- the detection parameters at least include: the center distance between two adjacent cylindrical lenses;
- the output module is further configured to:
- the center distance between the two adjacent cylindrical lenses is obtained.
- the output module is also configured as:
- the center distance between the two adjacent cylindrical lenses is output by the following formula:
- the P lens is the center distance between the two adjacent cylindrical lenses
- the T is the placement height of the cylindrical lenses
- the n is the medium refractive index from the cylindrical lenses to the pixel surface
- the output module is also configured as:
- the center distance between the two adjacent cylindrical lenses is output by the following formula:
- the P lens is the center distance between the two adjacent cylindrical lenses
- the L is the viewing distance of the viewpoint image
- the P pixel is the corresponding pixel point of the viewpoint image on the two adjacent cylindrical lenses
- the T is the placement height of the cylindrical lens
- the n is the medium refractive index from the cylindrical lens to the pixel plane.
- the target content includes: multiple target vertical content;
- the detection module is also configured to:
- the browsing image is used as a viewpoint image.
- the detection parameters at least include: an alignment angle deviation of the cylindrical lens
- the output module is further configured to:
- the alignment angle deviation of the cylindrical lens is acquired.
- the output module is further configured to:
- the alignment angle deviation of the cylindrical lens is output by the following formula:
- the θ is the alignment angle deviation of the cylindrical lens
- the N is the number of the target longitudinal content
- the P sub is the first pixel distance between the corresponding pixel positions of two adjacent viewpoint images on the same cylindrical lens
- the W is the content width of the target vertical content on the viewpoint image.
- the detection module is also configured to:
- if the browsing image is obtained by shooting the target screen in a front view, and the central content at the central position in the browsing image is not the target content, the browsing image is used as a viewpoint image.
- the detection parameters at least include: alignment position deviation of the cylindrical lens;
- the output module is further configured to:
- the alignment position deviation of the cylindrical lens is acquired.
- the output module is also configured as:
- the alignment position deviation of the cylindrical lens is output by the following formula:
- ΔP is the alignment position deviation of the cylindrical lens
- M is the difference value between the center content and the target content
- P sub is the first pixel distance between the corresponding pixel positions of two adjacent viewpoint images on the same cylindrical lens.
- the output module is also configured as:
- the alignment position deviation of the cylindrical lens is output by the following formula:
- ΔP is the alignment position deviation of the cylindrical lens
- n is the medium refractive index from the cylindrical lens to the pixel surface
- θ 1 and θ 2 are the two viewing angles adjacent to 0 degrees in the angular distribution of the brightness of the viewpoint image relative to the target viewpoint, used respectively as the first target viewing angle and the second target viewing angle.
- the detection module is also configured to:
- the browse image is used as a viewpoint image.
- the detection parameters include at least: a radius of curvature of the cylindrical lens;
- the output module is further configured to:
- by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the angle of view at which the sharpness of the optical simulation model is maximum is the angle of view of the viewpoint image, the radius of curvature is used as the radius of curvature of the cylindrical lens.
- the detection module is also configured to:
- the output module is also configured as:
- by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the similarity between the viewing-angle brightness distribution curves of the optical simulation model and of the cylindrical lens meets the similarity requirement, the radius of curvature is used as the radius of curvature of the cylindrical lens.
- Some embodiments of the present disclosure provide a computing processing device, including:
- one or more processors; when the computer-readable code is executed by the one or more processors, the computing processing device executes the above-mentioned screen detection method.
- Some embodiments of the present disclosure provide a computer program, including computer readable codes, which, when the computer readable codes are run on a computing processing device, cause the computing processing device to execute the screen detection method as described above.
- Some embodiments of the present disclosure provide a computer-readable medium, in which the computer program of the above-mentioned screen detection method is stored.
- Fig. 1 schematically shows a schematic flowchart of a screen detection method provided by some embodiments of the present disclosure.
- Fig. 2 schematically shows a schematic diagram of a screen detection method provided by some embodiments of the present disclosure.
- Fig. 3 schematically shows one of the schematic flowcharts of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 4 schematically shows one of the schematic diagrams of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 5 schematically shows one of the effect diagrams of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 6 schematically shows the second schematic flowchart of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 7 schematically shows the second schematic diagram of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 8 schematically shows the second schematic diagram of the effect of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 9 schematically shows the third schematic flowchart of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 10 schematically shows the third schematic diagram of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 11 schematically shows the third schematic diagram of the effect of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 12 schematically shows the fourth schematic flowchart of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 13 schematically shows the fourth schematic diagram of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 14 schematically shows the fifth schematic flowchart of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 15 schematically shows a fifth schematic diagram of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 16 schematically shows a fourth schematic diagram of effects of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 17 schematically shows the fifth schematic diagram of the effect of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 18 schematically shows the sixth schematic diagram of the effect of another screen detection method provided by some embodiments of the present disclosure.
- Fig. 19 schematically shows a schematic structural view of a screen detection device provided by some embodiments of the present disclosure.
- Fig. 20 schematically shows a block diagram of a computing processing device for performing a method according to some embodiments of the present disclosure.
- Fig. 21 schematically shows a storage unit for holding or carrying program codes implementing methods according to some embodiments of the present disclosure.
- in the related art, the various parameters of the cylindrical lens correspond to the layout method.
- if the actual parameters of the cylindrical lens deviate from the design values due to the process or other reasons, the viewing effect is directly affected, and it is necessary to correct the process conditions or modify the layout method according to the actual parameters in order to correct the display effect.
- the present disclosure proposes to display a specific image on the screen and analyze the displayed image to detect the detection parameters of the cylindrical lens on the screen.
- Fig. 1 schematically shows a schematic flow chart of a screen detection method provided by the present disclosure
- the method may be executed by any electronic device; for example, it may be applied to an application program with a screen detection function and executed by the server or terminal device of that application program. The method includes:
- Step 101: receiving a cylindrical lens detection instruction for the target screen, where the cylindrical lens detection instruction at least includes: the target viewpoint;
- Step 102: in response to the detection instruction, acquiring a browsing image of the target screen under the target viewpoint, where the target screen is a screen with a cylindrical lens on the light-emitting side.
- the target screen is a display device with cylindrical lenses arranged on the light emitting side, and the cylindrical lenses can be arranged in a specific array arrangement. Since the image light from different viewpoints on the target screen will be projected in different directions after encountering the cylindrical lens, it is possible to set the layout of the images displayed on the target screen so that the user's eyes can see different images at different viewpoints. Correspondingly, browsing images captured by the image acquisition device at different shooting viewpoints may also be different.
- the target viewpoint refers to the shooting viewpoint required for shooting the target screen.
- the target viewpoint can be set by the user or automatically set by the system according to the detection requirements; it can be set according to actual needs, and there is no limitation here.
- Step 103: if the browsing image contains the target content, use the browsing image as a viewpoint image.
- the target content refers to the display content that needs to be included in the viewpoint images participating in this detection. It can be understood that, since the contents of the browsing images of the target screen differ under different viewpoints, browsing images with different contents were shot from different viewpoints; therefore, by setting the target content it can be determined whether a browsing image was obtained by shooting the target screen at the viewpoint required for this detection.
- the viewpoint image containing the target content can be selected according to the image content contained in the browsing image obtained by shooting: if the browsing image contains the target content, the browsing image is used as the viewpoint image; if it does not, the browsing image may be filtered out.
- for example, numbers arranged in a full-screen array can be displayed on the target screen, so that the browsing images under different shooting viewpoints show different numbers. If there is no deviation in the detection parameters of the cylindrical lens of the target screen, that is, the detection parameters are the standard parameters, the image content in a browsing image of the target screen is a single number; if there is a deviation in the detection parameters, a browsing image of the target screen will contain different numbers. Whether there is a deviation in the detection parameters of the cylindrical lens on the target screen can therefore be determined according to whether different numbers appear in the browsing images under different viewpoints.
- with reference to Fig. 2, when there is no deviation in the detection parameters, the image content of each of the browsing images 2-1, 2-2, 2-3 under the three shooting viewpoints only includes "1", "2", "3" respectively; when there is a deviation in the detection parameters, only part of the image content of the browsing image 2-3 under the first viewpoint is "1", and other image content of "2" and "3" is present, which clearly differs from the case where the image content under the first viewpoint is only "1". It can then be determined that there is a deviation in the detection parameters of the cylindrical lens corresponding to the browsing image 2-3; the browsing images 2-5 and 2-6 likewise indicate deviations in the detection parameters.
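The digit-based check described above can be sketched as follows, assuming the digits visible in each browsing image have already been recognized (e.g. by OCR or template matching, which the disclosure does not specify):

```python
def has_parameter_deviation(browse_contents):
    """Check whether browsing images hint at a lens-parameter deviation.

    Each element of browse_contents is the set of digits visible in one
    browsing image. Under standard parameters, each viewpoint should show
    exactly one digit; mixed digits in any image indicate a deviation.
    (Illustrative sketch only; inputs come from prior image recognition.)
    """
    return any(len(digits) != 1 for digits in browse_contents)
```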
- Step 104: based on the image parameters of the viewpoint image, output the detection parameters of the cylindrical lenses on the target screen.
- the detection parameter refers to the actual index parameter of the cylindrical lens to be detected.
- due to factors such as the process, there may be deviations between the detection parameters of the cylindrical lens and the expected parameters. These deviations cause the browsing images of different viewpoints actually displayed on the target screen to deviate from the browsing images of those viewpoints under the standard parameters.
- for example, the browsing image of the target screen at a specific viewpoint should contain an image content of 1, but due to the deviation of the detection parameters of the cylindrical lens, the image content actually viewed at that viewpoint may be 2.
- since the image content contained in the browsing images under different shooting viewpoints is affected by the detection parameters of the cylindrical lens, the detection parameters of the cylindrical lens can be determined from these images.
- the embodiment of the present disclosure selects the viewpoint images containing the target content from the browsing images captured of the screen at specific viewpoints, and detects the detection parameters of the cylindrical lens on the screen according to the image parameters of the viewpoint images, so that various detection parameters of the cylindrical lens on the screen can be obtained efficiently, conveniently and accurately, improving the detection efficiency of the detection parameters of the cylindrical lens on the screen.
- the step 102 may include: adjusting the viewpoint of the image acquisition device to the target viewpoint, so as to capture the light-emitting side of the target screen to obtain a browsing image.
- the image acquisition device may be an electronic device with an image acquisition function, and the image acquisition device may also have functions such as data processing, data storage, and data transmission.
- the system may be connected to the image acquisition device through a transmission device, so that the shooting viewpoint of the image acquisition device can be adjusted by controlling the transmission device.
- the image acquisition device can also be manually adjusted to shoot the target screen.
- the specific settings can be set according to actual needs, which is not limited here.
- the target viewpoint is the shooting viewpoint required for shooting the light-emitting side of the target screen.
- the target viewpoint can be a pre-specified fixed viewpoint, a randomly selected shooting viewpoint, or one adaptively adjusted according to different detection parameters, for example shooting at the front viewpoint or at a 30° viewpoint; it can be set according to actual needs and is not limited here.
- the browsing image of the target screen may be acquired by adjusting the shooting viewpoint of the image acquisition device to adjust to the target viewpoint required for this shooting.
- by adjusting the shooting viewpoint of the image acquisition device to the target viewpoint and shooting the light-emitting side of the target screen, the browsing images required for this detection can be obtained quickly.
- the step 101 may include:
- the system can be connected to the image acquisition device through a transmission device, so as to adjust the shooting position of the image acquisition device by controlling the transmission device, so as to realize convenient adjustment of the image acquisition device.
- the step 101 may include: adjusting the shooting position parameters of the image acquisition device so that the shooting position of the image acquisition device is at the target position, where the shooting position parameters include at least one of shooting angle, shooting height and shooting distance.
- the target angle refers to the shooting angle of the browsing image required for this detection.
- the target position is the shooting position, relative to the light-emitting side of the target screen, of the browsing image required for this detection.
- the target height is the height of the image acquisition device relative to the ground.
- by setting a shooting position parameter including at least one of shooting angle, shooting height and shooting distance, the image acquisition device can be adjusted through the transmission device to the target position to shoot the light-emitting side of the target screen, thereby realizing convenient adjustment of the image acquisition device.
- the image parameters include at least: the placement height of the cylindrical lens.
- referring to FIG. 3, it shows one of the schematic flow charts of another screen detection method, the method comprising:
- Step 201: acquiring a browsing image of a target screen shot at the target viewpoint, where the target screen is a screen with a cylindrical lens on the light output side.
- the placement height of the cylindrical lens refers to the actual distance between the upper surface of the cylindrical lens and the pixel surface of the target screen. Since the content of the browsing image displayed by the target screen differs under different shooting viewpoints, the target viewpoints can be set as multiple shooting viewpoints on the same straight line, with that straight line parallel to the pixel plane of the target screen. Multiple browsing images may then be obtained by shooting the light-emitting side of the target screen, and these browsing images may contain the various contents displayed by the target screen under different viewpoints. If there are N image contents displayed on the target screen, multiple shooting viewpoints may be set on a straight line parallel to the pixel plane of the target screen to capture N browsing images respectively containing the N image contents.
- for example, if the image content displayed on the target screen contains the numbers "1", "2", "3" and "4", and the image content under each shooting viewpoint is different, then multiple shooting viewpoints can be set on a straight line parallel to the pixel plane to shoot the light-emitting side of the target screen, so that multiple browsing images respectively containing "1", "2", "3" and "4" can be obtained.
- Step 202: if the browsing image contains the target content, use the browsing image as a viewpoint image, wherein the viewpoints of at least two viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.
- so that the viewpoint images used for parameter detection clearly reflect the image content displayed by the target screen at different shooting viewpoints, and to prevent image content from different shooting viewpoints cross-affecting the subsequent parameter detection, the viewpoint images participating in parameter detection can be filtered from the browsing images according to whether a browsing image contains only one target content. For example, when the target content is the four numbers "1", "2", "3" and "4", four browsing images containing only "1", only "2", only "3" and only "4" respectively can be selected from the browsing images as viewpoint images.
- the setting method of the specific target content can be set according to actual needs, which is not limited here.
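The filtering rule described above, keep only browsing images that contain exactly one target content, can be sketched as follows; the recognized contents here are hypothetical, and the recognition step itself (e.g. reading the displayed digits out of each captured frame) is assumed to happen elsewhere:

```python
# Hypothetical recognized contents of captured browsing images, keyed by
# shooting-viewpoint index. Content recognition is assumed to be done upstream.
browsing_images = {
    0: {"1"},        # contains only "1"  -> valid viewpoint image
    1: {"2"},        # contains only "2"  -> valid viewpoint image
    2: {"2", "3"},   # mixed content      -> rejected
    3: {"4"},        # contains only "4"  -> valid viewpoint image
}
target_contents = {"1", "2", "3", "4"}

def select_viewpoint_images(images, targets):
    """Keep images containing exactly one item, and that item is a target."""
    return {
        vp: next(iter(content))
        for vp, content in images.items()
        if len(content) == 1 and content <= targets
    }

viewpoint_images = select_viewpoint_images(browsing_images, target_contents)
print(viewpoint_images)  # viewpoints 0, 1 and 3 survive the filter
```

The subset check `content <= targets` also discards images whose single recognized item is not one of the target contents.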
- Step 203 based on the viewpoint image, obtain the viewpoint position corresponding to the viewpoint image and the pixel point position on the pixel plane.
- the viewpoint image, the viewpoint position and the pixel point position correspond one to one, and the viewpoint position of a viewpoint image and its corresponding pixel point position on the pixel plane of the target screen can be obtained by observing and analyzing the viewpoint image and the target screen.
- Step 204: acquiring the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.
- the actual distance between the two adjacent pixel points on the pixel plane of the target screen that give rise to the screen light at the viewpoint positions of two adjacent viewpoint images is taken as the first pixel distance. Since the distance between adjacent pixel points on a pixel plane is the same, the first pixel distance between any pair of adjacent pixel points can reflect the pixel distance between the other pairs of adjacent pixel points.
- Step 205: based on the viewpoint positions, the number of viewpoints, the first pixel distance, and the refractive index of the medium from the cylindrical lens to the pixel plane, obtain the placement height of the cylindrical lens on the target screen.
- the placement height of the cylindrical lens is positively correlated with the first pixel distance, positively correlated with the refractive index of the medium from the cylindrical lens to the pixel surface, and positively correlated with the distance from the shooting viewpoint to the pixel plane.
- the placement height of the cylindrical lens on the target screen can therefore be calculated by a formula set based on the viewpoint positions, the number of viewpoints, the first pixel distance and the medium refractive index.
- the step 205 includes:
- T is the placement height
- N is the number of viewpoints
- n is the medium refractive index from the cylindrical lens to the pixel surface
- P sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens
- x N is the x-axis coordinate value of the Nth viewpoint image
- x 1 is the x-axis coordinate value of the first viewpoint image
- z is the z-axis coordinate value of each viewpoint image
- N is a positive integer and N ≥ 2.
- the plane where the pixel plane of the target screen is located can be used as the xy plane; specifically, the straight line where the target viewpoints are located can be used as the x-axis, the line perpendicular to the x-axis in the plane where the pixel plane is located as the y-axis, and the straight line perpendicular to the plane where the pixel plane is located as the z-axis, to establish a spatial coordinate system. The spatial coordinate values of each target viewpoint in this coordinate system are used as the viewpoint positions of each target viewpoint and substituted into the formula for calculation.
- since the medium between the lower surface of the cylindrical lens and the pixel surface also has a certain refraction effect on the screen light, the medium refractive index n from the cylindrical lens to the pixel surface needs to be introduced into the formula to correct the calculation, minimizing the influence of this refraction effect on the calculated placement height of the cylindrical lens and ensuring the accuracy of the detected placement height of the cylindrical lens.
- the plane where the pixel plane is located is the xy plane
- the straight line where the target viewpoints are located is the x-axis
- the line perpendicular to the x-axis in the xy plane is the y-axis
- the line perpendicular to the xy plane is the z-axis, establishing a spatial rectangular coordinate system.
- the spatial coordinate values of the four target viewpoints are (x1, y, z), (x2, y, z), (x3, y, z) and (x4, y, z); photographing the light-emitting side of the target screen at these target viewpoints in turn, as shown in Figure 5, four viewpoint images containing only "1", "2", "3" and "4" respectively can be obtained. It can be understood that if there are N viewpoints, a viewpoint image fully or partially containing the number "N" can be captured at (xN, y, z).
- for example, the spatial coordinate values of the four viewpoint images are (-57, 0, 350), (-19, 0, 350), (19, 0, 350) and (57, 0, 350); if the first pixel distance P sub is 8.725 μm and the medium refractive index n is 1.53, then substituting the spatial coordinate values of the viewpoint images, the first pixel distance and the medium refractive index into the formula, the placement height T of the cylindrical lens can be obtained as 120.5 μm.
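The substitution above can be sketched in code. The filing's formula itself appears only as a figure, so the small-angle reconstruction below, T = (N-1)·n·P sub·z/(x N - x 1), is an assumption that merely follows the stated correlations (T grows with P sub, n and z, and shrinks with the viewpoint span); it lands near, but not exactly on, the 120.5 μm reported above:

```python
# Hedged sketch: small-angle reconstruction of the placement-height relation.
# The patent's own formula is shown only as a figure; this version follows the
# stated correlations (T grows with P_sub, n and z, shrinks with viewpoint span).
def placement_height_um(n_viewpoints, n_medium, p_sub_um, x_first_mm, x_last_mm, z_mm):
    p_sub_mm = p_sub_um * 1e-3
    t_mm = (n_viewpoints - 1) * n_medium * p_sub_mm * z_mm / (x_last_mm - x_first_mm)
    return t_mm * 1e3

# Values from the worked example above.
T = placement_height_um(4, 1.53, 8.725, -57, 57, 350)
print(round(T, 1))  # ~123 um; the filing reports 120.5 um, the gap plausibly
                    # coming from exact-angle terms this sketch omits
```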
- the target content includes: target horizontal content
- the detection parameters include at least: the center distance between two adjacent cylindrical lenses.
- Step 301: acquiring a browsing image of a target screen shot at the target viewpoint, where the target screen is a screen with a cylindrical lens on the light output side.
- the center distance refers to the actual distance between two adjacent cylindrical lenses in the cylindrical lens array of the target screen.
- Step 302 in the case that the horizontal content included in the browsing image is the target horizontal content, use the browsing image as a viewpoint image.
- the horizontal content refers to the horizontally arranged image content in the browsing image
- the target horizontal content refers to the horizontal content that needs to be included in the viewpoint image required to participate in this parameter detection.
- the target horizontal content can be set based on the image content displayed on the target screen. For example, if the image content is the four numbers "1", "2", "3" and "4" arranged in rows, the target horizontal content can be set so that each row of horizontally arranged content contains the same numbers and all four numbers are included in the browsing image. In that case, the viewing distance of a browsing image that contains all four numbers with the same number in each row is the actual viewing distance at which the image content can be viewed clearly, and that browsing image is used as a viewpoint image participating in parameter detection.
- Step 303 based on the placement height of the cylindrical lens and the medium refractive index from the cylindrical lens to the pixel surface, the center distance between the two adjacent cylindrical lenses is obtained.
- the center distance between two adjacent cylindrical lenses is positively correlated with the product of the second pixel distance and the viewing distance, negatively correlated with the sum of the viewing distance and the placement height of the cylindrical lens, and related to the refractive index of the medium from the cylindrical lens to the pixel surface; thus the center distance between two adjacent cylindrical lenses can be calculated by a formula established based on the viewing distance, the second pixel distance, the placement height of the cylindrical lens and the medium refractive index.
- the step 303 includes: outputting the center distance between the two adjacent cylindrical lenses through the following formula:
- the P lens is the center distance between the two adjacent cylindrical lenses
- the L is the viewing distance of the viewpoint image
- the P pixel is the second pixel distance between the pixel positions corresponding to the viewpoint image on the two adjacent cylindrical lenses
- the T is the placement height of the cylindrical lens
- the n is the medium refractive index from the cylindrical lens to the pixel plane.
- the viewpoint position of the viewpoint image can be regarded as the position of the user's eyes, so that the vertical distance between the viewpoint position and the plane where the cylindrical lens is located can be taken as the viewing distance of the viewpoint image.
- ideally, the center distance between all pairs of adjacent cylindrical lenses could be represented by a single second pixel distance; in practice the distances usually differ, so the second pixel distance corresponding to each pair of adjacent cylindrical lenses can be detected independently.
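The relation commonly used in lenticular design, P lens = P pixel·L/(L + T/n), matches the correlations stated above and is used here as an assumed reconstruction (the filing's formula appears only as a figure); the value of P pixel below is hypothetical:

```python
def lens_pitch_um(p_pixel_um, l_mm, t_um, n_medium):
    """Assumed reconstruction: P_lens = P_pixel * L / (L + T/n)."""
    l_um = l_mm * 1e3
    return p_pixel_um * l_um / (l_um + t_um / n_medium)

# Hypothetical second pixel distance; L and T reuse the earlier example values.
p_pixel = 34.9  # um, assumed
p_lens = lens_pitch_um(p_pixel, 350, 120.5, 1.53)
print(round(p_lens, 3))  # slightly smaller than P_pixel, as view convergence requires
```

The pitch comes out marginally below the pixel period, which is the usual design condition for views to converge at the finite viewing distance L.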
- the target content includes: various target vertical contents
- the detection parameters include at least: the alignment angle deviation of the cylindrical lens. Referring to FIG. 9, it shows the third schematic flow chart of another screen detection method provided by the present disclosure, the method comprising:
- the step 303 may include: outputting the center distance between the two adjacent cylindrical lenses through the following formula:
- P lens is the center distance between the two adjacent cylindrical lenses
- P pixel is the second pixel distance between the pixel positions corresponding to the viewpoint image on the two adjacent cylindrical lenses
- T is the placement height of the cylindrical lens
- L is the viewing distance
- n is the medium refractive index from the cylindrical lens to the pixel surface.
- Step 401: acquiring a browsing image of a target screen shot at the target viewpoint, where the target screen is a screen with a cylindrical lens on the light output side.
- the alignment angle deviation of the cylindrical lens refers to the angular deviation of the position of the image content between the actual image content displayed by the cylindrical lens and the design expected image content.
- the frame 10-1 is used to reflect the actual position of the image content
- frame 10-2 is used to reflect the expected design position of the image content
- the angle between the alignment edges between 10-1 and 10-2 is the alignment angle deviation.
- the viewing distance that allows the user to clearly view the specific content in the browsing image can be considered the expected viewing distance that meets the design requirements; however, because the detection parameters of the cylindrical lens may deviate, the actual viewing distance at which a user can clearly view the specific content in the browsing image may also deviate from the expected viewing distance. Therefore, it is necessary to collect images of the target screen to determine the actual viewing distance between the shooting viewpoint and the screen at which the specific content can actually be viewed clearly.
- Step 402 if the vertical content included in the browsing image is at least two target vertical content, use the browsing image as a viewpoint image.
- the vertical content refers to the image content arranged vertically in the browsing image
- the target vertical content refers to the vertical content that needs to be included in the viewpoint image required to participate in this parameter detection.
- the target vertical content can be set based on the image content displayed on the target screen, arranged in columns. For example, if the image content is the four numbers "1", "2", "3" and "4" arranged in columns, the target vertical content can be set so that each column of vertically arranged content contains the same numbers and all four numbers are included in the browsing image, so that the image content can be viewed clearly at the corresponding viewing distance. On the contrary, if the numbers in each column of the browsing image are different, it indicates that the cylindrical lens has an alignment angle deviation. Therefore, a browsing image containing at least two target vertical contents can be used as a viewpoint image participating in parameter detection.
- Step 403 Obtain the quantity of the target vertical content, the viewpoint position corresponding to the viewpoint image, and the pixel point position on the pixel plane based on the viewpoint image.
- the quantity of the target vertical content can be obtained from the image content displayed on the target screen.
- for the viewpoint position and pixel point position corresponding to the viewpoint image, please refer to the detailed description in step 203, which will not be repeated here.
- Step 404: acquiring the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, and the content width of the target vertical content on the viewpoint image.
- the distance of the first pixel can refer to the detailed description of step 204 , which will not be repeated here.
- the content width of the target vertical content refers to the display width of the target vertical content in the viewpoint image.
- Step 405 based on the quantity of the target vertical content, the first pixel distance, and the content width, the alignment angle deviation of the cylindrical lens is acquired.
- the alignment angle deviation of the cylindrical lens is related to the ratio of the number of target vertical contents to the content width and to the first pixel distance, so a formula can be set based on this correlation to obtain the alignment angle deviation of the cylindrical lens.
- the step 405 includes: outputting the alignment angle deviation of the cylindrical lens through the following formula:
- the θ is the alignment angle deviation of the cylindrical lens
- the N is the number of the target longitudinal content
- the P sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens
- the W is the content width of the target vertical content on the viewpoint image.
- by substituting into the above formula, the alignment angle deviation θ of the cylindrical lens can be calculated as 0.067°.
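Consistent with the listed variables, a plausible reconstruction is θ = arctan(N·P sub/W); the filing's own formula appears only as a figure, and the content width W below is a hypothetical value chosen so the sketch lands near the 0.067° quoted above:

```python
import math

def alignment_angle_deg(n_contents, p_sub_um, w_mm):
    """Assumed reconstruction: theta = arctan(N * P_sub / W)."""
    return math.degrees(math.atan(n_contents * p_sub_um * 1e-3 / w_mm))

# N = 4 target vertical contents, P_sub = 8.725 um from the earlier example;
# W = 29.84 mm is a hypothetical content width.
theta = alignment_angle_deg(4, 8.725, 29.84)
print(round(theta, 3))  # ~0.067 degrees
```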
- the detection parameters include at least: the alignment position deviation of the cylindrical lens. Referring to FIG. 12, it shows the fourth schematic flow chart of another screen detection method, the method comprising:
- Step 501: acquiring a browsing image of a target screen shot at the target viewpoint, where the target screen is a screen with a cylindrical lens on the light output side.
- the alignment position deviation refers to the horizontal distance between the position of the image content actually displayed by the cylindrical lens and the position of the design-expected image content.
- the frame 13-1 is used to reflect the actual position of the image content
- the frame 13-2 is used to reflect the expected design position of the image content
- the horizontal distance between the alignment points of 13-1 and 13-2 is the alignment position deviation.
- Step 502: in the case that the browsing image is obtained by shooting the target screen from a frontal viewing angle and the central content at the central position in the browsing image is not the target content, use the browsing image as a viewpoint image.
- if the central content at the center position in the browsing image at the frontal viewing angle is the same as the design-expected image content, it can be determined that there is no deviation in the alignment position of the cylindrical lens of the target screen. If the image content in the browsing image differs from the design-expected image content, it can be determined that the alignment position of the cylindrical lens of the target screen deviates and parameter detection is required, and the browsing image is used as a viewpoint image participating in parameter detection.
- Step 503 based on the image parameters of the viewpoint image, the alignment position deviation of the cylindrical lens is obtained.
- the alignment position deviation of the cylindrical lens is positively correlated with the difference between the center content and the target content and related to the first pixel distance; therefore, a formula can be set according to this correlation to calculate the alignment position deviation of the cylindrical lens.
- the step 503 may include: outputting the alignment position deviation of the cylindrical lens through the following formula:
- ΔP is the alignment position deviation of the cylindrical lens
- M is the difference value between the center content and the target content
- P sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.
- the difference value between the central content and the target content refers to an index value that characterizes the degree of difference between the central content and the target content; it may be the difference between the content types, or the area difference between the central content and the differing content contained in the target content, which can be set according to actual needs and is not limited here.
- for the method of obtaining the first pixel distance, refer to the detailed description of step 204, which will not be repeated here.
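Given the listed variables, a minimal sketch assuming ΔP = M·P sub (the filing's formula appears only as a figure, so this linear form is an assumption, and the difference value M below is hypothetical):

```python
def alignment_position_deviation_um(m_diff, p_sub_um):
    """Assumed reconstruction: delta_P = M * P_sub."""
    return m_diff * p_sub_um

# Hypothetical: the center content is shifted by 2 content positions,
# with P_sub = 8.725 um from the earlier example.
dp = alignment_position_deviation_um(2, 8.725)
print(dp)  # 17.45 um
```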
- the step 503 may include: outputting the alignment position deviation of the cylindrical lens through the following formula:
- ΔP is the alignment position deviation of the cylindrical lens
- n is the medium refractive index from the cylindrical lens to the pixel surface
- θ1 and θ2 are, in the angular distribution of the brightness of the viewpoint image relative to the target viewpoints, the two viewing angles adjacent to 0 degrees, taken respectively as the first target viewing angle and the second target viewing angle.
- the detection parameters include at least: the radius of curvature of the cylindrical lens. Referring to FIG. 14, it shows the fifth schematic flow chart of another screen detection method provided by the present disclosure, the method comprising:
- Step 601: acquiring a browsing image of a target screen shot at the target viewpoint, where the target screen is a screen with a cylindrical lens on the light output side.
- the radius of curvature of the cylindrical lens refers to the rotation rate of the tangent direction angle of the central point of the upper surface of the cylindrical lens to the arc length of the upper surface.
- part of the image content displayed on the target screen can be turned off so that only part of the image content is displayed, making the display area of the turned-off image content black; the light-emitting side of the target screen can then be photographed at different viewpoints to obtain browsing images that reflect the sharpness of the screen.
- 15-1 is the browsing image with part of the image content turned off when the target screen has an alignment angle deviation, in which the black stripes are the display area of the turned-off image content; 15-2 is the browsing image with part of the image content turned off when the target screen has no alignment angle deviation, in which the black stripes are likewise the display area of the turned-off image content.
- Step 602 if the sharpness of the specified content in the browse image is the largest, use the browse image as a viewpoint image.
- the sharpness of the browsing image is an index parameter used to characterize the display brightness and contrast of the image, and may be obtained based on image parameters such as display brightness or contrast. Since the sharpness of the specified content differs between browsing images under different shooting viewpoints, the browsing image with the highest sharpness can be selected as the viewpoint image participating in parameter detection by comparing the multiple collected browsing images. For example, when the specified content is the image content whose display is turned off, the browsing images can be filtered according to the sharpness of the black stripes in them; of course, they can also be screened by comparing the sharpness of the image content that is not turned off. Relatively speaking, the sharpness of the black stripes is more obvious; the choice can be set according to actual needs and is not limited here.
- Step 603 acquiring the viewing angle of the viewpoint image.
- the viewing angle of the viewpoint image may be calculated from the shooting angle and shooting position recorded when the viewpoint image was captured.
- Step 604: by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the viewing angle at which the sharpness of the optical simulation model is maximum equals the viewing angle of the viewpoint image, use that radius of curvature as the radius of curvature of the cylindrical lens.
- the optical simulation model of the cylindrical lens can be constructed with optical simulation software. After adjusting the radius of curvature of the optical simulation model, observe the viewing angle at which the sharpness of the optical simulation model is maximum; if it is the same as the viewing angle of the viewpoint image, the radius of curvature of the cylindrical lens is the radius of curvature of the optical simulation model at this viewing angle.
- 16-1 is a viewpoint image at a non-collimated viewing angle (shooting viewing angle 0°), where the contrast between light and dark is relatively small; 16-2 is a viewpoint image at the collimated viewing angle (shooting viewing angle 21°), where the contrast between light and dark is relatively large. It is therefore determined that the sharpness of the viewpoint image is maximal at the 21° shooting viewing angle, and the sharpest viewing angle of 21° is then substituted into the processing described in step 604 above.
- the sharpness may be obtained through the following steps: acquiring the sharpness of the viewpoint image according to the positive correlation between the contrast and the sharpness of the viewpoint image.
- when the sharpness of the viewpoint image is the largest, its contrast is also the largest, so the browsing image with the highest contrast can be selected from the browsing images as the viewpoint image to obtain the sharpness of the image efficiently.
- sharpness acquisition methods in the related art can also be used to calculate the sharpness of the viewpoint image, such as obtaining the sharpness of the viewpoint image based on the MTF (Modulation Transfer Function) value of the image. Of course, the specific sharpness calculation method can be set according to actual requirements; as long as it can characterize the sharpness of the viewpoint image, it can be applied to the embodiments of the present disclosure, and there is no limitation here.
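As a minimal illustration of contrast-based screening, the Michelson contrast of grayscale values can serve as the sharpness score; the exact metric used by the method is not specified, so this is only one possible choice:

```python
def michelson_contrast(pixels):
    """Simple contrast score: (max - min) / (max + min) over grayscale values."""
    hi, lo = max(pixels), min(pixels)
    return (hi - lo) / (hi + lo) if (hi + lo) else 0.0

# Hypothetical grayscale samples from two browsing images.
blurred = [110, 120, 130, 125, 118]   # weak light/dark contrast
sharp = [10, 240, 15, 235, 12]        # strong black-stripe contrast
assert michelson_contrast(sharp) > michelson_contrast(blurred)
print("sharper image has the higher contrast score")
```

Selecting the browsing image that maximizes this score then picks the viewpoint image, per the positive contrast-sharpness correlation described above.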
- the radius of curvature of the cylindrical lens can also be output through the following steps 605 to 606:
- Step 605 acquiring a viewing angle luminance distribution curve of the cylindrical lens.
- the upper surface of the cylindrical lens may be scanned by an image acquisition device provided with a laser lens to obtain a viewing angle luminance distribution curve of the cylindrical lens.
- Step 606: by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the similarity between the viewing angle brightness distribution curve of the optical simulation model and that of the cylindrical lens meets the similarity requirement, use the radius of curvature of the optical simulation model as the radius of curvature of the cylindrical lens.
- the system obtains the viewing angle brightness distribution curve under each radius of curvature in the optical simulation model, and then calculates the similarity between the viewing angle brightness distribution curve under each radius of curvature and the measured viewing angle brightness distribution curve of the cylindrical lens.
- the similarity requirement may be that the similarity is greater than the similarity threshold, or the maximum similarity is taken, which can be set according to actual requirements, and is not limited here.
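Steps 605 to 606 amount to a one-dimensional search: sweep candidate radii in the simulation, score each simulated viewing-angle brightness curve against the measured one, and keep the best match. The sketch below uses a stand-in curve generator and a sum-of-squared-differences score, since the real curves would come from optical simulation software and the similarity measure is not specified:

```python
def similarity(curve_a, curve_b):
    """Negative sum of squared differences: higher means more alike."""
    return -sum((a - b) ** 2 for a, b in zip(curve_a, curve_b))

def simulate_curve(radius_um, angles):
    """Stand-in for the optical simulation model (hypothetical response shape)."""
    return [1.0 / (1.0 + (theta / radius_um) ** 2) for theta in angles]

angles = list(range(-30, 31, 5))
measured = simulate_curve(50.0, angles)  # pretend 50 um is the true radius

# "Maximum similarity" variant of the similarity requirement.
candidates = [30.0, 40.0, 50.0, 60.0, 70.0]
best_radius = max(candidates,
                  key=lambda r: similarity(simulate_curve(r, angles), measured))
print(best_radius)  # the sweep recovers the radius whose curve best matches
```

With a threshold-based similarity requirement instead, the sweep would stop at the first candidate whose score exceeds the threshold.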
- the embodiment of the present disclosure selects viewpoint images containing the target content from the browsing images captured at specific viewpoints and obtains the detection parameters of the cylindrical lens based on their image parameters, improving the accuracy of screen detection.
- FIG. 19 schematically shows a schematic structural view of a screen detection device 70 provided by the present disclosure, and the device includes:
- the receiving module 701 is configured to receive a cylindrical lens detection instruction for the target screen, where the cylindrical lens detection instruction includes at least: a target viewpoint;
- the detection module 702 is configured to, in response to the detection instruction, acquire a browsing image taken for the target screen under the target viewpoint, and the target screen is a screen with a cylindrical lens on the light-emitting side;
- and to, in the case that the browsing image contains the target content, use the browsing image as a viewpoint image;
- the output module 703 is configured to: output the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image.
- the detection module 702 is further configured to:
- the shooting position parameters of the image acquisition device are adjusted so that the shooting position of the image acquisition device is at the target position, and the shooting position parameters include: at least one of shooting angle, shooting height and shooting distance.
- the detection module 702 is further configured to:
- in a case where the browsing image contains the target content, the browsing image is used as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.
- the image parameters at least include: the placement height of the cylindrical lens;
- the output module 703 is also configured to:
- the output module 703 is further configured to output the placement height of the cylindrical lens on the target screen by the following formula:
- wherein T is the placement height, N is the number of viewpoints, n is the medium refractive index from the cylindrical lens to the pixel plane, P sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, x N is the x-axis coordinate value of the Nth viewpoint image, x 1 is the x-axis coordinate value of the first viewpoint image, and z is the z-axis coordinate value of each viewpoint image, where N ≥ 2 and N is a positive integer.
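The formula referenced above is reproduced in the original as an image and is not recoverable from this text. Purely as a hypothetical illustration of how the listed variables could combine, assuming a similar-triangles geometry in which the optical gap T/n maps the viewpoint spacing (x N − x 1)/(N − 1) at distance z onto the pixel-level offset P sub (this relation is NOT the patent's verified formula), one might write:

```python
def placement_height(n, N, p_sub, x_coords, z):
    """Hypothetical placement-height estimate T for the cylindrical lens.

    n        -- medium refractive index from the lens to the pixel plane
    N        -- number of viewpoints (N >= 2)
    p_sub    -- first pixel distance between pixel positions of two
                adjacent viewpoint images on the same cylindrical lens
    x_coords -- x-axis coordinate values of the N viewpoint images
    z        -- common z-axis coordinate value of the viewpoint images

    Assumed relation (similar triangles, an assumption of this sketch):
        T = n * (N - 1) * p_sub * z / (x_N - x_1)
    """
    if N < 2:
        raise ValueError("at least two viewpoints are required")
    # average spacing between adjacent viewpoints along the x axis
    viewpoint_spacing = (x_coords[N - 1] - x_coords[0]) / (N - 1)
    return n * p_sub * z / viewpoint_spacing
```

For example, with n = 1.5, p_sub = 0.1 mm, two viewpoints 60 mm apart at z = 600 mm, this assumed relation gives T = 1.5 · 0.1 · 600 / 60 = 1.5 mm.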
- the target content includes: target horizontal content;
- the detection module 702 is also configured to:
- in a case where all the horizontal content contained in the browsing image is the target horizontal content, the browsing image is used as a viewpoint image.
- the detection parameters at least include: the center distance between two adjacent cylindrical lenses;
- the output module 703 is also configured to:
- the center distance between the two adjacent cylindrical lenses is acquired based on the placement height of the cylindrical lens and the medium refractive index from the cylindrical lens to the pixel plane.
- the output module 703 is further configured to:
- the center distance between the two adjacent cylindrical lenses is output by the following formula:
- wherein P lens is the center distance between the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, and n is the medium refractive index from the cylindrical lens to the pixel plane.
- the output module 703 is further configured to:
- the center distance between the two adjacent cylindrical lenses is output by the following formula:
- wherein P lens is the center distance between the two adjacent cylindrical lenses, L is the viewing distance of the viewpoint image, P pixel is the pixel distance between the pixel positions corresponding to the viewpoint image on the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, and n is the medium refractive index from the cylindrical lens to the pixel plane.
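This second formula is likewise not reproduced in the text. The variables suggest the classic lenticular viewing-distance magnification relation, so as a hedged sketch (an assumption of this note, not the patent's verified formula):

```python
def lens_center_distance(L, p_pixel, T, n):
    """Hypothetical center distance P_lens between two adjacent
    cylindrical lenses, assuming the classic lenticular relation
        P_lens = L * P_pixel / (L + T / n)

    L       -- viewing distance of the viewpoint image
    p_pixel -- pixel distance corresponding to the viewpoint image on
               the two adjacent cylindrical lenses
    T       -- placement height of the cylindrical lens
    n       -- medium refractive index from the lens to the pixel plane
    """
    optical_gap = T / n  # air-equivalent gap between lens and pixel plane
    return L * p_pixel / (L + optical_gap)
```

Under this assumed relation the lens pitch comes out slightly smaller than the pixel-group pitch, which is the usual requirement for the views of a multi-view screen to converge at the viewing distance.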
- the target content includes: multiple types of target vertical content;
- the detection module 702 is also configured to:
- in a case where the vertical content contained in the browsing image includes at least two target vertical contents, the browsing image is used as a viewpoint image.
- the detection parameters at least include: an alignment angle deviation of the cylindrical lens
- the output module 703 is also configured to:
- the alignment angle deviation of the cylindrical lens is acquired based on the number of target vertical contents, the first pixel distance, and the content width.
- the output module 703 is also configured to:
- the alignment angle deviation of the cylindrical lens is output by the following formula:
- wherein θ is the alignment angle deviation of the cylindrical lens, N is the number of target vertical contents, P sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, and W is the content width of the target vertical content on the viewpoint image.
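The angle-deviation formula itself is not reproduced here. As a hypothetical reading of the listed variables, assuming the slanted lens axis sweeps across the N viewpoint sub-pixels over the content width W (i.e. tan θ = N · P sub / W, which is NOT confirmed by the text), a sketch could be:

```python
import math

def alignment_angle_deviation(N, p_sub, W):
    """Hypothetical alignment angle deviation theta (radians) of the
    cylindrical lens axis. Assumed relation (an assumption, not the
    patent's verified formula):
        tan(theta) = N * p_sub / W

    N     -- number of target vertical contents
    p_sub -- first pixel distance between pixel positions of two
             adjacent viewpoint images on the same cylindrical lens
    W     -- content width of the target vertical content on the image
    """
    return math.atan(N * p_sub / W)
```

For instance, N = 4, p_sub = 0.05 and W = 20 would give θ = atan(0.01), roughly 0.57 degrees, under this assumed relation.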
- the detection module 702 is also configured to:
- in a case where the browsing image is obtained by photographing the target screen from a front view and the central content at the center position of the browsing image is not the target content, the browsing image is used as a viewpoint image.
- the detection parameters at least include: alignment position deviation of the cylindrical lens;
- the output module 703 is also configured to:
- the alignment position deviation of the cylindrical lens is acquired.
- the output module 703 is further configured to:
- the alignment position deviation of the cylindrical lens is output by the following formula: ΔP = M·P sub, wherein ΔP is the alignment position deviation of the cylindrical lens, M is the difference value between the central content and the target content, and P sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.
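Since the claims state this relation explicitly as ΔP = M·P sub, it reduces to a one-line computation; the function name and units below are illustrative:

```python
def alignment_position_deviation(M, p_sub):
    """Alignment position deviation of the cylindrical lens,
    delta_P = M * P_sub, where M is the difference value between the
    central content and the target content and P_sub is the first pixel
    distance between the pixel positions corresponding to two adjacent
    viewpoint images on the same cylindrical lens."""
    return M * p_sub
```

With a difference of M = 2 content steps and P sub = 0.05 mm, the deviation is 0.1 mm.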
- the output module 703 is further configured to:
- the alignment position deviation of the cylindrical lens is output by the following formula:
- ⁇ P is the alignment position deviation of the cylindrical lens
- n is the medium refractive index from the cylindrical lens to the pixel plane, and θ 1 and θ 2 are, in the angular distribution of the brightness of the viewpoint image relative to the target viewpoint, the two viewing angles adjacent to 0 degrees, used respectively as the first target viewing angle and the second target viewing angle.
- the detection module 702 is further configured to:
- in a case where the sharpness of the specified content in the browsing image is maximum, the browsing image is used as a viewpoint image.
- the detection parameters include at least: a radius of curvature of the cylindrical lens;
- the output module 703 is also configured to:
- the viewing angle of the viewpoint image is acquired; by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the viewing angle at which the sharpness of the optical simulation model is maximum is the viewing angle of the viewpoint image, that radius of curvature is used as the radius of curvature of the cylindrical lens.
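The search described here, adjusting the model's radius until its sharpest viewing angle matches that of the viewpoint image, can be sketched as follows; the callable interface to the optical simulation and the 0.5-degree tolerance are assumptions of this sketch:

```python
def radius_by_sharpness_angle(target_angle, radii, peak_sharpness_angle,
                              tol=0.5):
    """Return the first candidate radius for which the viewing angle of
    maximum simulated sharpness matches the viewpoint image's angle.

    target_angle         -- viewing angle of the viewpoint image (degrees)
    radii                -- candidate curvature radii for the model
    peak_sharpness_angle -- callable: radius -> viewing angle (degrees)
                            at which the simulation model is sharpest
    tol                  -- matching tolerance in degrees (assumed)
    """
    for r in radii:
        if abs(peak_sharpness_angle(r) - target_angle) <= tol:
            return r
    return None  # no candidate radius reproduced the measured angle
```

A real implementation would replace the callable with a run of the optical simulation model; here a toy model stands in for it.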
- the detection module 702 is further configured to:
- the output module 703 is further configured to:
- by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the similarity between the viewing angle brightness distribution curve of the optical simulation model and that of the cylindrical lens meets the similarity requirement, the radius of curvature of the optical simulation model is used as the radius of curvature of the cylindrical lens.
- the embodiment of the present disclosure selects the viewpoint image containing the target content from the browsing images captured of the screen at a specific viewpoint, and detects the detection parameters of the cylindrical lens on the screen according to the image parameters of the viewpoint image, so that the various detection parameters of the cylindrical lens on the screen can be obtained efficiently, conveniently and accurately, improving the detection efficiency of the detection parameters of the cylindrical lens on the screen.
- the device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which can be understood and implemented by those skilled in the art without creative effort.
- the various component embodiments of the present disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
- a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all functions of some or all components in the computing processing device according to the embodiments of the present disclosure.
- the present disclosure can also be implemented as a device or apparatus program (e.g., a computer program and a computer program product) for performing part or all of the methods described herein.
- Such a program realizing the present disclosure may be stored on a computer-readable medium, or may have the form of one or more signals.
- Such a signal may be downloaded from an Internet site, or provided on a carrier signal, or provided in any other form.
- FIG. 20 illustrates a computing processing device that may implement methods according to the present disclosure.
- the computing processing device conventionally includes a processor 810 and a computer program product or computer readable medium in the form of memory 820 .
- Memory 820 may be electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
- the memory 820 has a storage space 830 for program code 831 for performing any method steps in the methods described above.
- the storage space 830 for program codes may include respective program codes 831 for respectively implementing various steps in the above methods. These program codes can be read from or written into one or more computer program products.
- These computer program products comprise program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
- Such a computer program product is typically a portable or fixed storage unit as described with reference to FIG. 21 .
- the storage unit may have storage segments, storage spaces, and the like arranged similarly to the memory 820 in the computing processing device of FIG. 20 .
- the program code can, for example, be compressed in a suitable form.
- the storage unit includes computer readable code 831', i.e., code readable by a processor such as 810; when executed by a computing processing device, this code causes the computing processing device to perform each step of the methods described above.
- references herein to "one embodiment", "an embodiment", or "one or more embodiments" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Additionally, please note that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
- any reference signs placed between parentheses shall not be construed as limiting the claim.
- the word “comprising” does not exclude the presence of elements or steps not listed in a claim.
- the word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
- the disclosure can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means can be embodied by one and the same item of hardware.
- the use of the words first, second, and third, etc. does not indicate any order. These words can be interpreted as names.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Studio Devices (AREA)
Claims (26)
- A screen detection method, characterized in that the method comprises: receiving a cylindrical lens detection instruction for a target screen, the cylindrical lens detection instruction at least comprising: a target viewpoint; in response to the detection instruction, acquiring a browsing image captured of the target screen at the target viewpoint, the target screen being a screen provided with cylindrical lenses on its light-emitting side; in a case where the browsing image contains target content, using the browsing image as a viewpoint image; and outputting detection parameters of the cylindrical lens on the target screen based on image parameters of the viewpoint image.
- The method according to claim 1, characterized in that acquiring the browsing image captured of the target screen at the target viewpoint, the target screen being a screen provided with cylindrical lenses on its light-emitting side, comprises: adjusting the viewpoint of an image acquisition device to the target viewpoint so as to photograph the light-emitting side of the target screen and obtain the browsing image.
- The method according to claim 2, characterized in that adjusting the viewpoint of the image acquisition device to the target viewpoint so as to photograph the light-emitting side of the target screen and obtain the browsing image comprises: adjusting the shooting position of the image acquisition device relative to the target screen to a target position, and photographing the light-emitting side of the target screen to obtain the browsing image.
- The method according to claim 3, characterized in that adjusting the shooting position of the image acquisition device relative to the target screen to the target position comprises: adjusting shooting position parameters of the image acquisition device so that the shooting position of the image acquisition device is at the target position, the shooting position parameters comprising at least one of a shooting angle, a shooting height and a shooting distance.
- The method according to claim 1, characterized in that there are at least two of the target contents; and using the browsing image as a viewpoint image in a case where the browsing image contains the target content comprises: in a case where the browsing image contains the target content, using the browsing image as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.
- The method according to claim 5, characterized in that the image parameters at least comprise: the placement height of the cylindrical lens; and outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image comprises: acquiring, based on the viewpoint image, the viewpoint position corresponding to the viewpoint image and the pixel positions on the pixel plane; acquiring the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens; and acquiring the placement height of the cylindrical lens on the target screen based on the viewpoint positions, the number of viewpoints, the first pixel distance, and the medium refractive index from the cylindrical lens to the pixel plane.
- The method according to claim 6, characterized in that acquiring the placement height of the cylindrical lens on the target screen based on the viewpoint positions, the number of viewpoints, the first pixel distance, and the medium refractive index from the cylindrical lens to the pixel plane comprises: establishing a spatial rectangular coordinate system (x, y, z) with the plane of the pixel plane of the target screen as the xy plane, acquiring the spatial coordinate values of each viewpoint position in the spatial rectangular coordinate system, and outputting the placement height of the cylindrical lens on the target screen by the following formula: wherein T is the placement height, N is the number of viewpoints, n is the medium refractive index from the cylindrical lens to the pixel plane, P sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, x N is the x-axis spatial coordinate value of the Nth viewpoint image, x 1 is the x-axis coordinate value of the first viewpoint image, and z is the z-axis coordinate value of each viewpoint image, where N ≥ 2 and N is a positive integer.
- The method according to claim 1, characterized in that the target content comprises: target horizontal content; and using the browsing image as a viewpoint image in a case where the browsing image contains the target content comprises: in a case where all the horizontal content contained in the browsing image is the target horizontal content, using the browsing image as a viewpoint image.
- The method according to claim 8, characterized in that the detection parameters at least comprise: the center distance between two adjacent cylindrical lenses; and outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image comprises: acquiring the center distance between the two adjacent cylindrical lenses based on the placement height of the cylindrical lens and the medium refractive index from the cylindrical lens to the pixel plane.
- The method according to claim 1, characterized in that the target content comprises: multiple types of target vertical content; and using the browsing image as a viewpoint image in a case where the browsing image contains the target content comprises: in a case where the vertical content contained in the browsing image comprises at least two target vertical contents, using the browsing image as a viewpoint image.
- The method according to claim 12, characterized in that the detection parameters at least comprise: the alignment angle deviation of the cylindrical lens; and outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image comprises: acquiring, based on the viewpoint image, the number of target vertical contents, the viewpoint position corresponding to the viewpoint image, and the pixel positions on the pixel plane; acquiring the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens and the content width of the target vertical content on the viewpoint image; and acquiring the alignment angle deviation of the cylindrical lens based on the number of target vertical contents, the first pixel distance, and the content width.
- The method according to claim 1, characterized in that using the browsing image as a viewpoint image in a case where the browsing image contains the target content comprises: in a case where the browsing image is obtained by photographing the target screen from a front view and the central content at the center position of the browsing image is not the target content, using the browsing image as a viewpoint image.
- The method according to claim 15, characterized in that the detection parameters at least comprise: the alignment position deviation of the cylindrical lens; and outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image comprises: acquiring the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image.
- The method according to claim 16, characterized in that acquiring the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image comprises: outputting the alignment position deviation of the cylindrical lens by the following formula: ΔP = M·P sub, wherein ΔP is the alignment position deviation of the cylindrical lens, M is the difference value between the central content and the target content, and P sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.
- The method according to claim 1, characterized in that using the browsing image as a viewpoint image in a case where the browsing image contains the target content comprises: in a case where the sharpness of specified content in the browsing image is maximum, using the browsing image as a viewpoint image.
- The method according to claim 19, characterized in that the detection parameters at least comprise: the radius of curvature of the cylindrical lens; and outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image comprises: acquiring the viewing angle of the viewpoint image; and, by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the viewing angle at which the sharpness of the optical simulation model is maximum is the viewing angle of the viewpoint image, using that radius of curvature as the radius of curvature of the cylindrical lens.
- The method according to claim 19 or 20, characterized in that the sharpness can be obtained by the following step: acquiring the sharpness of the viewpoint image according to the negative correlation between the contrast and the sharpness of the viewpoint image.
- The method according to claim 19, characterized in that outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image comprises: acquiring the viewing angle brightness distribution curve of the cylindrical lens; and, by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the similarity between the optical simulation model and the viewing angle brightness distribution curve of the cylindrical lens meets the similarity requirement, using the radius of curvature of the optical simulation model as the radius of curvature of the cylindrical lens.
- A screen detection apparatus, characterized in that the apparatus comprises: one or more processors; and a memory for storing one or more programs which, when executed by the one or more processors, enable the one or more processors to implement the screen detection method according to any one of claims 1-22.
- A computing processing device, characterized by comprising: a memory in which computer readable code is stored; and one or more processors, wherein when the computer readable code is executed by the one or more processors, the computing processing device performs the screen detection method according to any one of claims 1-22.
- A computer program, characterized by comprising computer readable code which, when run on a computing processing device, causes the computing processing device to perform the screen detection method according to any one of claims 1-22.
- A computer readable medium, characterized in that a computer program of the screen detection method according to any one of claims 1-22 is stored therein.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/765,390 US20240121369A1 (en) | 2021-05-28 | 2021-05-28 | Screen detection method, apparatus and device, computer program and readable medium |
PCT/CN2021/096964 WO2022246844A1 (zh) | 2021-05-28 | 2021-05-28 | Screen detection method, apparatus and device, computer program and readable medium |
CN202180001334.9A CN115836236A (zh) | 2021-05-28 | 2021-05-28 | Screen detection method, apparatus and device, computer program and readable medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/096964 WO2022246844A1 (zh) | 2021-05-28 | 2021-05-28 | Screen detection method, apparatus and device, computer program and readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022246844A1 true WO2022246844A1 (zh) | 2022-12-01 |
Family
ID=84229464
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/096964 WO2022246844A1 (zh) | 2021-05-28 | 2021-05-28 | 屏幕检测方法、装置、设备、计算机程序和可读介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20240121369A1 (zh) |
CN (1) | CN115836236A (zh) |
WO (1) | WO2022246844A1 (zh) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103105146A (zh) * | 2013-01-22 | 2013-05-15 | Fuzhou University | Flatness detection method for a lenticular lens grating for three-dimensional display |
US20140254008A1 (en) * | 2013-03-11 | 2014-09-11 | Canon Kabushiki Kaisha | Image display device and image display method |
CN105892078A (zh) * | 2016-06-20 | 2016-08-24 | BOE Technology Group Co., Ltd. | Display device, driving method thereof, and display system |
CN110133781A (zh) * | 2019-05-29 | 2019-08-16 | BOE Technology Group Co., Ltd. | Lenticular lens grating and display device |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09289655A (ja) * | 1996-04-22 | 1997-11-04 | Fujitsu Ltd | Stereoscopic image display method, multi-view image input method, multi-view image processing method, stereoscopic image display device, multi-view image input device, and multi-view image processing device |
JP2010282090A (ja) * | 2009-06-05 | 2010-12-16 | Sony Corp | Stereoscopic display device |
CN104898292B (zh) * | 2015-06-30 | 2018-02-13 | BOE Technology Group Co., Ltd. | 3D display substrate and manufacturing method thereof, and 3D display device |
JP7076246B2 (ja) * | 2018-03-23 | 2022-05-27 | Maxell, Ltd. | Imaging device and imaging system |
CN209432409U (zh) * | 2019-03-11 | 2019-09-24 | Suzhou University of Science and Technology | Naked-eye 3D display screen test platform |
CN110657948B (zh) * | 2019-09-26 | 2021-01-15 | Lenovo (Beijing) Co., Ltd. | Method, apparatus, test device and medium for testing a screen of an electronic device |
KR20210086341A (ko) * | 2019-12-31 | 2021-07-08 | LG Display Co., Ltd. | Stereoscopic image display device including lenticular lenses |
-
2021
- 2021-05-28 CN CN202180001334.9A patent/CN115836236A/zh active Pending
- 2021-05-28 US US17/765,390 patent/US20240121369A1/en active Pending
- 2021-05-28 WO PCT/CN2021/096964 patent/WO2022246844A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20240121369A1 (en) | 2024-04-11 |
CN115836236A (zh) | 2023-03-21 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 17765390 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21942415 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202347028975 Country of ref document: IN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.03.2024) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21942415 Country of ref document: EP Kind code of ref document: A1 |