WO2022246844A1 - Screen detection method, apparatus, device, computer program and readable medium - Google Patents

Screen detection method, apparatus, device, computer program and readable medium

Info

Publication number
WO2022246844A1
WO2022246844A1 (application PCT/CN2021/096964)
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
viewpoint
cylindrical lens
content
Prior art date
Application number
PCT/CN2021/096964
Other languages
English (en)
French (fr)
Inventor
高健
马森
程芳
洪涛
朱劲野
梁蓬霞
于静
Original Assignee
BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd.
Priority to US17/765,390 (published as US20240121369A1)
Priority to PCT/CN2021/096964 (published as WO2022246844A1)
Priority to CN202180001334.9 (published as CN115836236A)
Publication of WO2022246844A1


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/111Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B5/00Optical elements other than lenses
    • G02B5/18Diffraction gratings
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/167Synchronising or controlling image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/327Calibration thereof

Definitions

  • the present disclosure belongs to the technical field of screens, and in particular relates to a screen detection method, device, equipment, computer program and readable medium.
  • the ultra-multi-viewpoint display can realize continuous motion parallax and a more realistic 3D display effect.
  • the method of realizing ultra-multi-viewpoint display is mainly to display the images of multiple viewpoints on the screen according to a specific layout method, with a cylindrical lens array attached to the screen at a specific angle, so that the images of different viewpoints are projected in different directions after passing through the cylindrical lens array; the user's left and right eyes thus see images from different viewpoints, generating parallax and creating a 3D display effect.
  • the disclosure provides a screen detection method, device, equipment, computer program and readable medium.
  • Some implementations of the present disclosure provide a screen detection method, the method comprising:
  • receiving a cylindrical lens detection instruction for a target screen, where the detection instruction at least includes: a target viewpoint;
  • in response to the detection instruction, acquiring a browsing image captured of the target screen at the target viewpoint, where the target screen is a screen with cylindrical lenses on the light-emitting side;
  • if the browsing image contains the target content, using the browsing image as a viewpoint image;
  • outputting the detection parameters of the cylindrical lenses on the target screen based on the image parameters of the viewpoint image.
  • the acquiring of a browsing image captured of the target screen at the target viewpoint, where the target screen is a screen with cylindrical lenses on the light-emitting side, includes:
  • the viewpoint of the image acquisition device is adjusted to the target viewpoint, so as to shoot the light-emitting side of the target screen to obtain a browsing image.
  • the adjusting the viewpoint of the image acquisition device to the target viewpoint to shoot the light-emitting side of the target screen to obtain the browsing image includes:
  • the adjusting the shooting position of the image acquisition device relative to the target screen to the target position includes:
  • the shooting position parameters of the image acquisition device are adjusted so that the shooting position of the image acquisition device is at the target position, and the shooting position parameters include: at least one of shooting angle, shooting height and shooting distance.
  • using the browsing image as a viewpoint image includes:
  • if the browsing image contains the target content, the browsing image is used as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.
  • the image parameters at least include: the placement height of the cylindrical lens;
  • the outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image includes:
  • the acquisition of the placement height of the cylindrical lens on the target screen based on the position of the viewpoint, the number of viewpoints, the distance from the first pixel point, and the medium refractive index from the cylindrical lens to the pixel surface includes:
  • T is the placement height
  • N is the number of viewpoints
  • n is the medium refractive index from the cylindrical lens to the pixel surface
  • P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens
  • x_N is the x-axis coordinate value of the Nth viewpoint image
  • x_1 is the x-axis coordinate value of the first viewpoint image
  • z is the z-axis coordinate value of each viewpoint image
  • N ≥ 2, and N is a positive integer.
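The formula itself does not survive in this extract; as an illustration only, the variables listed above fit the standard similar-triangle (pinhole) model of a lenticular display, sketched below. The relation is an assumption, not the patent's exact formula.

```python
def placement_height(x_n: float, x_1: float, z: float,
                     n_views: int, p_sub: float, n_refr: float) -> float:
    """Estimate lens placement height T (same length unit as inputs).

    Assumed pinhole model: the N sub-pixels under one lens (pitch p_sub)
    fan out through the lens centre to N viewpoints spanning x_n - x_1
    at distance z, so by similar triangles
        (N - 1) * p_sub / (T / n_refr) = (x_n - x_1) / z.
    """
    if n_views < 2:
        raise ValueError("at least two viewpoints are required (N >= 2)")
    return n_refr * (n_views - 1) * p_sub * z / (x_n - x_1)

# Illustrative numbers (not from the patent): 8 viewpoints, 0.05 mm
# sub-pixel pitch, viewpoints spanning 280 mm at z = 500 mm, n = 1.5
T = placement_height(x_n=280.0, x_1=0.0, z=500.0,
                     n_views=8, p_sub=0.05, n_refr=1.5)   # ≈ 0.9375 mm
```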
  • the target content includes: target horizontal content;
  • using the browsing image as a viewpoint image includes:
  • if the browsing image contains the target horizontal content, the browsing image is used as a viewpoint image.
  • the detection parameters at least include: the center distance between two adjacent cylindrical lenses;
  • the outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image includes:
  • the center distance between the two adjacent cylindrical lenses is obtained.
  • the obtaining the center-to-center distance of the two adjacent cylindrical lenses based on the placement height of the cylindrical lenses and the medium refractive index from the cylindrical lenses to the pixel surface includes:
  • the center distance between the two adjacent cylindrical lenses is output by the following formula:
  • P_lens is the center distance between the two adjacent cylindrical lenses
  • T is the placement height of the cylindrical lenses
  • n is the medium refractive index from the cylindrical lenses to the pixel surface
  • the obtaining the center-to-center distance of the two adjacent cylindrical lenses based on the placement height of the cylindrical lenses and the medium refractive index from the cylindrical lenses to the pixel surface includes:
  • the center distance between the two adjacent cylindrical lenses is output by the following formula:
  • P_lens is the center distance between the two adjacent cylindrical lenses
  • L is the viewing distance of the viewpoint image
  • P_pixel is the distance between the pixel points corresponding to the viewpoint image on the two adjacent cylindrical lenses
  • T is the placement height of the cylindrical lens
  • n is the medium refractive index from the cylindrical lens to the pixel plane.
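The center-distance formula is likewise not reproduced in this extract. A common view-convergence relation that uses exactly the variables of the second variant (L, P_pixel, T, n) is sketched below as an assumption, not as the patent's formula:

```python
def lens_pitch(p_pixel: float, viewing_distance: float,
               t: float, n_refr: float) -> float:
    """Centre-to-centre distance of two adjacent cylindrical lenses.

    Assumed view-convergence condition: the lens pitch is slightly
    smaller than the pixel span p_pixel it covers, so that all lenses
    project their views to the same zone at viewing distance L:
        P_lens / p_pixel = L / (L + T / n_refr)
    """
    optical_height = t / n_refr  # optical distance from lens to pixel plane
    return p_pixel * viewing_distance / (viewing_distance + optical_height)

# Illustrative numbers: 0.4 mm pixel span, L = 500 mm, T = 0.9 mm, n = 1.5
p = lens_pitch(p_pixel=0.4, viewing_distance=500.0, t=0.9, n_refr=1.5)
```

Note the result is always slightly below `p_pixel`, which is what makes the views from every lens converge at the viewing distance rather than run parallel.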
  • the target content includes: multiple target vertical content;
  • using the browsing image as a viewpoint image includes:
  • if the browsing image contains the multiple target vertical contents, the browsing image is used as a viewpoint image.
  • the detection parameters at least include: an alignment angle deviation of the cylindrical lens
  • the outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image includes:
  • the alignment angle deviation of the cylindrical lens is acquired.
  • the acquisition of the alignment angle deviation of the cylindrical lens based on the quantity of the target vertical content, the first pixel distance, and the content width includes:
  • the alignment angle deviation of the cylindrical lens is output by the following formula:
  • the ⁇ is the alignment angle deviation of the cylindrical lens
  • the N is the number of the target longitudinal content
  • the P sub is the corresponding pixel position of two adjacent viewpoint images on the same cylindrical lens
  • the first pixel distance between W is the content width of the target vertical content on the viewpoint image.
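The angle formula is missing from this extract. A plausible geometric reading of the variables, assuming the tilt of the lens array drifts the viewpoint assignment by one first-pixel distance per repetition of the target vertical content across the content width, is sketched below; it is an illustration, not the patent's formula:

```python
import math

def alignment_angle_deviation(n_contents: int, p_sub: float, w: float) -> float:
    """Alignment angle deviation of the lens array, in degrees.

    Assumed relation: over a content width W the lateral drift caused
    by the tilt is N * P_sub, giving tan(theta) = N * p_sub / w.
    """
    return math.degrees(math.atan2(n_contents * p_sub, w))

# Illustrative numbers: 3 vertical contents, p_sub = 0.05 mm, W = 30 mm
theta = alignment_angle_deviation(n_contents=3, p_sub=0.05, w=30.0)
```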
  • using the browsing image as a viewpoint image includes:
  • if the browsing image is obtained by shooting the target screen from the front view, and the content at the central position of the browsing image is not the target content, the browsing image is used as a viewpoint image.
  • the detection parameters at least include: alignment position deviation of the cylindrical lens;
  • the outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image includes:
  • the alignment position deviation of the cylindrical lens is acquired.
  • the obtaining the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image includes:
  • the alignment position deviation of the cylindrical lens is output by the following formula:
  • ⁇ P is the alignment position deviation of the cylindrical lens
  • M is the content difference between the central content and the target content
  • P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.
  • the obtaining the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image includes:
  • the alignment position deviation of the cylindrical lens is output by the following formula:
  • ⁇ P is the alignment position deviation of the cylindrical lens
  • n is the medium refractive index from the cylindrical lens to the pixel surface
  • ⁇ 1 and ⁇ 2 are the brightness of the viewpoint image relative to In the angular distribution of the target viewpoints, two viewing angles adjacent to 0 degrees are respectively used as the first target viewing angle and the second target viewing angle.
  • using the browsing image as a viewpoint image includes:
  • the browsing image is used as a viewpoint image.
  • the detection parameters include at least: a radius of curvature of the cylindrical lens;
  • the outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image includes:
  • by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the viewing angle at which the sharpness of the optical simulation model is maximum is the viewing angle of the viewpoint image, that radius of curvature is used as the radius of curvature of the cylindrical lens.
  • the sharpness can be obtained through the following steps:
  • the outputting detection parameters of cylindrical lenses on the target screen based on the image parameters of the viewpoint image includes:
  • by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the similarity between the viewing-angle brightness distribution curve of the optical simulation model and that of the cylindrical lens meets the similarity requirement, that radius of curvature is used as the radius of curvature of the cylindrical lens.
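The similarity-fitting step described above can be sketched generically as a sweep over candidate radii; `simulate` here is a hypothetical stand-in for the optical simulation model, whose internals the text does not specify, and least-squares is an assumed similarity measure:

```python
def fit_radius_of_curvature(measured_curve, simulate, radii):
    """Pick the simulation radius whose angular brightness curve best
    matches the measured one (smallest sum of squared differences).

    `simulate(r)` must return one brightness value per viewing angle,
    sampled at the same angles as `measured_curve`.
    """
    def mismatch(r):
        sim = simulate(r)
        return sum((s - m) ** 2 for s, m in zip(sim, measured_curve))
    return min(radii, key=mismatch)

# Toy check with a fake simulator whose curve matches exactly at r = 2.0
measured = [0.1, 0.8, 0.1]
fake_sim = lambda r: [0.1 * r / 2, 0.8 * r / 2, 0.1 * r / 2]
best = fit_radius_of_curvature(measured, fake_sim, [1.0, 1.5, 2.0, 2.5])  # 2.0
```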
  • Some embodiments of the present disclosure provide a screen detection device, the device comprising:
  • a receiving module configured to receive a lenticular lens detection instruction for the target screen, where the lenticular lens detection instruction at least includes: a target viewpoint;
  • the detection module is configured to, in response to the detection instruction, acquire a browsing image taken for the target screen under the target viewpoint, and the target screen is a screen with a cylindrical lens on the light-emitting side;
  • if the browsing image contains the target content, the browsing image is used as a viewpoint image;
  • the output module is configured to: output the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image.
  • the detection module is also configured to:
  • the viewpoint of the image acquisition device is adjusted to the target viewpoint, so as to shoot the light-emitting side of the target screen to obtain a browsing image.
  • the detection module is also configured to:
  • the detection module is also configured to:
  • the shooting position parameters of the image acquisition device are adjusted so that the shooting position of the image acquisition device is at the target position, and the shooting position parameters include: at least one of shooting angle, shooting height and shooting distance.
  • the detection module is also configured to:
  • the browsing image contains the target content
  • the browsing image is used as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen .
  • the image parameters at least include: the placement height of the cylindrical lens;
  • the output module is further configured to:
  • the output module is also configured as:
  • T is the placement height
  • N is the number of viewpoints
  • n is the medium refractive index from the cylindrical lens to the pixel surface
  • P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens
  • x_N is the x-axis coordinate value of the Nth viewpoint image
  • x_1 is the x-axis coordinate value of the first viewpoint image
  • z is the z-axis coordinate value of each viewpoint image
  • N ≥ 2, and N is a positive integer.
  • the target content includes: target horizontal content;
  • the detection module is also configured to:
  • if the browsing image contains the target horizontal content, the browsing image is used as a viewpoint image.
  • the detection parameters at least include: the center distance between two adjacent cylindrical lenses;
  • the output module is further configured to:
  • the center distance between the two adjacent cylindrical lenses is obtained.
  • the output module is also configured as:
  • the center distance between the two adjacent cylindrical lenses is output by the following formula:
  • P_lens is the center distance between the two adjacent cylindrical lenses
  • T is the placement height of the cylindrical lenses
  • n is the medium refractive index from the cylindrical lenses to the pixel surface
  • the output module is also configured as:
  • the center distance between the two adjacent cylindrical lenses is output by the following formula:
  • P_lens is the center distance between the two adjacent cylindrical lenses
  • L is the viewing distance of the viewpoint image
  • P_pixel is the distance between the pixel points corresponding to the viewpoint image on the two adjacent cylindrical lenses
  • T is the placement height of the cylindrical lens
  • n is the medium refractive index from the cylindrical lens to the pixel plane.
  • the target content includes: multiple target vertical content;
  • the detection module is also configured to:
  • if the browsing image contains the multiple target vertical contents, the browsing image is used as a viewpoint image.
  • the detection parameters at least include: an alignment angle deviation of the cylindrical lens
  • the output module is further configured to:
  • the alignment angle deviation of the cylindrical lens is acquired.
  • the output module is further configured to:
  • the alignment angle deviation of the cylindrical lens is output by the following formula:
  • the ⁇ is the alignment angle deviation of the cylindrical lens
  • the N is the number of the target longitudinal content
  • the P sub is the corresponding pixel position of two adjacent viewpoint images on the same cylindrical lens
  • the first pixel distance between W is the content width of the target vertical content on the viewpoint image.
  • the detection module is also configured to:
  • if the browsing image is obtained by shooting the target screen from the front view, and the content at the central position of the browsing image is not the target content, the browsing image is used as a viewpoint image.
  • the detection parameters at least include: alignment position deviation of the cylindrical lens;
  • the output module is further configured to:
  • the alignment position deviation of the cylindrical lens is acquired.
  • the output module is also configured as:
  • the alignment position deviation of the cylindrical lens is output by the following formula:
  • ⁇ P is the alignment position deviation of the cylindrical lens
  • M is the content difference between the central content and the target content
  • P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.
  • the output module is also configured as:
  • the alignment position deviation of the cylindrical lens is output by the following formula:
  • ⁇ P is the alignment position deviation of the cylindrical lens
  • n is the medium refractive index from the cylindrical lens to the pixel surface
  • ⁇ 1 and ⁇ 2 are the brightness of the viewpoint image relative to In the angular distribution of the target viewpoints, two viewing angles adjacent to 0 degrees are respectively used as the first target viewing angle and the second target viewing angle.
  • the detection module is also configured to:
  • the browsing image is used as a viewpoint image.
  • the detection parameters include at least: a radius of curvature of the cylindrical lens;
  • the output module is further configured to:
  • by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the viewing angle at which the sharpness of the optical simulation model is maximum is the viewing angle of the viewpoint image, that radius of curvature is used as the radius of curvature of the cylindrical lens.
  • the detection module is also configured to:
  • the output module is also configured as:
  • by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the similarity between the viewing-angle brightness distribution curve of the optical simulation model and that of the cylindrical lens meets the similarity requirement, that radius of curvature is used as the radius of curvature of the cylindrical lens.
  • Some embodiments of the present disclosure provide a computing processing device, including:
  • one or more processors, wherein when computer-readable code is executed by the one or more processors, the computing processing device executes the above-mentioned screen detection method.
  • Some embodiments of the present disclosure provide a computer program, including computer readable codes, which, when the computer readable codes are run on a computing processing device, cause the computing processing device to execute the screen detection method as described above.
  • Some embodiments of the present disclosure provide a computer-readable medium, in which the computer program of the above-mentioned screen detection method is stored.
  • Fig. 1 schematically shows a schematic flowchart of a screen detection method provided by some embodiments of the present disclosure.
  • Fig. 2 schematically shows a schematic diagram of a screen detection method provided by some embodiments of the present disclosure.
  • Fig. 3 schematically shows one of the schematic flowcharts of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 4 schematically shows one of the schematic diagrams of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 5 schematically shows one of the effect diagrams of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 6 schematically shows the second schematic flowchart of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 7 schematically shows the second schematic diagram of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 8 schematically shows the second schematic diagram of the effect of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 9 schematically shows the third schematic flowchart of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 10 schematically shows the third schematic diagram of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 11 schematically shows the third schematic diagram of the effect of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 12 schematically shows the fourth schematic flowchart of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 13 schematically shows the fourth schematic diagram of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 14 schematically shows the fifth schematic flowchart of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 15 schematically shows a fifth schematic diagram of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 16 schematically shows a fourth schematic diagram of effects of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 17 schematically shows the fifth schematic diagram of the effect of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 18 schematically shows the sixth schematic diagram of the effect of another screen detection method provided by some embodiments of the present disclosure.
  • Fig. 19 schematically shows a schematic structural view of a screen detection device provided by some embodiments of the present disclosure.
  • Fig. 20 schematically shows a block diagram of a computing processing device for performing a method according to some embodiments of the present disclosure.
  • Fig. 21 schematically shows a storage unit for holding or carrying program codes implementing methods according to some embodiments of the present disclosure.
  • the various parameters of the cylindrical lens in the related art correspond to the layout method.
  • if the actual parameters of the cylindrical lens deviate from the design values due to the manufacturing process or other reasons, the viewing effect is directly affected, and it is necessary to correct the process conditions or modify the layout method according to the actual parameters to correct the display effect.
  • the present disclosure proposes to display a specific image on the screen and analyze the displayed image to detect the detection parameters of the cylindrical lens on the screen.
  • Fig. 1 schematically shows a schematic flow chart of a screen detection method provided by the present disclosure
  • the method may be executed by any electronic device; for example, it may be applied to an application program with a screen detection function and executed by the server or terminal device of that application program. The method includes:
  • Step 101 receiving a cylindrical lens detection instruction for the target screen, the cylindrical lens detection instruction at least includes: the target viewpoint;
  • Step 102 in response to the detection instruction, acquire a browsing image of a target screen under the target viewpoint, where the target screen is a screen with a cylindrical lens on a light-emitting side.
  • the target screen is a display device with cylindrical lenses arranged on the light emitting side, and the cylindrical lenses can be arranged in a specific array arrangement. Since the image light from different viewpoints on the target screen will be projected in different directions after encountering the cylindrical lens, it is possible to set the layout of the images displayed on the target screen so that the user's eyes can see different images at different viewpoints. Correspondingly, browsing images captured by the image acquisition device at different shooting viewpoints may also be different.
  • the target viewpoint refers to the shooting viewpoint required for shooting the target screen.
  • the target viewpoint can be set by the user or automatically set by the system according to the detection requirements; specifically, it can be set according to actual needs, and there is no limitation here.
  • Step 103 if the browsing image contains the target content, use the browsing image as a viewpoint image.
  • the target content refers to the display content that needs to be included in the viewpoint images participating in this detection. It can be understood that, since the content of the browsing image of the target screen differs under different viewpoints, browsing images with different contents correspond to different shooting viewpoints; therefore, setting the target content makes it possible to determine whether a browsing image was obtained by shooting the target screen at the viewpoint required for this detection.
  • viewpoint images containing the target content can be selected according to the image content of the captured browsing images: if a browsing image contains the target content, it is used as a viewpoint image; if it does not, it may be filtered out.
  • For example, numbers arranged in a full-screen array can be displayed on the target screen, so that the browsing images under different shooting viewpoints show different numbers. If there is no deviation in the detection parameters of the cylindrical lenses of the target screen, that is, the detection parameters are the standard parameters, the image content in each browsing image of the target screen is a single number; if there is a deviation, different numbers appear in a browsing image of the target screen. Whether the detection parameters of the cylindrical lenses on the target screen deviate can therefore be determined by checking whether different numbers appear in the browsing images under different viewpoints. With reference to Fig. 2, when the detection parameters have no deviation, each of the browsing images 2-1, 2-2 and 2-3 under the three shooting viewpoints includes only "1", "2" or "3" respectively; when the detection parameters deviate, only part of the image content of the browsing image 2-3 under the first viewpoint is "1", and other image contents "2" and "3" also appear, which clearly differs from a browsing image under the first viewpoint containing only "1". It can then be determined that the detection parameters of the cylindrical lenses corresponding to browsing image 2-3 deviate, and browsing images 2-5 and 2-6 likewise show deviations in the detection parameters.
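The uniform-digit check described above can be sketched as follows; `detected_digits` stands in for the output of an unspecified recognition step (e.g. OCR) applied to the captured browsing image, which the text does not detail:

```python
def has_parameter_deviation(detected_digits):
    """True if a browsing image mixes content from several viewpoints.

    With standard lens parameters, every view number read out of one
    browsing image should be the same digit; any mixture indicates a
    deviation in the detection parameters of the cylindrical lenses.
    """
    return len(set(detected_digits)) > 1

assert not has_parameter_deviation(["1", "1", "1"])   # uniform: no deviation
assert has_parameter_deviation(["1", "2", "3"])       # mixed: deviation
```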
  • Step 104 based on the image parameters of the viewpoint image, output the detection parameters of the cylindrical lenses on the target screen.
  • the detection parameter refers to the actual index parameter of the cylindrical lens to be detected.
  • due to factors such as the manufacturing process, there may be deviations between the detection parameters of the cylindrical lens and the expected parameters; these deviations cause the browsing images of the different viewpoints actually presented by the target screen to deviate from the browsing images of the same viewpoints under the standard parameters.
  • For example, the browsing image of the target screen at a specific viewpoint should contain the image content "1", but due to the deviation of the detection parameters of the cylindrical lens, the image content actually viewed at that viewpoint may be "2".
  • since the image content contained in the browsing images under different shooting viewpoints is affected by the detection parameters of the cylindrical lens, the detection parameters of the cylindrical lens can in turn be derived from the image parameters of the viewpoint images.
  • the embodiment of the present disclosure selects viewpoint images containing the target content from browsing images captured of the screen at specific viewpoints, and detects the detection parameters of the cylindrical lenses on the screen according to the image parameters of those viewpoint images; the various detection parameters of the cylindrical lenses on the screen can thus be obtained efficiently, conveniently and accurately, improving the detection efficiency of the detection parameters of the cylindrical lenses on the screen.
  • the step 102 may include: adjusting the viewpoint of the image acquisition device to the target viewpoint, so as to capture the light-emitting side of the target screen to obtain a browsing image.
  • the image acquisition device may be an electronic device with an image acquisition function, and the image acquisition device may also have functions such as data processing, data storage, and data transmission.
  • the system may be connected to the image acquisition device through a transmission device, so that the shooting viewpoint of the image acquisition device can be adjusted by controlling the transmission device.
  • the image acquisition device can also be manually adjusted to shoot the target screen.
  • the specific settings can be set according to actual needs, which is not limited here.
  • the target viewpoint is the shooting viewpoint required for shooting the light-emitting side of the target screen.
• The target viewpoint can be a pre-specified fixed viewpoint, a randomly selected shooting viewpoint, or a viewpoint adaptively adjusted according to the parameter being detected, for example shooting at the front viewpoint or at a 30° viewpoint. It can be set according to actual needs, and there is no limitation here.
• The browsing image of the target screen may be acquired by adjusting the shooting viewpoint of the image acquisition device to the target viewpoint required for this shooting.
• By adjusting the shooting viewpoint of the image acquisition device to the target viewpoint and shooting the light-emitting side of the target screen, the browsing images required for this detection can be quickly obtained.
  • the step 101 may include:
  • the system can be connected to the image acquisition device through a transmission device, so as to adjust the shooting position of the image acquisition device by controlling the transmission device, so as to realize convenient adjustment of the image acquisition device.
• the step 101 may include: adjusting the shooting position parameters of the image acquisition device so that the shooting position of the image acquisition device is at the target position, where the shooting position parameters include at least one of: shooting angle, shooting height and shooting distance.
  • the target angle refers to the shooting angle of the browsing image required for this detection
  • the target position is the shooting position of the browsing image required for this detection relative to the light-emitting side of the target screen
• the target height is the height of the image acquisition device relative to the ground.
• By setting position parameters including at least one of shooting angle, shooting height and shooting distance, the image acquisition device can be adjusted through the transmission device to the target position to shoot the light-emitting side of the target screen, realizing convenient adjustment of the image acquisition device.
  • the image parameters include at least: the placement height of the cylindrical lens.
• Referring to FIG. 3, it shows a first schematic flow chart of another screen detection method, the method comprising:
• Step 201: acquiring a browsing image of the target screen shot at the target viewpoint, where the target screen is a screen with a cylindrical lens on the light-output side.
• The placement height of the cylindrical lens refers to the actual distance between the upper surface of the cylindrical lens and the pixel plane of the target screen. Since the content of the browsing image displayed by the target screen differs at different shooting viewpoints, the target viewpoints can be set as multiple shooting viewpoints on the same straight line, with that straight line parallel to the pixel plane of the target screen. Multiple browsing images can then be obtained by shooting the light-emitting side of the target screen, and these browsing images contain the various contents displayed by the target screen at different viewpoints. If the target screen displays N image contents, multiple shooting viewpoints can be set on a straight line parallel to the pixel plane of the target screen to capture N browsing images respectively containing the N image contents.
• For example, if the image content displayed on the target screen contains the numbers "1", "2", "3" and "4", and the image content at each shooting viewpoint is different, then multiple shooting viewpoints can be set on a straight line parallel to the pixel plane to shoot the light-emitting side of the target screen, so that multiple browsing images respectively containing "1", "2", "3" and "4" can be obtained.
• Step 202: if the browsing image contains the target content, use the browsing image as a viewpoint image, wherein the viewpoints of at least two viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.
• In order that the viewpoint images used for parameter detection can clearly reflect the image content displayed by the target screen at different shooting viewpoints, and to avoid the image content of different shooting viewpoints cross-affecting the subsequent parameter detection, the viewpoint images participating in parameter detection can be filtered from the browsing images according to whether a browsing image contains only one target content. For example: when the target content is the four numbers "1", "2", "3" and "4", the four browsing images containing only "1", only "2", only "3" and only "4" respectively can be selected from the browsing images as viewpoint images.
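The filtering rule above — keep only browsing images that contain exactly one target content — can be sketched as follows. The detection of which contents appear in an image (e.g. by OCR) is assumed to happen upstream and is not shown; all names here are illustrative, not from the disclosure:

```python
def select_viewpoint_images(browse_images, target_contents):
    """Keep only the browsing images that contain exactly one target
    content, one image per content.

    `browse_images` maps an image id to the set of contents detected
    in it (the content-detection step itself, e.g. OCR, is assumed to
    happen elsewhere).
    """
    selected = {}
    for image_id, contents in browse_images.items():
        if len(contents) == 1:
            (content,) = contents
            if content in target_contents and content not in selected:
                selected[content] = image_id
    return selected

# Toy detection results for five captured browsing images:
detections = {
    "img_a": {"1"}, "img_b": {"2"}, "img_c": {"2", "3"},  # mixed -> rejected
    "img_d": {"3"}, "img_e": {"4"},
}
print(select_viewpoint_images(detections, {"1", "2", "3", "4"}))
# {'1': 'img_a', '2': 'img_b', '3': 'img_d', '4': 'img_e'}
```

Note that "img_c" is rejected because it mixes two contents, which is exactly the cross-influence the text says should be avoided.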
  • the setting method of the specific target content can be set according to actual needs, which is not limited here.
  • Step 203 based on the viewpoint image, obtain the viewpoint position corresponding to the viewpoint image and the pixel point position on the pixel plane.
• The viewpoint image, the viewpoint position and the pixel point position are in one-to-one correspondence; the viewpoint position of a viewpoint image and the corresponding pixel point position on the pixel plane of the target screen can be obtained by observing and analyzing the viewpoint image and the target screen.
• Step 204: acquiring a first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.
• It is possible to determine which two adjacent pixels on the pixel plane of the target screen emit the screen light arriving at the viewpoint positions of two adjacent viewpoint images, and take the actual distance between these two adjacent pixels as the first pixel distance. Since the distance between adjacent pixels on the pixel plane is the same everywhere, the first pixel distance between any pair of adjacent pixels can represent the pixel distance between the other pairs of adjacent pixels.
• Step 205: based on the viewpoint positions, the number of viewpoints, the first pixel distance, and the refractive index of the medium from the cylindrical lens to the pixel plane, obtaining the placement height of the cylindrical lens on the target screen.
• The placement height of the cylindrical lens is positively correlated with the first pixel distance, positively correlated with the refractive index of the medium from the cylindrical lens to the pixel plane, and positively correlated with the distance from the shooting viewpoint to the pixel plane of the screen.
• The placement height of the cylindrical lens on the target screen can therefore be calculated by setting an algorithm formula based on the viewpoint positions, the number of viewpoints, the first pixel distance and the medium refractive index.
• Optionally, step 205 includes: outputting the placement height of the cylindrical lens through the following formula, where: T is the placement height; N is the number of viewpoints; n is the medium refractive index from the cylindrical lens to the pixel plane; P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens; x_N is the x-axis coordinate value of the N-th viewpoint image; x_1 is the x-axis coordinate value of the first viewpoint image; z is the z-axis coordinate value of each viewpoint image; and N is a positive integer with N ≥ 2.
• The plane where the pixel plane of the target screen is located can be used as the xy plane. Specifically, the straight line where the target viewpoints are located can be used as the x-axis, the line perpendicular to the x-axis in the plane of the pixel plane as the y-axis, and the straight line perpendicular to the plane of the pixel plane as the z-axis, so as to establish a spatial coordinate system. The spatial coordinate values of each target viewpoint in this coordinate system are used as the viewpoint positions of the target viewpoints and substituted into the formula for calculation.
• Since the air layer between the lower surface of the cylindrical lens and the pixel plane also has a certain refraction effect on the screen light, it is necessary to introduce the medium refractive index n from the cylindrical lens to the pixel plane into the formula to correct the calculation, minimizing the influence of this refraction on the calculated placement height of the cylindrical lens and ensuring the accuracy of the detected placement height.
• For example, the plane where the pixel plane is located is taken as the xy plane, the straight line where the target viewpoints are located as the x-axis, the perpendicular to the x-axis in the xy plane as the y-axis, and the perpendicular to the xy plane as the z-axis, so as to establish a spatial rectangular coordinate system.
• For example, the spatial coordinate values of the four target viewpoints are (x1, y, z), (x2, y, z), (x3, y, z) and (x4, y, z). By sequentially photographing the light-emitting side of the target screen at these target viewpoints, as shown in Figure 5, four viewpoint images containing only "1", "2", "3" and "4" respectively can be obtained. It can be understood that if there are N viewpoints, a viewpoint image including the number "N" in full screen or in part can be captured at (xN, y, z).
• For example, the spatial coordinate values of the four viewpoint images are (-57, 0, 350), (-19, 0, 350), (19, 0, 350) and (57, 0, 350). At this time, if the first pixel distance P_sub is 8.725 μm and the medium refractive index n is 1.53, then substituting the spatial coordinate values of the viewpoint images, the first pixel distance and the medium refractive index into the formula, the placement height T of the cylindrical lens can be obtained as 120.5 μm.
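The exact step 205 formula appears only in the drawings of the disclosure, but a paraxial approximation consistent with the stated variables and correlations is T = n·(N−1)·P_sub·z/(x_N−x_1), assuming the viewpoint coordinates are in millimetres. With the worked numbers this sketch yields about 123 μm — close to, but not exactly, the quoted 120.5 μm, so the disclosure's exact formula likely carries an additional correction term:

```python
def placement_height(xs_mm, z_mm, p_sub_um, n):
    """Paraxial estimate of the lenticular placement height T (um).

    Sketch under assumed geometry: the N viewpoints at height z map
    onto a span of (N - 1) sub-pixel pitches on the pixel plane, with
    the path below the lens corrected by the medium refractive index n.
    """
    N = len(xs_mm)
    if N < 2:
        raise ValueError("at least two viewpoints are required")
    span_mm = xs_mm[-1] - xs_mm[0]  # x_N - x_1
    # T = n * (N - 1) * P_sub * z / (x_N - x_1)
    return n * (N - 1) * p_sub_um * z_mm / span_mm

# Worked numbers from the text: viewpoints at x = -57, -19, 19, 57 mm,
# z = 350 mm, P_sub = 8.725 um, n = 1.53.
T = placement_height([-57, -19, 19, 57], 350, 8.725, 1.53)
print(round(T, 1))  # ~123 um under this paraxial approximation
```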
  • the target content includes: target horizontal content
  • the detection parameters include at least: the center distance between two adjacent cylindrical lenses.
• Step 301: acquiring a browsing image of the target screen shot at the target viewpoint, where the target screen is a screen with a cylindrical lens on the light-output side.
  • the center distance refers to the actual distance between two adjacent cylindrical lenses in the cylindrical lens array of the target screen.
  • Step 302 in the case that the horizontal content included in the browsing image is the target horizontal content, use the browsing image as a viewpoint image.
  • the horizontal content refers to the horizontally arranged image content in the browsing image
  • the target horizontal content refers to the horizontal content that needs to be included in the viewpoint image required to participate in this parameter detection.
• The target horizontal content can be set based on the image content displayed on the target screen. For example: if the image content is the four numbers "1", "2", "3" and "4" arranged in rows, the target horizontal content can be set such that the numbers contained in each row of horizontally arranged content are the same and all four numbers are included in the browsing image. In this way, the viewing distance of a browsing image that contains the four numbers with the same number in each row is an actual viewing distance at which the image content can be clearly viewed, and this browsing image is used as a viewpoint image participating in parameter detection.
• Step 303: obtaining the center distance between the two adjacent cylindrical lenses based on the viewing distance of the viewpoint image, the second pixel distance, the placement height of the cylindrical lens and the medium refractive index from the cylindrical lens to the pixel plane.
• The center distance between two adjacent cylindrical lenses is positively correlated with the product of the second pixel distance and the viewing distance, negatively correlated with the sum of the viewing distance and the placement height of the cylindrical lens, and related to the medium refractive index from the cylindrical lens to the pixel plane. An algorithm formula can therefore be established based on the viewing distance, the second pixel distance, the placement height of the cylindrical lens and the medium refractive index to calculate the center distance between two adjacent cylindrical lenses.
• Optionally, step 303 includes: outputting the center distance between the two adjacent cylindrical lenses through the following formula, where: P_lens is the center distance between the two adjacent cylindrical lenses; L is the viewing distance of the viewpoint image; P_pixel is the second pixel distance between the pixel positions corresponding to the viewpoint image on the two adjacent cylindrical lenses; T is the placement height of the cylindrical lens; and n is the medium refractive index from the cylindrical lens to the pixel plane.
• The viewpoint position of the viewpoint image can be regarded as the position of the user's eyes, so that the vertical distance between the viewpoint position and the screen where the cylindrical lens is located can be taken as the viewing distance of the viewpoint image.
• Ideally, the center distance between all pairs of adjacent cylindrical lenses can be represented by a single second pixel distance; in practice, however, the distances usually differ, so the second pixel distance corresponding to each pair of adjacent cylindrical lenses can be detected independently.
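A plausible form of the step 303 formula, consistent with the stated correlations (positively correlated with P_pixel·L, negatively correlated with the viewing distance plus the index-corrected height T/n), is sketched below; the exact formula is only in the drawings, and the P_pixel value used here is an illustrative assumption, not a figure from the disclosure:

```python
def lens_pitch(p_pixel_um, viewing_distance_mm, t_um, n):
    """Center distance between two adjacent cylindrical lenses (um).

    Similar-triangles sketch: the eye at distance L sees pixel
    positions of pitch P_pixel through lens centers of pitch P_lens,
    with the optical gap T reduced to T/n by the medium refractive
    index, giving P_lens = P_pixel * L / (L + T / n).
    """
    L_um = viewing_distance_mm * 1000.0  # work in micrometres
    return p_pixel_um * L_um / (L_um + t_um / n)

# T = 120.5 um and n = 1.53 reuse values from the text; P_pixel = 34.9 um
# and L = 350 mm are illustrative assumptions.
print(round(lens_pitch(34.9, 350, 120.5, 1.53), 3))  # 34.892
```

As expected from the geometry, the lens pitch comes out slightly smaller than the pixel-group pitch, which is what makes the views converge at the finite viewing distance L.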
• In a possible implementation, the target content includes: multiple target vertical contents, and the detection parameters include at least: the alignment angle deviation of the cylindrical lens. Referring to FIG. 9, it shows a third schematic flow chart of another screen detection method provided by the present disclosure, the method including:
• Optionally, step 303 may include: outputting the center distance between the two adjacent cylindrical lenses through the following formula, where: P_lens is the center distance between the two adjacent cylindrical lenses; P_pixel is the second pixel distance between the pixel positions corresponding to the viewpoint image on the two adjacent cylindrical lenses; T is the placement height of the cylindrical lens; L is the viewing distance; and n is the medium refractive index from the cylindrical lens to the pixel plane.
• Step 401: acquiring a browsing image of the target screen shot at the target viewpoint, where the target screen is a screen with a cylindrical lens on the light-output side.
• The alignment angle deviation of the cylindrical lens refers to the angular deviation between the position of the image content actually displayed through the cylindrical lens and the position of the design-expected image content.
  • the frame 10-1 is used to reflect the actual position of the image content
  • frame 10-2 is used to reflect the expected design position of the image content
  • the angle between the alignment edges between 10-1 and 10-2 is the alignment angle deviation.
• The viewing distance at which the user can clearly view the specific content in the browsing image can be considered to be the expected viewing distance that meets the design requirements. However, because the parameters of the cylindrical lens may deviate, the actual viewing distance at which the user can clearly view the specific content in the browsing image may also deviate from the expected viewing distance. Therefore, it is necessary to collect images of the target screen to determine the actual viewing distance between the shooting viewpoint and the screen at which the specific content can actually be viewed clearly.
• Step 402: if the vertical contents included in the browsing image are at least two target vertical contents, use the browsing image as a viewpoint image.
  • the vertical content refers to the image content arranged vertically in the browsing image
  • the target vertical content refers to the vertical content that needs to be included in the viewpoint image required to participate in this parameter detection.
• The target vertical content can be set based on the image content displayed on the target screen, arranged in columns. For example, if the image content is the four numbers "1", "2", "3" and "4" arranged in columns, the target vertical content can be set such that the numbers contained in each column of vertically arranged content are the same and all four numbers are included in the browsing image; in that case the viewing distance is an actual viewing distance at which the image content can be clearly viewed. Conversely, if the numbers in each column of the browsing image are different, it indicates that the cylindrical lens has an alignment angle deviation. Therefore, a browsing image containing at least two target vertical contents can be used as a viewpoint image participating in parameter detection.
  • Step 403 Obtain the quantity of the target vertical content, the viewpoint position corresponding to the viewpoint image, and the pixel point position on the pixel plane based on the viewpoint image.
  • the quantity of the target vertical content can be obtained from the image content displayed on the target screen.
• For the viewpoint position and pixel point position corresponding to the viewpoint image, please refer to the detailed description of step 203, which will not be repeated here.
  • Step 404 acquiring the first pixel distance between the corresponding pixel positions of two adjacent viewpoint images on the same cylindrical lens, and the content width of the vertical content of the target on the viewpoint images.
• For the first pixel distance, refer to the detailed description of step 204, which will not be repeated here.
  • the content width of the target vertical content refers to the display width of the target vertical content in the viewpoint image.
  • Step 405 based on the quantity of the target vertical content, the first pixel distance, and the content width, the alignment angle deviation of the cylindrical lens is acquired.
• The alignment angle deviation of the cylindrical lens is correlated with the ratio of the number of target vertical contents to the content width, and with the first pixel distance, so an algorithm formula can be set based on this correlation to obtain the alignment angle deviation of the cylindrical lens.
• Optionally, step 405 includes: outputting the alignment angle deviation of the cylindrical lens through the following formula, where: θ is the alignment angle deviation of the cylindrical lens; N is the number of target vertical contents; P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens; and W is the content width of the target vertical content on the viewpoint image.
• By substituting into the above formula, the alignment angle deviation θ of the cylindrical lens can be calculated to be 0.067°.
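A formula consistent with the variables of step 405 and with the worked result is θ = arctan(N·P_sub/W): a lens tilted by θ shifts the displayed content by one full viewpoint cycle (N·P_sub) over the content width W. The content width W = 29.8 mm used below is an assumed value chosen so that the sketch reproduces the quoted 0.067°; it is not given in the text:

```python
import math

def alignment_angle_deviation(num_contents, p_sub_um, content_width_mm):
    """Alignment angle deviation of the cylindrical lens (degrees).

    Sketch: theta = atan(N * P_sub / W), i.e. the content drifts by
    N sub-pixel pitches across the content width W when the lens
    array is tilted by theta.
    """
    w_um = content_width_mm * 1000.0
    return math.degrees(math.atan(num_contents * p_sub_um / w_um))

# N = 4 and P_sub = 8.725 um are from the text; W = 29.8 mm is assumed.
theta = alignment_angle_deviation(4, 8.725, 29.8)
print(round(theta, 3))  # 0.067
```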
• In a possible implementation, the detection parameters include at least: the alignment position deviation of the cylindrical lens. Referring to FIG. 12, it shows a fourth schematic flow chart of another screen detection method, the method including:
• Step 501: acquiring a browsing image of the target screen shot at the target viewpoint, where the target screen is a screen with a cylindrical lens on the light-output side.
• The alignment position deviation refers to the horizontal distance between the position of the image content actually displayed through the cylindrical lens and the position of the design-expected image content.
  • the frame 13-1 is used to reflect the actual position of the image content
  • the frame 13-2 is used to reflect the expected design position of the image content
• the horizontal distance between the alignment points of 13-1 and 13-2 is the alignment position deviation.
  • Step 502 in the case that the browsing image is obtained by shooting the target screen from a frontal viewing angle, and the central content in the central position in the browsing image is not the target content, use the browsing image as a viewpoint image .
• If the central content at the center position of the browsing image at the frontal viewing angle is the same as the design-expected image content, it can be determined that there is no deviation in the alignment position of the cylindrical lens of the target screen. If the central content in the browsing image differs from the design-expected image content, it can be determined that the alignment position of the cylindrical lens is deviated and parameter detection is required, and the browsing image is used as a viewpoint image participating in parameter detection.
  • Step 503 based on the image parameters of the viewpoint image, the alignment position deviation of the cylindrical lens is obtained.
• The alignment position deviation of the cylindrical lens is positively correlated with the difference value between the central content and the target content, and negatively correlated with the first pixel distance. Therefore, an algorithm formula can be set according to this correlation to calculate the alignment position deviation of the cylindrical lens.
• Optionally, step 503 may include: outputting the alignment position deviation of the cylindrical lens through the following formula, where: ΔP is the alignment position deviation of the cylindrical lens; M is the difference value between the central content and the target content; and P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.
• The difference value between the central content and the target content refers to an index value characterizing the degree of difference between the central content and the target content. It may be the difference between the content types, or the area difference between the central content and the differing content contained in the target content, which can be set according to actual needs and is not limited here.
• For the method of obtaining the first pixel distance, refer to the detailed description of step 204, which will not be repeated here.
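The first optional formula reduces to a simple product; the sketch below assumes ΔP = M·P_sub, with the difference value M read as a count of viewpoint steps between the observed central content and the target content (a hypothetical reading of the "difference value", since the exact formula is only in the drawings):

```python
def alignment_position_deviation(m_difference, p_sub_um):
    """Alignment position deviation of the cylindrical lens (um).

    Sketch: if the content seen at the frontal viewpoint differs
    from the target content by M viewpoint steps, the lens array is
    offset by M sub-pixel pitches: delta_P = M * P_sub.
    """
    return m_difference * p_sub_um

# Hypothetical example: the frontal view shows "3" where "1" was
# expected (M = 2), with P_sub = 8.725 um taken from the text.
print(alignment_position_deviation(2, 8.725))  # 17.45
```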
• Optionally, step 503 may include: outputting the alignment position deviation of the cylindrical lens through the following formula, where: ΔP is the alignment position deviation of the cylindrical lens; n is the medium refractive index from the cylindrical lens to the pixel plane; and θ1 and θ2 are, respectively, the first target viewing angle and the second target viewing angle, i.e., the two viewing angles adjacent to 0 degrees in the angular brightness distribution of the viewpoint image relative to the target viewpoint.
• In a possible implementation, the detection parameters include at least: the radius of curvature of the cylindrical lens. Referring to FIG. 14, it shows a fifth schematic flow chart of another screen detection method provided by the present disclosure, the method including:
• Step 601: acquiring a browsing image of the target screen shot at the target viewpoint, where the target screen is a screen with a cylindrical lens on the light-output side.
  • the radius of curvature of the cylindrical lens refers to the rotation rate of the tangent direction angle of the central point of the upper surface of the cylindrical lens to the arc length of the upper surface.
• Display of part of the image content on the target screen can be turned off so that only the remaining image content is displayed, making the display area of the turned-off image content black. In this way, by photographing the light-emitting side of the target screen at different viewpoints, browsing images reflecting the sharpness of the screen can be obtained.
• 15-1 shows, when the target screen has an alignment angle deviation, a browsing image with part of the image content turned off, where the black stripes are the display areas of the turned-off image content; 15-2 shows, when the target screen has no alignment angle deviation, a browsing image with part of the image content turned off, where the black stripes are likewise the display areas of the turned-off image content.
  • Step 602 if the sharpness of the specified content in the browse image is the largest, use the browse image as a viewpoint image.
• The sharpness of a browsing image is an index parameter characterizing the display brightness and contrast of the image, and may be obtained based on image parameters such as display brightness or contrast. Since the sharpness of the specified content differs between browsing images at different shooting viewpoints, the browsing image with the highest sharpness can be selected as the viewpoint image participating in parameter detection by comparing multiple collected browsing images. For example, when the specified content is the image content whose display is turned off, the browsing images can be screened according to the sharpness of the black stripes; the browsing images can of course also be screened by comparing the sharpness of the image content that is not turned off, although, relatively speaking, the sharpness of the black stripes is more distinct. This can be set according to actual needs and is not limited here.
  • Step 603 acquiring the viewing angle of the viewpoint image.
• By recording the shooting angle and shooting position of the viewpoint image, the viewing angle of the viewpoint image can be calculated from the recorded shooting angle and shooting position.
• Step 604: adjusting the radius of curvature of an optical simulation model of the cylindrical lens; when the viewing angle at which the sharpness of the optical simulation model is maximum equals the viewing angle of the viewpoint image, taking that radius of curvature as the radius of curvature of the cylindrical lens.
• The optical simulation model of the cylindrical lens can be constructed with optical simulation software. After adjusting the radius of curvature of the optical simulation model, the viewing angle at which the sharpness of the optical simulation model is maximum is observed; if it is the same as the viewing angle of the viewpoint image, this indicates that the radius of curvature of the cylindrical lens is the radius of curvature of the optical simulation model at this viewing angle.
• 16-1 is a viewpoint image at a non-collimated viewing angle (shooting viewing angle 0°), in which the light-dark contrast is relatively small; 16-2 is a viewpoint image at the collimated viewing angle (shooting viewing angle 21°), in which the light-dark contrast is relatively large. It is therefore determined that the sharpness of the viewpoint image is maximal at the 21° shooting viewing angle, and the sharpest viewing angle of 21° is then substituted into the processing described in step 604 above.
• Optionally, the sharpness may be obtained through the following steps: acquiring the sharpness of the viewpoint image according to the positive correlation between the contrast and the sharpness of the viewpoint image.
• When the sharpness of a viewpoint image is maximal, its contrast is also maximal, so the browsing image with the highest contrast can be selected from the browsing images as the viewpoint image, thereby efficiently obtaining the sharpness of the image.
• Sharpness acquisition methods in the related art can also be used to calculate the sharpness of the viewpoint image, such as the MTF (Modulation Transfer Function), which obtains the sharpness of the viewpoint image based on the gray-scale values of the image. Of course, the specific sharpness calculation method can be set according to actual requirements; as long as it can represent the sharpness of the viewpoint image, it can be applied to the embodiments of the present disclosure, and there is no limitation here.
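As one concrete and deliberately simple realization of the contrast-based selection described above, the sketch below scores each browsing image by Michelson contrast and keeps the highest-scoring one; a real implementation might instead use MTF or another sharpness metric, and the pixel rows here are toy data:

```python
def michelson_contrast(pixels):
    """One simple stand-in for a contrast-based sharpness score:
    Michelson contrast (Lmax - Lmin) / (Lmax + Lmin)."""
    lo, hi = min(pixels), max(pixels)
    if lo + hi == 0:
        return 0.0
    return (hi - lo) / (hi + lo)

def pick_viewpoint_image(browse_images):
    """Select the browsing image with the highest contrast as the
    viewpoint image (per the positive contrast/sharpness correlation)."""
    return max(browse_images, key=michelson_contrast)

# Toy gray-level rows standing in for captured browsing images:
low_contrast  = [110, 120, 130, 120, 110]   # e.g. the 0 deg viewpoint
high_contrast = [10, 240, 15, 235, 12]      # e.g. the 21 deg viewpoint
best = pick_viewpoint_image([low_contrast, high_contrast])
print(best is high_contrast)  # True
```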
  • the radius of curvature of the cylindrical lens can also be output through the following steps 605 to 606:
  • Step 605 acquiring a viewing angle luminance distribution curve of the cylindrical lens.
  • the upper surface of the cylindrical lens may be scanned by an image acquisition device provided with a laser lens to obtain a viewing angle luminance distribution curve of the cylindrical lens.
  • Step 606 by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the similarity between the optical simulation model and the viewing angle brightness distribution curve of the cylindrical lens meets the similarity requirement, the optical The radius of curvature of the simulation model is used as the radius of curvature of the cylindrical lens.
• The system scans the optical simulation model under each radius of curvature to obtain the corresponding viewing-angle brightness distribution curve, and then calculates the similarity between each such curve and the measured viewing-angle brightness distribution curve of the cylindrical lens.
  • the similarity requirement may be that the similarity is greater than the similarity threshold, or the maximum similarity is taken, which can be set according to actual requirements, and is not limited here.
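The sweep-and-compare structure of steps 605 to 606 can be sketched as follows. The optical simulation software is external, so it is replaced here by a toy stand-in function, and the similarity measure (a normalized inverse RMS difference) is one possible choice rather than the one the disclosure uses:

```python
import math

def curve_similarity(curve_a, curve_b):
    """Similarity between two viewing-angle brightness curves, here a
    normalized inverse RMS difference (1.0 = identical)."""
    rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(curve_a, curve_b))
                    / len(curve_a))
    peak = max(max(curve_a), max(curve_b), 1e-12)
    return 1.0 - rms / peak

def fit_radius(measured_curve, simulate, radii_um, threshold=0.95):
    """Sweep candidate radii, simulate a curve for each, and return the
    radius whose curve best matches the measured one, provided it meets
    the similarity requirement (else None)."""
    best_r, best_s = None, -1.0
    for r in radii_um:
        s = curve_similarity(measured_curve, simulate(r))
        if s > best_s:
            best_r, best_s = r, s
    return best_r if best_s >= threshold else None

# Toy stand-in for the optical simulation software: a cosine-power lobe
# whose width depends on the radius (purely illustrative).
def toy_simulate(radius_um, angles=range(-30, 31, 5)):
    return [math.cos(math.radians(a)) ** (radius_um / 10.0) for a in angles]

measured = toy_simulate(150.0)  # pretend the true radius is 150 um
print(fit_radius(measured, toy_simulate, [100.0, 150.0, 200.0]))  # 150.0
```

The `threshold` parameter corresponds to the "similarity requirement": either a fixed similarity threshold or, by setting it low, a pure maximum-similarity criterion.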
• The embodiment of the present disclosure selects the viewpoint images containing the target content from the browsing images captured of the screen at specific viewpoints, and detects the parameters of the cylindrical lens on the screen according to the image parameters of the viewpoint images, so that the parameters can be identified without relying on dedicated measurement templates. The accuracy of screen detection is improved.
  • FIG. 19 schematically shows a schematic structural view of a screen detection device 70 provided by the present disclosure, and the device includes:
  • the receiving module 701 is configured to receive a cylindrical lens detection instruction for the target screen, where the cylindrical lens detection instruction includes at least: a target viewpoint;
  • the detection module 702 is configured to, in response to the detection instruction, acquire a browsing image taken for the target screen under the target viewpoint, and the target screen is a screen with a cylindrical lens on the light-emitting side;
• if the browsing image contains the target content, use the browsing image as a viewpoint image;
  • the output module 703 is configured to: output the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image.
• the detection module 702 is further configured to:
  • the shooting position parameters of the image acquisition device are adjusted so that the shooting position of the image acquisition device is at the target position, and the shooting position parameters include: at least one of shooting angle, shooting height and shooting distance.
  • the detection module 702 is further configured to:
  • when the browsing image contains the target content, the browsing image is used as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.
  • the image parameters at least include: the placement height of the cylindrical lens;
  • the output module 703 is also configured to: acquire, based on the viewpoint images, the viewpoint position corresponding to each viewpoint image and the pixel positions on the pixel plane; acquire the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens; and acquire the placement height of the cylindrical lens on the target screen based on the viewpoint positions, the number of viewpoints, the first pixel distance, and the refractive index of the medium from the cylindrical lens to the pixel plane.
  • the output module 703 is further configured to:
  • output the placement height of the cylindrical lens on the target screen by the following formula: T = n·P_sub·(N−1)·z/(x_N − x_1), where:
  • T is the placement height
  • N is the number of viewpoints, where N ≥ 2 and N is a positive integer
  • n is the refractive index of the medium from the cylindrical lens to the pixel plane
  • P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens
  • x_N is the x-axis coordinate value of the Nth viewpoint image
  • x_1 is the x-axis coordinate value of the first viewpoint image
  • z is the z-axis coordinate value of each viewpoint image.
  • the target content includes: target horizontal content;
  • the detection module 702 is also configured to:
  • when the horizontal contents contained in the browsing image are all the target horizontal content, the browsing image is used as a viewpoint image.
  • the detection parameters at least include: the center distance between two adjacent cylindrical lenses;
  • the output module 703 is also configured to:
  • based on the placement height of the cylindrical lens and the refractive index of the medium from the cylindrical lens to the pixel plane, the center distance between the two adjacent cylindrical lenses is obtained.
  • the output module 703 is further configured to:
  • the center distance between the two adjacent cylindrical lenses is output by the following formula:
  • P_lens is the center distance between the two adjacent cylindrical lenses
  • T is the placement height of the cylindrical lens
  • n is the refractive index of the medium from the cylindrical lens to the pixel plane
  • α_1 and α_2 are the two viewing angles adjacent to 0 degrees in the angular distribution of the brightness of the viewpoint image relative to the target viewpoint, taken as the first target viewing angle and the second target viewing angle respectively.
  • the output module 703 is further configured to:
  • output the center distance between the two adjacent cylindrical lenses by the following formula: P_lens = L·P_pixel/(L + T/n)
  • P_lens is the center distance between the two adjacent cylindrical lenses
  • L is the viewing distance of the viewpoint image
  • P_pixel is the second pixel distance between the pixel positions corresponding to the viewpoint image on the two adjacent cylindrical lenses
  • T is the placement height of the cylindrical lens
  • n is the refractive index of the medium from the cylindrical lens to the pixel plane.
  • the target content includes: multiple target vertical content;
  • the detection module 702 is also configured to:
  • when the vertical contents contained in the browsing image are at least two target vertical contents, the browsing image is used as a viewpoint image.
  • the detection parameters at least include: an alignment angle deviation of the cylindrical lens
  • the output module 703 is also configured to:
  • based on the number of the target vertical contents, the first pixel distance and the content width, the alignment angle deviation of the cylindrical lens is acquired.
  • the output module 703 is also configured to:
  • the alignment angle deviation of the cylindrical lens is output by the following formula: Δθ = arctan(N·P_sub/W)
  • Δθ is the alignment angle deviation of the cylindrical lens
  • N is the number of the target vertical contents
  • P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens
  • W is the content width of the target vertical content on the viewpoint image.
  • the detection module 702 is also configured to:
  • when the browsing image is obtained by shooting the target screen at a front viewing angle, and the central content at the center of the browsing image is not the target content, the browsing image is used as a viewpoint image.
  • the detection parameters at least include: alignment position deviation of the cylindrical lens;
  • the output module 703 is also configured to:
  • based on the image parameters of the viewpoint image, the alignment position deviation of the cylindrical lens is acquired.
  • the output module 703 is further configured to:
  • the alignment position deviation of the cylindrical lens is output by the following formula: ΔP = M·P_sub
  • ΔP is the alignment position deviation of the cylindrical lens
  • M is the difference value between the central content and the target content
  • P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.
  • the output module 703 is further configured to:
  • the alignment position deviation of the cylindrical lens is output by the following formula:
  • ⁇ P is the alignment position deviation of the cylindrical lens
  • n is the medium refractive index from the cylindrical lens to the pixel surface
  • ⁇ 1 and ⁇ 2 are the brightness of the viewpoint image relative to In the angular distribution of the target viewpoints, two viewing angles adjacent to 0 degrees are respectively used as the first target viewing angle and the second target viewing angle.
  • the detection module 702 is further configured to:
  • when the sharpness of specified content in the browsing image is maximal, the browsing image is used as a viewpoint image.
  • the detection parameters include at least: a radius of curvature of the cylindrical lens;
  • the output module 703 is also configured to: acquire the viewing angle of the viewpoint image; and, by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the viewing angle at which the sharpness of the optical simulation model is maximal equals the viewing angle of the viewpoint image, use that radius of curvature as the radius of curvature of the cylindrical lens.
  • the detection module 702 is further configured to: acquire the sharpness of the viewpoint image according to the correlation between the contrast and the sharpness of the viewpoint image.
  • the output module 703 is further configured to: acquire the viewing-angle brightness distribution curve of the cylindrical lens; and, by adjusting the radius of curvature of the optical simulation model of the cylindrical lens, when the similarity between the viewing-angle brightness distribution curves of the optical simulation model and of the cylindrical lens meets the similarity requirement, use the radius of curvature of the optical simulation model as the radius of curvature of the cylindrical lens.
  • the embodiments of the present disclosure select viewpoint images containing the target content from the browsing images captured of the screen at specific viewpoints, and detect the detection parameters of the cylindrical lens on the screen according to the image parameters of those viewpoint images, so that the various detection parameters of the cylindrical lens on the screen can be obtained efficiently, conveniently and accurately, improving the detection efficiency of the detection parameters of the cylindrical lens on the screen.
  • the device embodiments described above are only illustrative; the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those skilled in the art can understand and implement without creative effort.
  • the various component embodiments of the present disclosure may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof.
  • a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all functions of some or all components in the computing processing device according to the embodiments of the present disclosure.
  • the present disclosure can also be implemented as an apparatus or apparatus program (eg, computer program and computer program product) for performing a part or all of the methods described herein.
  • Such a program realizing the present disclosure may be stored on a computer-readable medium, or may have the form of one or more signals.
  • Such a signal may be downloaded from an Internet site, or provided on a carrier signal, or provided in any other form.
  • FIG. 20 illustrates a computing processing device that may implement methods according to the present disclosure.
  • the computing processing device conventionally includes a processor 810 and a computer program product or computer readable medium in the form of memory 820 .
  • Memory 820 may be electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the memory 820 has a storage space 830 for program code 831 for performing any method steps in the methods described above.
  • the storage space 830 for program codes may include respective program codes 831 for respectively implementing various steps in the above methods. These program codes can be read from or written into one or more computer program products.
  • These computer program products comprise program code carriers such as hard disks, compact disks (CDs), memory cards or floppy disks.
  • Such a computer program product is typically a portable or fixed storage unit as described with reference to FIG. 21 .
  • the storage unit may have storage segments, storage spaces, and the like arranged similarly to the memory 820 in the computing processing device of FIG. 20 .
  • the program code can, for example, be compressed in a suitable form.
  • the storage unit includes computer readable code 831', i.e. code readable by a processor such as 810, which, when executed by a computing processing device, causes the computing processing device to perform each step of the methods described above.
  • references herein to "one embodiment," "an embodiment," or "one or more embodiments" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Additionally, note that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word “comprising” does not exclude the presence of elements or steps not listed in a claim.
  • the word “a” or “an” preceding an element does not exclude the presence of a plurality of such elements.
  • the disclosure can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means can be embodied by one and the same item of hardware.
  • the use of the words first, second, and third, etc. does not indicate any order. These words can be interpreted as names.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)

Abstract

A screen detection method, apparatus, device, computer program and readable medium, belonging to the field of screen technology. The method comprises: receiving a cylindrical lens detection instruction for a target screen, the cylindrical lens detection instruction at least comprising a target viewpoint (101); in response to the detection instruction, acquiring a browsing image captured of the target screen at the target viewpoint, the target screen being a screen provided with a cylindrical lens on its light-emitting side (102); when the browsing image contains target content, taking the browsing image as a viewpoint image (103); and outputting detection parameters of the cylindrical lens on the target screen based on image parameters of the viewpoint image (104).

Description

Screen detection method, apparatus, device, computer program and readable medium

Technical Field

The present disclosure belongs to the field of screen technology, and in particular relates to a screen detection method, apparatus, device, computer program and readable medium.

Background

Super multi-viewpoint display can achieve continuous motion parallax and thus a more realistic 3D display effect. The main current approach to super multi-viewpoint display is to display the images of multiple viewpoints on a screen in a specific arrangement, with a cylindrical lens array laminated onto the screen at a specific angle, so that the images of different viewpoints are projected in different directions after passing through the lens array; the user's left and right eyes therefore see images of different viewpoints, which produces parallax and creates a 3D display effect.
Overview

The present disclosure provides a screen detection method, apparatus, device, computer program and readable medium.

Some embodiments of the present disclosure provide a screen detection method, the method comprising:

receiving a cylindrical lens detection instruction for a target screen, the cylindrical lens detection instruction at least comprising: a target viewpoint;

in response to the detection instruction, acquiring a browsing image captured of the target screen at the target viewpoint, the target screen being a screen provided with a cylindrical lens on its light-emitting side;

when the browsing image contains target content, taking the browsing image as a viewpoint image;

outputting detection parameters of the cylindrical lens on the target screen based on image parameters of the viewpoint image.
Optionally, the acquiring a browsing image captured of the target screen at the target viewpoint, the target screen being a screen provided with a cylindrical lens on its light-emitting side, comprises:

adjusting the viewpoint of an image acquisition device to the target viewpoint so as to shoot the light-emitting side of the target screen to obtain the browsing image.

Optionally, the adjusting the viewpoint of the image acquisition device to the target viewpoint so as to shoot the light-emitting side of the target screen to obtain the browsing image comprises:

adjusting the shooting position of the image acquisition device relative to the target screen to a target position, and shooting the light-emitting side of the target screen to obtain the browsing image.

Optionally, the adjusting the shooting position of the image acquisition device relative to the target screen to a target position comprises:

adjusting shooting position parameters of the image acquisition device so that the shooting position of the image acquisition device is at the target position, the shooting position parameters comprising at least one of shooting angle, shooting height and shooting distance.

Optionally, there are at least two pieces of the target content;

the taking the browsing image as a viewpoint image when the browsing image contains target content comprises:

when the browsing image contains the target content, taking the browsing image as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.
Optionally, the image parameters at least comprise: the placement height of the cylindrical lens;

the outputting detection parameters of the cylindrical lens on the target screen based on image parameters of the viewpoint image comprises:

acquiring, based on the viewpoint images, the viewpoint position corresponding to each viewpoint image and the pixel positions on the pixel plane;

acquiring the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens;

acquiring the placement height of the cylindrical lens on the target screen based on the viewpoint positions, the number of viewpoints, the first pixel distance, and the refractive index of the medium from the cylindrical lens to the pixel plane.

Optionally, the acquiring the placement height of the cylindrical lens on the target screen based on the viewpoint positions, the number of viewpoints, the first pixel distance, and the refractive index of the medium from the cylindrical lens to the pixel plane comprises:

establishing a spatial rectangular coordinate system (x, y, z) with the plane of the pixel plane of the target screen as the xy plane, acquiring the spatial coordinate value of each viewpoint position in the coordinate system, and outputting the placement height of the cylindrical lens on the target screen by the following formula:

T = n·P_sub·(N−1)·z / (x_N − x_1)

where T is the placement height, N is the number of viewpoints, n is the refractive index of the medium from the cylindrical lens to the pixel plane, P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, x_N is the x-axis coordinate value of the Nth viewpoint image, x_1 is the x-axis coordinate value of the first viewpoint image, and z is the z-axis coordinate value of each viewpoint image, where N ≥ 2 and N is a positive integer.
Optionally, the target content comprises: target horizontal content;

the taking the browsing image as a viewpoint image when the browsing image contains target content comprises:

when the horizontal contents contained in the browsing image are all the target horizontal content, taking the browsing image as a viewpoint image.

Optionally, the detection parameters at least comprise: the center distance between two adjacent cylindrical lenses;

the outputting detection parameters of the cylindrical lens on the target screen based on image parameters of the viewpoint image comprises:

acquiring the center distance between the two adjacent cylindrical lenses based on the placement height of the cylindrical lens and the refractive index of the medium from the cylindrical lens to the pixel plane.

Optionally, the acquiring the center distance between the two adjacent cylindrical lenses based on the placement height of the cylindrical lens and the refractive index of the medium from the cylindrical lens to the pixel plane comprises:

outputting the center distance between the two adjacent cylindrical lenses by the following formula:

Figure PCTCN2021096964-appb-000002

where P_lens is the center distance between the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, n is the refractive index of the medium from the cylindrical lens to the pixel plane, and α_1 and α_2 are the two viewing angles adjacent to 0 degrees in the angular distribution of the brightness of the viewpoint image relative to the target viewpoint, taken as the first target viewing angle and the second target viewing angle respectively.

Optionally, the acquiring the center distance between the two adjacent cylindrical lenses based on the placement height of the cylindrical lens and the refractive index of the medium from the cylindrical lens to the pixel plane comprises:

outputting the center distance between the two adjacent cylindrical lenses by the following formula:

P_lens = L·P_pixel / (L + T/n)

where P_lens is the center distance between the two adjacent cylindrical lenses, L is the viewing distance of the viewpoint image, P_pixel is the second pixel distance between the pixel positions corresponding to the viewpoint image on the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, and n is the refractive index of the medium from the cylindrical lens to the pixel plane.
Optionally, the target content comprises: multiple kinds of target vertical content;

the taking the browsing image as a viewpoint image when the browsing image contains target content comprises:

when the vertical contents contained in the browsing image are at least two target vertical contents, taking the browsing image as a viewpoint image.

Optionally, the detection parameters at least comprise: the alignment angle deviation of the cylindrical lens;

the outputting detection parameters of the cylindrical lens on the target screen based on image parameters of the viewpoint image comprises:

acquiring, based on the viewpoint image, the number of target vertical contents, the viewpoint position corresponding to the viewpoint image, and the pixel positions on the pixel plane;

acquiring the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, and the content width of the target vertical content on the viewpoint image;

acquiring the alignment angle deviation of the cylindrical lens based on the number of target vertical contents, the first pixel distance and the content width.

Optionally, the acquiring the alignment angle deviation of the cylindrical lens based on the number of target vertical contents, the first pixel distance and the content width comprises:

outputting the alignment angle deviation of the cylindrical lens by the following formula:

Δθ = arctan(N·P_sub / W)

where Δθ is the alignment angle deviation of the cylindrical lens, N is the number of target vertical contents, P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, and W is the content width of the target vertical content on the viewpoint image.
Optionally, the taking the browsing image as a viewpoint image when the browsing image contains target content comprises:

when the browsing image is obtained by shooting the target screen at a front viewing angle, and the central content at the center of the browsing image is not the target content, taking the browsing image as a viewpoint image.

Optionally, the detection parameters at least comprise: the alignment position deviation of the cylindrical lens;

the outputting detection parameters of the cylindrical lens on the target screen based on image parameters of the viewpoint image comprises:

acquiring the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image.

Optionally, the acquiring the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image comprises:

outputting the alignment position deviation of the cylindrical lens by the following formula:

ΔP = M·P_sub

where ΔP is the alignment position deviation of the cylindrical lens, M is the difference value between the central content and the target content, and P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.

Optionally, the acquiring the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image comprises:

outputting the alignment position deviation of the cylindrical lens by the following formula:

Figure PCTCN2021096964-appb-000005

where ΔP is the alignment position deviation of the cylindrical lens, n is the refractive index of the medium from the cylindrical lens to the pixel plane, and α_1 and α_2 are the two viewing angles adjacent to 0 degrees in the angular distribution of the brightness of the viewpoint image relative to the target viewpoint, taken as the first target viewing angle and the second target viewing angle respectively.
Optionally, the taking the browsing image as a viewpoint image when the browsing image contains target content comprises:

when the sharpness of specified content in the browsing image is maximal, taking the browsing image as a viewpoint image.

Optionally, the detection parameters at least comprise: the radius of curvature of the cylindrical lens;

the outputting detection parameters of the cylindrical lens on the target screen based on image parameters of the viewpoint image comprises:

acquiring the viewing angle of the viewpoint image;

adjusting the radius of curvature of an optical simulation model of the cylindrical lens, and when the viewing angle at which the sharpness of the optical simulation model is maximal equals the viewing angle of the viewpoint image, taking that radius of curvature as the radius of curvature of the cylindrical lens.

Optionally, the sharpness can be obtained by the following step:

acquiring the sharpness of the viewpoint image according to the correlation between the contrast and the sharpness of the viewpoint image.

Optionally, the outputting detection parameters of the cylindrical lens on the target screen based on image parameters of the viewpoint image comprises:

acquiring the viewing-angle brightness distribution curve of the cylindrical lens;

adjusting the radius of curvature of the optical simulation model of the cylindrical lens, and when the similarity between the viewing-angle brightness distribution curves of the optical simulation model and of the cylindrical lens meets the similarity requirement, taking the radius of curvature of the optical simulation model as the radius of curvature of the cylindrical lens.
Some embodiments of the present disclosure provide a screen detection apparatus, the apparatus comprising:

a receiving module configured to receive a cylindrical lens detection instruction for a target screen, the cylindrical lens detection instruction at least comprising: a target viewpoint;

a detection module configured to, in response to the detection instruction, acquire a browsing image captured of the target screen at the target viewpoint, the target screen being a screen provided with a cylindrical lens on its light-emitting side; and,

when the browsing image contains target content, take the browsing image as a viewpoint image;

an output module configured to: output detection parameters of the cylindrical lens on the target screen based on image parameters of the viewpoint image.

Optionally, the detection module is further configured to: adjust the viewpoint of an image acquisition device to the target viewpoint so as to shoot the light-emitting side of the target screen to obtain the browsing image.

Optionally, the detection module is further configured to: adjust the shooting position of the image acquisition device relative to the target screen to a target position, and shoot the light-emitting side of the target screen to obtain the browsing image.

Optionally, the detection module is further configured to: adjust shooting position parameters of the image acquisition device so that the shooting position of the image acquisition device is at the target position, the shooting position parameters comprising at least one of shooting angle, shooting height and shooting distance.

Optionally, there are at least two pieces of the target content;

optionally, the detection module is further configured to: when the browsing image contains the target content, take the browsing image as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.

Optionally, the image parameters at least comprise: the placement height of the cylindrical lens;

the output module is further configured to:

acquire, based on the viewpoint images, the viewpoint position corresponding to each viewpoint image and the pixel positions on the pixel plane;

acquire the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens;

acquire the placement height of the cylindrical lens on the target screen based on the viewpoint positions, the number of viewpoints, the first pixel distance, and the refractive index of the medium from the cylindrical lens to the pixel plane.

Optionally, the output module is further configured to:

establish a spatial rectangular coordinate system (x, y, z) with the plane of the pixel plane of the target screen as the xy plane, acquire the spatial coordinate value of each viewpoint position in the coordinate system, and output the placement height of the cylindrical lens on the target screen by the following formula:

T = n·P_sub·(N−1)·z / (x_N − x_1)

where T is the placement height, N is the number of viewpoints, n is the refractive index of the medium from the cylindrical lens to the pixel plane, P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, x_N is the x-axis coordinate value of the Nth viewpoint image, x_1 is the x-axis coordinate value of the first viewpoint image, and z is the z-axis coordinate value of each viewpoint image, where N ≥ 2 and N is a positive integer.
Optionally, the target content comprises: target horizontal content;

the detection module is further configured to:

when the horizontal contents contained in the browsing image are all the target horizontal content, take the browsing image as a viewpoint image.

Optionally, the detection parameters at least comprise: the center distance between two adjacent cylindrical lenses;

the output module is further configured to:

acquire the center distance between the two adjacent cylindrical lenses based on the placement height of the cylindrical lens and the refractive index of the medium from the cylindrical lens to the pixel plane.

Optionally, the output module is further configured to:

output the center distance between the two adjacent cylindrical lenses by the following formula:

Figure PCTCN2021096964-appb-000007

where P_lens is the center distance between the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, n is the refractive index of the medium from the cylindrical lens to the pixel plane, and α_1 and α_2 are the two viewing angles adjacent to 0 degrees in the angular distribution of the brightness of the viewpoint image relative to the target viewpoint, taken as the first target viewing angle and the second target viewing angle respectively.

Optionally, the output module is further configured to:

output the center distance between the two adjacent cylindrical lenses by the following formula:

P_lens = L·P_pixel / (L + T/n)

where P_lens is the center distance between the two adjacent cylindrical lenses, L is the viewing distance of the viewpoint image, P_pixel is the second pixel distance between the pixel positions corresponding to the viewpoint image on the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, and n is the refractive index of the medium from the cylindrical lens to the pixel plane.
Optionally, the target content comprises: multiple kinds of target vertical content;

the detection module is further configured to:

when the vertical contents contained in the browsing image are at least two target vertical contents, take the browsing image as a viewpoint image.

Optionally, the detection parameters at least comprise: the alignment angle deviation of the cylindrical lens;

the output module is further configured to:

acquire, based on the viewpoint image, the number of target vertical contents, the viewpoint position corresponding to the viewpoint image, and the pixel positions on the pixel plane;

acquire the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, and the content width of the target vertical content on the viewpoint image;

acquire the alignment angle deviation of the cylindrical lens based on the number of target vertical contents, the first pixel distance and the content width.

The output module is further configured to:

output the alignment angle deviation of the cylindrical lens by the following formula:

Δθ = arctan(N·P_sub / W)

where Δθ is the alignment angle deviation of the cylindrical lens, N is the number of target vertical contents, P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, and W is the content width of the target vertical content on the viewpoint image.
Optionally, the detection module is further configured to:

when the browsing image is obtained by shooting the target screen at a front viewing angle and the central content at the center of the browsing image is not the target content, take the browsing image as a viewpoint image.

Optionally, the detection parameters at least comprise: the alignment position deviation of the cylindrical lens;

the output module is further configured to:

acquire the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image.

Optionally, the output module is further configured to:

output the alignment position deviation of the cylindrical lens by the following formula:

ΔP = M·P_sub

where ΔP is the alignment position deviation of the cylindrical lens, M is the difference value between the central content and the target content, and P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.

Optionally, the output module is further configured to:

output the alignment position deviation of the cylindrical lens by the following formula:

Figure PCTCN2021096964-appb-000010

where ΔP is the alignment position deviation of the cylindrical lens, n is the refractive index of the medium from the cylindrical lens to the pixel plane, and α_1 and α_2 are the two viewing angles adjacent to 0 degrees in the angular distribution of the brightness of the viewpoint image relative to the target viewpoint, taken as the first target viewing angle and the second target viewing angle respectively.
Optionally, the detection module is further configured to:

when the sharpness of specified content in the browsing image is maximal, take the browsing image as a viewpoint image.

Optionally, the detection parameters at least comprise: the radius of curvature of the cylindrical lens;

the output module is further configured to:

acquire the viewing angle of the viewpoint image;

adjust the radius of curvature of an optical simulation model of the cylindrical lens, and when the viewing angle at which the sharpness of the optical simulation model is maximal equals the viewing angle of the viewpoint image, take that radius of curvature as the radius of curvature of the cylindrical lens.

Optionally, the detection module is further configured to:

acquire the sharpness of the viewpoint image according to the correlation between the contrast and the sharpness of the viewpoint image.

Optionally, the output module is further configured to:

acquire the viewing-angle brightness distribution curve of the cylindrical lens;

adjust the radius of curvature of the optical simulation model of the cylindrical lens, and when the similarity between the viewing-angle brightness distribution curves of the optical simulation model and of the cylindrical lens meets the similarity requirement, take the radius of curvature of the optical simulation model as the radius of curvature of the cylindrical lens.

Some embodiments of the present disclosure provide a computing processing device, comprising:

a memory in which computer readable code is stored;

one or more processors, wherein, when the computer readable code is executed by the one or more processors, the computing processing device performs the screen detection method described above.

Some embodiments of the present disclosure provide a computer program, comprising computer readable code which, when run on a computing processing device, causes the computing processing device to perform the screen detection method described above.

Some embodiments of the present disclosure provide a computer readable medium in which the computer program of the screen detection method described above is stored.
The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure may be understood more clearly and implemented according to the contents of the specification, and in order to make the above and other purposes, features and advantages of the present disclosure more evident and comprehensible, specific embodiments of the present disclosure are set forth below.

Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present disclosure or the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate some embodiments of the present disclosure, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 schematically shows a flowchart of a screen detection method provided by some embodiments of the present disclosure.

FIG. 2 schematically shows a principle diagram of a screen detection method provided by some embodiments of the present disclosure.

FIG. 3 schematically shows the first flowchart of another screen detection method provided by some embodiments of the present disclosure.

FIG. 4 schematically shows the first principle diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 5 schematically shows the first effect diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 6 schematically shows the second flowchart of another screen detection method provided by some embodiments of the present disclosure.

FIG. 7 schematically shows the second principle diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 8 schematically shows the second effect diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 9 schematically shows the third flowchart of another screen detection method provided by some embodiments of the present disclosure.

FIG. 10 schematically shows the third principle diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 11 schematically shows the third effect diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 12 schematically shows the fourth flowchart of another screen detection method provided by some embodiments of the present disclosure.

FIG. 13 schematically shows the fourth principle diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 14 schematically shows the fifth flowchart of another screen detection method provided by some embodiments of the present disclosure.

FIG. 15 schematically shows the fifth principle diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 16 schematically shows the fourth effect diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 17 schematically shows the fifth effect diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 18 schematically shows the sixth effect diagram of another screen detection method provided by some embodiments of the present disclosure.

FIG. 19 schematically shows the structure of a screen detection apparatus provided by some embodiments of the present disclosure.

FIG. 20 schematically shows a block diagram of a computing processing device for executing methods according to some embodiments of the present disclosure.

FIG. 21 schematically shows a storage unit for holding or carrying program code implementing methods according to some embodiments of the present disclosure.
Detailed Description

To make the purposes, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure.

In the related art, the various parameters of the cylindrical lens correspond to the image arrangement. When the actual lens parameters deviate from the design values due to process or other reasons, the viewing effect is directly affected, and the process conditions need to be corrected or the image arrangement modified according to the actual parameters to correct the display effect. However, detection conditions sometimes make it difficult to measure the actual parameters of the cylindrical lens. The present disclosure therefore proposes displaying specific images on the screen and analyzing the displayed images to detect the detection parameters of the cylindrical lens on the screen.
FIG. 1 schematically shows a flowchart of a screen detection method provided by the present disclosure. The method may be executed by any electronic device; for example, it may be applied in an application program with a screen detection function and executed by the server or a terminal device of that application program. The method includes:

Step 101: receiving a cylindrical lens detection instruction for a target screen, the cylindrical lens detection instruction at least comprising: a target viewpoint;

Step 102: in response to the detection instruction, acquiring a browsing image captured of the target screen at the target viewpoint, the target screen being a screen provided with a cylindrical lens on its light-emitting side.

In some embodiments of the present disclosure, the target screen is a display device with cylindrical lenses arranged on its light-emitting side, and the cylindrical lenses may be arranged in a specific array. Since light from images of different viewpoints on the target screen is projected in different directions after encountering the cylindrical lens, the arrangement of the images displayed by the target screen can be set so that the user's two eyes see different images at different viewpoints; correspondingly, the browsing images captured by an image acquisition device at different shooting viewpoints may also differ. The target viewpoint is the shooting viewpoint required for shooting the target screen; it may be set by the user or set automatically by the system according to detection requirements, and can be set according to actual needs, which is not limited here.

Step 103: when the browsing image contains target content, taking the browsing image as a viewpoint image.

In some disclosed embodiments, the target content is the display content that the viewpoint images participating in this detection need to contain. It can be understood that, since the content of the browsing images of the target screen differs between viewpoints, different image content in a browsing image indicates a different shooting viewpoint, so setting target content makes it possible to determine whether a browsing image was captured at the viewpoint required for this detection. Specifically, a correspondence between the image content displayed by the target screen and the shooting viewpoints can be set, and the viewpoint images containing the target content can then be selected according to the image content contained in the captured browsing images: a browsing image containing the target content is taken as a viewpoint image, and a browsing image not containing the target content is simply filtered out.

Illustratively, digits arranged in a full-screen array may be displayed on the target screen, so that the browsing images at different shooting viewpoints show different digits. If the detection parameters of the cylindrical lens of the target screen have no deviation, that is, the detection parameters are the standard parameters, the image content in a browsing image of the target screen is a single digit; if the detection parameters deviate, a browsing image of the target screen will contain different digits. Whether the detection parameters of the cylindrical lens on the target screen deviate can thus be determined according to whether different digits appear in the browsing images at different viewpoints. Referring to FIG. 2, 2-1, 2-2 and 2-3 are the browsing images of the first, second and third viewpoints when the detection parameters of the cylindrical lens of the target screen have no deviation, while 2-4, 2-5 and 2-6 are the browsing images of the first, second and third viewpoints when the detection parameters deviate. It can be seen that, without deviation, the image content of each of the browsing images 2-1, 2-2 and 2-3 at the three shooting viewpoints includes only "1", "2" and "3" respectively; with deviation, only part of the image content of the first-viewpoint browsing image 2-4 is "1", and other image content such as "2" and "3" is also present, which clearly differs from the deviation-free case in which the image content of the first-viewpoint browsing image is only "1". It can therefore be determined that the detection parameters of the cylindrical lens corresponding to the browsing image 2-4 deviate, and likewise for the browsing images 2-5 and 2-6.
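The digit check described above can be sketched as a small routine. Given, for each captured browsing image, the set of digits recognized in it (the digit recognition itself is assumed to be done elsewhere, e.g. by OCR), an image qualifies as a viewpoint image only when it shows exactly the one expected digit. This is a minimal illustration under those assumptions, not the patent's implementation, and the function names are hypothetical:

```python
def is_viewpoint_image(digits_seen, expected_digit):
    """A browsing image qualifies as a viewpoint image only if it
    contains the expected digit and no other digit."""
    return digits_seen == {expected_digit}

def lens_has_deviation(views):
    """views: list of (digits_seen, expected_digit) pairs, one per
    shooting viewpoint. Any mixed or wrong digit content indicates a
    deviation of the lens detection parameters."""
    return any(not is_viewpoint_image(seen, exp) for seen, exp in views)

# Deviation-free case: each viewpoint shows only its own digit.
ok = [({"1"}, "1"), ({"2"}, "2"), ({"3"}, "3")]
# Deviating case: the first viewpoint also leaks digits "2" and "3".
bad = [({"1", "2", "3"}, "1"), ({"2", "3"}, "2"), ({"3"}, "3")]

print(lens_has_deviation(ok))   # False
print(lens_has_deviation(bad))  # True
```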
Step 104: outputting the detection parameters of the cylindrical lens on the target screen based on the image parameters of the viewpoint image.

In some disclosed embodiments, the detection parameters are the actual index parameters of the cylindrical lens to be detected. Because of process and other factors during manufacturing, the detection parameters of the cylindrical lens may deviate from the expected parameters, and these deviations cause the browsing images of the different viewpoints actually displayed by the target screen to deviate from the browsing images under the standard parameters. For example, under the standard parameters, the browsing image of the target screen at a specific viewpoint should contain the image content 1, but because the detection parameters of the cylindrical lens deviate, the browsing image actually captured at that viewpoint may contain the image content 2. Since the image content contained in the browsing images at different shooting viewpoints is affected by the detection parameters of the cylindrical lens, the detection parameters can be obtained by analyzing image parameters such as the viewpoint position, image brightness and image contrast of the viewpoint images containing the target content.

The embodiments of the present disclosure select viewpoint images containing target content from the browsing images captured of the screen at specific viewpoints, and detect the detection parameters of the cylindrical lens on the screen according to the image parameters of the viewpoint images, so that the various detection parameters of the cylindrical lens on the screen can be obtained efficiently, conveniently and accurately, improving the detection efficiency of those parameters.
Optionally, step 102 may include: adjusting the viewpoint of an image acquisition device to the target viewpoint so as to shoot the light-emitting side of the target screen to obtain the browsing image.

In the embodiments of the present disclosure, the image acquisition device may be an electronic device with an image acquisition function, and may also have functions such as data processing, data storage and data transmission. The system may connect the image acquisition device through a transmission mechanism so as to adjust the shooting viewpoint of the image acquisition device by controlling the transmission mechanism; the image acquisition device may of course also be adjusted manually to shoot the target screen, which can be set according to actual needs and is not limited here. The target viewpoint is the shooting viewpoint required for shooting the light-emitting side of the target screen; it may be a fixed viewpoint specified in advance, a randomly selected shooting viewpoint, or adapted to different detection parameters, for example shooting at the front viewpoint or at a 30° viewpoint, which can be set according to actual needs and is not limited here.

In some embodiments of the present disclosure, the shooting viewpoint of the image acquisition device can be adjusted to the target viewpoint required for this shot to acquire the browsing image of the target screen. By adjusting the shooting viewpoint of the image acquisition device to the target viewpoint and shooting the light-emitting side of the target screen, the browsing image required for this detection can be acquired quickly.

Optionally, step 102 may include:

adjusting the shooting position of the image acquisition device relative to the target screen to a target position, and shooting the light-emitting side of the target screen to obtain the browsing image.

In some embodiments of the present disclosure, the system may connect the image acquisition device through a transmission mechanism so as to adjust the shooting position of the image acquisition device by controlling the transmission mechanism, realizing convenient adjustment of the image acquisition device.

Optionally, step 102 may include: adjusting shooting position parameters of the image acquisition device so that the shooting position of the image acquisition device is at the target position, the shooting position parameters comprising at least one of shooting angle, shooting height and shooting distance.

In some embodiments of the present disclosure, the target angle is the shooting angle at which the browsing image needs to be captured for this detection, the target position is the shooting position relative to the light-emitting side of the target screen at which the browsing image needs to be acquired, and the target height is the height of the image acquisition device relative to the ground. Specifically, by setting position parameters including at least one of shooting angle, shooting height and shooting distance, the image acquisition device can be adjusted via the transmission mechanism to the target position to shoot the light-emitting side of the target screen, realizing convenient adjustment of the image acquisition device.
Optionally, in some embodiments provided by the present disclosure, there are at least two pieces of the target content, and the image parameters at least include: the placement height of the cylindrical lens. Referring to FIG. 3, which shows the first flowchart of another screen detection method provided by the present disclosure, the method includes:

Step 201: acquiring browsing images captured of the target screen at the target viewpoints, the target screen being a screen provided with a cylindrical lens on its light-emitting side.

In the embodiments of the present disclosure, the placement height of the cylindrical lens is the actual distance between the upper surface of the cylindrical lens and the pixel plane of the target screen. Since the content of the browsing images displayed by the target screen differs between shooting viewpoints, the target viewpoints can be set as multiple shooting viewpoints on the same straight line, the straight line being parallel to the pixel plane of the target screen; shooting the light-emitting side of the target screen then yields multiple browsing images, which may contain the different contents displayed by the target screen at different viewpoints. If the target screen displays N pieces of image content, multiple shooting viewpoints can be set on a straight line parallel to the pixel plane of the target screen to capture N browsing images respectively containing the N pieces of image content.

Illustratively, if the image content displayed by the target screen contains the digits "1", "2", "3" and "4", and the image content differs between shooting viewpoints, multiple shooting viewpoints can be set on a straight line parallel to the pixel plane of the target screen to shoot the light-emitting side of the target screen, thereby acquiring multiple browsing images respectively containing "1", "2", "3" and "4".

Step 202: when a browsing image contains target content, taking the browsing image as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images are on the same straight line, and the straight line is parallel to the pixel plane of the target screen.

In the embodiments of the present disclosure, to ensure that the viewpoint images used for parameter detection clearly reflect the image content displayed by the target screen at the different shooting viewpoints, and to prevent image content from different shooting viewpoints from mixing and affecting the subsequent parameter detection, the viewpoint images participating in detection can be selected from the browsing images according to whether a browsing image contains only one kind of target content. For example, when the target content is the four digits "1", "2", "3" and "4", four browsing images containing only "1", only "2", only "3" and only "4" respectively can be selected from the browsing images as viewpoint images. This is of course only an illustrative description, and the specific way of setting the target content can be set according to actual needs, which is not limited here.

Step 203: acquiring, based on the viewpoint images, the viewpoint position corresponding to each viewpoint image and the pixel positions on the pixel plane.

In the embodiments of the present disclosure, since the screen light emitted by the light-emitting components of the pixels on the pixel plane of the target screen must be refracted by the cylindrical lens to reach the viewpoint position of each shooting viewpoint, the viewpoint images correspond one-to-one to the viewpoint positions and the pixel positions on the pixel plane. The viewpoint images and the target screen can be observed and analyzed to acquire the viewpoint position of each viewpoint image and the corresponding pixel position of that viewpoint image on the pixel plane of the target screen.

Step 204: acquiring the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.

In the embodiments of the present disclosure, the light path of the screen light refracted by the same cylindrical lens can be observed to determine which two adjacent pixels on the pixel plane of the target screen emitted the screen light reaching the viewpoint positions of two adjacent viewpoint images, and the actual distance between those adjacent pixels is taken as the first pixel distance. Since the distance between adjacent pixels on the pixel plane is the same everywhere, the first pixel distance of any one pair of adjacent pixels reflects the pixel distance of the other pairs.

Step 205: acquiring the placement height of the cylindrical lens on the target screen based on the viewpoint positions, the number of viewpoints, the first pixel distance, and the refractive index of the medium from the cylindrical lens to the pixel plane.

In the embodiments of the present disclosure, experiments show that the placement height of the cylindrical lens is positively correlated with the first pixel distance, positively correlated with the refractive index of the medium from the cylindrical lens to the pixel plane, and positively correlated with the ratio of the distance from the plane of the shooting viewpoints to the pixel plane to the distance between adjacent shooting viewpoints; an algorithm can therefore be set up to calculate the placement height of the cylindrical lens on the target screen from the viewpoint positions, the number of viewpoints, the first pixel distance and the medium refractive index.

Optionally, step 205 includes:

establishing a spatial rectangular coordinate system (x, y, z) with the plane of the pixel plane of the target screen as the xy plane, acquiring the spatial coordinate value of each viewpoint position in the coordinate system, and outputting the placement height of the cylindrical lens on the target screen by the following formula:

T = n·P_sub·(N−1)·z / (x_N − x_1)

where T is the placement height, N is the number of viewpoints, n is the refractive index of the medium from the cylindrical lens to the pixel plane, P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, x_N is the x-axis coordinate value of the Nth viewpoint image, x_1 is the x-axis coordinate value of the first viewpoint image, and z is the z-axis coordinate value of each viewpoint image, where N ≥ 2 and N is a positive integer.

In the embodiments of the present disclosure, to normalize the viewpoint positions of the viewpoint images, the plane of the pixel plane of the target screen can be taken as the xy plane; specifically, the straight line of the target viewpoints can be taken as the x axis, the perpendicular of the x axis within the plane of the pixel plane as the y axis, and the line perpendicular to the plane of the pixel plane as the z axis, establishing the spatial coordinate system, and the spatial coordinate value of each target viewpoint in this coordinate system is taken as its viewpoint position and substituted into the formula for calculation. Since the air gap between the lower surface of the cylindrical lens and the pixel plane also has a certain refraction effect on the screen light, the refractive index n of the medium from the cylindrical lens to the pixel plane is introduced into the formula to correct the calculation, minimizing the influence of the refraction of the air gap on the calculated placement height of the cylindrical lens and ensuring the accuracy of the detected placement height.

Illustratively, referring to FIG. 4, the plane of the pixel plane is taken as the xy plane, the straight line of the target viewpoints as the x axis, the perpendicular of the x axis within the xy plane as the y axis, and the perpendicular to the xy plane as the z axis, establishing the spatial rectangular coordinate system. Taking 4 target viewpoints as an example, the light path diagram shows that the spatial coordinate values of the 4 target viewpoints are (x1, y, z), (x2, y, z), (x3, y, z) and (x4, y, z); shooting the light-emitting side of the target screen at these target viewpoints in turn yields the 4 viewpoint images shown in FIG. 5, whose image contents are only "1", "2", "3" and "4" respectively. It can be understood that with N viewpoints, a viewpoint image fully or partly containing the digit "N" can be captured at (xN, y, z). Referring to FIG. 5, the spatial coordinate values of the 4 viewpoint images are (−57, 0, 350), (−19, 0, 350), (19, 0, 350) and (57, 0, 350). If the first pixel distance P_sub is 8.725 μm and the medium refractive index n is 1.53, substituting the spatial coordinate values of the viewpoint images, the first pixel distance and the medium refractive index into the formula gives the placement height T of the cylindrical lens as 120.5 μm.
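The placement-height computation can be sketched in a few lines. The formula used below is reconstructed from the stated proportionalities and variable definitions (the equation image itself is not reproduced in this text), so treat it as an approximation; with the worked numbers above it yields about 123 μm, in the neighborhood of the reported 120.5 μm, the gap presumably coming from rounding of the reported inputs.

```python
def placement_height(n, p_sub_um, xs_mm, z_mm):
    """T = n * P_sub * (N - 1) * z / (x_N - x_1).

    n        -- medium refractive index from lens to pixel plane
    p_sub_um -- first pixel distance P_sub, in micrometers
    xs_mm    -- x coordinates of the N viewpoints, in millimeters
    z_mm     -- z coordinate of the viewpoints, in millimeters
    Returns the placement height T in micrometers.
    """
    N = len(xs_mm)
    spacing_mm = (xs_mm[-1] - xs_mm[0]) / (N - 1)  # adjacent-viewpoint spacing
    return n * p_sub_um * z_mm / spacing_mm

# Worked numbers from the example: 4 viewpoints at x = -57, -19, 19, 57 mm,
# z = 350 mm, P_sub = 8.725 um, n = 1.53.
T = placement_height(1.53, 8.725, [-57, -19, 19, 57], 350)
print(round(T, 1))  # 123.0 (the patent reports 120.5 um)
```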
Optionally, in some embodiments provided by the present disclosure, the target content includes: target horizontal content, and the detection parameters at least include: the center distance between two adjacent cylindrical lenses. Referring to FIG. 6, which shows the second flowchart of another screen detection method provided by the present disclosure, the method includes:

Step 301: acquiring browsing images captured of the target screen at the target viewpoint, the target screen being a screen provided with a cylindrical lens on its light-emitting side.

In the embodiments of the present disclosure, the center distance is the actual distance between two adjacent cylindrical lenses in the cylindrical lens array of the target screen. By adjusting the shooting distance between the image acquisition device and the target screen, browsing images presenting different viewing effects can be acquired. It is worth noting that, although the content of the browsing images of the image content displayed by the target screen differs between viewpoints, for the producer of the image content, the viewpoint from which a user can clearly view the specific content in the browsing image can be regarded as the expected viewing distance that meets the design expectation. Since the detection parameters of the cylindrical lens may deviate, however, the actual viewing distance at which a user clearly views the specific content in the browsing image may also deviate from the expected viewing distance, so images of the target screen need to be acquired to determine the actual viewing distance between the shooting viewpoint and the screen at which the specific content can actually be viewed clearly.

Step 302: when the horizontal contents contained in a browsing image are all target horizontal content, taking the browsing image as a viewpoint image.

In the embodiments of the present disclosure, horizontal content is the image content arranged horizontally in the browsing image, and target horizontal content is the horizontal content that the viewpoint images required for this parameter detection need to contain; it can be set according to the image content displayed by the target screen. For example, if the image content is the four digits "1", "2", "3" and "4" arranged in rows, the target horizontal content can be set so that the digits contained in each horizontally arranged row are the same and the browsing image contains all four digits. The viewing distance of a browsing image that contains the four digits with identical digits in each row is then the actual viewing distance at which the image content can be viewed clearly, and that browsing image is taken as the viewpoint image participating in parameter detection.

Step 303: acquiring the center distance between the two adjacent cylindrical lenses based on the placement height of the cylindrical lens and the refractive index of the medium from the cylindrical lens to the pixel plane.

In the embodiments of the present disclosure, experiments show that the center distance between two adjacent cylindrical lenses is positively correlated with the product of the second pixel distance and the viewing distance, negatively correlated with the sum of the viewing distance and the placement height of the cylindrical lens, and proportional to the refractive index of the medium from the cylindrical lens to the pixel plane; an algorithm formula can be established from the viewing distance, the second pixel distance, the placement height of the cylindrical lens and the medium refractive index to calculate the center distance between two adjacent cylindrical lenses.

Optionally, step 303 includes: outputting the center distance between the two adjacent cylindrical lenses by the following formula:

P_lens = L·P_pixel / (L + T/n)

where P_lens is the center distance between the two adjacent cylindrical lenses, L is the viewing distance of the viewpoint image, P_pixel is the second pixel distance between the pixel positions corresponding to the viewpoint image on the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, and n is the refractive index of the medium from the cylindrical lens to the pixel plane.

In the embodiments of the present disclosure, since the light by which a viewer views the viewpoint image exits after being refracted by the cylindrical lens, the viewpoint position of the viewpoint image can be regarded as the position of the user's eyes, and the perpendicular distance between the viewpoint position and the screen on which the cylindrical lens sits can be taken as the viewing distance of that viewpoint image. The light path of the screen light refracted by two adjacent cylindrical lenses can be observed to determine which two adjacent pixels on the pixel plane of the target screen emitted the light reaching the viewpoint position of the same viewpoint image via the two lenses, and the actual distance between those two adjacent pixels is taken as the second pixel distance. If the center distances between the cylindrical lenses in the array of the target screen are all the same, this second pixel distance can characterize the center distance between all adjacent cylindrical lenses; this is of course the ideal case, and there is usually some error in the spacing of different pairs of lenses, so the second pixel distance corresponding to each pair of adjacent lenses can be detected independently.

Referring to FIG. 7, the geometric relations in the figure show that only the viewing distance L needs to be acquired; the second pixel distance P_pixel, the viewing distance, the placement height T of the cylindrical lens and the medium refractive index n can then be substituted into the above formula to calculate the center distance P_lens between two adjacent cylindrical lenses.

Illustratively, taking P_pixel = 54.9 μm, n = 1.53 and T = 120.5 μm, and substituting the viewing distance L = 650 mm at which the viewpoint image with the effect shown in FIG. 8 was captured into the above formula, the center distance P_lens between two adjacent cylindrical lenses is calculated to be 54.8933 μm.
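This relation is easy to check numerically; the sketch below (an illustration, not the patent's implementation) reproduces the worked example, with T/n acting as the optically reduced lens height:

```python
def lens_pitch(L_mm, p_pixel_um, T_um, n):
    """P_lens = L * P_pixel / (L + T/n), lens quantities in micrometers.

    L_mm       -- viewing distance L in millimeters
    p_pixel_um -- second pixel distance P_pixel in micrometers
    T_um       -- lens placement height T in micrometers
    n          -- medium refractive index from lens to pixel plane
    """
    L_um = L_mm * 1000.0  # work in a single unit (micrometers)
    return L_um * p_pixel_um / (L_um + T_um / n)

# Worked example: P_pixel = 54.9 um, T = 120.5 um, n = 1.53, L = 650 mm.
print(round(lens_pitch(650, 54.9, 120.5, 1.53), 4))  # 54.8933
```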
Optionally, in some embodiments provided by the present disclosure, the target content includes: multiple kinds of target vertical content, and the detection parameters at least include: the alignment angle deviation of the cylindrical lens. Referring to FIG. 9, which shows the third flowchart of another screen detection method provided by the present disclosure, the method includes:

Optionally, step 303 may include:

outputting the center distance between the two adjacent cylindrical lenses by the following formula:

Figure PCTCN2021096964-appb-000013

where P_lens is the center distance between the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, n is the refractive index of the medium from the cylindrical lens to the pixel plane, and α_1 and α_2 are the two viewing angles adjacent to 0 degrees in the angular distribution of the brightness of the viewpoint image relative to the target viewpoint, taken as the first target viewing angle and the second target viewing angle respectively.

In the embodiments of the present disclosure, referring to the above formula, when the first and second target viewing angles, the refractive index of the medium from the cylindrical lens to the pixel plane and the viewing distance are known, the center distance between two adjacent cylindrical lenses and the placement height of the cylindrical lens can also be derived in combination with the following formula (1):

P_lens / P_pixel = L / (L + T/n)    (1)

where P_lens is the center distance between the two adjacent cylindrical lenses, P_pixel is the second pixel distance between the pixel positions corresponding to the viewpoint image on the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, L is the viewing distance, and n is the refractive index of the medium from the cylindrical lens to the pixel plane.
Step 401: acquiring browsing images captured of the target screen at the target viewpoint, the target screen being a screen provided with a cylindrical lens on its light-emitting side.

In the embodiments of the present disclosure, the alignment angle deviation of the cylindrical lens is the angular deviation between the position of the image content actually displayed through the cylindrical lens and the position of the image content expected by design. Illustratively, referring to FIG. 10, the frame 10-1 reflects the actual position of the image content, the frame 10-2 reflects the expected design position of the image content, and the angle between the alignment edges of 10-1 and 10-2 is the alignment angle deviation. By adjusting the shooting distance between the image acquisition device and the target screen, browsing images presenting different viewing effects can be acquired. It is worth noting that, although the content of the browsing images of the image content displayed by the target screen differs between viewpoints, for the producer of the image content, the viewpoint from which a user can clearly view the specific content in the browsing image can be regarded as the expected viewing distance that meets the design expectation; since the detection parameters of the cylindrical lens may deviate, however, the actual viewing distance at which a user clearly views the specific content may also deviate from the expected viewing distance, so images of the target screen need to be acquired to determine the actual viewing distance between the shooting viewpoint and the screen at which the specific content can actually be viewed clearly.

Step 402: when the vertical contents contained in a browsing image are at least two target vertical contents, taking the browsing image as a viewpoint image.

In the embodiments of the present disclosure, vertical content is the image content arranged vertically in the browsing image, and target vertical content is the vertical content that the viewpoint images required for this parameter detection need to contain; it can be set column by column according to the image content displayed by the target screen. For example, if the image content is the four digits "1", "2", "3" and "4" arranged in columns, the target vertical content can be set so that the digits contained in each vertically arranged column are the same and the browsing image contains all four digits; the viewing distance of a browsing image containing the four digits with identical digits in each column is then the actual viewing distance at which the image content can be viewed clearly. Conversely, if the digits in the columns of the browsing image differ, the cylindrical lens has an alignment angle deviation, so a browsing image containing at least two target vertical contents can be taken as the viewpoint image participating in parameter detection.

Step 403: acquiring, based on the viewpoint image, the number of target vertical contents, the viewpoint position corresponding to the viewpoint image, and the pixel positions on the pixel plane.

In the embodiments of the present disclosure, the number of target vertical contents can be obtained from the image content displayed by the target screen; for the viewpoint position and pixel positions corresponding to the viewpoint image, refer to the detailed description of step 203, which is not repeated here.

Step 404: acquiring the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, and the content width of the target vertical content on the viewpoint image.

In the embodiments of the present disclosure, for the first pixel distance, refer to the detailed description of step 204, which is not repeated here. The content width of the target vertical content is the display width of the target vertical content in the viewpoint image.

Step 405: acquiring the alignment angle deviation of the cylindrical lens based on the number of target vertical contents, the first pixel distance and the content width.

In the embodiments of the present disclosure, experiments show that the alignment angle deviation of the cylindrical lens is positively correlated with the number of target vertical contents and the first pixel distance, and negatively correlated with the content width; an algorithm formula can therefore be set according to this relation to acquire the alignment angle deviation of the cylindrical lens.

Optionally, step 405 includes: outputting the alignment angle deviation of the cylindrical lens by the following formula:

Δθ = arctan(N·P_sub / W)

where Δθ is the alignment angle deviation of the cylindrical lens, N is the number of target vertical contents, P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens, and W is the content width of the target vertical content on the viewpoint image.

In the embodiments of the present disclosure, taking the first pixel distance P_sub as 8.725 μm and measuring the width W of the captured target vertical content in FIG. 11, with W = 30 mm, substituting into the above formula gives the alignment angle deviation Δθ of the cylindrical lens as 0.067°.
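The angle computation is a one-liner; with FIG. 11's numbers (and assuming N = 4 vertical contents, matching the four-digit example, since the example does not restate N) it reproduces the reported 0.067°:

```python
import math

def alignment_angle_deviation(n_contents, p_sub_um, w_mm):
    """Delta-theta = arctan(N * P_sub / W), returned in degrees.

    n_contents -- N, number of target vertical contents
    p_sub_um   -- first pixel distance P_sub, in micrometers
    w_mm       -- content width W, in millimeters
    """
    return math.degrees(math.atan(n_contents * p_sub_um * 1e-3 / w_mm))

# N = 4 (assumed from the four-digit example), P_sub = 8.725 um, W = 30 mm.
print(round(alignment_angle_deviation(4, 8.725, 30), 3))  # 0.067
```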
Optionally, in some embodiments provided by the present disclosure, the detection parameters at least include: the alignment position deviation of the cylindrical lens. Referring to FIG. 12, which shows the fourth flowchart of another screen detection method provided by the present disclosure, the method includes:

Step 501: acquiring a browsing image captured of the target screen at the target viewpoint, the target screen being a screen provided with a cylindrical lens on its light-emitting side.

In the embodiments of the present disclosure, the alignment position deviation is the horizontal distance between the position of the image content actually displayed through the cylindrical lens and the position of the image content expected by design. Illustratively, referring to FIG. 13, the frame 13-1 reflects the actual position of the image content, the frame 13-2 reflects the expected design position of the image content, and the horizontal distance between the alignment points of 13-1 and 13-2 is the alignment position deviation.

Step 502: when the browsing image is obtained by shooting the target screen at a front viewing angle and the central content at the center of the browsing image is not the target content, taking the browsing image as a viewpoint image.

In the embodiments of the present disclosure, if the central content at the center of the front-view browsing image is the same as the image content expected by design, it can be concluded that the alignment position of the cylindrical lens of the target screen has no deviation; if the image content in the front-view browsing image differs from the expected design, it can be concluded that the alignment position of the cylindrical lens of the target screen deviates and parameter detection is needed, and the browsing image is taken as the viewpoint image participating in parameter detection.

Step 503: acquiring the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image.

In the embodiments of the present disclosure, experiments show that the alignment position deviation of the cylindrical lens is positively correlated with the difference value between the central content and the target content and with the first pixel distance; an algorithm can therefore be set according to this relation to calculate the alignment position deviation of the cylindrical lens.

Optionally, step 503 may include: outputting the alignment position deviation of the cylindrical lens by the following formula:

ΔP = M·P_sub

where ΔP is the alignment position deviation of the cylindrical lens, M is the difference value between the central content and the target content, and P_sub is the first pixel distance between the pixel positions corresponding to two adjacent viewpoint images on the same cylindrical lens.

In the embodiments of the present disclosure, the difference value between the central content and the target content is an index value characterizing the degree of difference between them; it may be the difference in the kinds of content contained in the central content and the target content, or the area difference of the differing content, and can be set according to actual needs, which is not limited here. For the method of acquiring the first pixel distance, refer to the detailed description of step 204, which is not repeated here.

Illustratively, taking the first pixel distance P_sub as 8.725 μm, with the front-view viewpoint image lying between the overlapping views 3 and 4 (if it leans mainly toward 4, it can be taken as view 3.7) while the theoretical view is the overlap of views 2 and 3 (i.e., view 2.5), the difference value is M = 1.2; substituting into the above formula, the alignment position deviation of the cylindrical lens is calculated as ΔP = 10.5 μm.
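The position-deviation arithmetic from the example can be sketched as follows (the view indices 3.7 and 2.5 are taken from the example above; this is an illustration, not the patent's implementation):

```python
def alignment_position_deviation(observed_view, expected_view, p_sub_um):
    """Delta-P = M * P_sub, where M is the view-index difference between
    the observed central content and the design-expected central content.
    Returns the deviation in micrometers."""
    m = observed_view - expected_view
    return m * p_sub_um

# Observed front-view content ~ view 3.7, expected ~ view 2.5, P_sub = 8.725 um.
dp = alignment_position_deviation(3.7, 2.5, 8.725)
print(round(dp, 1))  # 10.5
```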
Optionally, step 503 may include: outputting the alignment position deviation of the cylindrical lens by the following formula:

Figure PCTCN2021096964-appb-000016

where ΔP is the alignment position deviation of the cylindrical lens, n is the refractive index of the medium from the cylindrical lens to the pixel plane, and α_1 and α_2 are the two viewing angles adjacent to 0 degrees in the angular distribution of the brightness of the viewpoint image relative to the target viewpoint, taken as the first target viewing angle and the second target viewing angle respectively.
Optionally, in some embodiments provided by the present disclosure, the detection parameters at least include: the radius of curvature of the cylindrical lens. Referring to FIG. 14, which shows the fifth flowchart of another screen detection method provided by the present disclosure, the method includes:

Step 601: acquiring browsing images captured of the target screen at the target viewpoint, the target screen being a screen provided with a cylindrical lens on its light-emitting side.

In the embodiments of the present disclosure, the radius of curvature of the cylindrical lens is the rate of rotation of the tangent direction angle at the center point of the upper surface of the cylindrical lens with respect to the arc length of the upper surface. Part of the image content displayed on the target screen can be switched off so that only part of the image content is displayed, the display area of the switched-off image content then appearing black; shooting the light-emitting side of the target screen at different viewpoints then yields browsing images that can reflect the sharpness of the screen.

Illustratively, referring to FIG. 15, 15-1 is a browsing image with part of the image content switched off when the target screen has an alignment angle deviation, the black stripes being the display area of the switched-off image content; 15-2 is a browsing image with part of the image content switched off when the target screen has no alignment angle deviation, the black stripes likewise being the display area of the switched-off image content.

Step 602: when the sharpness of specified content in a browsing image is maximal, taking the browsing image as a viewpoint image.

In the embodiments of the present disclosure, the sharpness of a browsing image is an index parameter characterizing the display brightness and contrast of the image, and can be obtained based on image parameters such as display brightness or contrast. Since the sharpness of the specified content differs between the browsing images at different shooting viewpoints, the acquired browsing images can be compared and the browsing image with the maximal sharpness selected as the viewpoint image participating in parameter detection. For example, when the specified content is the switched-off part of the image content, the browsing images can be screened according to the sharpness of the black stripes in them; the sharpness of the image content that is still displayed can of course also be compared, but comparatively the sharpness of the black stripes is more pronounced. This can be set according to actual needs and is not limited here.

Step 603: acquiring the viewing angle of the viewpoint image.

In the embodiments of the present disclosure, the shooting angle and shooting position of the viewpoint image can be recorded so that the viewing angle of the viewpoint image can be calculated from them.

Step 604: adjusting the radius of curvature of an optical simulation model of the cylindrical lens, and when the viewing angle at which the sharpness of the optical simulation model is maximal equals the viewing angle of the viewpoint image, taking that radius of curvature as the radius of curvature of the cylindrical lens.

In the embodiments of the present disclosure, the radius of curvature of the cylindrical lens is related to the viewing angle corresponding to the maximal sharpness; that is, for the same radius of curvature, the viewing angle of maximal sharpness of the cylindrical lens is the same. An optical simulation model of the cylindrical lens can therefore be built with optical simulation software; after the radius of curvature of the model is adjusted, the viewing angle at which the sharpness of the model is maximal is observed, and if it equals the viewing angle of the viewpoint image, the radius of curvature of the cylindrical lens is the radius of curvature of the optical simulation model at that viewing angle.

Illustratively, referring to FIG. 16, 16-1 is the viewpoint image at a non-collimated viewing angle (shooting angle 0°), where the light-dark contrast is relatively small, and 16-2 is the viewpoint image at the collimated viewing angle (shooting angle 21°), where the light-dark contrast is relatively large; the viewpoint image at the shooting angle of 21° is therefore determined to have the maximal sharpness, and the angle of maximal sharpness, 21°, is then processed according to the procedure described in step 604 above. The software simulation gives the result shown in FIG. 17: the pixel light output is brightest and the sharpness maximal at a radius of curvature r of 62.5 μm, so the radius of curvature r of the cylindrical lens is 62.5 μm.
Optionally, the sharpness can be obtained by the following step: acquiring the sharpness of the viewpoint image according to the correlation between the contrast and the sharpness of the viewpoint image.

In some embodiments of the present disclosure, when the sharpness is maximal, the viewpoint image is the clearest and most collimated, so its contrast is also maximal at that point; the browsing image with the maximal contrast can therefore be selected from the browsing images as the viewpoint image, so as to obtain the sharpness of the image efficiently.

Of course, other sharpness acquisition methods in the related art can also be used to calculate the sharpness of the viewpoint image, for example MTF (Modulation Transfer Function), which obtains the sharpness of the viewpoint image based on image modulation values. The specific sharpness calculation method can be set according to actual needs; any method that can characterize the sharpness of the viewpoint image is applicable to the embodiments of the present disclosure, and it is not limited here.
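As a minimal illustration of the contrast-based selection (not the patent's implementation), Michelson contrast, one common contrast measure, can be used to pick the browsing image whose black stripes stand out most:

```python
def michelson_contrast(pixels):
    """(Imax - Imin) / (Imax + Imin) over the image's intensity values."""
    hi, lo = max(pixels), min(pixels)
    return (hi - lo) / (hi + lo) if hi + lo else 0.0

def pick_viewpoint_image(images):
    """Return the index of the browsing image with maximal contrast,
    treated here as a proxy for maximal sharpness."""
    return max(range(len(images)), key=lambda i: michelson_contrast(images[i]))

# Two captures of the half-dark test pattern: a washed-out off-axis view
# and a collimated view with crisp black stripes.
off_axis = [90, 110, 95, 105]      # low contrast
collimated = [10, 250, 12, 248]    # high contrast
print(pick_viewpoint_image([off_axis, collimated]))  # 1
```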
Optionally, referring to FIG. 18, the radius of curvature of the cylindrical lens can also be output through the following steps 605 to 606:

Step 605: acquiring the viewing-angle brightness distribution curve of the cylindrical lens.

In the embodiments of the present disclosure, the upper surface of the cylindrical lens can be scanned by an image acquisition device provided with a laser lens to acquire the viewing-angle brightness distribution curve of the cylindrical lens.

Step 606: adjusting the radius of curvature of the optical simulation model of the cylindrical lens, and when the similarity between the viewing-angle brightness distribution curves of the optical simulation model and of the cylindrical lens meets the similarity requirement, taking the radius of curvature of the optical simulation model as the radius of curvature of the cylindrical lens.

In the embodiments of the present disclosure, the system scans the viewing-angle brightness distribution curves under each radius of curvature in the optical simulation model to obtain the curve corresponding to each radius of curvature, and then calculates the similarity between the curve corresponding to each radius of curvature and the actual viewing-angle brightness distribution curve of the cylindrical lens; when the similarity meets the similarity requirement, that radius of curvature can be confirmed as the radius of curvature of the cylindrical lens. The similarity requirement may be that the similarity is greater than a similarity threshold, or that the maximum similarity is taken; it can be set according to actual needs and is not limited here.
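One plausible way to implement the curve comparison (the patent does not fix a similarity measure, so the Pearson correlation used here is an assumption, as are the sample curves) is to correlate the measured curve with the simulated curve for each candidate radius and keep the best match:

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equal-length curves."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def best_radius(measured, simulated_by_radius):
    """simulated_by_radius: {radius_um: brightness curve}. Returns the
    radius whose simulated curve is most similar to the measured one."""
    return max(simulated_by_radius,
               key=lambda r: pearson(measured, simulated_by_radius[r]))

measured = [0.1, 0.4, 1.0, 0.4, 0.1]        # measured brightness vs angle
sims = {60.0: [0.3, 0.5, 0.6, 0.5, 0.3],    # too flat
        62.5: [0.1, 0.5, 1.0, 0.5, 0.1],    # close match
        65.0: [0.0, 0.1, 1.0, 0.1, 0.0]}    # too peaked
print(best_radius(measured, sims))  # 62.5
```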
Based on predicted image features extracted from an image of the user's body, the embodiments of the present disclosure screen out the user's target body-type category from among the body-type categories, and can accurately identify the user's body-type category without relying on a body-shape template, improving the accuracy of screen detection.
图19示意性地示出了本公开提供的一种屏幕检测装置70的结构示意图,所述装置包括:
接收模块701,被配置为接收对于目标屏幕的柱透镜检测指令,所述柱透镜检测指令至少包括:目标视点;
检测模块702,被配置为响应于所述检测指令,获取在所述目标视点下对于目标屏幕拍摄的浏览图像,所述目标屏幕是出光侧设置有柱透镜的屏幕;
在所述浏览图像中包含目标内容的情况下,将所述浏览图像作为视点图像;
输出模块703,被配置为:基于所述视点图像的图像参数,输出所述目标屏幕上柱透镜的检测参数。
可选地,所述检测模块702,还被配置为:
将图像采集设备的视点调整至目标视点,以对目标屏幕的出光侧进行拍 摄,得到浏览图像。
可选地,所述检测模块702,还被配置为:
将图像采集设备相对于目标屏幕的拍摄位置调整至目标位置,对目标屏幕的出光侧进行拍摄,得到浏览图像。
可选地,所述检测模块702,还被配置为:
将图像采集设备的拍摄位置参数进行调整,以使得所述图像采集设备的拍摄位置处于目标位置,所述拍摄位置参数包括:拍摄角度、拍摄高度和拍摄距离中的至少一种。
可选地,所述目标内容存在至少两个;
可选地,所述检测模块702,还被配置为:
在所述浏览图像中包含目标内容的情况下,将所述浏览图像作为视点图像,其中,至少两幅所述视点图像的视点处于同一直线,且所述直线与所述目标屏幕的像素面平行。
Optionally, the image parameters at least comprise: the placement height of the cylindrical lens;
the output module 703 is further configured to:
obtain, based on the viewpoint image, the viewpoint position corresponding to the viewpoint image and the pixel-point positions on the pixel plane;
obtain the first pixel-point distance between the pixel-point positions corresponding, on the same cylindrical lens, to two adjacent viewpoint images;
obtain the placement height of the cylindrical lenses on the target screen based on the viewpoint positions, the number of viewpoints, the first pixel-point distance, and the refractive index of the medium from the cylindrical lens to the pixel plane.
Optionally, the output module 703 is further configured to:
establish a spatial rectangular coordinate system (x, y, z) with the plane of the pixel plane of the target screen as the xy plane, obtain the spatial coordinate values of each viewpoint position in the spatial rectangular coordinate system, and output the placement height of the cylindrical lenses on the target screen through the following formula:
[Equation image PCTCN2021096964-appb-000017]
where T is the placement height, N is the number of viewpoints, n is the refractive index of the medium from the cylindrical lens to the pixel plane, P_sub is the first pixel-point distance between the pixel-point positions corresponding, on the same cylindrical lens, to two adjacent viewpoint images, x_N is the x-axis spatial coordinate value of the N-th viewpoint image, x_1 is the x-axis coordinate value of the 1st viewpoint image, and z is the z-axis coordinate value of each viewpoint image, where N ≥ 2 and N is a positive integer.
Optionally, the target content comprises: target lateral content;
the detection module 702 is further configured to:
where all the lateral content contained in the browse image is target lateral content, take the browse image as a viewpoint image.
Optionally, the detection parameters at least comprise: the center distance of two adjacent cylindrical lenses;
the output module 703 is further configured to:
obtain the center distance of the two adjacent cylindrical lenses based on the placement height of the cylindrical lens and the refractive index of the medium from the cylindrical lens to the pixel plane.
Optionally, the output module 703 is further configured to:
output the center distance of the two adjacent cylindrical lenses through the following formula:
[Equation image PCTCN2021096964-appb-000018]
where P_lens is the center distance of the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, n is the refractive index of the medium from the cylindrical lens to the pixel plane, and α_1 and α_2 are, in the angular distribution of the luminance of the viewpoint image relative to the target viewpoint, the two viewing angles adjacent to 0 degrees, taken respectively as the first target viewing angle and the second target viewing angle.
Optionally, the output module 703 is further configured to:
output the center distance of the two adjacent cylindrical lenses through the following formula:
[Equation image PCTCN2021096964-appb-000019]
where P_lens is the center distance of the two adjacent cylindrical lenses, L is the viewing distance of the viewpoint image, P_pixel is the second pixel-point distance between the pixel-point positions corresponding to the viewpoint image on two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, and n is the refractive index of the medium from the cylindrical lens to the pixel plane.
Optionally, the target content comprises: multiple kinds of target longitudinal content;
the detection module 702 is further configured to:
where the longitudinal content contained in the browse image comprises at least two items of target longitudinal content, take the browse image as a viewpoint image.
Optionally, the detection parameters at least comprise: the alignment angle deviation of the cylindrical lens;
the output module 703 is further configured to:
obtain, based on the viewpoint image, the number of items of target longitudinal content, the viewpoint position corresponding to the viewpoint image, and the pixel-point positions on the pixel plane;
obtain the first pixel-point distance between the pixel-point positions corresponding, on the same cylindrical lens, to two adjacent viewpoint images, and the content width of the target longitudinal content in the viewpoint image;
obtain the alignment angle deviation of the cylindrical lens based on the number of items of target longitudinal content, the first pixel-point distance, and the content width.
The output module 703 is further configured to:
output the alignment angle deviation of the cylindrical lens through the following formula:
[Equation image PCTCN2021096964-appb-000020]
where Δθ is the alignment angle deviation of the cylindrical lens, N is the number of items of target longitudinal content, P_sub is the first pixel-point distance between the pixel-point positions corresponding, on the same cylindrical lens, to two adjacent viewpoint images, and W is the content width of the target longitudinal content in the viewpoint image.
Optionally, the detection module 702 is further configured to:
where the browse image is obtained by photographing the target screen at a normal viewing angle, and the central content at the center position of the browse image is not target content, take the browse image as a viewpoint image.
Optionally, the detection parameters at least comprise: the alignment position deviation of the cylindrical lens;
the output module 703 is further configured to:
obtain the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image.
Optionally, the output module 703 is further configured to:
output the alignment position deviation of the cylindrical lens through the following formula:
ΔP = M · P_sub
where ΔP is the alignment position deviation of the cylindrical lens, M is the obtained difference value between the central content and the target content, and P_sub is the first pixel-point distance between the pixel-point positions corresponding, on the same cylindrical lens, to two adjacent viewpoint images.
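The position-deviation formula above (ΔP = M · P_sub) is simple enough to sketch directly. The unit handling and the signed-integer interpretation of M are assumptions for illustration; the source only defines M as the difference value between the central content and the target content.

```python
# Direct sketch of the formula given above: ΔP = M · P_sub.
# Assumptions: M is a (possibly signed) content-difference value, and
# ΔP comes out in the same units as P_sub (e.g. micrometers).

def alignment_position_deviation(m_difference, p_sub):
    """Alignment position deviation of the cylindrical lens."""
    return m_difference * p_sub
```

For instance, under these assumptions a content difference of 2 viewpoints with a 15 μm first pixel-point distance would give a 30 μm deviation.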
Optionally, the output module 703 is further configured to:
output the alignment position deviation of the cylindrical lens through the following formula:
[Equation image PCTCN2021096964-appb-000021]
where ΔP is the alignment position deviation of the cylindrical lens, n is the refractive index of the medium from the cylindrical lens to the pixel plane, and α_1 and α_2 are, in the angular distribution of the luminance of the viewpoint image relative to the target viewpoint, the two viewing angles adjacent to 0 degrees, taken respectively as the first target viewing angle and the second target viewing angle.
Optionally, the detection module 702 is further configured to:
where the sharpness of specified content in the browse image is greatest, take the browse image as a viewpoint image.
Optionally, the detection parameters at least comprise: the radius of curvature of the cylindrical lens;
the output module 703 is further configured to:
obtain the viewing angle of the viewpoint image;
adjust the radius of curvature of an optical simulation model of the cylindrical lens, and when the viewing angle at which the optical simulation model's sharpness is greatest is the viewing angle of the viewpoint image, take the radius of curvature as the radius of curvature of the cylindrical lens.
Optionally, the detection module 702 is further configured to:
obtain the sharpness of the viewpoint image according to the negative correlation between the contrast of the viewpoint image and its sharpness.
Optionally, the output module 703 is further configured to:
obtain the viewing-angle luminance distribution curve of the cylindrical lens;
adjust the radius of curvature of the optical simulation model of the cylindrical lens, and when the similarity between the viewing-angle luminance distribution curves of the optical simulation model and of the cylindrical lens meets a similarity requirement, take the radius of curvature of the optical simulation model as the radius of curvature of the cylindrical lens.
In the embodiments of the present disclosure, viewpoint images containing target content are selected from the browse images captured of the screen at specific viewpoints, and the detection parameters of the cylindrical lenses on the screen are detected from the image parameters of those viewpoint images; the various detection parameters of the cylindrical lenses on the screen can thus be obtained efficiently and conveniently, improving the efficiency of detecting the cylindrical-lens detection parameters of the screen.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement it without creative effort.
The component embodiments of the present disclosure may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art should understand that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the computing processing device according to the embodiments of the present disclosure. The present disclosure may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the methods described herein. Such a program implementing the present disclosure may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
For example, Fig. 20 shows a computing processing device that can implement the method according to the present disclosure. The computing processing device conventionally comprises a processor 810 and a computer program product or computer-readable medium in the form of a memory 820. The memory 820 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read-Only Memory), EPROM, hard disk or ROM. The memory 820 has a storage space 830 for program code 831 for performing any of the method steps described above. For example, the storage space 830 for program code may comprise individual program codes 831 for implementing the various steps of the above methods respectively. These program codes may be read from, or written into, one or more computer program products. These computer program products comprise program-code carriers such as a hard disk, a compact disc (CD), a memory card or a floppy disk. Such a computer program product is typically a portable or fixed storage unit as described with reference to Fig. 21. The storage unit may have storage segments, storage spaces and the like arranged similarly to the memory 820 in the computing processing device of Fig. 20. The program code may, for example, be compressed in an appropriate form. Typically, the storage unit comprises computer-readable code 831', i.e. code readable by a processor such as 810, which, when run by a computing processing device, causes the computing processing device to perform the various steps of the methods described above.
It should be understood that, although the steps in the flowcharts of the accompanying drawings are shown in sequence as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict order restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and their order of execution is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Reference herein to "one embodiment", "an embodiment" or "one or more embodiments" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. In addition, note that instances of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.
In the specification provided here, numerous specific details are set forth. However, it is understood that embodiments of the present disclosure may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this specification.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present disclosure may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second, third and so on does not indicate any order; these words may be interpreted as names.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present disclosure, not to limit them. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features therein, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure.

Claims (26)

  1. A screen detection method, characterized in that the method comprises:
    receiving a cylindrical-lens detection instruction for a target screen, the cylindrical-lens detection instruction at least comprising: a target viewpoint;
    in response to the detection instruction, obtaining a browse image captured of the target screen at the target viewpoint, the target screen being a screen provided with cylindrical lenses on its light-exit side;
    where the browse image contains target content, taking the browse image as a viewpoint image;
    outputting detection parameters of the cylindrical lenses on the target screen based on image parameters of the viewpoint image.
  2. The method according to claim 1, characterized in that obtaining the browse image captured of the target screen at the target viewpoint, the target screen being a screen provided with cylindrical lenses on its light-exit side, comprises:
    adjusting the viewpoint of an image acquisition device to the target viewpoint, so as to photograph the light-exit side of the target screen to obtain the browse image.
  3. The method according to claim 2, characterized in that adjusting the viewpoint of the image acquisition device to the target viewpoint, so as to photograph the light-exit side of the target screen to obtain the browse image, comprises:
    adjusting the shooting position of the image acquisition device relative to the target screen to a target position, and photographing the light-exit side of the target screen to obtain the browse image.
  4. The method according to claim 3, characterized in that adjusting the shooting position of the image acquisition device relative to the target screen to the target position comprises:
    adjusting shooting-position parameters of the image acquisition device so that the shooting position of the image acquisition device is at the target position, the shooting-position parameters comprising at least one of: shooting angle, shooting height and shooting distance.
  5. The method according to claim 1, characterized in that there are at least two items of the target content;
    where the browse image contains the target content, taking the browse image as a viewpoint image comprises:
    where the browse image contains the target content, taking the browse image as a viewpoint image, wherein the viewpoints of at least two of the viewpoint images lie on the same straight line, and the straight line is parallel to the pixel plane of the target screen.
  6. The method according to claim 5, characterized in that the image parameters at least comprise: the placement height of the cylindrical lens;
    outputting the detection parameters of the cylindrical lenses on the target screen based on the image parameters of the viewpoint image comprises:
    obtaining, based on the viewpoint image, the viewpoint position corresponding to the viewpoint image and the pixel-point positions on the pixel plane;
    obtaining the first pixel-point distance between the pixel-point positions corresponding, on the same cylindrical lens, to two adjacent viewpoint images;
    obtaining the placement height of the cylindrical lenses on the target screen based on the viewpoint positions, the number of viewpoints, the first pixel-point distance, and the refractive index of the medium from the cylindrical lens to the pixel plane.
  7. The method according to claim 6, characterized in that obtaining the placement height of the cylindrical lenses on the target screen based on the viewpoint positions, the number of viewpoints, the first pixel-point distance, and the refractive index of the medium from the cylindrical lens to the pixel plane comprises:
    establishing a spatial rectangular coordinate system (x, y, z) with the plane of the pixel plane of the target screen as the xy plane, obtaining the spatial coordinate values of each viewpoint position in the spatial rectangular coordinate system, and outputting the placement height of the cylindrical lenses on the target screen through the following formula:
    [Equation image PCTCN2021096964-appb-100001]
    where T is the placement height, N is the number of viewpoints, n is the refractive index of the medium from the cylindrical lens to the pixel plane, P_sub is the first pixel-point distance between the pixel-point positions corresponding, on the same cylindrical lens, to two adjacent viewpoint images, x_N is the x-axis spatial coordinate value of the N-th viewpoint image, x_1 is the x-axis coordinate value of the 1st viewpoint image, and z is the z-axis coordinate value of each viewpoint image, where N ≥ 2 and N is a positive integer.
  8. The method according to claim 1, characterized in that the target content comprises: target lateral content;
    where the browse image contains the target content, taking the browse image as a viewpoint image comprises:
    where all the lateral content contained in the browse image is target lateral content, taking the browse image as a viewpoint image.
  9. The method according to claim 8, characterized in that the detection parameters at least comprise: the center distance of two adjacent cylindrical lenses;
    outputting the detection parameters of the cylindrical lenses on the target screen based on the image parameters of the viewpoint image comprises:
    obtaining the center distance of the two adjacent cylindrical lenses based on the placement height of the cylindrical lens and the refractive index of the medium from the cylindrical lens to the pixel plane.
  10. The method according to claim 9, characterized in that obtaining the center distance of the two adjacent cylindrical lenses based on the placement height of the cylindrical lens and the refractive index of the medium from the cylindrical lens to the pixel plane comprises:
    outputting the center distance of the two adjacent cylindrical lenses through the following formula:
    [Equation image PCTCN2021096964-appb-100002]
    where P_lens is the center distance of the two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, n is the refractive index of the medium from the cylindrical lens to the pixel plane, and α_1 and α_2 are, in the angular distribution of the luminance of the viewpoint image relative to the target viewpoint, the two viewing angles adjacent to 0 degrees, taken respectively as the first target viewing angle and the second target viewing angle.
  11. The method according to claim 9, characterized in that obtaining the center distance of the two adjacent cylindrical lenses based on the placement height of the cylindrical lens and the refractive index of the medium from the cylindrical lens to the pixel plane comprises:
    outputting the center distance of the two adjacent cylindrical lenses through the following formula:
    [Equation image PCTCN2021096964-appb-100003]
    where P_lens is the center distance of the two adjacent cylindrical lenses, L is the viewing distance of the viewpoint image, P_pixel is the second pixel-point distance between the pixel-point positions corresponding to the viewpoint image on two adjacent cylindrical lenses, T is the placement height of the cylindrical lens, and n is the refractive index of the medium from the cylindrical lens to the pixel plane.
  12. The method according to claim 1, characterized in that the target content comprises: multiple kinds of target longitudinal content;
    where the browse image contains the target content, taking the browse image as a viewpoint image comprises:
    where the longitudinal content contained in the browse image comprises at least two items of target longitudinal content, taking the browse image as a viewpoint image.
  13. The method according to claim 12, characterized in that the detection parameters at least comprise: the alignment angle deviation of the cylindrical lens;
    outputting the detection parameters of the cylindrical lenses on the target screen based on the image parameters of the viewpoint image comprises:
    obtaining, based on the viewpoint image, the number of items of target longitudinal content, the viewpoint position corresponding to the viewpoint image, and the pixel-point positions on the pixel plane;
    obtaining the first pixel-point distance between the pixel-point positions corresponding, on the same cylindrical lens, to two adjacent viewpoint images, and the content width of the target longitudinal content in the viewpoint image;
    obtaining the alignment angle deviation of the cylindrical lens based on the number of items of target longitudinal content, the first pixel-point distance, and the content width.
  14. The method according to claim 13, characterized in that obtaining the alignment angle deviation of the cylindrical lens based on the number of items of target longitudinal content, the first pixel-point distance, and the content width comprises:
    outputting the alignment angle deviation of the cylindrical lens through the following formula:
    [Equation image PCTCN2021096964-appb-100004]
    where Δθ is the alignment angle deviation of the cylindrical lens, N is the number of items of target longitudinal content, P_sub is the first pixel-point distance between the pixel-point positions corresponding, on the same cylindrical lens, to two adjacent viewpoint images, and W is the content width of the target longitudinal content in the viewpoint image.
  15. The method according to claim 1, characterized in that, where the browse image contains the target content, taking the browse image as a viewpoint image comprises:
    where the browse image is obtained by photographing the target screen at a normal viewing angle, and the central content at the center position of the browse image is not the target content, taking the browse image as a viewpoint image.
  16. The method according to claim 15, characterized in that the detection parameters at least comprise: the alignment position deviation of the cylindrical lens;
    outputting the detection parameters of the cylindrical lenses on the target screen based on the image parameters of the viewpoint image comprises:
    obtaining the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image.
  17. The method according to claim 16, characterized in that obtaining the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image comprises:
    outputting the alignment position deviation of the cylindrical lens through the following formula:
    ΔP = M · P_sub
    where ΔP is the alignment position deviation of the cylindrical lens, M is the obtained difference value between the central content and the target content, and P_sub is the first pixel-point distance between the pixel-point positions corresponding, on the same cylindrical lens, to two adjacent viewpoint images.
  18. The method according to claim 16, characterized in that obtaining the alignment position deviation of the cylindrical lens based on the image parameters of the viewpoint image comprises:
    outputting the alignment position deviation of the cylindrical lens through the following formula:
    [Equation image PCTCN2021096964-appb-100005]
    where ΔP is the alignment position deviation of the cylindrical lens, n is the refractive index of the medium from the cylindrical lens to the pixel plane, and α_1 and α_2 are, in the angular distribution of the luminance of the viewpoint image relative to the target viewpoint, the two viewing angles adjacent to 0 degrees, taken respectively as the first target viewing angle and the second target viewing angle.
  19. The method according to claim 1, characterized in that, where the browse image contains the target content, taking the browse image as a viewpoint image comprises:
    where the sharpness of specified content in the browse image is greatest, taking the browse image as a viewpoint image.
  20. The method according to claim 19, characterized in that the detection parameters at least comprise: the radius of curvature of the cylindrical lens;
    outputting the detection parameters of the cylindrical lenses on the target screen based on the image parameters of the viewpoint image comprises:
    obtaining the viewing angle of the viewpoint image;
    adjusting the radius of curvature of an optical simulation model of the cylindrical lens, and, when the viewing angle at which the optical simulation model's sharpness is greatest is the viewing angle of the viewpoint image, taking the radius of curvature as the radius of curvature of the cylindrical lens.
  21. The method according to claim 19 or 20, characterized in that the sharpness can be obtained by the following step:
    obtaining the sharpness of the viewpoint image according to the negative correlation between the contrast of the viewpoint image and its sharpness.
  22. The method according to claim 19, characterized in that outputting the detection parameters of the cylindrical lenses on the target screen based on the image parameters of the viewpoint image comprises:
    obtaining the viewing-angle luminance distribution curve of the cylindrical lens;
    adjusting the radius of curvature of the optical simulation model of the cylindrical lens, and, when the similarity between the viewing-angle luminance distribution curves of the optical simulation model and of the cylindrical lens meets a similarity requirement, taking the radius of curvature of the optical simulation model as the radius of curvature of the cylindrical lens.
  23. A screen detection apparatus, characterized in that the apparatus comprises:
    one or more processors;
    a memory for storing one or more programs which, when executed by the one or more processors, enable the one or more processors to implement the screen detection method according to any one of claims 1-22.
  24. A computing processing device, characterized by comprising:
    a memory having computer-readable code stored therein;
    one or more processors, wherein, when the computer-readable code is executed by the one or more processors, the computing processing device performs the screen detection method according to any one of claims 1-22.
  25. A computer program, characterized by comprising computer-readable code which, when run on a computing processing device, causes the computing processing device to perform the screen detection method according to any one of claims 1-22.
  26. A computer-readable medium, characterized in that a computer program of the screen detection method according to any one of claims 1-22 is stored therein.
PCT/CN2021/096964 2021-05-28 2021-05-28 Screen detection method, apparatus and device, computer program and readable medium WO2022246844A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US17/765,390 US20240121369A1 (en) 2021-05-28 2021-05-28 Screen detection method, apparatus and device, computer program and readable medium
PCT/CN2021/096964 WO2022246844A1 (zh) 2021-05-28 2021-05-28 Screen detection method, apparatus and device, computer program and readable medium
CN202180001334.9A CN115836236A (zh) 2021-05-28 2021-05-28 Screen detection method, apparatus and device, computer program and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/096964 WO2022246844A1 (zh) 2021-05-28 2021-05-28 Screen detection method, apparatus and device, computer program and readable medium

Publications (1)

Publication Number Publication Date
WO2022246844A1 true WO2022246844A1 (zh) 2022-12-01

Family

ID=84229464

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/096964 WO2022246844A1 (zh) 2021-05-28 2021-05-28 屏幕检测方法、装置、设备、计算机程序和可读介质

Country Status (3)

Country Link
US (1) US20240121369A1 (zh)
CN (1) CN115836236A (zh)
WO (1) WO2022246844A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103105146A (zh) * 2013-01-22 2013-05-15 Fuzhou University Flatness detection method for a lenticular lens grating for three-dimensional display
US20140254008A1 (en) * 2013-03-11 2014-09-11 Canon Kabushiki Kaisha Image display device and image display method
CN105892078A (zh) * 2016-06-20 2016-08-24 BOE Technology Group Co., Ltd. Display device, driving method thereof, and display system
CN110133781A (zh) * 2019-05-29 2019-08-16 BOE Technology Group Co., Ltd. Lenticular lens grating and display device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09289655A (ja) * 1996-04-22 1997-11-04 Fujitsu Ltd Stereoscopic image display method, multi-view image input method, multi-view image processing method, stereoscopic image display device, multi-view image input device, and multi-view image processing device
JP2010282090A (ja) 2009-06-05 2010-12-16 Sony Corp Stereoscopic display device
CN104898292B (zh) * 2015-06-30 2018-02-13 BOE Technology Group Co., Ltd. 3D display substrate and manufacturing method thereof, and 3D display device
JP7076246B2 (ja) 2018-03-23 2022-05-27 Maxell, Ltd. Imaging device and imaging system
CN209432409U (zh) * 2019-03-11 2019-09-24 Suzhou University of Science and Technology Naked-eye 3D display screen test platform
CN110657948B (zh) * 2019-09-26 2021-01-15 Lenovo (Beijing) Co., Ltd. Method, apparatus, test device and medium for testing a screen of an electronic device
KR20210086341A (ko) 2019-12-31 2021-07-08 LG Display Co., Ltd. Stereoscopic image display device including lenticular lenses


Also Published As

Publication number Publication date
US20240121369A1 (en) 2024-04-11
CN115836236A (zh) 2023-03-21


Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 17765390

Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21942415

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202347028975

Country of ref document: IN

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27.03.2024)

122 Ep: pct application non-entry in european phase

Ref document number: 21942415

Country of ref document: EP

Kind code of ref document: A1