CN105654142B - No-reference stereo image quality evaluation method based on natural scene statistics - Google Patents
No-reference stereo image quality evaluation method based on natural scene statistics
- Publication number
- CN105654142B CN105654142B CN201610006517.0A CN201610006517A CN105654142B CN 105654142 B CN105654142 B CN 105654142B CN 201610006517 A CN201610006517 A CN 201610006517A CN 105654142 B CN105654142 B CN 105654142B
- Authority
- CN
- China
- Prior art keywords
- image
- binocular
- eye image
- extracting
- quality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/2193—Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
Abstract
The invention discloses a no-reference stereo image quality evaluation method based on natural scene statistics. Its key steps are as follows: (1) the left and right images of the stereo image are fused with an energy-gain-control binocular fusion model to generate a central eye image; (2) the central eye image obtained above is analyzed, and relevant features that can accurately reflect 2D image quality are extracted; (3) the binocular disparity of the stereo image is calculated with a disparity matching algorithm; (4) features that can influence 3D visual perception quality are extracted from the binocular disparity calculated in the previous step and other 3D visual characteristics; (5) finally, the 2D features and 3D features obtained above form a feature vector, and model training and testing are carried out with a support vector machine. The proposed method can accurately predict objective stereo image quality values and agrees closely with human visual perception.
Description
Technical Field
The invention relates to the field of objective stereo image quality evaluation, in particular to a no-reference stereo image quality evaluation method based on natural scene statistics.
Background
With the increasing application of stereo images, how to evaluate stereo images quickly, accurately and effectively has become an urgent problem. First, like 2D images, stereoscopic images suffer from various types of distortion during acquisition, compression, encoding, transmission, and so on, and these distortions necessarily affect stereoscopic image quality. Second, although stereoscopic images developed from 2D images, the definition of their quality is not exactly the same as for 2D images because of their unique display principle; stereoscopic vision characteristics also affect how the human eye judges stereoscopic image quality. Therefore, quality evaluation of stereoscopic images is more complicated than that of 2D images.
Stereoscopic image quality evaluation may be divided into subjective and objective stereoscopic image quality evaluation. Objective stereo image quality evaluation methods can be divided into full-reference, reduced-reference and no-reference methods, where the reference is a reference stereo image, i.e. a distortion-free stereo image. Full-reference stereo image quality evaluation currently means, in general, distortion quality evaluation: a distortion model is used to calculate the degradation of the distorted stereo image relative to the reference stereo image, and a final stereo image quality index is then obtained through some fusion scheme. For no-reference stereo image quality evaluation, because no reference image exists to compare against, the general approach is to extract a feature vector from the stereo image and then train and test with machine learning on a subjective database. A common machine learning method is the support vector machine (SVM); the difficulty of this approach lies in determining which image features (brightness, the parallax or depth reflecting 3D characteristics, and the like) best reflect stereoscopic image quality.
The evaluation criterion of an objective image quality evaluation method is the correlation between the objective predicted value of image quality and the subjective score given by human observers; the higher the correlation, the more accurate the objective method. When researching stereo image quality evaluation, it is necessary to guide the objective evaluation model with the 3D visual perception characteristics of the human eyes, which yields better results. Many factors influence the subjective perception of a stereo image; from the viewpoint of human vision, a stereo image involves both monocular and binocular visual characteristics. Binocular visual characteristics arise because the left and right eyes see different image content. The human visual system performs three-dimensional reconstruction of the images observed by the left and right eyes, and after analysis and adjustment by the brain, position information of objects is obtained, producing the sense of depth. The binocular visual characteristics mainly include the following: 1) stereoscopic vision, also called depth perception; 2) binocular rivalry, which arises when the contents of the left and right viewpoint images do not match; 3) binocular fusion, in which the two eyes fuse the left and right viewpoint images into a central eye image.
In researching no-reference stereo image quality evaluation, the specific characteristics of stereo images should first be analyzed and the 3D visual perception characteristics of the human eyes considered; relevant feature vectors that reflect the perceptual quality of the stereo image are then extracted, and training and testing are carried out using machine learning.
Disclosure of Invention
The invention aims to provide a no-reference stereo image quality evaluation method based on natural scene statistics which, without needing the reference image corresponding to the tested stereo image, extracts features that reflect the quality of the tested stereo image, performs model training on the extracted feature vectors with a support vector machine, and then uses the trained model to compute an objective predicted value of the quality of the tested stereo image. The correlation between the objective predicted value and the subjective score demonstrates the accuracy and effectiveness of the objective quality evaluation method. Test results show that the proposed no-reference stereo image quality evaluation method can accurately predict stereo image quality.
To achieve the above object, the concept of the present invention is as follows, as shown in fig. 1:
first, a central eye image of the tested stereo image is generated with a binocular fusion model; then the generated central eye image is analyzed and 2D features reflecting image quality are extracted; next, the binocular disparity of the stereo image is calculated with a disparity matching algorithm; then the 3D visual perception characteristics are analyzed and 3D features influencing stereoscopic perception quality are extracted; finally, the extracted 2D and 3D features are used for training and testing with a support vector machine.
The inspiration for the invention comes from natural scene statistics (NSS): the extracted features are all based on natural scene statistics. Natural scene statistical analysis shows that natural images have certain statistical regularities; for example, after a natural image undergoes a certain preprocessing operation, the probability density distribution of its pixel luminance values approximately follows a generalized Gaussian distribution. After the same natural image undergoes different distortions, the luminance probability density curves of the preprocessed images differ considerably, for example in their envelopes, tails and peaks. Accordingly, the parameters of the generalized Gaussian distribution model (the NSS model) can be used as features that reflect the quality differences between differently distorted images.
Referring to fig. 2, fig. 2(a) shows the luminance probability density distributions of a reference image and its 5 corresponding distorted versions, and fig. 2(b) shows the luminance probability density distributions of the same images after preprocessing. The luminance probability density distribution of a natural image is generally irregular; however, after the luminance map of the natural image undergoes the preprocessing operation, its probability density distribution becomes close to a generalized Gaussian distribution, and these probability distributions can be approximately fitted by one. Differently distorted images exhibit different distribution characteristics, such as different envelopes, spreads and peaks. Therefore, the distortion type and the image quality can be predicted by quantifying the differences between these distributions.
In view of this, the present invention provides a no-reference stereo image quality evaluation method based on natural scene statistics, which uses the differences in the statistical characteristics of differently distorted stereo images to characterize the differences in their quality.
The preprocessing operation can be expressed as:

f̂(x,y) = [f(x,y) − μ(x,y)] / [σ(x,y) + C]   (1)

where f(x,y) is the luminance image and f̂(x,y) is the preprocessed luminance image; x ∈ {1,2,...,M} and y ∈ {1,2,...,N} are image pixel coordinates, M and N being the length and width of the image, respectively; and C is a constant that improves stability when the denominator is close to zero. The preprocessing operation is referred to for short as the MSCN (mean-subtracted contrast-normalized) operation. The local mean and local standard deviation are computed as

μ(x,y) = Σ_{k=−K}^{K} Σ_{l=−L}^{L} w_{k,l} f(x+k, y+l)   (2)

σ(x,y) = sqrt( Σ_{k=−K}^{K} Σ_{l=−L}^{L} w_{k,l} [f(x+k, y+l) − μ(x,y)]² )   (3)

where w = {w_{k,l} | k = −K,...,K, l = −L,...,L} is a Gaussian weighting window with K = L = 3.
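For illustration, the MSCN operation can be sketched in Python as follows; this is a minimal sketch, and the Gaussian window width and the value C = 1 are illustrative assumptions rather than values fixed by the method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(f, sigma=7 / 6, C=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a
    luminance image f, using a Gaussian weighting window."""
    f = f.astype(np.float64)
    mu = gaussian_filter(f, sigma)                 # local mean mu(x, y)
    var = gaussian_filter(f * f, sigma) - mu * mu  # local variance
    sigma_map = np.sqrt(np.maximum(var, 0.0))      # local std sigma(x, y)
    return (f - mu) / (sigma_map + C)              # MSCN coefficients
```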
According to the inventive concept, the invention adopts the following technical scheme:
a no-reference stereo image quality evaluation method based on natural scene statistics comprises the following steps:
1) generation of center-eye image: fusing a left image and a right image in the stereoscopic image by using an energy gain control binocular fusion model to generate a central eye image;
2)2D feature extraction: analyzing the central eye image obtained in the step 1), and extracting relevant characteristics capable of accurately reflecting the quality of the 2D image;
3) binocular parallax calculation: calculating binocular parallax of the stereoscopic image by using a parallax matching algorithm;
4)3D feature extraction: extracting relevant features which can influence the 3D visual perception quality according to the binocular parallax and other 3D visual characteristics calculated in the step 3);
5) training and testing a support vector machine: the 2D features and 3D features obtained in steps 2) and 4) form a feature vector; model training is performed with a support vector machine, and the trained model predicts an objective value of the quality of the tested stereo image. By comparing the objective predicted values of stereo image quality with the subjective scores, correlation coefficients that reflect the effectiveness and accuracy of the stereo image quality evaluation method can be obtained.
Step 1) generates the central eye image: the left and right images of the stereoscopic image are fused with the energy-gain-control binocular fusion model. This step is primarily intended to simulate the binocular fusion characteristics of the human eye.
When the human eyes view a stereo image, the left and right eyes view the left and right images of the stereo pair, respectively. Before the human visual system processes the visual signals, the left and right images produce a single-viewpoint image on the retina, i.e. a fused image, also called the central eye (cyclopean) image; physiologically, this phenomenon is called binocular fusion. When researching stereo image quality evaluation, taking the binocular fusion characteristic into account undoubtedly improves the accuracy of the objective quality evaluation method.
In view of this, step 1) of the proposed no-reference stereo image quality evaluation method fuses the left and right images of the tested stereo image into a central eye image; an energy-gain-control binocular fusion model is used here.
The expression for the central eye image obtained with the energy-gain-control binocular fusion model is:

I(x,y) = W_L · I_L(x,y) + W_R · I_R(x,y)   (4)

where W_L and W_R are the weights of the left image I_L(x,y) and the right image I_R(x,y), respectively.
Because the Gabor filter effectively simulates how simple cells in the primary visual cortex of the human visual system process visual signals, the normalized Gabor filter responses can be used as the weights of the left and right images.
The 2D Gabor filter takes the standard form

G(x, y; σ_x, σ_y, ζ_x, ζ_y, θ) = (1 / (2π σ_x σ_y)) · exp(−(R_1²/σ_x² + R_2²/σ_y²)/2) · exp(i(x ζ_x + y ζ_y))   (5)

R_1 = x cos θ + y sin θ
R_2 = −x sin θ + y cos θ

where σ_x and σ_y are the standard deviations of the elliptical Gaussian envelope in the x and y directions, respectively, ζ_x and ζ_y are the spatial frequency parameters of the filter, and θ is the orientation parameter of the filter.
Taking the effect of binocular disparity into account, the central eye image I(x,y) can be expressed as:

I(x,y) = W_L(x,y) · I_L(x,y) + W_R(x−D(x,y), y) · I_R(x−D(x,y), y)   (6)

where the weights are the Gabor energy responses normalized so that they sum to one; GE_L and GE_R denote the sums of the filter responses of the left and right images over all frequencies and all orientations, respectively, and D(x,y) is the binocular disparity of the stereoscopic image.
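A minimal sketch of this fusion, assuming the Gabor energy maps GE_L and GE_R and an integer horizontal disparity map D have already been computed, and that the weights are the energy responses normalized to sum to one:

```python
import numpy as np

def cyclopean(IL, IR, GE_L, GE_R, D):
    """Energy-gain-control fusion of a stereo pair into a central eye image.
    IL, IR: left/right luminance images; GE_L, GE_R: summed Gabor filter
    response energies of the two views; D: horizontal disparity map."""
    rows, cols = IL.shape
    ys, xs = np.indices((rows, cols))
    xm = np.clip(xs - D.astype(int), 0, cols - 1)  # x - D(x, y), kept in range
    IR_w = IR[ys, xm]                              # disparity-compensated right view
    GE_R_w = GE_R[ys, xm]
    W_L = GE_L / (GE_L + GE_R_w + 1e-12)           # normalized energy weight
    return W_L * IL + (1.0 - W_L) * IR_w           # central eye image I(x, y)
```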
The 2D feature extraction of step 2) analyzes the central eye image and extracts relevant features that can accurately reflect 2D image quality, as follows:

2-1) Features based on adjacent-pixel differences. After the central eye image I(i,j) obtained in step 1) undergoes the MSCN operation, yielding Î(i,j), the differences between adjacent pixels are computed along four directions (0°, 45°, 90°, 135°), as shown in fig. 3, giving the following four expressions:

D_0°(i,j) = Î(i,j) − Î(i,j+1)
D_45°(i,j) = Î(i,j) − Î(i+1,j+1)
D_90°(i,j) = Î(i,j) − Î(i+1,j)
D_135°(i,j) = Î(i,j) − Î(i+1,j−1)

where i ∈ {1,2,...,M}, j ∈ {1,2,...,N}, and M and N are the length and width of the image, respectively.
In experiments, the probability density distribution curves of the adjacent-pixel differences in the four directions are found to be approximately generalized Gaussian, so the four probability density distributions can be fitted with a generalized Gaussian distribution model. The zero-mean generalized Gaussian density function can be expressed as:

f(x; α, σ²) = (α / (2β Γ(1/α))) · exp(−(|x|/β)^α), with β = σ · sqrt(Γ(1/α) / Γ(3/α))

where Γ(·) is the gamma function:

Γ(a) = ∫₀^∞ t^(a−1) e^(−t) dt, a > 0

In the above expression, α controls the envelope of the function distribution and σ² is its variance; the function distribution is a Laplacian distribution when α = 1, and a Gaussian distribution when α = 2.

The model parameters α and σ² of the generalized Gaussian distribution can be extracted as effective features. Thus, 2 features can be extracted per direction, and 8 features for the 4 directions.
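A minimal moment-matching fit of the generalized Gaussian model, in the style common in the NSS literature; the grid of candidate α values is an illustrative estimation choice:

```python
import numpy as np
from scipy.special import gamma

def fit_ggd(x):
    """Estimate (alpha, sigma^2) of a zero-mean generalized Gaussian
    distribution from samples x by matching E[x^2] / (E|x|)^2."""
    x = np.ravel(x)
    sigma_sq = np.mean(x ** 2)
    rho = sigma_sq / (np.mean(np.abs(x)) ** 2 + 1e-12)
    alphas = np.arange(0.2, 10.0, 0.001)
    # For a GGD, E[x^2]/(E|x|)^2 = Gamma(1/a) Gamma(3/a) / Gamma(2/a)^2,
    # which is monotonic in a, so a lookup table inverts it.
    r = gamma(1 / alphas) * gamma(3 / alphas) / gamma(2 / alphas) ** 2
    alpha = alphas[np.argmin((r - rho) ** 2)]
    return alpha, sigma_sq

# Usage: fit each of the four directional difference maps, e.g.
# alpha, sigma_sq = fit_ggd(I_hat[:, :-1] - I_hat[:, 1:])  # 0-degree differences
```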
2-2) Features based on adjacent-pixel products. Similarly to step 2-1), the products of two adjacent pixels of the MSCN-processed central eye image Î can also be computed, here along eight directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°), as shown in fig. 4, where i ∈ {1,2,...,M}, j ∈ {1,2,...,N}, and M and N are the length and width of the image, respectively.
It was found in experiments that the probability density distribution of the products of adjacent pixels is not strictly symmetric about zero. Therefore, to fit this probability density distribution better, an asymmetric generalized Gaussian distribution model is used here, whose density function can be expressed as:

f(x; ν, σ_l², σ_r²) = (ν / ((β_l + β_r) Γ(1/ν))) · exp(−(−x/β_l)^ν) for x < 0
f(x; ν, σ_l², σ_r²) = (ν / ((β_l + β_r) Γ(1/ν))) · exp(−(x/β_r)^ν) for x ≥ 0

where β_l = σ_l · sqrt(Γ(1/ν)/Γ(3/ν)) and β_r = σ_r · sqrt(Γ(1/ν)/Γ(3/ν)).

In the above expression, ν controls the envelope of the function distribution, and σ_l² and σ_r² are the variances of the left and right parts of the asymmetric generalized Gaussian distribution, respectively. When σ_l² = σ_r², the asymmetric generalized Gaussian distribution degenerates to a generalized Gaussian distribution.

The model parameters (η, ν, σ_l², σ_r²) of the asymmetric generalized Gaussian distribution can be extracted as features, where the mean parameter is

η = (β_r − β_l) · Γ(2/ν) / Γ(1/ν)

Thus, 4 features can be extracted per direction, and a total of 32 features for the 8 directions.
2-3) Features based on the gradient of the central eye image. The gradient magnitude of an image reflects its edge information, and the human eye is sensitive to image edges, so gradient information is very useful. Here the Scharr operator is used to calculate the gradient map of the image. The Scharr filters in the x and y directions can be expressed as:

h_x = (1/16) · [3 0 −3; 10 0 −10; 3 0 −3],  h_y = h_x^T

After the central eye image I(x) is convolved with h_x and h_y, the horizontal and vertical gradient components GM_X(x) and GM_Y(x) are obtained, i.e. GM_X = I ⊗ h_x and GM_Y = I ⊗ h_y, where ⊗ denotes convolution. The gradient magnitude of the central eye image I(x) can then be expressed as:

GM(x) = sqrt( GM_X(x)² + GM_Y(x)² )

First, the probability density distributions of the horizontal gradient component GM_X(x) and the vertical gradient component GM_Y(x) of the central eye image are fitted with the generalized Gaussian distribution model of step 2-1) to obtain the model parameters (α, σ²). Meanwhile, the probability density distribution of the gradient map of the MSCN-processed central eye image Î is fitted with the same generalized Gaussian distribution model, its parameters (α, σ²) likewise being extracted as features.
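A sketch of the gradient computation; the 1/16 normalization of the Scharr kernel is a common convention and an assumption here:

```python
import numpy as np
from scipy.ndimage import convolve

def scharr_gradients(I):
    """Horizontal/vertical Scharr gradient components and the gradient
    magnitude of an image I."""
    hx = np.array([[3., 0., -3.],
                   [10., 0., -10.],
                   [3., 0., -3.]]) / 16.0
    hy = hx.T
    gx = convolve(I.astype(np.float64), hx)  # GM_X: horizontal component
    gy = convolve(I.astype(np.float64), hy)  # GM_Y: vertical component
    return gx, gy, np.hypot(gx, gy)          # gradient magnitude GM
```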
2-4) Features based on the phase consistency of the central eye image. The phase consistency of an image measures the similarity of the phases of the frequency components at each location in the image; it identifies the points at which the phases of the frequency components are maximally aligned. Phase consistency can be used to detect edges and texture feature points in an image. A method for calculating phase consistency is given below.

For 2D images, a multi-orientation, multi-scale 2D Gabor filter is used here. Suppose M_n^e and M_n^o denote the cosine (even-symmetric) and sine (odd-symmetric) components of the filter at scale n, which form a quadrature pair, and let θ be the orientation angle of the filter. After the central eye image I(x) is convolved with this quadrature pair, the filter responses at position x, scale n and orientation θ are obtained:

[e_{n,θ}(x), o_{n,θ}(x)] = [I(x) ⊗ M_n^e, I(x) ⊗ M_n^o]

where ⊗ denotes convolution.

The local amplitude of the filter response at position x, scale n and orientation θ is calculated as:

A_{n,θ}(x) = sqrt( e_{n,θ}(x)² + o_{n,θ}(x)² )

The local energy of the filter at position x and orientation θ is calculated as:

E_θ(x) = sqrt( F_θ(x)² + H_θ(x)² )

where F_θ(x) = Σ_n e_{n,θ}(x) and H_θ(x) = Σ_n o_{n,θ}(x).

The phase consistency of the central eye image I(x) at position x can then be expressed as:

PC(x) = Σ_j E_{θ_j}(x) / (ε + Σ_j Σ_n A_{n,θ_j}(x))

where j is the orientation parameter, n is the scale parameter, and ε is a small constant that prevents division by zero.

Fitting the probability density distribution of the phase consistency map of the MSCN-processed central eye image Î with the generalized Gaussian distribution model, the model parameters (α, σ²) are extracted as features.
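Given the quadrature responses of a Gabor filter bank, the phase consistency map of this step can be sketched as follows; the layout of the response arrays is an assumption made for illustration:

```python
import numpy as np

def phase_consistency(even, odd, eps=1e-4):
    """Phase consistency from precomputed quadrature filter responses.
    even, odd: arrays of shape (n_orient, n_scale, H, W) holding
    e_{n,theta}(x) and o_{n,theta}(x)."""
    F = even.sum(axis=1)               # F_theta(x) = sum_n e_{n,theta}(x)
    H = odd.sum(axis=1)                # H_theta(x) = sum_n o_{n,theta}(x)
    E = np.sqrt(F ** 2 + H ** 2)       # local energy per orientation
    A = np.sqrt(even ** 2 + odd ** 2)  # local amplitudes A_{n,theta}(x)
    return E.sum(axis=0) / (A.sum(axis=(0, 1)) + eps)
```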
2-5) Features based on the Log-Gabor filter responses of the central eye image. The Log-Gabor filter models well the orientation selectivity and frequency selectivity of cells in the human visual cortex.
The expression of the multi-scale, multi-orientation 2D Log-Gabor filter in the frequency domain is as follows:

G(ω, θ_j) = exp(−(log(ω/ω₀))² / (2σ_r²)) · exp(−(θ − θ_j)² / (2σ_θ²))

where θ_j = jπ/J, j ∈ {0, 1, ..., J−1}, is the orientation parameter of the Log-Gabor filter, J is the total number of orientations, ω₀ is the center frequency of the filter, σ_r determines the radial bandwidth of the filter, and σ_θ determines the angular bandwidth of the filter. After the central eye image I(x) is filtered, the real and imaginary parts of the response at position x, scale n and orientation θ_j are e_{n,θ_j}(x) and o_{n,θ_j}(x), respectively.

Thus, the Log-Gabor filter response magnitude of the central eye image I(x) can be expressed as:

G_log(x) = Σ_n Σ_j sqrt( e_{n,θ_j}(x)² + o_{n,θ_j}(x)² )

In addition, the real and imaginary parts of the Log-Gabor filter response of the central eye image I(x) can be expressed as:

Gx_log(x) = Σ_n Σ_j e_{n,θ_j}(x),  Gy_log(x) = Σ_n Σ_j o_{n,θ_j}(x)

and the Log-Gabor phase of the central eye image I(x) can be expressed as:

GP_log(x) = arctan( Gy_log(x) / Gx_log(x) )
From the above, feature vectors can be extracted in the following three ways:

a. fitting the probability density distributions of Gx_log(x) and Gy_log(x) with a generalized Gaussian distribution yields the model parameters (α, σ²);

b. fitting the probability density distribution of GP_log(x) with a generalized Gaussian distribution yields the model parameters (α, σ²);

c. fitting the probability density distribution of the Log-Gabor filter response G_log(x) of the MSCN-processed central eye image Î with a generalized Gaussian distribution model yields the model parameters (α, σ²), which are extracted as features.
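A minimal frequency-domain construction of such a Log-Gabor bank; all numeric parameter defaults below are illustrative assumptions:

```python
import numpy as np

def log_gabor_bank(H, W, n_scales=4, n_orient=4,
                   min_wavelength=3.0, mult=2.1,
                   sigma_r=0.6, sigma_theta=0.4):
    """Multi-scale, multi-orientation Log-Gabor filters (frequency domain).
    Apply a filter G to an image I with np.fft.ifft2(G * np.fft.fft2(I));
    the real/imaginary parts give e_{n,theta_j} and o_{n,theta_j}."""
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                       # avoid log(0) at the DC term
    angle = np.arctan2(-fy, fx)
    bank = []
    for j in range(n_orient):
        theta_j = j * np.pi / n_orient
        # wrapped angular distance to the filter orientation theta_j
        d_theta = np.arctan2(np.sin(angle - theta_j), np.cos(angle - theta_j))
        spread = np.exp(-d_theta ** 2 / (2 * sigma_theta ** 2))
        for n in range(n_scales):
            omega0 = 1.0 / (min_wavelength * mult ** n)  # center frequency
            radial = np.exp(-np.log(radius / omega0) ** 2 / (2 * sigma_r ** 2))
            radial[0, 0] = 0.0               # zero response at DC
            bank.append(radial * spread)
    return bank
```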
Step 3) calculates the binocular disparity values of the stereo image with a disparity matching algorithm. In the disparity matching process, the left image is used as the reference image and matching pixel blocks are searched for, block by block, in the right image; the SSIM similarity criterion measures the best match, i.e. the pixel block in the right image with the largest SSIM value is taken as the matching block for the left-image block.
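A minimal block-matching sketch of this step; the block size, search range and data_range are illustrative assumptions:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def disparity_ssim(IL, IR, block=8, max_disp=32):
    """For each left-image block, search horizontally in the right image
    and keep the shift with the largest SSIM as the disparity."""
    H, W = IL.shape
    D = np.zeros((H, W))
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            ref = IL[y:y + block, x:x + block]
            best, best_d = -np.inf, 0
            for d in range(0, min(max_disp, x) + 1):  # horizontal shift only
                cand = IR[y:y + block, x - d:x - d + block]
                s = ssim(ref, cand, data_range=255)
                if s > best:
                    best, best_d = s, d
            D[y:y + block, x:x + block] = best_d
    return D
```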
The 3D feature extraction of step 4) extracts features that can affect 3D visual perception quality from the binocular disparity calculated in the previous step and other 3D visual characteristics, as follows:

4-1) Features based on binocular rivalry. On the one hand, the influence of disparity on how human eyes evaluate stereoscopic image quality is considered: the existence of binocular disparity is what causes binocular rivalry. Using the disparity D(x,y) calculated in step 3), the probability density distribution of the MSCN-processed disparity map is fitted with the generalized Gaussian distribution model described above, and the model parameters (α, σ²) are obtained.
On the other hand, the degree of binocular rivalry also affects the human perception process: the smaller the similarity between the matching regions of the left image I_L(x,y) and the right image I_R(x,y), the greater the degree of binocular rivalry. The similarity of the left and right matching regions can therefore represent the degree of binocular rivalry, expressed as:

BR_D(x,y) = (2 · I_L(x,y) · I_R(x−D(x,y), y) + C_1) / (I_L(x,y)² + I_R(x−D(x,y), y)² + C_1)

where I_L(x,y) and I_R(x,y) represent the left and right images, respectively, D(x,y) is the binocular disparity calculated in step 3), and C_1 is a constant. Fitting the probability density distribution of the MSCN-processed BR_D(x,y) with the generalized Gaussian distribution model, the model parameters (α, σ²) are obtained.
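Assuming the luminance-similarity form given above, the rivalry map can be computed as a simple sketch:

```python
import numpy as np

def rivalry_map(IL, IR_warped, C1=0.01):
    """Matching-region similarity BR_D; IR_warped is the
    disparity-compensated right image I_R(x - D(x, y), y)."""
    num = 2.0 * IL * IR_warped + C1
    den = IL ** 2 + IR_warped ** 2 + C1
    return num / den  # near 1 where the views match, smaller otherwise
```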
4-2) Features based on the disparity matching error. During the binocular disparity calculation of step 3), disparity matching errors can cause points in the left image to have no matching points in the right image. The disparity matching error can be expressed as:

D_error(x,y) = I_L(x,y) − I_R(x−D(x,y), y)   (25)

where I_L(x,y) and I_R(x,y) represent the left and right images, respectively, and D(x,y) is the binocular disparity calculated in step 3). Two remarks apply to this expression. First, the left image is assumed to be the image of relatively better quality; when calculating binocular disparity, the left image is taken as the reference and the matching pixels are searched for in the right image. Second, in the 3D databases used in the experiments, the stereoscopic images have only horizontal disparity and no vertical disparity.

Fitting the probability density distribution of the MSCN-processed D_error(x,y) with the generalized Gaussian distribution model, the model parameters (α, σ²) can be extracted.
4-3) Features based on binocular disparity consistency. Experiments show that, for an undistorted stereo image, the disparity map calculated with the disparity matching method of step 3) exhibits high correlation between pixels and their surrounding pixels in most regions, apart from the edge regions of the disparity map; that is, the pixel values are continuous. If the disparity map is calculated from a distorted stereo image, the correlation between pixels and their surroundings is low. The low-pass-filtered disparity map can therefore represent this consistency; for the disparity map D(x,y), it can be expressed as:

D_consist(x,y) = D(x,y) ⊗ w

where w is a low-pass filter kernel and ⊗ denotes convolution. Fitting the probability density distribution of D_consist(x,y) with the generalized Gaussian distribution model, the model parameters (α, σ²) can be extracted.
Step 5) trains and tests the support vector machine. The 2D features and 3D features obtained in steps 2) and 4) form a feature vector; model training is performed with a support vector machine, and testing then yields the objective predicted value of stereo image quality. By comparing the objective predicted values with the subjective scores, correlation coefficients that reflect the effectiveness and accuracy of the stereo image quality evaluation method can be obtained.
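A minimal training/testing sketch with an epsilon-SVR, one common support vector machine choice for quality regression; the train/test split and hyperparameters are illustrative assumptions:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

def train_and_test(X, dmos, seed=0):
    """X: matrix of 2D + 3D feature vectors; dmos: subjective scores."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, dmos, test_size=0.2, random_state=seed)
    model = SVR(kernel='rbf', C=1024, gamma='scale').fit(X_tr, y_tr)
    pred = model.predict(X_te)        # objective predicted quality
    plcc, _ = pearsonr(pred, y_te)    # linear correlation (PLCC)
    srocc, _ = spearmanr(pred, y_te)  # rank correlation (SROCC)
    rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
    return model, plcc, srocc, rmse
```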
Compared with the prior art, the invention has the following obvious and prominent substantive features and remarkable advantages:

The method is based on natural scene statistics: the luminance probability density distributions of the image are fitted with generalized Gaussian distributions, the model parameters of the generalized Gaussian distributions are extracted as features, and the quality of the stereo image is predicted from the feature vector. The method extracts not only features that reflect 2D image quality but also features that reflect 3D stereoscopic perception quality; in the feature extraction process, both the 2D factors influencing image quality and the 3D visual characteristics of the human eyes are considered. The objective stereo image quality predictions obtained by the method are very close to the corresponding subjective scores, so the method agrees more closely with human perception than the prior art.
Drawings
FIG. 1 is a block diagram of a no-reference stereo image quality evaluation method based on natural scene statistics according to the present invention;
FIG. 2(a) is a graph of a luminance probability density function distribution of an image without a preprocessing operation;
FIG. 2(b) is a graph of the luminance probability density function distribution of an image after the preprocessing operation; ref: reference image; the five distortion types are jp2k (JPEG2000 compression), jpeg (JPEG compression), wn (Gaussian white noise), blur (Gaussian blur) and ff (fast fading).
FIG. 3 is a diagram illustrating the calculation of four-direction neighboring pixel difference values;
FIG. 4 is a schematic diagram of computing products of adjacent pixels in eight directions;
Detailed Description
The embodiments of the present invention will be described in further detail with reference to the drawings attached to the specification.
Referring to fig. 1, a method for evaluating quality of a reference-free stereo image based on natural scene statistics includes the following steps:
1) generation of center-eye image: fusing a left image and a right image in the stereoscopic image by using an energy gain control binocular fusion model to generate a central eye image;
2)2D feature extraction: analyzing the central eye image obtained in the step 1), and extracting relevant characteristics capable of accurately reflecting the quality of the 2D image;
3) binocular parallax calculation: calculating binocular parallax of the stereoscopic image by using a parallax matching algorithm;
4)3D feature extraction: extracting relevant features which can influence the 3D visual perception quality according to the binocular parallax and other 3D visual characteristics calculated in the step 3);
5) training and testing a support vector machine: the 2D features and 3D features obtained in steps 2) and 4) form a feature vector; model training is performed with a support vector machine, and the trained model predicts an objective value of the quality of the tested stereo image. By comparing the objective predicted values of stereo image quality with the subjective scores, correlation coefficients that reflect the effectiveness and accuracy of the stereo image quality evaluation method can be obtained.
The generation of the central eye image in the step 1) comprises the following steps:
1-1) respectively calculating Gabor filter responses of left and right images in a stereo image;
1-2) respectively using the Gabor filter responses of the left image and the right image in the step 1-1) as weights thereof, and synthesizing a central eye image by using an energy gain control binocular fusion model;
the 2D feature extraction in the step 2) includes the following steps:
2-1) from the central eye image obtained in step 1), calculating the adjacent-pixel differences of the MSCN-processed central eye image along four directions (0°, 45°, 90°, 135°), approximately fitting the probability density function distributions of the adjacent-pixel differences in the four directions with generalized Gaussian distributions, and extracting the model parameters α and σ² of the generalized Gaussian distribution as features; thus 2 features can be extracted per direction, and 8 features for the 4 directions;

2-2) similarly to step 2-1), calculating the products of two adjacent pixels of the MSCN-processed central eye image along eight directions (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°), approximately fitting the probability density function distributions of the adjacent-pixel products in the eight directions with asymmetric generalized Gaussian distributions, and extracting the model parameters (η, ν, σ_l², σ_r²) of the asymmetric generalized Gaussian distribution as features; thus 4 features can be extracted per direction, and 32 features for the 8 directions;

2-3) from the central eye image calculated in step 1), computing the gradient of the central eye image, comprising horizontal and vertical gradient components; on the one hand, fitting the probability density function distributions of the horizontal and vertical gradient components of the central eye image with the generalized Gaussian distribution of step 2-1) and extracting model parameters as features; on the other hand, fitting the probability density function distribution of the gradient map of the MSCN-processed central eye image in the same way and extracting model parameters as features;

2-4) similarly to step 2-3), calculating the phase consistency of the central eye image, fitting the probability density function distribution of the phase consistency map of the MSCN-processed central eye image with the generalized Gaussian distribution of step 2-1), and extracting model parameters as features;

2-5) similarly to steps 2-3) and 2-4), calculating the Log-Gabor filter response of the central eye image, including the real and imaginary parts of the filter response and its phase; on the one hand, fitting the probability density function distributions of the real part, imaginary part and phase of the Log-Gabor filter response of the central eye image with the generalized Gaussian distribution of step 2-1) and extracting model parameters as features; on the other hand, fitting the probability density function distribution of the Log-Gabor filter response of the MSCN-processed central eye image in the same way and extracting model parameters as features.
Calculating the binocular disparity in the step 3), namely calculating the binocular disparity value of the stereo image by using a disparity matching algorithm.
The 3D feature extraction in the step 4) above, which includes the steps of:
4-1) from the binocular disparity calculated in step 3): on the one hand, considering the influence of disparity on the evaluated stereo image quality, fitting the probability density distribution of the MSCN-processed disparity map with a generalized Gaussian distribution model and obtaining model parameters as features; on the other hand, considering the influence of the degree of binocular rivalry on the human perception process, computing an expression reflecting the degree of binocular rivalry, fitting its MSCN-processed probability density distribution with a generalized Gaussian distribution model, and extracting model parameters as features;

4-2) considering the disparity matching errors that may occur when the binocular disparity is calculated in step 3), analyzing an expression for the disparity matching error, fitting its MSCN-processed probability density distribution with a generalized Gaussian distribution model, and extracting model parameters as features;

4-3) computing an expression reflecting binocular disparity consistency, fitting its probability density distribution with a generalized Gaussian distribution model, and extracting model parameters as features.
The training and testing of the support vector machine in step 5) comprises forming a feature vector from the 2D and 3D features obtained in steps 2) and 4), performing model training with a support vector machine, and then testing to obtain the objective predicted value of stereo image quality. By comparing the objective predicted values with the subjective scores, correlation coefficients that reflect the effectiveness and accuracy of the stereo image quality evaluation method can be obtained.
The invention was compared with current mainstream 3D image quality evaluation methods; the comparison results were obtained on LIVE 3D database (I) and LIVE 3D database (II). To better demonstrate the high performance of the invention, the comparison methods used in the experiments include both full-reference and no-reference stereo image quality evaluation methods. In addition, to reflect the accuracy of the invention, the support vector machine training and testing was repeated 1000 times in the experiments and the results averaged.
The experimental results are as follows. Tables 1-3 detail the SROCC, PLCC and RMSE comparisons, respectively, between the invention and other full-reference and no-reference stereo image quality evaluation methods on LIVE 3D database (I); tables 4-6 detail the corresponding SROCC, PLCC and RMSE comparisons on LIVE 3D database (II). Table 7 shows the comprehensive performance comparison between the invention and other no-reference stereo image quality evaluation methods on LIVE 3D databases (I) and (II).

TABLE 1 Comparison of SROCC on LIVE 3D database (I)

Italics denote full-reference stereo image quality evaluation algorithms; the others are no-reference stereo image quality evaluation algorithms.

JP2K: JPEG2000 compression; JPEG: JPEG compression; WN: Gaussian white noise; Blur: Gaussian blur; FF: fast fading.

TABLE 2 Comparison of PLCC on LIVE 3D database (I)

Italics denote full-reference stereo image quality evaluation algorithms; the others are no-reference stereo image quality evaluation algorithms.

TABLE 3 Comparison of RMSE on LIVE 3D database (I)

Italics denote full-reference stereo image quality evaluation algorithms; the others are no-reference stereo image quality evaluation algorithms.

TABLE 4 Comparison of SROCC on LIVE 3D database (II)

Italics denote full-reference stereo image quality evaluation algorithms; the others are no-reference stereo image quality evaluation algorithms.

TABLE 5 Comparison of PLCC on LIVE 3D database (II)

Italics denote full-reference stereo image quality evaluation algorithms; the others are no-reference stereo image quality evaluation algorithms.

TABLE 6 Comparison of RMSE on LIVE 3D database (II)

Italics denote full-reference stereo image quality evaluation algorithms; the others are no-reference stereo image quality evaluation algorithms.

TABLE 7 Performance comparison of no-reference stereo image quality evaluation algorithms on LIVE 3D databases (I) and (II)

The experimental results show that the comprehensive performance of the invention on LIVE 3D databases (I) and (II) exceeds that of other current mainstream stereo image quality evaluation methods. The invention therefore evaluates stereo image quality more accurately and effectively, and its evaluation results agree better with human perception.
Claims (8)
1. A no-reference stereo image quality evaluation method based on natural scene statistics is characterized by comprising the following steps:
1) generation of center-eye image: fusing a left image and a right image in the stereoscopic image by using an energy gain control binocular fusion model to generate a central eye image;
2)2D feature extraction: analyzing the central eye image obtained in the step 1), and extracting relevant characteristics capable of accurately reflecting the quality of the 2D image;
3) binocular parallax calculation: calculating binocular parallax of the stereoscopic image by using a parallax matching algorithm;
4)3D feature extraction: extracting relevant features which can influence the 3D visual perception quality according to the binocular parallax and other 3D visual characteristics calculated in the step 3);
5) training and testing a support vector machine: forming a feature vector by the 2D features and the 3D features obtained in the step 2) and the step 4), carrying out model training and testing by using a support vector machine, and predicting an objective prediction value of the quality of the tested stereo image by using the trained model;
the generation of the central eye image in the step 1) comprises the following steps:
1-1) respectively calculating Gabor filter responses of left and right images in a stereo image;
1-2) respectively using the Gabor filter responses of the left image and the right image in the step 1-1) as weights thereof, synthesizing a central eye image by using an energy gain control binocular fusion model, and simulating binocular fusion characteristics of human eyes;
the no-reference stereo image quality evaluation method fuses, in step 1), the left and right images of the tested stereo image into a central eye image using an energy-gain-control binocular fusion model; the expression for the central eye image obtained with the energy-gain-control binocular fusion model is:

I(x,y) = W_L · I_L(x,y) + W_R · I_R(x,y)

where W_L and W_R are the weights of the left image I_L(x,y) and the right image I_R(x,y), respectively;

because the Gabor filter effectively simulates how simple cells in the primary visual cortex of the human visual system process visual signals, the normalized Gabor filter responses serve as the weights of the left and right images, with

R_1 = x cos θ + y sin θ
R_2 = −x sin θ + y cos θ

where σ_x and σ_y are the standard deviations of the elliptical Gaussian envelope in the x and y directions, respectively, ζ_x and ζ_y are the spatial frequency parameters of the filter, and θ is the orientation parameter of the filter;

taking the effect of binocular disparity into account, the central eye image I(x,y) is expressed as:

I(x,y) = W_L(x,y) · I_L(x,y) + W_R(x−D(x,y), y) · I_R(x−D(x,y), y)

where GE_L and GE_R are the sums of the filter responses of the left and right images over all frequencies and all orientations, respectively, and D(x,y) is the binocular disparity of the stereoscopic image.
2. The method for evaluating the quality of the non-reference stereo image based on the natural scene statistics as claimed in claim 1, wherein the step 2) extracts the relevant features capable of accurately reflecting the quality of the 2D image, and the specific operation steps are as follows:
2-1) extracting 2D characteristics based on adjacent pixel difference values by performing natural scene statistical analysis on a central eye image;
2-2) extracting 2D characteristics based on adjacent pixel products by carrying out natural scene statistical analysis on the central eye image;
2-3) extracting 2D characteristics based on the gradient of the central eye image by performing natural scene statistical analysis on the central eye image;
2-4) extracting 2D characteristics based on the phase consistency of the central eye image by performing natural scene statistical analysis on the central eye image;
2-5) extracting the 2D characteristics based on the Log-Gabor filter response of the central eye image by performing natural scene statistical analysis on the central eye image.
3. The method for evaluating the quality of the non-reference stereo image based on natural scene statistics according to claim 2, wherein the step 2-2) of extracting 2D features based on adjacent-pixel products comprises: calculating the products of two adjacent pixels of the preprocessed central eye image along eight directions, namely 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135° and 157.5°; fitting the probability density function distributions of the adjacent-pixel products in the eight directions with an asymmetric generalized Gaussian distribution; and extracting the model parameters (η, ν, σ_l², σ_r²) of the asymmetric generalized Gaussian distribution as features.
4. The method for evaluating the quality of the non-reference stereoscopic image based on the natural scene statistics as claimed in claim 2, wherein the 2D feature of the Log-Gabor filter response based on the central eye image is extracted in the step 2-5), and the specific process is as follows: calculating Log-Gabor filter response of the central eye image, including a real part, an imaginary part and a phase of the filter response, extracting 2D characteristics based on natural scene statistics from two aspects, on one hand, respectively fitting the real part and the imaginary part of the Log-Gabor filter response of the central eye image and probability density function distribution of the phase of the real part and the imaginary part by utilizing generalized Gaussian distribution, and extracting model parameters as characteristics; on the other hand, the Log-Gabor filter response of the preprocessed central eye image is fitted by the same method, and model parameters are extracted as features.
5. The method for evaluating the quality of the non-reference stereo image based on the natural scene statistics as claimed in claim 1, wherein the step 4) extracts the relevant features which can affect the 3D visual perception quality, and the specific operation steps are as follows:
4-1) considering the influence of binocular competition on the human eyes when watching the stereo image, analyzing an expression reflecting the binocular competition, and extracting 3D characteristics based on the binocular competition;
4-2) considering the situation that when binocular parallax is calculated, parallax matching errors occur, so that matching points cannot be found in the right image by points in the left image, analyzing an expression reflecting the parallax matching errors, and extracting 3D features based on the parallax matching errors;
4-3) analyzing the correlation between pixel points and surrounding pixel points in the binocular disparity map, namely the continuity between pixel values, calculating an expression reflecting binocular disparity consistency, and extracting 3D features based on the binocular disparity consistency.
6. The method for evaluating the quality of the non-reference stereoscopic image based on the natural scene statistics as claimed in claim 5, wherein the step 4-1) extracts the binocular competition based 3D features, which operates as follows from two aspects:
on one hand, fitting the probability density distribution of the binocular parallax after the preprocessing operation by using a generalized Gaussian distribution model, and extracting model parameters;
on the other hand, considering the influence of the degree of binocular competition on the human perception process, analyzing the expression for the degree of binocular competition, which is as follows:

BR_D(x,y) = (2 · I_L(x,y) · I_R(x−D(x,y), y) + C_1) / (I_L(x,y)² + I_R(x−D(x,y), y)² + C_1)

where I_L(x,y) and I_R(x,y) respectively represent the left and right images, x ∈ {1,2,...,M} and y ∈ {1,2,...,N} are image pixel coordinates, M and N are the length and width of the image, respectively, D(x,y) is the binocular disparity, and C_1 is a constant; and fitting the probability density distribution after the preprocessing operation with a generalized Gaussian distribution model and extracting model parameters as features.
7. The method for evaluating the quality of the non-reference stereo image based on the natural scene statistics as claimed in claim 5, wherein the 3D features based on the parallax matching error are extracted in the step 4-2), and the method comprises the following steps:
considering the disparity matching errors that occur when calculating binocular disparity, the expression for the disparity matching error is analyzed; it is as follows:

D_error(x,y) = I_L(x,y) − I_R(x−D(x,y), y)

where I_L(x,y) and I_R(x,y) represent the left and right images, respectively, and D(x,y) is the binocular disparity; the probability density distribution after the preprocessing operation is fitted with a generalized Gaussian distribution model, and model parameters are extracted as features.
8. The method for evaluating the quality of the non-reference stereoscopic image based on the natural scene statistics as claimed in claim 5, wherein the 3D features based on the binocular disparity consistency are extracted in the step 4-3), and the method comprises the following steps:
the expression for binocular disparity consistency is analyzed; it is as follows:

D_consist(x,y) = D(x,y) ⊗ w

where D(x,y) is the binocular disparity, w is a low-pass filter kernel, and ⊗ is the convolution symbol; the probability density distribution is fitted with a generalized Gaussian distribution model, and model parameters are extracted as features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610006517.0A CN105654142B (en) | 2016-01-06 | 2016-01-06 | No-reference stereo image quality evaluation method based on natural scene statistics
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610006517.0A CN105654142B (en) | 2016-01-06 | 2016-01-06 | No-reference stereo image quality evaluation method based on natural scene statistics
Publications (2)
Publication Number | Publication Date |
---|---|
CN105654142A CN105654142A (en) | 2016-06-08 |
CN105654142B true CN105654142B (en) | 2019-07-23 |
Family
ID=56491419
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610006517.0A Expired - Fee Related CN105654142B (en) | 2016-01-06 | 2016-01-06 | No-reference stereo image quality evaluation method based on natural scene statistics
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105654142B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107371016A (en) * | 2017-07-25 | 2017-11-21 | 天津大学 | No-reference 3D stereo image quality evaluation method based on asymmetric distortion
CN109934078B (en) * | 2017-12-19 | 2021-04-20 | 浙江宇视科技有限公司 | Image processing method and device and electronic equipment |
CN108492275B (en) * | 2018-01-24 | 2020-08-18 | 浙江科技学院 | No-reference stereo image quality evaluation method based on deep neural network |
CN108648223A (en) * | 2018-05-17 | 2018-10-12 | 苏州科技大学 | Scene reconstruction method based on median eye and reconfiguration system |
CN108765414B (en) * | 2018-06-14 | 2021-12-03 | 上海大学 | No-reference stereo image quality evaluation method based on wavelet decomposition and natural scene statistics |
CN109409380B (en) * | 2018-08-27 | 2021-01-12 | 浙江科技学院 | Stereo image visual saliency extraction method based on double learning networks |
CN110246111B (en) * | 2018-12-07 | 2023-05-26 | 天津大学青岛海洋技术研究院 | No-reference stereoscopic image quality evaluation method based on fusion image and enhanced image |
CN114067006B (en) * | 2022-01-17 | 2022-04-08 | 湖南工商大学 | Screen content image quality evaluation method based on discrete cosine transform |
CN115690418B (en) * | 2022-10-31 | 2024-03-12 | 武汉大学 | Unsupervised automatic detection method for image waypoints |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750695A (en) * | 2012-06-04 | 2012-10-24 | 清华大学 | Machine learning-based stereoscopic image quality objective assessment method |
CN103996192A (en) * | 2014-05-12 | 2014-08-20 | 同济大学 | Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model |
2016
- 2016-01-06 CN CN201610006517.0A patent/CN105654142B/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102750695A (en) * | 2012-06-04 | 2012-10-24 | 清华大学 | Machine learning-based stereoscopic image quality objective assessment method |
CN103996192A (en) * | 2014-05-12 | 2014-08-20 | 同济大学 | Non-reference image quality evaluation method based on high-quality natural image statistical magnitude model |
Non-Patent Citations (2)
Title |
---|
"基于自然场景统计的无参考图像质量评价";薛松 等;《四川兵工学报》;20140430;第35卷(第4期);第119-123页 |
"结合感知特征和自然场景统计的无参考图像质量评价";贾惠珍 等;《中国图象图形学报》;20141231;第0859-0867页 |
Also Published As
Publication number | Publication date |
---|---|
CN105654142A (en) | 2016-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105654142B (en) | No-reference stereo image quality evaluation method based on natural scene statistics | |
CN107578404B (en) | Full-reference stereo image quality objective evaluation method based on visual salient feature extraction | |
CN107578403B (en) | Stereo image quality evaluation method based on gradient information guided binocular view fusion | |
Chen et al. | No-reference quality assessment of natural stereopairs | |
Shao et al. | Blind image quality assessment for stereoscopic images using binocular guided quality lookup and visual codebook | |
CN109523506B (en) | Full-reference stereo image quality objective evaluation method based on visual salient image feature enhancement | |
CN109345502B (en) | Stereo image quality evaluation method based on disparity map stereo structure information extraction | |
CN109255358B (en) | 3D image quality evaluation method based on visual saliency and depth map | |
CN104994375A (en) | Three-dimensional image quality objective evaluation method based on three-dimensional visual saliency | |
CN103426173B (en) | Objective evaluation method for stereo image quality | |
CN110246111B (en) | No-reference stereoscopic image quality evaluation method based on fusion image and enhanced image | |
Shao et al. | Learning receptive fields and quality lookups for blind quality assessment of stereoscopic images | |
Lin et al. | Quality index for stereoscopic images by jointly evaluating cyclopean amplitude and cyclopean phase | |
Shao et al. | Models of monocular and binocular visual perception in quality assessment of stereoscopic images | |
CN107371016A (en) | No-reference 3D stereo image quality evaluation method based on asymmetric distortion | |
CN114648482A (en) | Quality evaluation method and system for three-dimensional panoramic image | |
CN109429051A (en) | No-reference stereoscopic video quality objective evaluation method based on multi-view feature learning | |
Liu et al. | Blind stereoscopic image quality assessment accounting for human monocular visual properties and binocular interactions | |
CN104243970A (en) | 3D drawn image objective quality evaluation method based on stereoscopic vision attention mechanism and structural similarity | |
CN109510981A (en) | Stereo image comfort prediction method based on multi-scale DCT transform | |
Messai et al. | Deep learning and cyclopean view for no-reference stereoscopic image quality assessment | |
Zhou et al. | Quality assessment of 3D synthesized images via disoccluded region discovery | |
CN108648186B (en) | No-reference stereo image quality evaluation method based on primary visual perception mechanism | |
Chen et al. | Full-reference quality assessment of stereoscopic images by modeling binocular rivalry | |
CN106682599A (en) | Stereo image visual saliency extraction method based on sparse representation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20190723 Termination date: 20220106 |