CN102903107A - Three-dimensional picture quality objective evaluation method based on feature fusion - Google Patents


Publication number
CN102903107A
Authority
CN
China
Prior art date
Legal status
Granted
Application number
CN2012103579568A
Other languages
Chinese (zh)
Other versions
CN102903107B (en)
Inventor
邵枫 (Shao Feng)
段芬芳 (Duan Fenfang)
蒋刚毅 (Jiang Gangyi)
郁梅 (Yu Mei)
李福翠 (Li Fucui)
Current Assignee
Jiangsu Qizhen Information Technology Service Co ltd
Original Assignee
Ningbo University
Priority date
Filing date
Publication date
Application filed by Ningbo University
Priority to CN201210357956.8A
Publication of CN102903107A
Application granted
Publication of CN102903107B
Expired - Fee Related

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a stereo image quality objective evaluation method based on feature fusion. The method first computes a cyclopean map of the original undistorted stereo image and a cyclopean map of the distorted stereo image to be evaluated; from the mean and standard deviation of each pixel in the two cyclopean maps, an objective evaluation metric value is obtained for each pixel in the cyclopean map of the distorted stereo image. Saliency maps of the two cyclopean maps and the distortion map between them are then computed and used to fuse the per-pixel objective evaluation metric values into an image quality objective evaluation prediction value for the distorted stereo image. The method has the advantages that the cyclopean maps simulate the binocular stereo fusion process well, and that fusing through the saliency maps and the distortion map effectively improves the correlation between objective evaluation results and subjective perception.

Description

Stereo image quality objective evaluation method based on feature fusion
Technical Field
The invention relates to an image quality evaluation method, in particular to a stereo image quality objective evaluation method based on feature fusion.
Background
With the rapid development of image coding and stereoscopic display technologies, stereoscopic image technology has attracted increasingly wide attention and application and has become a current research hotspot. Stereoscopic image technology exploits the binocular parallax principle of the human eyes: the left and right eyes independently receive the left and right viewpoint images of the same scene, and the brain fuses them into binocular parallax, so that the viewer perceives a stereoscopic image with a sense of depth and realism. Owing to the influence of acquisition systems and of storage, compression and transmission equipment, a series of distortions is inevitably introduced into stereo images; and compared with a single-channel image, a stereo image must guarantee the image quality of two channels simultaneously, so quality evaluation of stereo images is of great significance. However, there is currently no effective objective method for evaluating stereoscopic image quality. Establishing an effective objective model of stereo image quality is therefore of great importance.
Existing objective evaluation methods for stereo image quality mostly apply planar image quality evaluation methods directly to stereo images. However, the process by which the left and right viewpoint images are fused to produce the stereoscopic effect is not a simple superposition of the two images, and it is difficult to express with a simple mathematical model. How to effectively simulate binocular stereo fusion in the evaluation process, and how to extract effective feature information with which to fuse the evaluation results so that the objective results better conform to the human visual system, are therefore problems that need to be studied and solved in objective stereo image quality evaluation.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a stereo image quality objective evaluation method based on feature fusion that can effectively improve the correlation between objective evaluation results and subjective perception.
The technical scheme adopted by the invention to solve the above technical problem is a stereo image quality objective evaluation method based on feature fusion, whose processing procedure is as follows: first, a cyclopean map (the fused single-eye view) of the original undistorted stereo image is obtained from the even-symmetric and odd-symmetric frequency responses, at different scales and in different directions, of each pixel in the left and right viewpoint images of the original undistorted stereo image, together with the parallax image between those left and right viewpoint images; a cyclopean map of the distorted stereo image to be evaluated is obtained in the same way from the even-symmetric and odd-symmetric frequency responses of each pixel in the left and right viewpoint images of the distorted stereo image and the parallax image of the original undistorted stereo image; second, an objective evaluation metric value for each pixel in the cyclopean map of the distorted stereo image is obtained from the mean and standard deviation of each pixel in the two cyclopean maps; third, corresponding saliency maps are obtained from the amplitude and phase of each of the two cyclopean maps; then, according to the two saliency maps and the distortion map between the two cyclopean maps, the per-pixel objective evaluation metric values are fused into the image quality objective evaluation prediction value of the distorted stereo image; finally, image quality objective evaluation prediction values of distorted stereo images of different distortion types and degrees are obtained by the same procedure.
The stereo image quality objective evaluation method based on feature fusion of the invention specifically comprises the following steps:
① Let $S_{org}$ denote the original undistorted stereo image and $S_{dis}$ the distorted stereo image to be evaluated. Denote the left and right viewpoint images of $S_{org}$ as $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$, and the left and right viewpoint images of $S_{dis}$ as $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$, where $(x,y)$ denotes the coordinate position of a pixel, $1 \le x \le W$, $1 \le y \le H$, $W$ and $H$ denote the width and height of the viewpoint images, and $L_{org}(x,y)$, $R_{org}(x,y)$, $L_{dis}(x,y)$ and $R_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective images;
② From the even-symmetric and odd-symmetric frequency responses of each pixel in $\{L_{org}(x,y)\}$, $\{R_{org}(x,y)\}$, $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ at different scales and in different directions, obtain the amplitude of each pixel in these four images; then, from the amplitudes of the pixels in $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$ and the pixel values of the parallax image between $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$, calculate the cyclopean map of $S_{org}$, denoted $\{CM_{org}(x,y)\}$; likewise, from the amplitudes of the pixels in $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ and the pixel values of the same parallax image, calculate the cyclopean map of $S_{dis}$, denoted $\{CM_{dis}(x,y)\}$, where $CM_{org}(x,y)$ and $CM_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective maps;
③ From the mean and standard deviation of each pixel in $\{CM_{org}(x,y)\}$ and $\{CM_{dis}(x,y)\}$, calculate an objective evaluation metric value for each pixel in $\{CM_{dis}(x,y)\}$; the metric value of the pixel at coordinate $(x,y)$ is denoted $Q_{image}(x,y)$;
④ From the amplitude and phase of $\{CM_{org}(x,y)\}$, calculate its saliency map, denoted $\{SM_{org}(x,y)\}$; from the amplitude and phase of $\{CM_{dis}(x,y)\}$, calculate its saliency map, denoted $\{SM_{dis}(x,y)\}$, where $SM_{org}(x,y)$ and $SM_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective saliency maps;
⑤ Calculate the distortion map between $\{CM_{org}(x,y)\}$ and $\{CM_{dis}(x,y)\}$, denoted $\{DM(x,y)\}$; the pixel value at coordinate $(x,y)$ is $DM(x,y) = (CM_{org}(x,y) - CM_{dis}(x,y))^2$;
⑥ According to $\{SM_{org}(x,y)\}$, $\{SM_{dis}(x,y)\}$ and $\{DM(x,y)\}$, fuse the objective evaluation metric values of all pixels in $\{CM_{dis}(x,y)\}$ to obtain the image quality objective evaluation prediction value of $S_{dis}$, denoted $Q$:

$$Q = \left[ \frac{\sum_{(x,y) \in \Omega} Q_{image}(x,y) \times SM(x,y)}{\sum_{(x,y) \in \Omega} SM(x,y)} \right]^{\gamma} \times \left[ \frac{\sum_{(x,y) \in \Omega} Q_{image}(x,y) \times DM(x,y)}{\sum_{(x,y) \in \Omega} DM(x,y)} \right]^{\beta},$$

where $\Omega$ denotes the pixel domain, $SM(x,y) = \max(SM_{org}(x,y), SM_{dis}(x,y))$, $\max()$ is the maximum function, and $\gamma$ and $\beta$ are weight coefficients;
⑦ Using $n$ original undistorted stereo images, establish a set of distorted stereo images under different distortion types and degrees, and obtain the difference mean opinion score of each distorted stereo image in the set by a subjective quality evaluation method, denoted DMOS, where DMOS = 100 − MOS, MOS denotes the mean opinion score, DMOS ∈ [0, 100], and n ≥ 1;
⑧ Calculate the image quality objective evaluation prediction value of $S_{dis}$ according to steps ① to ⑥; the prediction value of each distorted stereo image in the distorted stereo image set is calculated in the same way.
The specific process of step ② is as follows:
②-1. Filter $\{L_{org}(x,y)\}$ to obtain the even-symmetric and odd-symmetric frequency responses of each pixel in $\{L_{org}(x,y)\}$ at different scales and in different directions; denote the even-symmetric frequency response of the pixel at coordinate $(x,y)$ as $e_{\alpha,\theta}(x,y)$ and its odd-symmetric frequency response as $o_{\alpha,\theta}(x,y)$, where $\alpha$ denotes the scale factor of the filter, $1 \le \alpha \le 4$, and $\theta$ denotes the direction factor of the filter, $1 \le \theta \le 4$;
②-2. From the even-symmetric and odd-symmetric frequency responses of each pixel in $\{L_{org}(x,y)\}$ at different scales and in different directions, calculate the amplitude of each pixel in $\{L_{org}(x,y)\}$; the amplitude of the pixel at coordinate $(x,y)$ is denoted $GE_{org}^{L}(x,y)$:

$$GE_{org}^{L}(x,y) = \sum_{\theta=1}^{4} \sum_{\alpha=1}^{4} \sqrt{e_{\alpha,\theta}(x,y)^2 + o_{\alpha,\theta}(x,y)^2};$$

②-3. Following the operations of steps ②-1 to ②-2 for $\{L_{org}(x,y)\}$, obtain in the same manner the amplitude of each pixel in $\{R_{org}(x,y)\}$, $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$; the amplitudes of the pixels at coordinate $(x,y)$ are denoted $GE_{org}^{R}(x,y)$, $GE_{dis}^{L}(x,y)$ and $GE_{dis}^{R}(x,y)$, respectively;
②-4. Calculate the parallax image between $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$ by a block matching method, denoted $\{d_{org}^{L}(x,y)\}$, where $d_{org}^{L}(x,y)$ denotes the pixel value at coordinate $(x,y)$ in $\{d_{org}^{L}(x,y)\}$;
②-5. From the amplitudes of the pixels in $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$ and the pixel values of $\{d_{org}^{L}(x,y)\}$, calculate the cyclopean map of $S_{org}$, denoted $\{CM_{org}(x,y)\}$; the pixel value at coordinate $(x,y)$ is

$$CM_{org}(x,y) = \frac{GE_{org}^{L}(x,y) \times L_{org}(x,y) + GE_{org}^{R}(x - d_{org}^{L}(x,y),\, y) \times R_{org}(x - d_{org}^{L}(x,y),\, y)}{GE_{org}^{L}(x,y) + GE_{org}^{R}(x - d_{org}^{L}(x,y),\, y)},$$

where $GE_{org}^{R}(x - d_{org}^{L}(x,y), y)$ and $R_{org}(x - d_{org}^{L}(x,y), y)$ denote the amplitude and the pixel value of the pixel at coordinate $(x - d_{org}^{L}(x,y), y)$ in $\{R_{org}(x,y)\}$;
②-6. From the amplitudes of the pixels in $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ and the pixel values of $\{d_{org}^{L}(x,y)\}$, calculate the cyclopean map of $S_{dis}$, denoted $\{CM_{dis}(x,y)\}$; the pixel value at coordinate $(x,y)$ is

$$CM_{dis}(x,y) = \frac{GE_{dis}^{L}(x,y) \times L_{dis}(x,y) + GE_{dis}^{R}(x - d_{org}^{L}(x,y),\, y) \times R_{dis}(x - d_{org}^{L}(x,y),\, y)}{GE_{dis}^{L}(x,y) + GE_{dis}^{R}(x - d_{org}^{L}(x,y),\, y)},$$

where $GE_{dis}^{R}(x - d_{org}^{L}(x,y), y)$ and $R_{dis}(x - d_{org}^{L}(x,y), y)$ denote the amplitude and the pixel value of the pixel at coordinate $(x - d_{org}^{L}(x,y), y)$ in $\{R_{dis}(x,y)\}$.
In step ②-1, the filter used to filter $\{L_{org}(x,y)\}$ is a log-Gabor filter.
The specific process of step ③ is as follows:
③-1. Calculate the mean and standard deviation of each pixel in $\{CM_{org}(x,y)\}$ and $\{CM_{dis}(x,y)\}$. Denote the mean and standard deviation of the pixel at coordinate $(x_1,y_1)$ in $\{CM_{org}(x,y)\}$ as $\mu_{org}(x_1,y_1)$ and $\sigma_{org}(x_1,y_1)$, and the mean and standard deviation of the pixel at coordinate $(x_1,y_1)$ in $\{CM_{dis}(x,y)\}$ as $\mu_{dis}(x_1,y_1)$ and $\sigma_{dis}(x_1,y_1)$:

$$\mu_{org}(x_1,y_1) = \frac{\sum_{(x_2,y_2) \in N(x_1,y_1)} CM_{org}(x_2,y_2)}{M}, \qquad \sigma_{org}(x_1,y_1) = \sqrt{\frac{\sum_{(x_2,y_2) \in N(x_1,y_1)} \left( CM_{org}(x_2,y_2) - \mu_{org}(x_1,y_1) \right)^2}{M}},$$

$$\mu_{dis}(x_1,y_1) = \frac{\sum_{(x_2,y_2) \in N(x_1,y_1)} CM_{dis}(x_2,y_2)}{M}, \qquad \sigma_{dis}(x_1,y_1) = \sqrt{\frac{\sum_{(x_2,y_2) \in N(x_1,y_1)} \left( CM_{dis}(x_2,y_2) - \mu_{dis}(x_1,y_1) \right)^2}{M}},$$

where $1 \le x_1 \le W$, $1 \le y_1 \le H$, $N(x_1,y_1)$ denotes the 8 × 8 neighborhood window centered on the pixel at coordinate $(x_1,y_1)$, $M$ denotes the number of pixels in $N(x_1,y_1)$, and $CM_{org}(x_2,y_2)$ and $CM_{dis}(x_2,y_2)$ denote the pixel values at coordinate $(x_2,y_2)$ in $\{CM_{org}(x,y)\}$ and $\{CM_{dis}(x,y)\}$, respectively;
③-2. From these means and standard deviations, calculate the objective evaluation metric value of each pixel in $\{CM_{dis}(x,y)\}$; the metric value of the pixel at coordinate $(x_1,y_1)$ is denoted $Q_{image}(x_1,y_1)$:

$$Q_{image}(x_1,y_1) = \frac{4 \times \left( \mu_{org}(x_1,y_1) \times \mu_{dis}(x_1,y_1) \right) \times \left( \sigma_{org}(x_1,y_1) \times \sigma_{dis}(x_1,y_1) \right) + C}{\left( \mu_{org}(x_1,y_1)^2 + \mu_{dis}(x_1,y_1)^2 \right) \times \left( \sigma_{org}(x_1,y_1)^2 + \sigma_{dis}(x_1,y_1)^2 \right) + C},$$

where $C$ is a control parameter.
The specific process of step ④ is as follows:
④-1. Apply the discrete Fourier transform to $\{CM_{org}(x,y)\}$ to obtain its amplitude spectrum and phase spectrum, denoted $\{M_{org}(u,v)\}$ and $\{A_{org}(u,v)\}$ respectively, where $(u,v)$ denotes a coordinate position in the transform domain, $1 \le u \le W$, $1 \le v \le H$, $M_{org}(u,v)$ denotes the amplitude value at coordinate $(u,v)$ in $\{M_{org}(u,v)\}$, and $A_{org}(u,v)$ denotes the phase value at coordinate $(u,v)$ in $\{A_{org}(u,v)\}$;
④-2. Calculate the amplitude of the high-frequency component of $\{M_{org}(u,v)\}$, denoted $\{R_{org}(u,v)\}$; the value at coordinate $(u,v)$ is $R_{org}(u,v) = \log(M_{org}(u,v)) - h_m(u,v) * \log(M_{org}(u,v))$, where $\log()$ is the natural logarithm (base $e$, $e = 2.718281828\ldots$), "*" is the convolution operator, and $h_m(u,v)$ denotes an $m \times m$ mean filter;
④-3. Apply the inverse discrete Fourier transform to $\{R_{org}(u,v)\}$ and $\{A_{org}(u,v)\}$, and take the resulting image as the saliency map of $\{CM_{org}(x,y)\}$, denoted $\{SM_{org}(x,y)\}$, where $SM_{org}(x,y)$ denotes the pixel value at coordinate $(x,y)$;
④-4. Following the operations of steps ④-1 to ④-3 for the saliency map of $\{CM_{org}(x,y)\}$, obtain in the same manner the saliency map of $\{CM_{dis}(x,y)\}$, denoted $\{SM_{dis}(x,y)\}$, where $SM_{dis}(x,y)$ denotes the pixel value at coordinate $(x,y)$.
Compared with the prior art, the invention has the following advantages:
1) The method computes the cyclopean map of the original undistorted stereo image and of the distorted stereo image to be evaluated, and evaluates the cyclopean map of the distorted stereo image directly. This effectively simulates the binocular stereo fusion process and avoids linearly weighting separate objective evaluation metric values of the left and right viewpoint images.
2) By computing saliency maps of the two cyclopean maps and the distortion map between them, and using these to fuse the per-pixel objective evaluation metric values of the cyclopean map of the distorted stereo image, the method makes the evaluation result agree better with human visual perception, which effectively improves the correlation between objective evaluation results and subjective perception.
Drawings
FIG. 1 is a block diagram of an overall implementation of the method of the present invention;
fig. 2a is a left viewpoint image of an Akko (size 640 × 480) stereoscopic image;
fig. 2b is a right viewpoint image of an Akko (size 640 × 480) stereoscopic image;
fig. 3a is a left viewpoint image of an AltMoabit (size 1024 × 768) stereoscopic image;
fig. 3b is a right viewpoint image of an AltMoabit (size 1024 × 768) stereoscopic image;
fig. 4a is a left viewpoint image of a balloon (size 1024 × 768) stereoscopic image;
fig. 4b is a right viewpoint image of a balloon (size 1024 × 768) stereoscopic image;
fig. 5a is a left viewpoint image of a Doorflower (size 1024 × 768) stereoscopic image;
fig. 5b is a right viewpoint image of a Doorflower (size 1024 × 768) stereoscopic image;
fig. 6a is a left view image of a Kendo (size 1024 × 768) stereoscopic image;
fig. 6b is a right view image of a Kendo (size 1024 × 768) stereoscopic image;
fig. 7a is a left view image of a LeaveLaptop (size 1024 × 768) stereoscopic image;
fig. 7b is a right view image of a LeaveLaptop (size 1024 × 768) stereoscopic image;
fig. 8a is a left viewpoint image of a Lovebird1 (size 1024 × 768) stereoscopic image;
fig. 8b is a right viewpoint image of a Lovebird1 (size 1024 × 768) stereoscopic image;
fig. 9a is a left viewpoint image of a Newspaper (size 1024 × 768) stereoscopic image;
fig. 9b is a right viewpoint image of a Newspaper (size 1024 × 768) stereoscopic image;
FIG. 10a is a left viewpoint image of Puppy (size 720 × 480) stereo image;
FIG. 10b is a right viewpoint image of Puppy (size 720 × 480) stereoscopic image;
fig. 11a is a left viewpoint image of a Soccer2 (size 720 × 480) stereoscopic image;
fig. 11b is a right viewpoint image of a Soccer2 (size 720 × 480) stereoscopic image;
fig. 12a is a left viewpoint image of a Horse (size 720 × 480) stereoscopic image;
fig. 12b is a right view image of a Horse (size 720 × 480) stereoscopic image;
fig. 13a is a left viewpoint image of an Xmas (size 640 × 480) stereoscopic image;
fig. 13b is a right view image of an Xmas (size 640 × 480) stereoscopic image;
fig. 14 is a scatter plot of the image quality objective evaluation prediction values versus the difference mean opinion scores for the distorted stereoscopic images in the distorted stereoscopic image set.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings and embodiments.
The invention provides a stereo image quality objective evaluation method based on feature fusion, whose overall implementation block diagram is shown in Figure 1. The processing procedure is as follows: first, the cyclopean map of the original undistorted stereo image is obtained from the even-symmetric and odd-symmetric frequency responses, at different scales and in different directions, of each pixel in its left and right viewpoint images, together with the parallax image between those viewpoint images; the cyclopean map of the distorted stereo image to be evaluated is obtained in the same way from the frequency responses of each pixel in its left and right viewpoint images and the parallax image of the original undistorted stereo image; second, an objective evaluation metric value for each pixel in the cyclopean map of the distorted stereo image is obtained from the mean and standard deviation of each pixel in the two cyclopean maps; third, corresponding saliency maps are obtained from the amplitude and phase of each of the two cyclopean maps; then, according to the two saliency maps and the distortion map between the two cyclopean maps, the per-pixel objective evaluation metric values are fused into the image quality objective evaluation prediction value of the distorted stereo image; finally, prediction values of distorted stereo images of different distortion types and degrees are obtained by the same procedure.
The method specifically comprises the following steps:
① Let $S_{org}$ denote the original undistorted stereo image and $S_{dis}$ the distorted stereo image to be evaluated. Denote the left and right viewpoint images of $S_{org}$ as $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$, and the left and right viewpoint images of $S_{dis}$ as $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$, where $(x,y)$ denotes the coordinate position of a pixel, $1 \le x \le W$, $1 \le y \le H$, $W$ and $H$ denote the width and height of the viewpoint images, and $L_{org}(x,y)$, $R_{org}(x,y)$, $L_{dis}(x,y)$ and $R_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective images.
② From the even-symmetric and odd-symmetric frequency responses of each pixel in $\{L_{org}(x,y)\}$, $\{R_{org}(x,y)\}$, $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ at different scales and in different directions, obtain the amplitude of each pixel in these four images; then, from the amplitudes of the pixels in $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$ and the pixel values of the parallax image between $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$, calculate the cyclopean map of $S_{org}$, denoted $\{CM_{org}(x,y)\}$; likewise, from the amplitudes of the pixels in $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ and the pixel values of the same parallax image, calculate the cyclopean map of $S_{dis}$, denoted $\{CM_{dis}(x,y)\}$, where $CM_{org}(x,y)$ and $CM_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective maps.
In this embodiment, the specific process of step ② is as follows:
②-1. Filter $\{L_{org}(x,y)\}$ to obtain the even-symmetric and odd-symmetric frequency responses of each pixel in $\{L_{org}(x,y)\}$ at different scales and in different directions; denote the even-symmetric frequency response of the pixel at coordinate $(x,y)$ as $e_{\alpha,\theta}(x,y)$ and its odd-symmetric frequency response as $o_{\alpha,\theta}(x,y)$, where $\alpha$ denotes the scale factor of the filter, $1 \le \alpha \le 4$, and $\theta$ denotes the direction factor of the filter, $1 \le \theta \le 4$.
Here, the filter used to filter $\{L_{org}(x,y)\}$ is a log-Gabor filter.
②-2. From the even-symmetric and odd-symmetric frequency responses of each pixel in $\{L_{org}(x,y)\}$ at different scales and in different directions, calculate the amplitude of each pixel in $\{L_{org}(x,y)\}$; the amplitude of the pixel at coordinate $(x,y)$ is denoted $GE_{org}^{L}(x,y)$:

$$GE_{org}^{L}(x,y) = \sum_{\theta=1}^{4} \sum_{\alpha=1}^{4} \sqrt{e_{\alpha,\theta}(x,y)^2 + o_{\alpha,\theta}(x,y)^2}.$$

②-3. Following the operations of steps ②-1 to ②-2 for $\{L_{org}(x,y)\}$, obtain in the same manner the amplitude of each pixel in $\{R_{org}(x,y)\}$, $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$; the amplitudes of the pixels at coordinate $(x,y)$ are denoted $GE_{org}^{R}(x,y)$, $GE_{dis}^{L}(x,y)$ and $GE_{dis}^{R}(x,y)$, respectively. For example, the amplitude of each pixel in $\{L_{dis}(x,y)\}$ is obtained as follows: 1) filter $\{L_{dis}(x,y)\}$ to obtain the even-symmetric and odd-symmetric frequency responses of each pixel at different scales and in different directions, denoted $e'_{\alpha,\theta}(x,y)$ and $o'_{\alpha,\theta}(x,y)$ for the pixel at coordinate $(x,y)$, where $\alpha$ ($1 \le \alpha \le 4$) denotes the scale factor of the filter and $\theta$ ($1 \le \theta \le 4$) its direction factor; 2) from these frequency responses, calculate the amplitude of each pixel in $\{L_{dis}(x,y)\}$, the amplitude of the pixel at coordinate $(x,y)$ being

$$GE_{dis}^{L}(x,y) = \sum_{\theta=1}^{4} \sum_{\alpha=1}^{4} \sqrt{e'_{\alpha,\theta}(x,y)^2 + o'_{\alpha,\theta}(x,y)^2}.$$
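For illustration, the following is a minimal numpy sketch of steps ②-1 and ②-2, assuming a standard frequency-domain log-Gabor construction; the filter parameters (minimum wavelength, scale multiplier, radial and angular bandwidths) are illustrative assumptions, since the patent does not fix them here.

```python
import numpy as np

def log_gabor_amplitude(img, n_scales=4, n_orients=4,
                        min_wavelength=3.0, mult=2.1,
                        sigma_f=0.55, sigma_theta=0.4):
    """Amplitude GE(x, y): sum over 4 scales and 4 orientations of
    sqrt(even^2 + odd^2), the even/odd symmetric log-Gabor responses."""
    h, w = img.shape
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(h)[:, None]          # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]          # horizontal frequencies
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                       # avoid log(0) at the DC term
    angle = np.arctan2(-fy, fx)
    GE = np.zeros((h, w))
    for a in range(n_scales):
        f0 = 1.0 / (min_wavelength * mult ** a)   # center frequency of scale a
        radial = np.exp(-np.log(radius / f0) ** 2 /
                        (2.0 * np.log(sigma_f) ** 2))
        radial[0, 0] = 0.0                   # log-Gabor has no DC response
        for t in range(n_orients):
            theta0 = t * np.pi / n_orients
            dtheta = np.arctan2(np.sin(angle - theta0),
                                np.cos(angle - theta0))
            angular = np.exp(-dtheta ** 2 / (2.0 * sigma_theta ** 2))
            resp = np.fft.ifft2(F * radial * angular)
            e, o = resp.real, resp.imag      # even / odd symmetric responses
            GE += np.sqrt(e ** 2 + o ** 2)
    return GE
```

The same routine is applied unchanged to $\{R_{org}(x,y)\}$, $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ in step ②-3.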
②-4. Calculate the parallax image between $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$ by a block matching method, denoted $\{d_{org}^{L}(x,y)\}$, where $d_{org}^{L}(x,y)$ denotes the pixel value at coordinate $(x,y)$ in $\{d_{org}^{L}(x,y)\}$.
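A minimal sketch of the block matching search in step ②-4 follows; the block size, the SAD matching cost and the search range are illustrative assumptions, as the patent only names the block matching method.

```python
import numpy as np

def block_matching_disparity(left, right, block=8, max_disp=64):
    """Estimate d(x, y) so that right[y, x - d] matches left[y, x],
    by minimising the sum of absolute differences (SAD) per block."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = left[by:by + block, bx:bx + block].astype(np.float64)
            best_d, best_cost = 0, np.inf
            for d in range(0, min(max_disp, bx) + 1):
                cand = right[by:by + block, bx - d:bx - d + block]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by:by + block, bx:bx + block] = best_d
    return disp
```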
②-5. From the amplitudes of the pixels in $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$ and the pixel values of $\{d_{org}^{L}(x,y)\}$, calculate the cyclopean map of $S_{org}$, denoted $\{CM_{org}(x,y)\}$; the pixel value at coordinate $(x,y)$ is

$$CM_{org}(x,y) = \frac{GE_{org}^{L}(x,y) \times L_{org}(x,y) + GE_{org}^{R}(x - d_{org}^{L}(x,y),\, y) \times R_{org}(x - d_{org}^{L}(x,y),\, y)}{GE_{org}^{L}(x,y) + GE_{org}^{R}(x - d_{org}^{L}(x,y),\, y)},$$

where $GE_{org}^{R}(x - d_{org}^{L}(x,y), y)$ and $R_{org}(x - d_{org}^{L}(x,y), y)$ denote the amplitude and the pixel value of the pixel at coordinate $(x - d_{org}^{L}(x,y), y)$ in $\{R_{org}(x,y)\}$.
②-6. From the amplitudes of the pixels in $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ and the pixel values of $\{d_{org}^{L}(x,y)\}$, calculate the cyclopean map of $S_{dis}$, denoted $\{CM_{dis}(x,y)\}$; the pixel value at coordinate $(x,y)$ is

$$CM_{dis}(x,y) = \frac{GE_{dis}^{L}(x,y) \times L_{dis}(x,y) + GE_{dis}^{R}(x - d_{org}^{L}(x,y),\, y) \times R_{dis}(x - d_{org}^{L}(x,y),\, y)}{GE_{dis}^{L}(x,y) + GE_{dis}^{R}(x - d_{org}^{L}(x,y),\, y)},$$

where $GE_{dis}^{R}(x - d_{org}^{L}(x,y), y)$ and $R_{dis}(x - d_{org}^{L}(x,y), y)$ denote the amplitude and the pixel value of the pixel at coordinate $(x - d_{org}^{L}(x,y), y)$ in $\{R_{dis}(x,y)\}$.
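The fusion of steps ②-5 and ②-6 can be sketched as below; one routine serves both cyclopean maps, since the patent reuses the parallax image $\{d_{org}^{L}(x,y)\}$ of the original pair when fusing the distorted pair. The boundary clipping and the small epsilon guard against zero weights are implementation assumptions.

```python
import numpy as np

def cyclopean_map(L, R, GE_L, GE_R, disp):
    """CM(x,y) = [GE_L*L + GE_R'*R'] / [GE_L + GE_R'], where the prime
    means sampling the right view at column x - d(x, y)."""
    h, w = L.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.clip(xs - disp, 0, w - 1)       # disparity-compensated column
    R_w = R[ys, xr]                         # warped right-view pixel values
    GE_Rw = GE_R[ys, xr]                    # warped right-view amplitudes
    eps = 1e-12                             # guard against zero denominators
    return (GE_L * L + GE_Rw * R_w) / (GE_L + GE_Rw + eps)

# Usage: CM_org = cyclopean_map(L_org, R_org, GE_L_org, GE_R_org, d_org)
#        CM_dis = cyclopean_map(L_dis, R_dis, GE_L_dis, GE_R_dis, d_org)
```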
③ From the mean and standard deviation of each pixel in $\{CM_{org}(x,y)\}$ and $\{CM_{dis}(x,y)\}$, calculate an objective evaluation metric value for each pixel in $\{CM_{dis}(x,y)\}$; the metric value of the pixel at coordinate $(x,y)$ is denoted $Q_{image}(x,y)$, and the metric values of all pixels in $\{CM_{dis}(x,y)\}$ are collectively denoted $\{Q_{image}(x,y)\}$.
In this embodiment, the specific process of step ③ is as follows:
③-1. Calculate the mean and standard deviation of each pixel in $\{CM_{org}(x,y)\}$ and $\{CM_{dis}(x,y)\}$. Denote the mean and standard deviation of the pixel at coordinate $(x_1,y_1)$ in $\{CM_{org}(x,y)\}$ as $\mu_{org}(x_1,y_1)$ and $\sigma_{org}(x_1,y_1)$, and the mean and standard deviation of the pixel at coordinate $(x_1,y_1)$ in $\{CM_{dis}(x,y)\}$ as $\mu_{dis}(x_1,y_1)$ and $\sigma_{dis}(x_1,y_1)$:

$$\mu_{org}(x_1,y_1) = \frac{\sum_{(x_2,y_2) \in N(x_1,y_1)} CM_{org}(x_2,y_2)}{M}, \qquad \sigma_{org}(x_1,y_1) = \sqrt{\frac{\sum_{(x_2,y_2) \in N(x_1,y_1)} \left( CM_{org}(x_2,y_2) - \mu_{org}(x_1,y_1) \right)^2}{M}},$$

$$\mu_{dis}(x_1,y_1) = \frac{\sum_{(x_2,y_2) \in N(x_1,y_1)} CM_{dis}(x_2,y_2)}{M}, \qquad \sigma_{dis}(x_1,y_1) = \sqrt{\frac{\sum_{(x_2,y_2) \in N(x_1,y_1)} \left( CM_{dis}(x_2,y_2) - \mu_{dis}(x_1,y_1) \right)^2}{M}},$$

where $1 \le x_1 \le W$, $1 \le y_1 \le H$, $N(x_1,y_1)$ denotes the 8 × 8 neighborhood window centered on the pixel at coordinate $(x_1,y_1)$, $M$ denotes the number of pixels in $N(x_1,y_1)$, and $CM_{org}(x_2,y_2)$ and $CM_{dis}(x_2,y_2)$ denote the pixel values at coordinate $(x_2,y_2)$ in $\{CM_{org}(x,y)\}$ and $\{CM_{dis}(x,y)\}$, respectively.
③-2. From these means and standard deviations, calculate the objective evaluation metric value of each pixel in $\{CM_{dis}(x,y)\}$; the metric value of the pixel at coordinate $(x_1,y_1)$ is denoted $Q_{image}(x_1,y_1)$:

$$Q_{image}(x_1,y_1) = \frac{4 \times \left( \mu_{org}(x_1,y_1) \times \mu_{dis}(x_1,y_1) \right) \times \left( \sigma_{org}(x_1,y_1) \times \sigma_{dis}(x_1,y_1) \right) + C}{\left( \mu_{org}(x_1,y_1)^2 + \mu_{dis}(x_1,y_1)^2 \right) \times \left( \sigma_{org}(x_1,y_1)^2 + \sigma_{dis}(x_1,y_1)^2 \right) + C},$$

where $C$ is a control parameter; in this embodiment, $C = 0.01$ is taken.
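A minimal sketch of step ③, using a uniform filter for the 8 × 8 local moments; the window handling at image borders is an implementation assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def q_image(cm_org, cm_dis, win=8, C=0.01):
    """Per-pixel SSIM-like metric from local means and standard deviations."""
    mu_o = uniform_filter(cm_org, size=win)
    mu_d = uniform_filter(cm_dis, size=win)
    # Local variances via E[X^2] - E[X]^2, clipped to avoid negative noise.
    var_o = uniform_filter(cm_org ** 2, size=win) - mu_o ** 2
    var_d = uniform_filter(cm_dis ** 2, size=win) - mu_d ** 2
    sd_o = np.sqrt(np.maximum(var_o, 0.0))
    sd_d = np.sqrt(np.maximum(var_d, 0.0))
    num = 4.0 * (mu_o * mu_d) * (sd_o * sd_d) + C
    den = (mu_o ** 2 + mu_d ** 2) * (sd_o ** 2 + sd_d ** 2) + C
    return num / den
```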
④ According to the spectral residual characteristic of $\{CM_{org}(x,y)\}$, i.e., from the amplitude and phase of $\{CM_{org}(x,y)\}$, calculate the saliency map of $\{CM_{org}(x,y)\}$, denoted $\{SM_{org}(x,y)\}$; according to the spectral residual characteristic of $\{CM_{dis}(x,y)\}$, i.e., from the amplitude and phase of $\{CM_{dis}(x,y)\}$, calculate the saliency map of $\{CM_{dis}(x,y)\}$, denoted $\{SM_{dis}(x,y)\}$, where $SM_{org}(x,y)$ and $SM_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective saliency maps.
In this embodiment, the specific process of step ④ is as follows:
④-1. Apply the discrete Fourier transform to $\{CM_{org}(x,y)\}$ to obtain its amplitude spectrum and phase spectrum, denoted $\{M_{org}(u,v)\}$ and $\{A_{org}(u,v)\}$ respectively, where $(u,v)$ denotes a coordinate position in the transform domain, $1 \le u \le W$, $1 \le v \le H$, $M_{org}(u,v)$ denotes the amplitude value at coordinate $(u,v)$ in $\{M_{org}(u,v)\}$, and $A_{org}(u,v)$ denotes the phase value at coordinate $(u,v)$ in $\{A_{org}(u,v)\}$.
④-2. Calculate the amplitude of the high-frequency component of $\{M_{org}(u,v)\}$, denoted $\{R_{org}(u,v)\}$; the value at coordinate $(u,v)$ is $R_{org}(u,v) = \log(M_{org}(u,v)) - h_m(u,v) * \log(M_{org}(u,v))$, where $\log()$ is the natural logarithm (base $e$, $e = 2.718281828\ldots$), "*" is the convolution operator, and $h_m(u,v)$ denotes an $m \times m$ mean filter; in this embodiment, $m = 3$ is taken.
④-3. Apply the inverse discrete Fourier transform to $\{R_{org}(u,v)\}$ and $\{A_{org}(u,v)\}$, and take the resulting image as the saliency map of $\{CM_{org}(x,y)\}$, denoted $\{SM_{org}(x,y)\}$, where $SM_{org}(x,y)$ denotes the pixel value at coordinate $(x,y)$.
④-4. Following the operations of steps ④-1 to ④-3 for the saliency map of $\{CM_{org}(x,y)\}$, obtain in the same manner the saliency map of $\{CM_{dis}(x,y)\}$, denoted $\{SM_{dis}(x,y)\}$, where $SM_{dis}(x,y)$ denotes the pixel value at coordinate $(x,y)$.
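Step ④ follows the classic spectral-residual recipe; a minimal sketch is given below with m = 3 as in this embodiment. Squaring the magnitude of the inverse transform is part of the standard spectral-residual formulation and is an assumption here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(cm, m=3):
    """Saliency map from the spectral residual of the cyclopean map."""
    F = np.fft.fft2(cm)
    M = np.abs(F)                            # amplitude spectrum M(u, v)
    A = np.angle(F)                          # phase spectrum A(u, v)
    logM = np.log(M + 1e-12)                 # guard against log(0)
    R = logM - uniform_filter(logM, size=m)  # spectral residual R(u, v)
    # Inverse transform of the residual amplitude with the original phase.
    sal = np.abs(np.fft.ifft2(np.exp(R + 1j * A))) ** 2
    return sal
```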
⑤ Calculate the distortion map between $\{CM_{org}(x,y)\}$ and $\{CM_{dis}(x,y)\}$, denoted $\{DM(x,y)\}$; the pixel value at coordinate $(x,y)$ is $DM(x,y) = (CM_{org}(x,y) - CM_{dis}(x,y))^2$.
⑥ According to $\{SM_{org}(x,y)\}$, $\{SM_{dis}(x,y)\}$ and $\{DM(x,y)\}$, fuse the objective evaluation metric values of all pixels in $\{CM_{dis}(x,y)\}$ to obtain the image quality objective evaluation prediction value of $S_{dis}$, denoted $Q$:

$$Q = \left[ \frac{\sum_{(x,y) \in \Omega} Q_{image}(x,y) \times SM(x,y)}{\sum_{(x,y) \in \Omega} SM(x,y)} \right]^{\gamma} \times \left[ \frac{\sum_{(x,y) \in \Omega} Q_{image}(x,y) \times DM(x,y)}{\sum_{(x,y) \in \Omega} DM(x,y)} \right]^{\beta},$$

where $\Omega$ denotes the pixel domain, $SM(x,y) = \max(SM_{org}(x,y), SM_{dis}(x,y))$, $\max()$ is the maximum function, and $\gamma$ and $\beta$ are weight coefficients; in this embodiment, $\gamma = 1.601$ and $\beta = 0.501$ are taken.
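Steps ⑤ and ⑥ reduce to the weighted pooling sketched below, with γ = 1.601 and β = 0.501 as in this embodiment.

```python
import numpy as np

def pool_quality(q_img, sm_org, sm_dis, cm_org, cm_dis,
                 gamma=1.601, beta=0.501):
    """Fuse per-pixel metric values by saliency- and distortion-weighted
    pooling into a single objective prediction value Q."""
    dm = (cm_org - cm_dis) ** 2             # distortion map DM(x, y)
    sm = np.maximum(sm_org, sm_dis)         # SM(x, y) = max(SM_org, SM_dis)
    q_sm = (q_img * sm).sum() / sm.sum()    # saliency-weighted average
    q_dm = (q_img * dm).sum() / dm.sum()    # distortion-weighted average
    return (q_sm ** gamma) * (q_dm ** beta)
```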
⑦ Using $n$ original undistorted stereo images, establish a set of distorted stereo images under different distortion types and degrees, and obtain the difference mean opinion score of each distorted stereo image in the set by a subjective quality evaluation method, denoted DMOS, where DMOS = 100 − MOS, MOS denotes the mean opinion score, DMOS ∈ [0, 100], and n ≥ 1.
In this embodiment, the set of distorted stereo images under different distortion types and degrees is established from the twelve stereo images formed by figs. 2a and 2b, 3a and 3b, 4a and 4b, 5a and 5b, 6a and 6b, 7a and 7b, 8a and 8b, 9a and 9b, 10a and 10b, 11a and 11b, 12a and 12b, and 13a and 13b (n = 12). The set contains 252 distorted stereo images of 4 distortion types: 60 JPEG-compressed, 60 JPEG2000-compressed, 60 Gaussian-blurred, and 72 H.264-coded distorted stereo images.
⑧ Calculate the image quality objective evaluation prediction value of $S_{dis}$ according to steps ① to ⑥; the prediction value of each distorted stereo image in the distorted stereo image set is calculated in the same way.
The correlation between the image quality objective evaluation prediction values obtained in this embodiment and the difference mean opinion scores is analysed on the 252 distorted stereo images derived from the 12 undistorted stereo images of figs. 2a to 13b under different degrees of JPEG compression, JPEG2000 compression, Gaussian blur and H.264 coding distortion. Four common objective criteria for assessing image quality evaluation methods are used as evaluation indices: the Pearson linear correlation coefficient (PLCC), the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC) and the root mean squared error (RMSE). Under nonlinear regression conditions, PLCC and RMSE reflect the prediction accuracy of the objective model, while SROCC and KROCC reflect its prediction monotonicity. The prediction values calculated by the method are first mapped through nonlinear fitting with a five-parameter logistic function; higher PLCC, SROCC and KROCC values and a lower RMSE value indicate better correlation between the objective evaluation method and the difference mean opinion scores. Tables 1, 2, 3 and 4 compare the PLCC, SROCC, KROCC and RMSE between the prediction values and the subjective scores obtained with and without the method of the invention. As can be seen from the tables, the correlation between the final prediction values obtained by the method and the difference mean opinion scores is very high, indicating that the objective evaluation results agree well with human subjective perception and demonstrating the effectiveness of the method.
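A minimal sketch of this evaluation protocol, assuming the common five-parameter logistic form used in image quality assessment; the exact logistic form and the initial parameter guesses are assumptions, as the patent does not specify them.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr, kendalltau

def logistic5(q, b1, b2, b3, b4, b5):
    """Five-parameter logistic mapping from objective scores to DMOS."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

def evaluate(q, dmos):
    """Fit the logistic mapping, then report PLCC/SROCC/KROCC/RMSE."""
    p0 = [np.max(dmos), 1.0, np.mean(q), 1.0, np.mean(dmos)]
    params, _ = curve_fit(logistic5, q, dmos, p0=p0, maxfev=20000)
    pred = logistic5(q, *params)
    return {
        "PLCC": pearsonr(pred, dmos)[0],
        "SROCC": spearmanr(q, dmos)[0],
        "KROCC": kendalltau(q, dmos)[0],
        "RMSE": float(np.sqrt(np.mean((pred - dmos) ** 2))),
    }
```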
Fig. 14 shows the scatter plot of the image quality objective evaluation prediction values against the difference mean opinion scores for the distorted stereo images in the set; the more concentrated the scatter points, the better the consistency between the objective evaluation results and subjective perception. As can be seen from fig. 14, the scatter obtained by the method of the invention is concentrated, with a high goodness of fit to the subjective evaluation data.
TABLE 1 Pearson correlation coefficient (PLCC) between the image quality objective evaluation prediction values and the subjective scores of the distorted stereo images, obtained with and without the method of the invention
[table data not reproduced in this text]
TABLE 2 Spearman correlation coefficient (SROCC) between the image quality objective evaluation prediction values and the subjective scores, obtained with and without the method of the invention
[table data not reproduced in this text]
TABLE 3 Kendall correlation coefficient (KROCC) between the image quality objective evaluation prediction values and the subjective scores, obtained with and without the method of the invention
[table data not reproduced in this text]
TABLE 4 Root mean squared error (RMSE) between the image quality objective evaluation prediction values and the subjective scores, obtained with and without the method of the invention
[table data not reproduced in this text]

Claims (6)

1. A stereo image quality objective evaluation method based on feature fusion, characterized in that its processing procedure is as follows: first, a cyclopean map of an original undistorted stereo image is obtained from the even-symmetric and odd-symmetric frequency responses, at different scales and in different directions, of each pixel in the left and right viewpoint images of the original undistorted stereo image, together with the parallax image between those left and right viewpoint images; a cyclopean map of a distorted stereo image to be evaluated is obtained in the same way from the even-symmetric and odd-symmetric frequency responses of each pixel in the left and right viewpoint images of the distorted stereo image and the parallax image of the original undistorted stereo image; second, an objective evaluation metric value of each pixel in the cyclopean map of the distorted stereo image is obtained from the mean and standard deviation of each pixel in the two cyclopean maps; third, corresponding saliency maps are obtained from the amplitude and phase of each of the two cyclopean maps; then, according to the two saliency maps and the distortion map between the two cyclopean maps, the objective evaluation metric values of the pixels in the cyclopean map of the distorted stereo image are fused to obtain an image quality objective evaluation prediction value of the distorted stereo image; finally, image quality objective evaluation prediction values of distorted stereo images of different distortion types and degrees are obtained by the same procedure.
2. The stereo image quality objective evaluation method based on feature fusion according to claim 1, characterized in that it specifically comprises the following steps:
① Let $S_{org}$ denote the original undistorted stereo image and $S_{dis}$ the distorted stereo image to be evaluated. Denote the left and right viewpoint images of $S_{org}$ as $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$, and the left and right viewpoint images of $S_{dis}$ as $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$, where $(x,y)$ denotes the coordinate position of a pixel, $1 \le x \le W$, $1 \le y \le H$, $W$ and $H$ denote the width and height of the viewpoint images, and $L_{org}(x,y)$, $R_{org}(x,y)$, $L_{dis}(x,y)$ and $R_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective images;
② From the even-symmetric and odd-symmetric frequency responses of each pixel in $\{L_{org}(x,y)\}$, $\{R_{org}(x,y)\}$, $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ at different scales and in different directions, obtain the amplitude of each pixel in these four images; then, from the amplitudes of the pixels in $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$ and the pixel values of the parallax image between $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$, calculate the cyclopean map of $S_{org}$, denoted $\{CM_{org}(x,y)\}$; likewise, from the amplitudes of the pixels in $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ and the pixel values of the same parallax image, calculate the cyclopean map of $S_{dis}$, denoted $\{CM_{dis}(x,y)\}$, where $CM_{org}(x,y)$ and $CM_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective maps;
③ From the mean and standard deviation of each pixel in $\{CM_{org}(x,y)\}$ and $\{CM_{dis}(x,y)\}$, calculate an objective evaluation metric value for each pixel in $\{CM_{dis}(x,y)\}$; the metric value of the pixel at coordinate $(x,y)$ is denoted $Q_{image}(x,y)$;
④ From the amplitude and phase of $\{CM_{org}(x,y)\}$, calculate its saliency map, denoted $\{SM_{org}(x,y)\}$; from the amplitude and phase of $\{CM_{dis}(x,y)\}$, calculate its saliency map, denoted $\{SM_{dis}(x,y)\}$, where $SM_{org}(x,y)$ and $SM_{dis}(x,y)$ denote the pixel values at coordinate $(x,y)$ in the respective saliency maps;
⑤ Calculate the distortion map between $\{CM_{org}(x,y)\}$ and $\{CM_{dis}(x,y)\}$, denoted $\{DM(x,y)\}$; the pixel value at coordinate $(x,y)$ is $DM(x,y) = (CM_{org}(x,y) - CM_{dis}(x,y))^2$;
⑥ According to $\{SM_{org}(x,y)\}$, $\{SM_{dis}(x,y)\}$ and $\{DM(x,y)\}$, fuse the objective evaluation metric values of all pixels in $\{CM_{dis}(x,y)\}$ to obtain the image quality objective evaluation prediction value of $S_{dis}$, denoted $Q$:

$$Q = \left[ \frac{\sum_{(x,y) \in \Omega} Q_{image}(x,y) \times SM(x,y)}{\sum_{(x,y) \in \Omega} SM(x,y)} \right]^{\gamma} \times \left[ \frac{\sum_{(x,y) \in \Omega} Q_{image}(x,y) \times DM(x,y)}{\sum_{(x,y) \in \Omega} DM(x,y)} \right]^{\beta},$$

where $\Omega$ denotes the pixel domain, $SM(x,y) = \max(SM_{org}(x,y), SM_{dis}(x,y))$, $\max()$ is the maximum function, and $\gamma$ and $\beta$ are weight coefficients;
⑦ Using $n$ original undistorted stereo images, establish a set of distorted stereo images under different distortion types and degrees, and obtain the difference mean opinion score of each distorted stereo image in the set by a subjective quality evaluation method, denoted DMOS, where DMOS = 100 − MOS, MOS denotes the mean opinion score, DMOS ∈ [0, 100], and n ≥ 1;
⑧ Calculate the image quality objective evaluation prediction value of $S_{dis}$ according to steps ① to ⑥; the prediction value of each distorted stereo image in the distorted stereo image set is calculated in the same way.
3. The stereo image quality objective evaluation method based on feature fusion according to claim 2, characterized in that the specific process of step ② is as follows:
②-1. Filter $\{L_{org}(x,y)\}$ to obtain the even-symmetric and odd-symmetric frequency responses of each pixel in $\{L_{org}(x,y)\}$ at different scales and in different directions; denote the even-symmetric frequency response of the pixel at coordinate $(x,y)$ as $e_{\alpha,\theta}(x,y)$ and its odd-symmetric frequency response as $o_{\alpha,\theta}(x,y)$, where $\alpha$ denotes the scale factor of the filter, $1 \le \alpha \le 4$, and $\theta$ denotes the direction factor of the filter, $1 \le \theta \le 4$;
②-2. From the even-symmetric and odd-symmetric frequency responses of each pixel in $\{L_{org}(x,y)\}$ at different scales and in different directions, calculate the amplitude of each pixel in $\{L_{org}(x,y)\}$; the amplitude of the pixel at coordinate $(x,y)$ is denoted $GE_{org}^{L}(x,y)$:

$$GE_{org}^{L}(x,y) = \sum_{\theta=1}^{4} \sum_{\alpha=1}^{4} \sqrt{e_{\alpha,\theta}(x,y)^2 + o_{\alpha,\theta}(x,y)^2};$$

②-3. Following the operations of steps ②-1 to ②-2 for $\{L_{org}(x,y)\}$, obtain in the same manner the amplitude of each pixel in $\{R_{org}(x,y)\}$, $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$; the amplitudes of the pixels at coordinate $(x,y)$ are denoted $GE_{org}^{R}(x,y)$, $GE_{dis}^{L}(x,y)$ and $GE_{dis}^{R}(x,y)$, respectively;
②-4. Calculate the parallax image between $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$ by a block matching method, denoted $\{d_{org}^{L}(x,y)\}$, where $d_{org}^{L}(x,y)$ denotes the pixel value at coordinate $(x,y)$ in $\{d_{org}^{L}(x,y)\}$;
②-5. From the amplitudes of the pixels in $\{L_{org}(x,y)\}$ and $\{R_{org}(x,y)\}$ and the pixel values of $\{d_{org}^{L}(x,y)\}$, calculate the cyclopean map of $S_{org}$, denoted $\{CM_{org}(x,y)\}$; the pixel value at coordinate $(x,y)$ is

$$CM_{org}(x,y) = \frac{GE_{org}^{L}(x,y) \times L_{org}(x,y) + GE_{org}^{R}(x - d_{org}^{L}(x,y),\, y) \times R_{org}(x - d_{org}^{L}(x,y),\, y)}{GE_{org}^{L}(x,y) + GE_{org}^{R}(x - d_{org}^{L}(x,y),\, y)},$$

where $GE_{org}^{R}(x - d_{org}^{L}(x,y), y)$ and $R_{org}(x - d_{org}^{L}(x,y), y)$ denote the amplitude and the pixel value of the pixel at coordinate $(x - d_{org}^{L}(x,y), y)$ in $\{R_{org}(x,y)\}$;
②-6. From the amplitudes of the pixels in $\{L_{dis}(x,y)\}$ and $\{R_{dis}(x,y)\}$ and the pixel values of $\{d_{org}^{L}(x,y)\}$, calculate the cyclopean map of $S_{dis}$, denoted $\{CM_{dis}(x,y)\}$; the pixel value at coordinate $(x,y)$ is

$$CM_{dis}(x,y) = \frac{GE_{dis}^{L}(x,y) \times L_{dis}(x,y) + GE_{dis}^{R}(x - d_{org}^{L}(x,y),\, y) \times R_{dis}(x - d_{org}^{L}(x,y),\, y)}{GE_{dis}^{L}(x,y) + GE_{dis}^{R}(x - d_{org}^{L}(x,y),\, y)},$$

where $GE_{dis}^{R}(x - d_{org}^{L}(x,y), y)$ and $R_{dis}(x - d_{org}^{L}(x,y), y)$ denote the amplitude and the pixel value of the pixel at coordinate $(x - d_{org}^{L}(x,y), y)$ in $\{R_{dis}(x,y)\}$.
4. The method according to claim 3, wherein in step ②-1 the filter used to filter {L_org(x,y)} is a log-Gabor filter.
5. The objective evaluation method for stereo image quality based on feature fusion according to any one of claims 2 to 4, wherein the specific process of step ③ is as follows:
③-1. Calculate the mean and standard deviation of each pixel in {CM_org(x,y)} and {CM_dis(x,y)}; denote the mean and standard deviation of the pixel with coordinate $(x_1,y_1)$ in {CM_org(x,y)} as $\mu_{org}(x_1,y_1)$ and $\sigma_{org}(x_1,y_1)$, and the mean and standard deviation of the pixel with coordinate $(x_1,y_1)$ in {CM_dis(x,y)} as $\mu_{dis}(x_1,y_1)$ and $\sigma_{dis}(x_1,y_1)$:
$$\mu_{org}(x_1,y_1)=\frac{\sum_{(x,y)\in N(x_1,y_1)}CM_{org}(x,y)}{M},\qquad \sigma_{org}(x_1,y_1)=\sqrt{\frac{\sum_{(x,y)\in N(x_1,y_1)}\bigl(CM_{org}(x,y)-\mu_{org}(x_1,y_1)\bigr)^{2}}{M}},$$
$$\mu_{dis}(x_1,y_1)=\frac{\sum_{(x,y)\in N(x_1,y_1)}CM_{dis}(x,y)}{M},\qquad \sigma_{dis}(x_1,y_1)=\sqrt{\frac{\sum_{(x,y)\in N(x_1,y_1)}\bigl(CM_{dis}(x,y)-\mu_{dis}(x_1,y_1)\bigr)^{2}}{M}},$$
where $1\le x_1\le W$, $1\le y_1\le H$, $N(x_1,y_1)$ denotes an 8×8 neighborhood window centered on the pixel with coordinate $(x_1,y_1)$, $M$ represents the number of pixels in $N(x_1,y_1)$, $CM_{org}(x,y)$ represents the pixel value of the pixel with coordinate $(x,y)$ in {CM_org(x,y)}, and $CM_{dis}(x,y)$ represents the pixel value of the pixel with coordinate $(x,y)$ in {CM_dis(x,y)};
③-2. From the mean and standard deviation of each pixel in {CM_org(x,y)} and {CM_dis(x,y)}, calculate the objective evaluation metric value of each pixel in {CM_dis(x,y)}; denote the objective evaluation metric value of the pixel with coordinate $(x_1,y_1)$ in {CM_dis(x,y)} as $Q_{image}(x_1,y_1)$:
$$Q_{image}(x_1,y_1)=\frac{4\bigl(\mu_{org}(x_1,y_1)\times\mu_{dis}(x_1,y_1)\bigr)\times\bigl(\sigma_{org}(x_1,y_1)\times\sigma_{dis}(x_1,y_1)\bigr)+C}{\bigl(\mu_{org}(x_1,y_1)^{2}+\mu_{dis}(x_1,y_1)^{2}\bigr)\times\bigl(\sigma_{org}(x_1,y_1)^{2}+\sigma_{dis}(x_1,y_1)^{2}\bigr)+C},$$
where C is a control parameter.
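A minimal Python sketch of steps ③-1 and ③-2; scipy's uniform_filter stands in for the 8×8 sliding window, and the control parameter C, whose value the claims leave open, is set to an assumed 0.01:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def q_image(cm_org, cm_dis, win=8, C=0.01):
    """Per-pixel objective evaluation metric from local means and standard deviations."""
    mu_o, mu_d = uniform_filter(cm_org, win), uniform_filter(cm_dis, win)
    # local variance = E[X^2] - E[X]^2, clipped at 0 against rounding error
    sig_o = np.sqrt(np.maximum(uniform_filter(cm_org ** 2, win) - mu_o ** 2, 0.0))
    sig_d = np.sqrt(np.maximum(uniform_filter(cm_dis ** 2, win) - mu_d ** 2, 0.0))
    num = 4.0 * (mu_o * mu_d) * (sig_o * sig_d) + C
    den = (mu_o ** 2 + mu_d ** 2) * (sig_o ** 2 + sig_d ** 2) + C
    return num / den
```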
6. The objective evaluation method for stereo image quality based on feature fusion according to claim 5, wherein the specific process of step ④ is as follows:
④-1. Apply the discrete Fourier transform to {CM_org(x,y)} to obtain the amplitude and phase of {CM_org(x,y)}, denoted {M_org(u,v)} and {A_org(u,v)} respectively, where u and v denote the coordinates in the width and height directions of the transform domain, $1\le u\le W$, $1\le v\le H$, $M_{org}(u,v)$ represents the amplitude value of the pixel with coordinate $(u,v)$ in {M_org(u,v)}, and $A_{org}(u,v)$ represents the phase value of the pixel with coordinate $(u,v)$ in {A_org(u,v)};
④-2. Calculate the high-frequency component of the amplitude of {M_org(u,v)}, denoted {R_org(u,v)}; denote the high-frequency amplitude value of the pixel with coordinate $(u,v)$ in {R_org(u,v)} as $R_{org}(u,v)$, $R_{org}(u,v)=\log(M_{org}(u,v))-h_{m}(u,v)*\log(M_{org}(u,v))$, where log() is the logarithmic function with base e, e ≈ 2.718281828, "*" is the convolution operator, and $h_{m}(u,v)$ represents an m×m mean filter;
④-3. Reconstruct by inverse discrete Fourier transform from {R_org(u,v)} and {A_org(u,v)}, and take the resulting inverse-transform image as the saliency map of {CM_org(x,y)}, denoted {SM_org(x,y)}, where $SM_{org}(x,y)$ represents the pixel value of the pixel with coordinate (x,y) in {SM_org(x,y)};
④-4. Following the operations of steps ④-1 to ④-3 for obtaining the saliency map of {CM_org(x,y)}, obtain in the same manner the saliency map of {CM_dis(x,y)}, denoted {SM_dis(x,y)}, where $SM_{dis}(x,y)$ represents the pixel value of the pixel with coordinate (x,y) in {SM_dis(x,y)}.
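A minimal Python sketch of steps ④-1 to ④-4; the mean-filter size m is left open by the claims, so m = 3 is an assumption, and the inverse-transform image is taken directly as the saliency map exactly as claimed (the canonical spectral-residual method would additionally square and smooth it):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def saliency_map(cm, m=3):
    """Spectral residual: keep the phase, subtract the m x m mean of the log-amplitude."""
    F = np.fft.fft2(cm.astype(np.float64))
    log_amp = np.log(np.abs(F) + 1e-12)                 # log M(u,v); eps avoids log(0)
    phase = np.angle(F)                                 # A(u,v)
    residual = log_amp - uniform_filter(log_amp, m)     # R(u,v) = log M - h_m * log M
    return np.abs(np.fft.ifft2(np.exp(residual + 1j * phase)))

# sm_org = saliency_map(cm_org); sm_dis = saliency_map(cm_dis)   # steps ④-3 and ④-4
```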
CN201210357956.8A 2012-09-24 2012-09-24 Three-dimensional picture quality objective evaluation method based on feature fusion Expired - Fee Related CN102903107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210357956.8A CN102903107B (en) 2012-09-24 2012-09-24 Three-dimensional picture quality objective evaluation method based on feature fusion

Publications (2)

Publication Number Publication Date
CN102903107A true CN102903107A (en) 2013-01-30
CN102903107B CN102903107B (en) 2015-07-08


Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into force of request for substantive examination
C14 / GR01: Grant of patent or utility model
TR01: Transfer of patent right (effective date of registration: 20191218)
  Address after: Room 1,020, Nanxun Science and Technology Pioneering Park, No. 666 Chaoyang Road, Nanxun District, Huzhou City, Zhejiang Province, 313000
  Patentee after: Huzhou You Yan Intellectual Property Service Co.,Ltd.
  Address before: 315211 Zhejiang Province, Ningbo Jiangbei District Fenghua Road No. 818
  Patentee before: Ningbo University
TR01: Transfer of patent right (effective date of registration: 20201229)
  Address after: 213001 3rd floor, Jinhu innovation center, No.8 Taihu Middle Road, Xinbei District, Changzhou City, Jiangsu Province
  Patentee after: Jiangsu Qizhen Information Technology Service Co.,Ltd.
  Address before: 313000 room 1020, science and Technology Pioneer Park, 666 Chaoyang Road, Nanxun Town, Nanxun District, Huzhou, Zhejiang
  Patentee before: Huzhou You Yan Intellectual Property Service Co.,Ltd.
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 20150708)