CN109510981A - Stereoscopic image comfort prediction method based on multi-scale DCT transform - Google Patents

Stereoscopic image comfort prediction method based on multi-scale DCT transform

Info

Publication number
CN109510981A
CN109510981A CN201910063073.8A
Authority
CN
China
Prior art keywords
parallax
dct transform
comfort level
block
stereo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910063073.8A
Other languages
Chinese (zh)
Other versions
CN109510981B (en)
Inventor
周洋
尉婉丽
周辉
谢菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Hangzhou Electronic Science and Technology University
Original Assignee
Hangzhou Electronic Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Electronic Science and Technology University filed Critical Hangzhou Electronic Science and Technology University
Priority to CN201910063073.8A priority Critical patent/CN109510981B/en
Publication of CN109510981A publication Critical patent/CN109510981A/en
Application granted granted Critical
Publication of CN109510981B publication Critical patent/CN109510981B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to the technical field of video and image processing and discloses a stereoscopic image comfort prediction method based on multi-scale DCT transform, comprising the following steps: Step S01: divide the disparity map into blocks at several scales and apply a two-dimensional DCT to each block to obtain each block's DCT result; Step S02: extract features from the DCT results; Step S03: normalize the extracted features into a common dynamic range; Step S04: feed the normalized features into a random forest algorithm to obtain the prediction result. The predictions of the proposed model correlate well with subjective evaluation results and accurately reflect the viewing comfort of stereoscopic images. The comfort prediction model can be applied directly in engineering tasks such as quality prediction and improvement of 3D images and video.

Description

Stereoscopic image comfort prediction method based on multi-scale DCT transform
Technical field
The present invention relates to the technical field of video and image processing, and in particular to a stereoscopic image comfort prediction method based on multi-scale DCT transform.
Background art
With the development of 3D film and virtual reality, 3D video and images are used more and more in everyday life. At the same time, more and more researchers have begun to study the influence of 3D viewing on human visual health. Existing research shows that current stereoscopic images and video can cause viewers considerable discomfort, such as dizziness and nausea, and can harm visual health. To improve the comfort of stereoscopic images and video, their comfort must be predicted; because subjective viewing tests are time-consuming and laborious, an objective method for predicting the visual comfort of stereoscopic images and video is needed.
Most existing objective comfort prediction methods for stereoscopic images and video use a few simple traditional disparity features, such as the disparity mean, disparity variance and disparity gradient, as the characteristic values for predicting stereoscopic image comfort. Some studies weight these traditional disparity features with a visual attention model and use the result as the characteristic value for predicting stereoscopic image comfort. Other methods do not use disparity information directly: one approach proposes a new feature map, the percentage of un-linked pixels (PUP), which avoids computing actual disparity values by predicting the percentage of un-linked pixels in the corresponding retinal regions and extracting patches of the PUP feature image pair. With machine learning and deep learning, some work predicts stereoscopic image comfort directly with machine learning and, instead of the traditional absolute category rating (ACR) used in subjective studies, proposes a stereoscopic image visual comfort assessment method from the learning-to-rank (L2R) perspective; other work uses a VCA deep network to encode the stereoscopic image together with the visual difference between the attention-based disparity magnitude and the gradient information, extracts visual difference features from the left and right views with one network, and obtains the final comfort prediction through two separate deep convolutional neural networks (DCNN). Another method, an objective stereoscopic image visual comfort assessment approach based on scene-mode classification, builds an objective VCA model from features of various scene modes. However, all of these features only reflect the relationship between the spatial domain and stereoscopic visual comfort; they do not combine other transform-domain information from image processing to make a more accurate prediction of stereoscopic image comfort.
The invention with grant publication number CN104811693B discloses an objective evaluation method for stereoscopic image visual comfort. It first extracts, from the right-view disparity image of a stereoscopic image, low-level and high-level visual feature vectors that reflect visual comfort and fuses them into the feature vector of the stereoscopic image; support vector regression is then used to train on the feature vectors of all stereoscopic images in a stereoscopic image set, and the trained support vector regression model is finally used to test the feature vector of every stereoscopic image in the set, yielding an objective visual comfort prediction value for each stereoscopic image.
The methods in the above documents evaluate comfort from a rather one-sided perspective, and in some cases their results deviate considerably from subjective assessments.
Summary of the invention
To address the problem that the prior art does not combine other transform-domain information from image processing and therefore cannot accurately predict the comfort of stereoscopic images, the present invention provides a stereoscopic image comfort prediction method based on multi-scale DCT transform. It uses a richer set of feature values to predict comfort, can relatively accurately reflect the viewing comfort of stereoscopic images, can be applied directly to engineering tasks such as quality prediction and improvement of 3D images, and its prediction results correlate well with subjective evaluation results.
The technical solution of the present invention is as follows:
A stereoscopic image comfort prediction method based on multi-scale DCT transform, comprising the following steps:
Step S01: divide the disparity map into blocks at several scales and apply a two-dimensional DCT to each block to obtain each block's DCT result;
Step S02: extract features from the DCT results; Step S03: normalize the extracted features into a common dynamic range; Step S04: feed the normalized features into a random forest algorithm to obtain the prediction result.
Preferably, feature extraction in step S02 includes: basic disparity intensity feature extraction, disparity gradient energy feature extraction and disparity texture complexity feature extraction.
Preferably, the specific process of the basic disparity intensity feature extraction is as follows: the sum of the DC coefficients of all blocks is taken as the basic disparity intensity feature value of the current scale. That is, let a = [a(i,j)]_{N×N} denote an N×N block of the disparity map and let A = [A(u,v)]_{N×N} denote the block after its two-dimensional DCT.
The k-th basic disparity intensity feature (one value per scale k) is then calculated as the sum DC(A_1) + DC(A_2) + ... + DC(A_I) of the DC coefficients of all blocks at that scale,
where k denotes the scale of the DCT block, I is the number of blocks in the disparity map (determined by W and H, the width and height of the disparity map) and DC(·) denotes the DC coefficient. The above basic disparity intensity (BDI) feature characterizes the disparity intensity in the disparity map: the DC coefficient obtained after the two-dimensional DCT of the disparity map, i.e. the coefficient in the upper-left corner of each block, is used as the basic disparity intensity value. After the disparity map is divided into blocks at different scales, the sum of the DC coefficients of all blocks is taken as the basic disparity intensity feature value of the current scale.
Preferably, the specific process of the disparity gradient energy feature extraction is as follows: calculate the disparity energy difference between each block and its surrounding neighbouring blocks, and obtain the disparity energy gradient value of the block after normalization,
where EA(m,n) denotes the energy of the DCT block A_{mn} located at spatial position (m,n), k denotes the scale of the DGE and K_B is the number of DCT blocks in the disparity map. The above disparity gradient energy (DGE) feature is a feature value defined for the variation of local energy differences in the disparity map. To compute it, the disparity map is divided into blocks, the disparity energy difference between each block and its surrounding neighbouring blocks is calculated, and the disparity energy gradient value of the block is obtained after normalization. The energy gradient feature value of the disparity map is obtained after normalizing by the number of blocks.
Preferably, the specific process of the disparity texture complexity feature extraction is as follows: a texture-removal operation is applied after the DCT, in which only the DCT coefficients greater than a certain threshold are kept and the coefficients below the threshold are set to zero. The calculation process for extracting the texture complexity feature is as follows:
B(u,v) = A(u,v) if A(u,v) > T, and B(u,v) = 0 otherwise,
where T is the chosen threshold and is set differently for different block sizes. A block b with the mid-to-high frequencies removed is then obtained by applying the two-dimensional inverse DCT to B:
b = [b(i,j)]_{M×N} = IDCT([B(u,v)]_{M×N}),
where IDCT(·) denotes the two-dimensional inverse DCT. The original disparity map and the disparity map with the mid-to-high frequencies removed are then differenced to obtain the texture result:
c = [c(i,j)]_{M×N} = |[a(i,j)]_{M×N} - [b(i,j)]_{M×N}|,
where c denotes the mid-to-high frequency information of block a, c_{p,q} is the mid-to-high frequency information of the block at position (p,q) and M×N is the size of block c_{p,q}. The final multi-scale disparity texture complexity feature is obtained from these texture values, where K_p is the number of blocks in the disparity texture feature map and k is the scale factor of the block. The above disparity texture complexity (DTC) feature is based on the observation that excessive texture information in an image causes discomfort when the human visual system fuses the stereoscopic views; although removing texture information degrades image quality, it can improve comfort. The disparity texture complexity feature is represented by the high-frequency AC coefficients after the DCT and reflects the difference between the disparity map after high-frequency removal and the original disparity map. The specific calculation is a texture-removal operation after the DCT: a thresholding method is used in which only the DCT coefficients greater than a certain threshold are kept and those below the threshold are set to zero. The inverse DCT is then applied, and the resulting disparity map is differenced with the original disparity map to obtain the disparity texture complexity feature map; the disparity texture complexity feature value is obtained by summing all pixel values of this map.
Preferably, the normalized dynamic range in step S03 is between 0 and 1. To avoid amplitude differences among the disparity features produced by different methods during fusion, the multi-scale disparity features obtained above are first normalized into a common dynamic range.
Preferably, the specific process of step S04 is as follows. A01: use the random forest algorithm with the several feature values as inputs and the corresponding MOS value as output for training and testing; the ratio of the training set to the test set is 4:1, and the mean of the test results over 1000 training runs under this condition is taken as the final result. A02: train and test with the number of decision trees as a variable to obtain the optimal number of decision trees, and then fuse the above inputs with the random forest algorithm at the optimal number of decision trees to obtain the comfort prediction value of the stereoscopic image.
Preferably, 8 different scales are used in the basic disparity intensity feature extraction.
Preferably, 7 different scales are used in the disparity gradient energy feature extraction.
Preferably, 8 different scales are used in the disparity texture complexity feature extraction.
The present invention is mainly divided into two parts, disparity feature extraction and fusion. The disparity feature extraction part proposes multiple disparity features, including the basic disparity intensity feature, the disparity gradient energy feature and the disparity texture complexity feature; a multi-scale calculation is carried out for each disparity feature, and the resulting feature values are used in the prediction of stereoscopic image comfort. The comfort prediction result is obtained by fusing the multi-scale features with a random forest based on bootstrap sampling.
The prediction results of the proposed model correlate well with subjective evaluation results and can accurately reflect the viewing comfort of stereoscopic images. The comfort prediction model can be applied directly in engineering tasks such as quality prediction and improvement of 3D images and video.
Description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 is a schematic diagram of basic disparity intensity feature extraction;
Fig. 3 is a schematic diagram of disparity gradient energy feature extraction.
Specific embodiments
The technical solution is further elaborated below with reference to specific embodiments.
Embodiment: a stereoscopic image comfort prediction method based on multi-scale DCT transform, comprising the following steps:
Step S01: divide the disparity map into blocks at several scales and apply a two-dimensional DCT to each block to obtain each block's DCT result;
Step S02: extract features from the DCT results; Step S03: normalize the extracted features into a common dynamic range; Step S04: feed the normalized features into a random forest algorithm to obtain the prediction result.
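By way of illustration only, step S01 maps onto standard library routines. The following Python sketch (not part of the claimed method) uses scipy's dctn with an assumed example set of block scales, since the exact scale sizes are not reproduced in the text:
# Illustrative sketch of step S01 (not part of the claims): multi-scale block
# splitting of the disparity map (a 2-D numpy array) followed by a 2-D DCT of
# every block. The scale set (4, 8, 16, 32) is an assumed example.
from scipy.fft import dctn

def block_dct(disparity, n):
    """Split the disparity map into non-overlapping n x n blocks and 2-D DCT each block."""
    h, w = disparity.shape
    blocks = {}
    for y in range(0, h - n + 1, n):
        for x in range(0, w - n + 1, n):
            blocks[(y // n, x // n)] = dctn(
                disparity[y:y + n, x:x + n].astype(float), norm='ortho')
    return blocks

def multiscale_block_dct(disparity, scales=(4, 8, 16, 32)):
    """Step S01: DCT result of every block at every scale."""
    return {n: block_dct(disparity, n) for n in scales}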
In this embodiment, feature extraction in step S02 includes: basic disparity intensity feature extraction, disparity gradient energy feature extraction and disparity texture complexity feature extraction.
As shown in Fig. 2, the position of the DC coefficient is illustrated using an 8×8 block as an example. After the disparity map is divided into blocks at different scales, the sum of the DC coefficients of all blocks is taken as the basic disparity intensity feature value of the current scale. In this embodiment, the specific process of the basic disparity intensity feature extraction is as follows: the sum of the DC coefficients of all blocks is taken as the basic disparity intensity feature value of the current scale. That is, let a = [a(i,j)]_{N×N} denote an N×N block of the disparity map and let A = [A(u,v)]_{N×N} denote the block after its two-dimensional DCT.
The k-th basic disparity intensity feature (one value per scale k) is then calculated as the sum DC(A_1) + DC(A_2) + ... + DC(A_I) of the DC coefficients of all blocks at that scale, where k denotes the scale of the DCT block (8 different scales are used in this embodiment), I is the number of blocks in the disparity map (determined by W and H, the width and height of the disparity map) and DC(·) denotes the DC coefficient. The above basic disparity intensity (BDI) feature characterizes the disparity intensity in the disparity map: the DC coefficient obtained after the two-dimensional DCT of the disparity map, i.e. the coefficient in the upper-left corner of each block, is used as the basic disparity intensity value. After the disparity map is divided into blocks at different scales, the sum of the DC coefficients of all blocks is taken as the basic disparity intensity feature value of the current scale.
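As an informal illustration of the BDI computation just described, the Python sketch below sums the upper-left (DC) coefficient of every DCT block at one scale; the commented scale list is only an example, because the exact scale sizes are not reproduced in the text:
# Illustrative BDI computation: the per-scale value is the sum of the DC
# (upper-left) coefficients of all DCT blocks, as described above.
from scipy.fft import dctn

def bdi_feature(disparity, n):
    """Basic disparity intensity at block scale n: sum of all blocks' DC coefficients."""
    h, w = disparity.shape
    total = 0.0
    for y in range(0, h - n + 1, n):
        for x in range(0, w - n + 1, n):
            block = dctn(disparity[y:y + n, x:x + n].astype(float), norm='ortho')
            total += block[0, 0]   # the DC coefficient sits in the upper-left corner
    return total

# One BDI value per scale; the embodiment uses 8 scales, whose exact sizes are not
# reproduced in the text, so the list below is only an example.
# bdi = [bdi_feature(disparity_map, n) for n in (2, 4, 8, 16, 32, 64, 128, 256)]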
Fig. 3 is a schematic diagram of disparity gradient energy feature extraction. In this embodiment, the specific process of the disparity gradient energy feature extraction is as follows: calculate the disparity energy difference between each block and its surrounding neighbouring blocks, and obtain the disparity energy gradient value of the block after normalization,
where EA(m,n) denotes the energy of the DCT block A_{mn} located at spatial position (m,n), k denotes the scale of the DGE (7 different scales are used in this embodiment) and K_B is the number of DCT blocks in the disparity map. The above disparity gradient energy (DGE) feature is a feature value defined for the variation of local energy differences in the disparity map. To compute it, the disparity map is divided into blocks, the disparity energy difference between each block and its surrounding neighbouring blocks is calculated, and the disparity energy gradient value of the block is obtained after normalization. The energy gradient feature value of the disparity map is obtained after normalizing by the number of blocks.
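Because the exact DGE formula appears only as an image in the original, the following Python sketch is stated under explicit assumptions (block energy taken as the sum of squared DCT coefficients, gradient taken as the mean absolute energy difference to the 4-connected neighbouring blocks) and is illustrative only:
# Hedged sketch of the DGE feature. Assumptions (not reproduced from the text):
#  - the block energy EA(m, n) is the sum of squared DCT coefficients of block A_mn,
#  - the per-block gradient is the mean absolute energy difference to its
#    4-connected neighbours, and the scale value is the mean over all K_B blocks.
import numpy as np
from scipy.fft import dctn

def dge_feature(disparity, n):
    h, w = disparity.shape
    rows, cols = h // n, w // n
    energy = np.empty((rows, cols))                       # EA(m, n) per block
    for m in range(rows):
        for c in range(cols):
            blk = dctn(disparity[m * n:(m + 1) * n, c * n:(c + 1) * n].astype(float),
                       norm='ortho')
            energy[m, c] = np.sum(blk ** 2)
    grads = []
    for m in range(rows):
        for c in range(cols):
            neighbours = [energy[m + dm, c + dc]
                          for dm, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= m + dm < rows and 0 <= c + dc < cols]
            grads.append(np.mean(np.abs(energy[m, c] - np.array(neighbours))))
    return float(np.mean(grads))                          # normalized by the K_B blocks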
In this embodiment, the specific process of the disparity texture complexity feature extraction is as follows: a texture-removal operation is applied after the DCT, in which only the DCT coefficients greater than a certain threshold are kept and the coefficients below the threshold are set to zero. The calculation process for extracting the texture complexity feature is as follows:
B(u,v) = A(u,v) if A(u,v) > T, and B(u,v) = 0 otherwise,
where T is the chosen threshold and is set differently for different block sizes. A block b with the mid-to-high frequencies removed is then obtained by applying the two-dimensional inverse DCT to B:
b = [b(i,j)]_{M×N} = IDCT([B(u,v)]_{M×N}),
where IDCT(·) denotes the two-dimensional inverse DCT. The original disparity map and the disparity map with the mid-to-high frequencies removed are then differenced to obtain the texture result:
c = [c(i,j)]_{M×N} = |[a(i,j)]_{M×N} - [b(i,j)]_{M×N}|,
where c denotes the mid-to-high frequency information of block a, c_{p,q} is the mid-to-high frequency information of the block at position (p,q) and M×N is the size of block c_{p,q}. The final multi-scale disparity texture complexity feature is obtained from these texture values, where K_p is the number of blocks in the disparity texture feature map and k is the scale factor of the block (8 different scales are used in this embodiment). The above disparity texture complexity (DTC) feature is based on the observation that excessive texture information in an image causes discomfort when the human visual system fuses the stereoscopic views; although removing texture information degrades image quality, it can improve comfort. The disparity texture complexity feature is represented by the high-frequency AC coefficients after the DCT and reflects the difference between the disparity map after high-frequency removal and the original disparity map. The specific calculation is a texture-removal operation after the DCT: a thresholding method is used in which only the DCT coefficients greater than a certain threshold are kept and those below the threshold are set to zero. The inverse DCT is then applied, and the resulting disparity map is differenced with the original disparity map to obtain the disparity texture complexity feature map; the disparity texture complexity feature value is obtained by summing all pixel values of this map.
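The DTC computation described above can be sketched as follows in Python; the per-scale threshold values are not reproduced in the text and are therefore left as a parameter, and any additional normalization by the block count K_p in the final (unreproduced) formula is omitted:
# Hedged sketch of the DTC feature. The threshold T is "set differently for
# different block sizes" but its values are not reproduced, so it is a parameter.
import numpy as np
from scipy.fft import dctn, idctn

def dtc_feature(disparity, n, threshold):
    h, w = disparity.shape
    texture_map = np.zeros((h, w), dtype=float)
    for y in range(0, h - n + 1, n):
        for x in range(0, w - n + 1, n):
            a = disparity[y:y + n, x:x + n].astype(float)
            A = dctn(a, norm='ortho')
            B = np.where(A > threshold, A, 0.0)    # keep only coefficients above T
            b = idctn(B, norm='ortho')             # block with mid/high frequencies removed
            texture_map[y:y + n, x:x + n] = np.abs(a - b)   # c = |a - b|
    return float(texture_map.sum())                # sum of all pixels of the texture map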
In this embodiment, the normalized dynamic range in step S03 is between 0 and 1. To avoid amplitude differences among the disparity features produced by different methods during fusion, the multi-scale disparity features obtained above are first normalized into a common dynamic range.
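One straightforward realization of this normalization step is per-column min-max scaling of a samples-by-features matrix, sketched below; the exact normalization formula is not reproduced in the text, so this is an assumption:
# Min-max normalization of each feature column into [0, 1] (one possible realization).
import numpy as np

def normalize_features(feature_matrix):
    f = np.asarray(feature_matrix, dtype=float)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    return (f - f_min) / (f_max - f_min + 1e-12)   # epsilon guards constant columns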
In this embodiment, the specific process of step S04 is as follows. A01: use the random forest algorithm with the 23 feature values as inputs and the corresponding MOS value as output for training and testing; the ratio of the training set to the test set is 4:1, and the mean of the test results over 1000 training runs under this condition is taken as the final result. A02: train and test with the number of decision trees as a variable to obtain the optimal number of decision trees, and then fuse the above 23 inputs with the random forest algorithm at the optimal number of decision trees to obtain the comfort prediction value of the stereoscopic image.
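The fusion step maps onto a standard random forest regressor; the Python sketch below follows the 4:1 split, the 1000 repetitions and the sweep over the number of decision trees described above, while the evaluation metric (Pearson correlation) and the tree-count grid are assumptions:
# Sketch of step S04 with scikit-learn. The 4:1 split, the 1000 repetitions and the
# tree-count sweep follow the embodiment; the metric and the grid are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def mean_test_score(features, mos, n_trees, repeats=1000):
    """Mean test Pearson correlation over repeated random 4:1 train/test splits."""
    scores = []
    for seed in range(repeats):
        x_tr, x_te, y_tr, y_te = train_test_split(
            features, mos, test_size=0.2, random_state=seed)
        model = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
        model.fit(x_tr, y_tr)
        scores.append(np.corrcoef(model.predict(x_te), y_te)[0, 1])
    return float(np.mean(scores))

def best_tree_count(features, mos, grid=(10, 50, 100, 200, 500), repeats=100):
    """A02: sweep the number of decision trees and keep the best-performing value
    (repeats is reduced here only to keep the sweep cheap)."""
    return max(grid, key=lambda n: mean_test_score(features, mos, n, repeats))

# features: array of shape (num_images, 23); mos: subjective comfort scores (MOS).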
This embodiment is mainly divided into two parts, disparity feature extraction and fusion. The disparity feature extraction part proposes multiple disparity features, including the basic disparity intensity feature, the disparity gradient energy feature and the disparity texture complexity feature; a multi-scale calculation is carried out for each disparity feature, and the resulting 23 feature values are used in the prediction of stereoscopic image comfort. The comfort prediction result is obtained by fusing the multi-scale features with a random forest based on bootstrap sampling.
The prediction results of this embodiment correlate well with subjective evaluation results and can accurately reflect the viewing comfort of stereoscopic images. The comfort prediction model can be applied directly in engineering tasks such as quality prediction and improvement of 3D images and video.
It should be noted that the specific embodiments are only used to further illustrate the technical solution and are not intended to limit its scope; any modifications, equivalent substitutions and improvements based on this technical solution shall be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A stereoscopic image comfort prediction method based on multi-scale DCT transform, characterized by comprising the following steps:
Step S01: divide the disparity map into blocks at several scales and apply a two-dimensional DCT to each block to obtain each block's DCT result;
Step S02: extract features from the DCT results;
Step S03: normalize the extracted features into a common dynamic range;
Step S04: feed the normalized features into a random forest algorithm to obtain the prediction result.
2. The stereoscopic image comfort prediction method based on multi-scale DCT transform according to claim 1, characterized in that, in step S02, feature extraction comprises: basic disparity intensity feature extraction, disparity gradient energy feature extraction and disparity texture complexity feature extraction.
3. The stereoscopic image comfort prediction method based on multi-scale DCT transform according to claim 2, characterized in that the specific process of the basic disparity intensity feature extraction is: the sum of the DC coefficients of all blocks is taken as the basic disparity intensity feature value of the current scale; that is, let a = [a(i,j)]_{N×N} denote an N×N block of the disparity map and let A = [A(u,v)]_{N×N} denote the block after its two-dimensional DCT;
the k-th basic disparity intensity feature is calculated as the sum DC(A_1) + DC(A_2) + ... + DC(A_I) of the DC coefficients of all blocks at that scale,
where k denotes the scale of the DCT block, I is the number of blocks in the disparity map (determined by W and H, the width and height of the disparity map) and DC(·) denotes the DC coefficient.
4. The stereoscopic image comfort prediction method based on multi-scale DCT transform according to claim 3, characterized in that the specific process of the disparity gradient energy feature extraction is: calculate the disparity energy difference between each block and its surrounding neighbouring blocks, and obtain the disparity energy gradient value of the block after normalization,
where EA(m,n) denotes the energy of the DCT block A_{mn} located at spatial position (m,n), k denotes the scale of the DGE and K_B is the number of DCT blocks in the disparity map.
5. The stereoscopic image comfort prediction method based on multi-scale DCT transform according to claim 4, characterized in that the specific process of the disparity texture complexity feature extraction is: a texture-removal operation is applied after the DCT, in which only the DCT coefficients greater than a certain threshold are kept and those below the threshold are set to zero; the calculation process for extracting the disparity texture complexity feature is as follows:
B(u,v) = A(u,v) if A(u,v) > T, and B(u,v) = 0 otherwise,
where T is the chosen threshold and is set differently for different block sizes; a block b with the mid-to-high frequencies removed is then obtained by applying the two-dimensional inverse DCT to B:
b = [b(i,j)]_{M×N} = IDCT([B(u,v)]_{M×N}),
where IDCT(·) denotes the two-dimensional inverse DCT; the original disparity map and the disparity map with the mid-to-high frequencies removed are then differenced to obtain the texture result:
c = [c(i,j)]_{M×N} = |[a(i,j)]_{M×N} - [b(i,j)]_{M×N}|,
where c denotes the mid-to-high frequency information of block a, c_{p,q} is the mid-to-high frequency information of the block at position (p,q) and M×N is the size of block c_{p,q}; the final multi-scale disparity texture complexity feature is obtained from these texture values, where K_p is the number of blocks in the disparity texture feature map and k is the scale factor of the block.
6. The stereoscopic image comfort prediction method based on multi-scale DCT transform according to claim 1 or 2, characterized in that, in step S03, the normalized dynamic range is between 0 and 1.
7. The stereoscopic image comfort prediction method based on multi-scale DCT transform according to claim 1 or 2, characterized in that the specific process of step S04 is:
A01: use the random forest algorithm with several feature values as inputs and the corresponding MOS value as output for training and testing; the ratio of the training set to the test set is 4:1, and the mean of the test results over 1000 training runs under this condition is taken as the final result;
A02: train and test with the number of decision trees as a variable to obtain the optimal number of decision trees, and fuse the above inputs with the random forest algorithm at the optimal number of decision trees to obtain the comfort prediction value of the stereoscopic image.
8. The stereoscopic image comfort prediction method based on multi-scale DCT transform according to claim 3, characterized in that 8 different scales are used in the basic disparity intensity feature extraction.
9. The stereoscopic image comfort prediction method based on multi-scale DCT transform according to claim 4, characterized in that 7 different scales are used in the disparity gradient energy feature extraction.
10. The stereoscopic image comfort prediction method based on multi-scale DCT transform according to claim 5, characterized in that 8 different scales are used in the disparity texture complexity feature extraction.
CN201910063073.8A 2019-01-23 2019-01-23 Stereo image comfort degree prediction method based on multi-scale DCT Active CN109510981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910063073.8A CN109510981B (en) 2019-01-23 2019-01-23 Stereo image comfort degree prediction method based on multi-scale DCT

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910063073.8A CN109510981B (en) 2019-01-23 2019-01-23 Stereo image comfort degree prediction method based on multi-scale DCT

Publications (2)

Publication Number Publication Date
CN109510981A true CN109510981A (en) 2019-03-22
CN109510981B CN109510981B (en) 2020-05-05

Family

ID=65758158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910063073.8A Active CN109510981B (en) 2019-01-23 2019-01-23 Stereo image comfort degree prediction method based on multi-scale DCT

Country Status (1)

Country Link
CN (1) CN109510981B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005094479A (en) * 2003-09-18 2005-04-07 Nippon Hoso Kyokai <Nhk> Device and program for supporting evaluation of video image quality
CN101389045A (en) * 2008-10-23 2009-03-18 北京中星微电子有限公司 Image quality evaluation method and device
US20130293725A1 (en) * 2012-05-07 2013-11-07 Futurewei Technologies, Inc. No-Reference Video/Image Quality Measurement with Compressed Domain Features
CN103533344A (en) * 2013-10-09 2014-01-22 上海大学 Compressed image quality non-parameter evaluation method on basis of multiscale decomposition
CN104123693A (en) * 2014-06-26 2014-10-29 宁波大学 Multi-functional digital watermarking method for three-dimensional picture
CN104243974A (en) * 2014-09-12 2014-12-24 宁波大学 Stereoscopic video quality objective evaluation method based on three-dimensional discrete cosine transformation
CN104994375A (en) * 2015-07-08 2015-10-21 天津大学 Three-dimensional image quality objective evaluation method based on three-dimensional visual saliency
CN105208369A (en) * 2015-09-23 2015-12-30 宁波大学 Method for enhancing visual comfort of stereoscopic image
CN106210710A (en) * 2016-07-25 2016-12-07 宁波大学 A kind of stereo image vision comfort level evaluation methodology based on multi-scale dictionary
CN108769671A (en) * 2018-06-13 2018-11-06 天津大学 Stereo image quality evaluation method based on adaptive blending image

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111526354A (en) * 2019-11-25 2020-08-11 杭州电子科技大学 Stereo video comfort prediction method based on multi-scale spatial parallax information
CN111405264A (en) * 2020-01-20 2020-07-10 杭州电子科技大学 3D video comfort level improving method based on depth adjustment
CN111405264B (en) * 2020-01-20 2022-04-12 杭州电子科技大学 3D video comfort level improving method based on depth adjustment
CN111696076A (en) * 2020-05-07 2020-09-22 杭州电子科技大学 Novel stereo image comfort degree prediction method
CN111696076B (en) * 2020-05-07 2023-07-07 杭州电子科技大学 Novel stereoscopic image comfort degree prediction method
CN115880212A (en) * 2021-09-28 2023-03-31 北京三快在线科技有限公司 Binocular camera evaluation method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN109510981B (en) 2020-05-05

Similar Documents

Publication Publication Date Title
CN107578404B (en) View-based access control model notable feature is extracted complete with reference to objective evaluation method for quality of stereo images
CN109510981A (en) A kind of stereo-picture comfort level prediction technique based on multiple dimensioned dct transform
Md et al. Full-reference stereo image quality assessment using natural stereo scene statistics
CN102750695B (en) Machine learning-based stereoscopic image quality objective assessment method
CN104867138A (en) Principal component analysis (PCA) and genetic algorithm (GA)-extreme learning machine (ELM)-based three-dimensional image quality objective evaluation method
CN106462771A (en) 3D image significance detection method
CN105654142B (en) Based on natural scene statistics without reference stereo image quality evaluation method
CN110060236B (en) Stereoscopic image quality evaluation method based on depth convolution neural network
CN109360178A (en) Based on blending image without reference stereo image quality evaluation method
CN106303507B (en) Video quality evaluation without reference method based on space-time united information
Shao et al. Blind image quality assessment for stereoscopic images using binocular guided quality lookup and visual codebook
KR101393621B1 (en) Method and system for analyzing a quality of three-dimensional image
CN109831664B (en) Rapid compressed stereo video quality evaluation method based on deep learning
CN103426173B (en) Objective evaluation method for stereo image quality
Yue et al. Blind stereoscopic 3D image quality assessment via analysis of naturalness, structure, and binocular asymmetry
CN104994375A (en) Three-dimensional image quality objective evaluation method based on three-dimensional visual saliency
CN103780895B (en) A kind of three-dimensional video quality evaluation method
CN104866864A (en) Extreme learning machine for three-dimensional image quality objective evaluation
CN101976444A (en) Pixel type based objective assessment method of image quality by utilizing structural similarity
CN110246111A (en) Based on blending image with reinforcing image without reference stereo image quality evaluation method
Geng et al. A stereoscopic image quality assessment model based on independent component analysis and binocular fusion property
CN109788275A (en) Naturality, structure and binocular asymmetry are without reference stereo image quality evaluation method
CN107371016A (en) Based on asymmetric distortion without with reference to 3D stereo image quality evaluation methods
Karimi et al. Blind stereo quality assessment based on learned features from binocular combined images
CN107360416A (en) Stereo image quality evaluation method based on local multivariate Gaussian description

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant