CN103108187B - Three-dimensional video encoding method, decoding method, and encoder - Google Patents

Three-dimensional video encoding method, decoding method, and encoder Download PDF

Info

Publication number
CN103108187B
CN103108187B CN201310059094.5A
Authority
CN
China
Prior art keywords
rec
view
reference frame
coding
rebuilding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310059094.5A
Other languages
Chinese (zh)
Other versions
CN103108187A (en)
Inventor
戴琼海
马茜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201310059094.5A
Publication of CN103108187A
Application granted
Publication of CN103108187B
Expired - Fee Related

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention proposes a three-dimensional video encoding method, a decoding method, and an encoder, wherein the encoding method includes the steps of: encoding the depth map and texture map of a reference view to obtain a reference-view reconstructed depth map and a reference-view reconstructed texture map; obtaining a target-view synthesized reference frame through a 3D geometric transformation according to the reference-view reconstructed depth map, the reference-view reconstructed texture map, and the corresponding camera parameters; obtaining the reference-view original texture map as the original signal, taking the target-view synthesized reference frame as the noisy signal, performing Wiener filtering to obtain an optimized target-view synthesized reference frame, and solving for the Wiener filter coefficients; and adding the optimized target-view synthesized reference frame to the reference frame set and writing the Wiener filter coefficients into the bitstream. The present invention has the advantages of improving coding efficiency and improving video quality.

Description

Three-dimensional video encoding method, decoding method, and encoder
Technical field
The present invention relates to the technical field of three-dimensional video coding, and in particular proposes a Wiener-filtering-based three-dimensional video encoding/decoding method and encoding/decoding apparatus.
Background technology
With the development of multimedia communication technology, traditional two-dimensional image and video, and even fixed-viewpoint three-dimensional video, can no longer satisfy people's demands for visual perception. In recent years, demand for free-viewpoint video and three-dimensional video has emerged in numerous applications such as medicine, the military, and entertainment, for example free-viewpoint display devices that allow the viewing angle to be switched freely, and three-dimensional televisions that show wide-viewing-angle video differently to viewers at different positions. To realize these applications, efficient multi-view video coding technology is particularly important.
In multi-view video coding, because multiple cameras shoot the same scene from different angles, a certain geometric distortion exists between the generated views. View synthesis prediction (VSP) was proposed to compensate for this geometric distortion. Its main idea is: at the encoder, the depth information and the reconstructed texture video are reused to synthesize an image of another viewpoint, and this image is used as a reference picture for the picture currently being encoded; this makes the generated viewpoint image closer to the current picture than an inter-view reference picture, so the data redundancy between viewpoints can be greatly reduced. More specifically, at the picture level view synthesis prediction can be briefly described as: the geometric information of the reference view and the scene is used to synthesize an image of a virtual viewpoint, and the synthesized image of this virtual viewpoint is used as a reference frame for predictive coding of the current view. Therefore, the image quality of the view synthesis reference frame largely determines the precision and accuracy of the coding prediction. If the image quality of the view synthesis reference frame can be improved, the coding performance can be improved accordingly.
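The patent does not spell out the geometric synthesis step itself. As a rough illustration only, under the strong simplifying assumption of rectified, parallel cameras, synthesizing the virtual view reduces to shifting each pixel horizontally by the disparity d = f·B/Z computed from the depth map; the sketch below (function and variable names are illustrative, not from the patent) shows this for a single image row, with no hole filling or occlusion handling:

```python
import numpy as np

def warp_row_to_target_view(texture_row, depth_row, focal, baseline):
    """Simplified 1-D forward warp from the reference view to the target view.

    Assumes rectified cameras, so each pixel shifts horizontally by the
    disparity d = focal * baseline / Z. The sign of the shift depends on which
    side the target view lies; here a shift toward smaller columns is assumed.
    Pixels that are never written (holes) stay 0.
    """
    w = texture_row.shape[0]
    target = np.zeros(w, dtype=texture_row.dtype)
    disparity = np.round(focal * baseline / depth_row).astype(np.int64)
    for u in range(w):
        u_t = u - disparity[u]            # target-view column for this pixel
        if 0 <= u_t < w:
            target[u_t] = texture_row[u]
    return target

# Example: near objects (small Z) shift more than far objects.
texture = np.arange(16, dtype=np.uint8) * 16
depth = np.where(np.arange(16) < 8, 50.0, 200.0)   # near half, far half
print(warp_row_to_target_view(texture, depth, focal=100.0, baseline=1.0))
```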
The disadvantage of the prior-art view synthesis prediction technique is that the quality of the view synthesis reference frame is not high enough, which directly affects the coding performance of the associated video sequence. The present invention applies Wiener filtering to the generated view synthesis reference frame to improve its quality, and thereby improves the prediction precision and the coding performance.
Summary of the invention
The present invention aims to solve at least one of the above technical problems to some extent, or to provide a useful commercial choice. To this end, one object of the present invention is to propose a three-dimensional video encoding/decoding method with high coding efficiency and good video quality. Another object of the present invention is to propose a three-dimensional video encoding/decoding apparatus with high coding efficiency and good video quality.
The encoding method for three-dimensional video according to an embodiment of the present invention includes: S1. encoding the depth map and texture map of a reference view to obtain a reference-view reconstructed depth map D_rec and a reference-view reconstructed texture map T_rec; S2. according to the reference-view reconstructed depth map D_rec, the reference-view reconstructed texture map T_rec, and the corresponding camera parameters, obtaining a target-view synthesized reference frame VS_rec through a 3D geometric transformation; S3. obtaining the reference-view original texture map T_orig as the original signal and the target-view synthesized reference frame VS_rec as the noisy signal, performing Wiener filtering to obtain an optimized target-view synthesized reference frame VS_rec_wiener, and solving for the Wiener filter coefficients; and S4. adding the optimized target-view synthesized reference frame VS_rec_wiener to the reference frame set and writing the Wiener filter coefficients into the bitstream.
Optionally, the Wiener filter coefficients are calculated by solving the Wiener-Hopf equations.
Optionally, in the Wiener filter: define the input pixel $x_k$ and the Wiener filter output pixel $z_k$, where the output $z_k$ is formed from the reconstructed pixels $y_i$ in the filter support $\{S\}$, the support size is $L+1$, and the weights are $c_i$. The Wiener filter function is then $z_k = \sum_{i \in \{S\}} y_i \cdot c_i$. The residual signal between the input pixel $x_k$ and the Wiener-filtered pixel $z_k$ is defined as $error_k = z_k - x_k$. The Wiener filter is optimized by minimizing the mean square error over the filter taps $\{c_i\}$: $c_i = \arg\min E[error_k^2]$. To find the minimum of $E[error_k^2]$, differentiate with respect to $c_i$ and derive the filter taps by setting the derivative to zero: $\frac{\partial}{\partial c_i} E[error_k^2] = 2\big(\sum_{j \in \{S\}} E[y_i y_j] c_j\big) - 2E[y_i x_k] = 0$, where $i = 0, \dots, L$. Writing the autocorrelation function of $\{y\}$ and the cross-correlation function of $\{y\}$ and $\{x\}$ as $r_{yy}(i) = E[y_k y_{k+i}]$ and $r_{xy}(i) = E[x_k y_{k+i}]$ respectively, this can be rewritten in matrix form as $R_{yy} \cdot C = R_{xy}$, from which the Wiener filter coefficients $\{C\}$ follow in matrix form as $C = R_{yy}^{-1} \cdot R_{xy}$.
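As a concrete illustration of the matrix solution above (not part of the patent), the following minimal numpy sketch estimates an (L+1)-tap filter from a clean signal x (standing in for the original texture) and a noisy signal y (standing in for the synthesized reference frame) by building the autocorrelation matrix R_yy and cross-correlation vector R_xy and solving R_yy · C = R_xy; the row alignment of the support is an assumption made for the example:

```python
import numpy as np

def wiener_taps(x, y, L=4):
    """Solve R_yy * C = R_xy for the (L+1) filter taps c_i.

    x : clean reference signal (e.g. a row of the original texture T_orig)
    y : noisy observation      (e.g. the same row of VS_rec)
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    n = len(x) - L
    # Each row of Y holds the (L+1) support pixels used to predict one x sample.
    Y = np.stack([y[i:i + n] for i in range(L + 1)], axis=1)
    target = x[L // 2: L // 2 + n]            # align support centre with x
    R_yy = Y.T @ Y / n                        # autocorrelation matrix E[y_i y_j]
    R_xy = Y.T @ target / n                   # cross-correlation vector E[y_i x_k]
    return np.linalg.solve(R_yy, R_xy)        # C = R_yy^{-1} * R_xy

# Toy usage: recover the attenuation of a noisy, scaled copy of a ramp signal.
rng = np.random.default_rng(0)
x = np.linspace(0, 255, 500)
y = 0.8 * x + rng.normal(0, 5, size=x.shape)
c = wiener_taps(x, y, L=4)
print(c, c.sum())   # taps should sum to roughly 1/0.8 = 1.25
```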
The decoding method for three-dimensional video according to an embodiment of the present invention includes: S1. receiving the bitstream obtained by the encoding method for three-dimensional video according to any one of claims 1-3, and preparing to decode the reference frame images frame by frame; S2. determining the type of the reference frame image: if it is a target-view synthesized reference frame, performing S31-S34; if it is an independent-view reference frame, performing S4; S31. extracting the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec from the bitstream; S32. extracting the corresponding camera parameters from the bitstream and, in combination with the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec, obtaining the target-view synthesized reference frame VS_rec through a 3D geometric transformation; S33. extracting the Wiener filter coefficients from the bitstream and applying noise-reduction filtering to the target-view synthesized reference frame VS_rec to obtain the final target-view synthesized reference frame VS_rec_final; S34. reading the information of the final target-view synthesized reference frame VS_rec_final to complete the decoding of the video image; S4. directly reading the information of the reference frame image to complete the decoding of the video image.
The encoder for three-dimensional video according to an embodiment of the present invention includes: S1. a coding and reconstruction module, configured to encode the depth map and texture map of a reference view and obtain the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec; S2. a geometric transformation module, configured to obtain the target-view synthesized reference frame VS_rec through a 3D geometric transformation according to the reference-view reconstructed depth map D_rec, the reference-view reconstructed texture map T_rec, and the corresponding camera parameters; S3. a Wiener filtering computation module, configured to take the reference-view original texture map T_orig as the original signal and the target-view synthesized reference frame VS_rec as the noisy signal, perform Wiener filtering to obtain the optimized target-view synthesized reference frame VS_rec_wiener, and solve for the Wiener filter coefficients; and S4. a bitstream sending module, configured to add the optimized target-view synthesized reference frame VS_rec_wiener to the reference frame set, write the Wiener filter coefficients into the bitstream, and then send the bitstream.
Optionally, in the Wiener filtering computation module, the Wiener filter coefficients are calculated by solving the Wiener-Hopf equations.
Optionally, in the Wiener filter: define the input pixel $x_k$ and the Wiener filter output pixel $z_k$, where the output $z_k$ is formed from the reconstructed pixels $y_i$ in the filter support $\{S\}$, the support size is $L+1$, and the weights are $c_i$. The Wiener filter function is then $z_k = \sum_{i \in \{S\}} y_i \cdot c_i$. The residual signal between the input pixel $x_k$ and the Wiener-filtered pixel $z_k$ is defined as $error_k = z_k - x_k$. The Wiener filter is optimized by minimizing the mean square error over the filter taps $\{c_i\}$: $c_i = \arg\min E[error_k^2]$. To find the minimum of $E[error_k^2]$, differentiate with respect to $c_i$ and derive the filter taps by setting the derivative to zero: $\frac{\partial}{\partial c_i} E[error_k^2] = 2\big(\sum_{j \in \{S\}} E[y_i y_j] c_j\big) - 2E[y_i x_k] = 0$, where $i = 0, \dots, L$. Writing the autocorrelation function of $\{y\}$ and the cross-correlation function of $\{y\}$ and $\{x\}$ as $r_{yy}(i) = E[y_k y_{k+i}]$ and $r_{xy}(i) = E[x_k y_{k+i}]$ respectively, this can be rewritten in matrix form as $R_{yy} \cdot C = R_{xy}$, from which the Wiener filter coefficients $\{C\}$ follow in matrix form as $C = R_{yy}^{-1} \cdot R_{xy}$.
The decoder for three-dimensional video according to an embodiment of the present invention includes: a bitstream receiving module, configured to receive the bitstream transmitted by the encoder for three-dimensional video according to any one of claims 5-7, and prepare to decode the reference frame images frame by frame; a judgment module, configured to determine the type of the reference frame image: if it is a target-view synthesized reference frame, the processing proceeds in sequence through a reconstruction module, a geometric transformation module, a Wiener filtering computation module, and a decoding and reading module; if it is an independent-view reference frame, the processing proceeds directly to the decoding and reading module; the reconstruction module, configured to extract the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec from the bitstream; the geometric transformation module, configured to extract the corresponding camera parameters from the bitstream and, in combination with the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec, obtain the target-view synthesized reference frame VS_rec through a 3D geometric transformation; the Wiener filtering computation module, configured to extract the Wiener filter coefficients from the bitstream, apply noise-reduction filtering to the target-view synthesized reference frame VS_rec, obtain the final target-view synthesized reference frame VS_rec_final, and send it to the decoding and reading module as the updated reference frame image; and the decoding and reading module, configured to read the information of the reference frame image and complete the decoding of the video image.
In the present invention, the view synthesis reference frame is generated during the video encoding and decoding process. Because the enhanced view synthesis method based on Wiener filtering filters the view synthesis reference frame, it has a good suppression effect on noise. Used as a reference frame during encoding and decoding, it can improve the accuracy of motion estimation, reduce the prediction error, and improve the coding efficiency, while also improving the subjective quality of the reconstructed video after compression coding. Moreover, the view synthesis reference frame in the present invention can be reconstructed and dynamically updated in the video decoder, so no additional video data needs to be transmitted; only the Wiener filter coefficients of the view synthesis reference frame corresponding to the current video frame need to be transmitted, which saves resources.
Additional aspects and advantages of the present invention will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of the encoding method for three-dimensional video according to an embodiment of the present invention;
Fig. 2 is a flowchart of the decoding method for three-dimensional video according to an embodiment of the present invention;
Fig. 3 is a structural diagram of the encoder for three-dimensional video according to an embodiment of the present invention;
Fig. 4 is a structural diagram of the decoder for three-dimensional video according to an embodiment of the present invention.
Detailed description of the invention
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it should be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", and "counterclockwise", are based on the orientations or positional relationships shown in the drawings, are used only for the convenience of describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be construed as limiting the present invention.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of the technical features referred to. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "multiple" means two or more, unless otherwise specifically defined.
In the present invention, unless otherwise expressly specified and defined, the terms "install", "connect", "couple", "fix", and the like are to be understood broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection, an indirect connection through an intermediary, or an internal communication between two elements. For a person of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
In the present invention, unless otherwise expressly specified and defined, a first feature being "on" or "under" a second feature may include the first and second features being in direct contact, or may include the first and second features being in contact not directly but through another feature between them. Moreover, a first feature being "on", "above", or "over" a second feature includes the first feature being directly above or obliquely above the second feature, or simply means that the first feature is at a higher level than the second feature. A first feature being "under", "below", or "beneath" a second feature includes the first feature being directly below or obliquely below the second feature, or simply means that the first feature is at a lower level than the second feature.
To enable those skilled in the art to better understand the present invention, the existing framework of the view synthesis prediction technique is further described.
In the encoding stage of the view synthesis prediction technique, the encoder generates the view synthesis reference frame for the current picture from the associated already-encoded depth image and texture image, using a 3D geometric transformation. Because the view synthesis reference frame can also be reproduced at the decoder from the relevant image information by the same 3D geometric transformation, the view synthesis reference frame generated by view synthesis prediction does not need to be encoded into the bitstream, which greatly reduces the bitstream size and improves the coding efficiency. If the view to which the current frame belongs is an independently coded view, it has no view synthesis reference frame. If the view to which the current frame belongs is a forward-predicted view, its view synthesis reference frame is synthesized from the encoded texture map and depth map of the adjacent view at the same time instant. If the view to which the current frame belongs is a bi-directionally predicted view, its view synthesis reference frame is synthesized from the encoded texture maps and depth maps of the two adjacent views at the same time instant, and the two synthesized image frames are then merged by a weighted average: if the pixel value of the virtual image synthesized from the forward-predicted view is denoted $P_f$ and the pixel value of the virtual image synthesized from the backward-predicted view is denoted $P_b$, the pixel value of the final view synthesis reference frame can be expressed as $P_{ref} = (1-\alpha)P_f + \alpha P_b$, with $0 < \alpha < 1$, where $\alpha$ varies with the distance between the views (the closer the distance, the larger the value of $\alpha$). It can be seen that the image quality of the view synthesis reference frame greatly affects the precision and accuracy of the coding prediction. If the image quality of the view synthesis reference frame can be improved, the coding performance can be improved accordingly.
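As a small illustration of this weighted merge (not from the patent; the way α is derived from the view distances below is an assumption made for the example), a numpy sketch:

```python
import numpy as np

def blend_synthesized_views(p_f, p_b, dist_f, dist_b):
    """Weighted merge of forward/backward synthesized images: P_ref = (1-a)P_f + a*P_b.

    p_f, p_b : virtual images synthesized from the forward and backward views
    dist_f, dist_b : distances from the target view to the two reference views
    alpha grows as the backward view gets closer (assumed inverse-distance weighting).
    """
    alpha = dist_f / (dist_f + dist_b)          # closer backward view -> larger alpha
    p_ref = (1.0 - alpha) * p_f + alpha * p_b
    return np.clip(p_ref, 0, 255).astype(np.uint8)

# Example: equidistant views give a plain average (alpha = 0.5).
p_f = np.full((4, 4), 100, dtype=np.float64)
p_b = np.full((4, 4), 140, dtype=np.float64)
print(blend_synthesized_views(p_f, p_b, dist_f=1.0, dist_b=1.0))  # all 120
```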
The present invention uses already-encoded image information as reference frames to predictively encode subsequent image frames, where the reference frames include a view synthesis reference frame based on a 3D geometric transformation. This view synthesis reference frame is synthesized from the encoded texture map and depth map of the adjacent view at the same time instant. A Wiener filter is then used to filter the generated view synthesis reference frame and improve the view quality. The updated view synthesis reference frame is used for prediction when subsequent image frames are encoded. Correspondingly, the decoding process of the present invention uses the same principle to improve the coding efficiency and the video quality.
As shown in Fig. 1, which is a flowchart of the encoding method for three-dimensional video according to an embodiment of the present invention, the method includes: S1. encoding the depth map and texture map of a reference view to obtain the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec; S2. according to the reference-view reconstructed depth map D_rec, the reference-view reconstructed texture map T_rec, and the corresponding camera parameters, obtaining the target-view synthesized reference frame VS_rec through a 3D geometric transformation; S3. obtaining the reference-view original texture map T_orig as the original signal and the target-view synthesized reference frame VS_rec as the noisy signal, performing Wiener filtering to obtain the optimized target-view synthesized reference frame VS_rec_wiener, and solving for the Wiener filter coefficients; and S4. adding the optimized target-view synthesized reference frame VS_rec_wiener to the reference frame set and writing the Wiener filter coefficients into the bitstream.
Optionally, the Wiener filter coefficients are calculated by solving the Wiener-Hopf equations.
Optionally, in the Wiener filter: define the input pixel $x_k$ and the Wiener filter output pixel $z_k$, where the output $z_k$ is formed from the reconstructed pixels $y_i$ in the filter support $\{S\}$, the support size is $L+1$, and the weights are $c_i$. The Wiener filter function is then $z_k = \sum_{i \in \{S\}} y_i \cdot c_i$. The residual signal between the input pixel $x_k$ and the Wiener-filtered pixel $z_k$ is defined as $error_k = z_k - x_k$. The Wiener filter is optimized by minimizing the mean square error over the filter taps $\{c_i\}$: $c_i = \arg\min E[error_k^2]$. To find the minimum of $E[error_k^2]$, differentiate with respect to $c_i$ and derive the filter taps by setting the derivative to zero: $\frac{\partial}{\partial c_i} E[error_k^2] = 2\big(\sum_{j \in \{S\}} E[y_i y_j] c_j\big) - 2E[y_i x_k] = 0$, where $i = 0, \dots, L$. Writing the autocorrelation function of $\{y\}$ and the cross-correlation function of $\{y\}$ and $\{x\}$ as $r_{yy}(i) = E[y_k y_{k+i}]$ and $r_{xy}(i) = E[x_k y_{k+i}]$ respectively, this can be rewritten in matrix form as $R_{yy} \cdot C = R_{xy}$, from which the Wiener filter coefficients $\{C\}$ follow in matrix form as $C = R_{yy}^{-1} \cdot R_{xy}$.
As shown in Fig. 2, which is a flowchart of the decoding method for three-dimensional video according to an embodiment of the present invention, the method includes: S1. receiving the bitstream obtained by the encoding method for three-dimensional video according to any one of claims 1-3, and preparing to decode the reference frame images frame by frame; S2. determining the type of the reference frame image: if it is a target-view synthesized reference frame, performing S31-S34; if it is an independent-view reference frame, performing S4; S31. extracting the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec from the bitstream; S32. extracting the corresponding camera parameters from the bitstream and, in combination with the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec, obtaining the target-view synthesized reference frame VS_rec through a 3D geometric transformation; S33. extracting the Wiener filter coefficients from the bitstream and applying noise-reduction filtering to the target-view synthesized reference frame VS_rec to obtain the final target-view synthesized reference frame VS_rec_final; S34. reading the information of the final target-view synthesized reference frame VS_rec_final to complete the decoding of the video image; S4. directly reading the information of the reference frame image to complete the decoding of the video image.
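The decoder-side branch in S2-S4 can be pictured with the following rough sketch (illustrative only; the names are not from the patent, and the frame passed in for the synthesized case is assumed to have already been re-created by the decoder's own 3D geometric transformation so that the snippet stays self-contained). The transmitted taps are applied here as a 1-D noise-reduction filter along each row:

```python
import numpy as np

def update_reference_frame(frame, is_synthesized, wiener_taps=None):
    """Decoder-side handling of one reference frame image.

    frame          : for a synthesized reference frame, the VS_rec re-created by
                     the decoder's own 3D geometric transformation; otherwise the
                     independent-view reference frame read from the bitstream.
    is_synthesized : True for a target-view synthesized reference frame.
    wiener_taps    : filter coefficients extracted from the bitstream (S33).
    """
    if not is_synthesized:
        return frame                       # S4: use the reference frame as-is
    # S33: noise-reduction filtering with the transmitted coefficients
    filtered = np.apply_along_axis(
        lambda row: np.convolve(row, wiener_taps, mode="same"), 1,
        frame.astype(np.float64))
    return np.clip(filtered, 0, 255).astype(np.uint8)  # VS_rec_final

# Example: a 3-tap smoothing filter received in the bitstream.
vs_rec = np.random.default_rng(1).integers(0, 256, size=(4, 8), dtype=np.int64)
print(update_reference_frame(vs_rec, True, np.array([0.25, 0.5, 0.25])))
```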
As shown in Fig. 3, which is a structural diagram of the encoder 1000 for three-dimensional video according to an embodiment of the present invention, the encoder includes: S1. a coding and reconstruction module 1100, configured to encode the depth map and texture map of a reference view and obtain the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec; S2. a geometric transformation module 1200, configured to obtain the target-view synthesized reference frame VS_rec through a 3D geometric transformation according to the reference-view reconstructed depth map D_rec, the reference-view reconstructed texture map T_rec, and the corresponding camera parameters; S3. a Wiener filtering computation module 1300, configured to take the reference-view original texture map T_orig as the original signal and the target-view synthesized reference frame VS_rec as the noisy signal, perform Wiener filtering to obtain the optimized target-view synthesized reference frame VS_rec_wiener, and solve for the Wiener filter coefficients; and S4. a bitstream sending module 1400, configured to add the optimized target-view synthesized reference frame VS_rec_wiener to the reference frame set, write the Wiener filter coefficients into the bitstream, and then send the bitstream.
Optionally, in the Wiener filtering computation module 1300, the Wiener filter coefficients are calculated by solving the Wiener-Hopf equations.
Optionally, in the Wiener filter: define the input pixel $x_k$ and the Wiener filter output pixel $z_k$, where the output $z_k$ is formed from the reconstructed pixels $y_i$ in the filter support $\{S\}$, the support size is $L+1$, and the weights are $c_i$. The Wiener filter function is then $z_k = \sum_{i \in \{S\}} y_i \cdot c_i$. The residual signal between the input pixel $x_k$ and the Wiener-filtered pixel $z_k$ is defined as $error_k = z_k - x_k$. The Wiener filter is optimized by minimizing the mean square error over the filter taps $\{c_i\}$: $c_i = \arg\min E[error_k^2]$. To find the minimum of $E[error_k^2]$, differentiate with respect to $c_i$ and derive the filter taps by setting the derivative to zero: $\frac{\partial}{\partial c_i} E[error_k^2] = 2\big(\sum_{j \in \{S\}} E[y_i y_j] c_j\big) - 2E[y_i x_k] = 0$, where $i = 0, \dots, L$. Writing the autocorrelation function of $\{y\}$ and the cross-correlation function of $\{y\}$ and $\{x\}$ as $r_{yy}(i) = E[y_k y_{k+i}]$ and $r_{xy}(i) = E[x_k y_{k+i}]$ respectively, this can be rewritten in matrix form as $R_{yy} \cdot C = R_{xy}$, from which the Wiener filter coefficients $\{C\}$ follow in matrix form as $C = R_{yy}^{-1} \cdot R_{xy}$.
As shown in Fig. 4, which is a structural diagram of the decoder 2000 for three-dimensional video according to an embodiment of the present invention, the decoder includes: a bitstream receiving module 2100, configured to receive the bitstream transmitted by the encoder 1000 for three-dimensional video according to any one of claims 5-7, and prepare to decode the reference frame images frame by frame; a judgment module 2200, configured to determine the type of the reference frame image: if it is a target-view synthesized reference frame, the processing proceeds in sequence through a reconstruction module 2300, a geometric transformation module 2400, a Wiener filtering computation module 2500, and a decoding and reading module 2600; if it is an independent-view reference frame, the processing proceeds directly to the decoding and reading module 2600; the reconstruction module 2300, configured to extract the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec from the bitstream; the geometric transformation module 2400, configured to extract the corresponding camera parameters from the bitstream and, in combination with the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec, obtain the target-view synthesized reference frame VS_rec through a 3D geometric transformation; the Wiener filtering computation module 2500, configured to extract the Wiener filter coefficients from the bitstream, apply noise-reduction filtering to the target-view synthesized reference frame VS_rec, obtain the final target-view synthesized reference frame VS_rec_final, and send it to the decoding and reading module 2600 as the updated reference frame image; and the decoding and reading module 2600, configured to read the information of the reference frame image and complete the decoding of the video image.
In the present invention, the view synthesis reference frame is generated during the video encoding and decoding process. Because the enhanced view synthesis method based on Wiener filtering filters the view synthesis reference frame, it has a good suppression effect on noise. Used as a reference frame during encoding and decoding, it can improve the accuracy of motion estimation, reduce the prediction error, and improve the coding efficiency, while also improving the subjective quality of the reconstructed video after compression coding. Moreover, the view synthesis reference frame in the present invention can be reconstructed and dynamically updated in the decoder 2000 for three-dimensional video, so no additional video data needs to be transmitted; only the Wiener filter coefficients of the view synthesis reference frame corresponding to the current video frame need to be transmitted. It should be noted that although the optimized target-view synthesized reference frame VS_rec_wiener is obtained in the encoding stage, it is not used directly in the decoding stage; transmitting only the Wiener filter coefficients is sufficient for the decoder to restore the reference frame used during encoding, which saves resources.
Without loss of generality, to enable those skilled in the art to better understand the present invention, the applicant now gives a description taking two-view stereoscopic video coding as an example. An explanation of the technical solution provided by the embodiments of the present invention is given below. The implementation steps of the specific encoding/decoding are as follows:
1. At the encoder side of the three-dimensional video system, the depth map and texture map of the reference view are encoded, and the reconstructed depth map D_rec and texture map T_rec are obtained.
2. According to the depth map D_rec, the texture map T_rec, and the corresponding camera parameters, the view synthesis image VS_rec of the target view is obtained through a 3D geometric transformation.
3. The uncompressed texture map of the target view at the same time instant is taken as T_orig.
4. Taking T_orig as the original signal and VS_rec as the noisy signal, Wiener filtering is applied to VS_rec to obtain VS_rec_wiener, and the corresponding Wiener filter coefficients are calculated by solving the Wiener-Hopf equations (a small end-to-end numeric sketch of this step is given after these numbered steps).
Consider the input pixel $x_k$ and the Wiener filter output pixel $z_k$, where the output $z_k$ is formed from the reconstructed pixels $y_i$ in the filter support $\{S\}$, the support size is $L+1$, and the weights are $c_i$. The Wiener filter function is then:

$$z_k = \sum_{i \in \{S\}} y_i \cdot c_i$$

The residual signal between the input pixel $x_k$ and the Wiener-filtered pixel $z_k$ is defined as:

$$error_k = z_k - x_k$$

The Wiener filter is optimized by minimizing the mean square error over the filter taps $\{c_i\}$:

$$c_i = \arg\min E[error_k^2]$$

To find the minimum of $E[error_k^2]$, differentiate with respect to $c_i$ and derive the filter taps by setting the derivative to zero:

$$\frac{\partial}{\partial c_i} E[error_k^2] = 2\Big(\sum_{j \in \{S\}} E[y_i y_j]\, c_j\Big) - 2E[y_i x_k] = 0, \quad i = 0, \dots, L$$

Writing the autocorrelation function of $\{y\}$ and the cross-correlation function of $\{y\}$ and $\{x\}$ as

$$r_{yy}(i) = E[y_k y_{k+i}] \quad \text{and} \quad r_{xy}(i) = E[x_k y_{k+i}]$$

the above can be rewritten in matrix form as $R_{yy} \cdot C = R_{xy}$, from which the Wiener filter coefficients $\{C\}$ are derived in matrix form:

$$R_{yy} \cdot C = R_{xy} \;\Rightarrow\; C = R_{yy}^{-1} \cdot R_{xy}$$
5. VS_rec_wiener is added to the reference frame set of the target-view picture coded at this time instant, and the Wiener filter coefficients are written into the bitstream and sent to the decoding side of the three-dimensional video system.
6. In the decoding stage, if the reference frame of the frame currently being decoded is a view synthesis reference frame, the view synthesis reference frame is generated using the same method as at the encoding side, and the Wiener filter coefficients in the bitstream are then used to apply noise-reduction filtering to that reference frame, obtaining the final view synthesis reference frame.
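To make steps 4-6 concrete, here is a small end-to-end numeric sketch (an illustration under simplifying assumptions, not the patent's implementation): a toy "original" image stands in for T_orig, a noisy copy stands in for VS_rec, a 1-D (L+1)-tap filter is estimated row-wise from the Wiener-Hopf equations above, the taps are treated as the only transmitted side information, and the same taps are re-applied to the noisy frame as the decoder would; over the interior columns the mean squared error typically drops after filtering:

```python
import numpy as np

L = 4  # filter support size is L + 1

def estimate_taps(clean, noisy, L=4):
    """Encoder side (step 4): solve R_yy * C = R_xy row-wise over the frame."""
    rows, cols = clean.shape
    n = cols - L
    Y = np.concatenate([np.stack([noisy[r, i:i + n] for i in range(L + 1)], axis=1)
                        for r in range(rows)])
    x = np.concatenate([clean[r, L // 2: L // 2 + n] for r in range(rows)])
    R_yy = Y.T @ Y / len(x)                 # autocorrelation matrix
    R_xy = Y.T @ x / len(x)                 # cross-correlation vector
    return np.linalg.solve(R_yy, R_xy)      # C = R_yy^{-1} * R_xy

def apply_taps(noisy, taps):
    """Decoder side (step 6): noise-reduction filtering with the received taps."""
    return np.apply_along_axis(lambda r: np.convolve(r, taps, mode="same"), 1, noisy)

rng = np.random.default_rng(42)
t_orig = np.tile(np.linspace(0, 255, 64), (32, 1))        # stand-in for T_orig
vs_rec = t_orig + rng.normal(0, 10, t_orig.shape)         # stand-in for VS_rec

taps = estimate_taps(t_orig, vs_rec, L=L)                  # side information in the bitstream
vs_rec_final = apply_taps(vs_rec, taps)                    # VS_rec_final at the decoder

core = (slice(None), slice(L, -L))                         # ignore convolution border columns
mse_before = np.mean((vs_rec[core] - t_orig[core]) ** 2)
mse_after = np.mean((vs_rec_final[core] - t_orig[core]) ** 2)
print(f"taps={np.round(taps, 3)}  MSE {mse_before:.1f} -> {mse_after:.1f}")
```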
It can be seen from the above specific embodiments that the embodiments of the present invention provide a video encoding/decoding method and apparatus that introduce an enhanced view synthesis reference frame based on Wiener filtering, in order to improve the compression coding efficiency and quality of three-dimensional video. The embodiments of the present invention use a Wiener filter to optimize the reference frame after view synthesis, improving the quality of the virtual viewpoint image.
It should be noted that any process or method description in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a specific logical function or process, and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, which should be understood by those skilled in the art to which the embodiments of the present invention belong.
In the description of this specification, references to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" mean that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in an appropriate manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention, and a person of ordinary skill in the art may make changes, modifications, replacements, and variations to the above embodiments within the scope of the present invention without departing from the principles and spirit of the present invention.

Claims (6)

1. An encoding method for three-dimensional video, characterized by comprising:
S1. encoding the depth map and texture map of a reference view to obtain a reference-view reconstructed depth map D_rec and a reference-view reconstructed texture map T_rec;
S2. according to the reference-view reconstructed depth map D_rec, the reference-view reconstructed texture map T_rec, and the corresponding camera parameters, obtaining a target-view synthesized reference frame VS_rec through a 3D geometric transformation;
S3. obtaining a reference-view original texture map T_orig as the original signal, taking the target-view synthesized reference frame VS_rec as the noisy signal, performing Wiener filtering to obtain an optimized target-view synthesized reference frame VS_rec_wiener, and solving for the Wiener filter coefficients; and
S4. adding the optimized target-view synthesized reference frame VS_rec_wiener to the reference frame set, and writing the Wiener filter coefficients into the bitstream.
2. The encoding method for three-dimensional video according to claim 1, characterized in that the Wiener filter coefficients are calculated by solving the Wiener-Hopf equations.
3. A decoding method for three-dimensional video, characterized by comprising:
S1. receiving a bitstream obtained by the encoding method for three-dimensional video according to any one of claims 1-2, and preparing to decode reference frame images frame by frame;
S2. determining the type of the reference frame image: if it is a target-view synthesized reference frame, performing S31-S34; if it is an independent-view reference frame, performing S4;
S31. extracting the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec from the bitstream;
S32. extracting the corresponding camera parameters from the bitstream and, in combination with the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec, obtaining the target-view synthesized reference frame VS_rec through a 3D geometric transformation;
S33. extracting the Wiener filter coefficients from the bitstream, and applying noise-reduction filtering to the target-view synthesized reference frame VS_rec to obtain a final target-view synthesized reference frame VS_rec_final; and
S34. reading the information of the final target-view synthesized reference frame VS_rec_final to complete the decoding of the video image;
S4. directly reading the information of the reference frame image to complete the decoding of the video image.
4. An encoder for three-dimensional video, characterized by comprising:
S1. a coding and reconstruction module, configured to encode the depth map and texture map of a reference view and obtain a reference-view reconstructed depth map D_rec and a reference-view reconstructed texture map T_rec;
S2. a geometric transformation module, configured to obtain a target-view synthesized reference frame VS_rec through a 3D geometric transformation according to the reference-view reconstructed depth map D_rec, the reference-view reconstructed texture map T_rec, and the corresponding camera parameters;
S3. a Wiener filtering computation module, configured to take a reference-view original texture map T_orig as the original signal and the target-view synthesized reference frame VS_rec as the noisy signal, perform Wiener filtering to obtain an optimized target-view synthesized reference frame VS_rec_wiener, and solve for the Wiener filter coefficients; and
S4. a bitstream sending module, configured to add the optimized target-view synthesized reference frame VS_rec_wiener to the reference frame set, write the Wiener filter coefficients into the bitstream, and then send the bitstream.
5. The encoder for three-dimensional video according to claim 4, characterized in that, in the Wiener filtering computation module, the Wiener filter coefficients are calculated by solving the Wiener-Hopf equations.
6. A decoder for three-dimensional video, characterized by comprising:
a bitstream receiving module, configured to receive the bitstream transmitted by the encoder for three-dimensional video according to any one of claims 4-5, and prepare to decode reference frame images frame by frame;
a judgment module, configured to determine the type of the reference frame image: if it is a target-view synthesized reference frame, the processing proceeds in sequence through a reconstruction module, a geometric transformation module, a Wiener filtering computation module, and a decoding and reading module; if it is an independent-view reference frame, the processing proceeds directly to the decoding and reading module;
the reconstruction module, configured to extract the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec from the bitstream;
the geometric transformation module, configured to extract the corresponding camera parameters from the bitstream and, in combination with the reference-view reconstructed depth map D_rec and the reference-view reconstructed texture map T_rec, obtain the target-view synthesized reference frame VS_rec through a 3D geometric transformation;
the Wiener filtering computation module, configured to extract the Wiener filter coefficients from the bitstream, apply noise-reduction filtering to the target-view synthesized reference frame VS_rec, obtain a final target-view synthesized reference frame VS_rec_final, and send it to the decoding and reading module as the updated reference frame image; and
the decoding and reading module, configured to read the information of the reference frame image and complete the decoding of the video image.
CN201310059094.5A 2013-02-25 2013-02-25 Three-dimensional video encoding method, decoding method, and encoder Expired - Fee Related CN103108187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310059094.5A CN103108187B (en) 2013-02-25 2013-02-25 Three-dimensional video encoding method, decoding method, and encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310059094.5A CN103108187B (en) 2013-02-25 2013-02-25 Three-dimensional video encoding method, decoding method, and encoder

Publications (2)

Publication Number Publication Date
CN103108187A CN103108187A (en) 2013-05-15
CN103108187B true CN103108187B (en) 2016-09-28

Family

ID=48315715

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310059094.5A Expired - Fee Related CN103108187B (en) Three-dimensional video encoding method, decoding method, and encoder

Country Status (1)

Country Link
CN (1) CN103108187B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105474643A (en) * 2013-07-19 2016-04-06 联发科技(新加坡)私人有限公司 Method of simplified view synthesis prediction in 3d video coding
CN103428499B (en) * 2013-08-23 2016-08-17 清华大学深圳研究生院 The division methods of coding unit and the multi-view point video encoding method of use the method
CN104768013B (en) * 2014-01-02 2018-08-28 浙江大学 A kind of candidate pattern queue processing method and device
CN104202612B (en) * 2014-04-15 2018-11-02 清华大学深圳研究生院 The division methods and method for video coding of coding unit based on quaternary tree constraint
CN104284195B (en) * 2014-10-11 2018-12-25 华为技术有限公司 Depth map prediction technique, device, encoder and decoder in 3 D video
CN109076200B (en) 2016-01-12 2021-04-23 上海科技大学 Method and device for calibrating panoramic stereo video system
CN107770511A (en) * 2016-08-15 2018-03-06 中国移动通信集团山东有限公司 A kind of decoding method of multi-view point video, device and relevant device
CN109413421B (en) * 2018-10-26 2021-01-19 张豪 Video encoding method, video encoding apparatus, video decoding method, and video decoding apparatus
FR3096538A1 (en) * 2019-06-27 2020-11-27 Orange Multi-view video data processing method and device
CN111988597B (en) * 2020-08-23 2022-06-14 咪咕视讯科技有限公司 Virtual viewpoint synthesis method and device, electronic equipment and readable storage medium
CN118235392A (en) * 2021-12-31 2024-06-21 Oppo广东移动通信有限公司 Filtering coefficient generation and filtering method, video encoding and decoding method, device and system
CN114079779B (en) * 2022-01-12 2022-05-17 深圳传音控股股份有限公司 Image processing method, intelligent terminal and storage medium
WO2024212233A1 (en) * 2023-04-14 2024-10-17 Oppo广东移动通信有限公司 Encoding method, decoding method, and bitstream, encoder, decoder and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1515561B1 (en) * 2003-09-09 2007-11-21 Mitsubishi Electric Information Technology Centre Europe B.V. Method and apparatus for 3-D sub-band video coding
CN101146227A * (en) 2007-09-10 2008-03-19 Graduate University of Chinese Academy of Sciences Build-in gradual flexible 3D wavelet video coding algorithm
CN101420618B * (en) 2008-12-02 2011-01-05 Xi'an Jiaotong University Adaptive scalable video encoding and decoding architecture design method based on region of interest

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
System Design of Free Viewpoint Video Communication; Hideaki Kimata et al.; IEEE; 20041231; full text *
Improved Wiener filtering method for restoration of ring-aperture coded images in inertial confinement fusion; Liu Xiaohui et al.; Acta Optica Sinica; 20040831; full text *

Also Published As

Publication number Publication date
CN103108187A (en) 2013-05-15

Similar Documents

Publication Publication Date Title
CN103108187B (en) Three-dimensional video encoding method, decoding method, and encoder
CN102934451B Three-dimensional disparity map
CN103514580B (en) For obtaining the method and system of the super-resolution image that visual experience optimizes
CN101627635B (en) 3d video encoding
CN104885470B (en) Content Adaptive Partitioning for Prediction and Coding of Next Generation Video
CN103945208B A parallel synchronous scaling engine and method for multi-view naked-eye 3D display
CN102939763B (en) Calculating disparity for three-dimensional images
CN104854866B (en) Content adaptive, feature compensated prediction for next generation video
CN106791927A (en) A kind of video source modeling and transmission method based on deep learning
CN103562958B (en) The unrelated figure of yardstick
Stefanoski et al. Automatic view synthesis by image-domain-warping
CN103037214A (en) Video compression method
CN101729892B (en) Coding method of asymmetric stereoscopic video
CN101543081A (en) Dynamic image encoding device, dynamic image encoding method, dynamic image encoding program, dynamic image decoding device, dynamic image decoding method, and dynamic image decoding program
CN104247427A (en) Effective prediction using partition coding
MY143068A (en) Implicit weighting of reference pictures in a video encoder
CN107277550A (en) Multi-view signal codec
CN102291579B (en) Rapid fractal compression and decompression method for multi-cast stereo video
CN102685532A (en) Coding method for free view point four-dimensional space video coding system
CN102812716A (en) Image processor, image processing method, and program
EP3343923A1 (en) Motion vector field coding method and decoding method, and coding and decoding apparatuses
MX2008002391A (en) Method and apparatus for encoding multiview video.
CN101198061A Stereoscopic video stream encoding method based on viewpoint image mapping
CN102413353A (en) Code rate allocation method for multi-view video and depth map in stereo video coding process
CN106791768A (en) A kind of depth map frame per second method for improving that optimization is cut based on figure

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160928