CN104408710B - Global parallax estimation method and system - Google Patents
- Publication number: CN104408710B
- Application number: CN201410604055.3A (CN201410604055A)
- Authority
- CN
- China
- Prior art keywords
- points
- point
- image
- matching
- taking
- Legal status: Active
Classifications
- G06T7/593 Depth or shape recovery from multiple images from stereo images (G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL; G06T7/00—Image analysis; G06T7/50—Depth or shape recovery; G06T7/55—Depth or shape recovery from multiple images)
- G06T2207/10021 Stereoscopic video; stereoscopic image sequence (G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10016—Video; image sequence)
- G06T2207/20032 Median filtering (G06T2207/20—Special algorithmic details; G06T2207/20024—Filtering details)
Abstract
The invention relates to a global parallax estimation method and system. When the matching space is calculated, sampling points are selected on the image according to a preset rule, and the first matching space and the second matching space are then calculated under constraint conditions. The constraint conditions include a linear constraint and a spatial constraint based on the sampling points: the linear constraint bounds the color Euclidean distance between the current pixel point and the search point, and the spatial constraint bounds the color Euclidean distance between the search point and the sampling point. Because the two constraints are applied simultaneously, the calculated matching space fits the edges of objects in the image more closely, which improves the accuracy of the matching space calculation and thereby guarantees the accuracy of the final parallax calculation.
Description
Technical Field
The application relates to the field of stereo matching image processing, in particular to a global parallax estimation method and system.
Background
In a traditional video system, users can only passively watch the pictures shot by a camera and cannot view the scene from other viewpoints. Multi-view video allows users to watch the scene from multiple viewpoints, enhancing interactivity and the 3D (three-dimensional) sensory effect, and therefore has broad application prospects in stereoscopic television, video conferencing, automatic navigation, virtual reality, and other fields. However, the stronger interactivity and sensory effect greatly increase the data volume of the video, burdening its storage and transmission; how to solve these problems has become a current research hotspot.
Stereo matching, also known as disparity estimation, estimates the geometric relationship between pixels in corresponding images from multi-view image data (typically two views) acquired by a front-end camera. By using disparity estimation, information of a corresponding viewpoint can be obtained from information of one viewpoint and depth (disparity) information thereof, so that the original data volume is reduced, and convenience is provided for transmission and storage of a multi-view video.
Depending on the specific implementation details, stereo matching methods can be broadly divided into local stereo matching algorithms and global stereo matching algorithms (see Scharstein D, Szeliski R. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms [J]. International Journal of Computer Vision, 2002, 47(1-3): 7-42). Local stereo matching algorithms are fast but not very accurate; global stereo matching algorithms obtain the parallax result by optimizing a global energy function and achieve higher accuracy, but their lower speed hinders practical application. However, some improved global stereo matching algorithms have been developed whose speed is comparable to that of local algorithms, such as the fast belief propagation algorithm (see, for example, Pedro F. Felzenszwalb, Daniel P. Huttenlocher. Efficient Belief Propagation for Early Vision. International Journal of Computer Vision, October 2006, Volume 70, Issue 1, pp 41-54).
As can be seen from the above, stereo matching has attracted much attention as an important link in multi-view video, and a large number of stereo matching algorithms have emerged. However, stereo matching still faces many problems, especially in accuracy and stability, which need further improvement.
Disclosure of Invention
According to a first aspect of the present application, there is provided a global disparity estimation method, comprising:
reading in a first viewpoint image and a second viewpoint image, wherein the first viewpoint image is an image of a target acquired from a first viewpoint, and the second viewpoint image is an image of the target acquired from a second viewpoint;
selecting sampling points on the first viewpoint image according to a preset rule;
sequentially selecting pixel points on a first viewpoint image as current pixel points, taking the current pixel points as original points, and searching by taking the pixel points one by one as search points along the positive direction and the negative direction of a first axis until the point which does not meet the preset constraint condition is searched, and taking all the searched points which meet the constraint condition as first matching points; respectively taking each first matching point as an origin, searching by taking pixel points one by one as search points along the positive direction and the negative direction of the second axis until a point which does not meet a preset constraint condition is searched, and taking all the searched points which meet the constraint condition as second matching points; taking the first matching point and the second matching point as a first matching space of the current pixel point;
taking the current pixel point as an origin, searching by taking the pixel points one by one as search points along the positive direction and the negative direction of the second axis until a point which does not meet a preset constraint condition is searched, and taking all the searched points which meet the constraint condition as third matching points; respectively taking each third matching point as an origin, searching by taking pixel points one by one as a searching point along the positive direction and the negative direction of the first axis until a point which does not meet a preset constraint condition is searched, and taking all the searched points which meet the constraint condition as fourth matching points; taking the third matching point and the fourth matching point as a second matching space of the current pixel point;
the constraint conditions comprise linear constraint conditions and space constraint conditions based on the sampling points, the linear constraint conditions are constraints of Euclidean distances between the current pixel point and the search point on the color, the space constraint conditions are constraints of Euclidean distances between the search point and the sampling points on the color, and the first axis is perpendicular to the second axis;
calculating the sum of the matching costs of all the points in the first matching space, and calculating the sum of the matching costs of all the points in the second matching space;
calculating initial parallax according to the sum of the matching costs of all the points in the first matching space and the sum of the matching costs of all the points in the second matching space, and screening to obtain reliable points;
performing image blocking on the first viewpoint image and the second viewpoint image;
and respectively calculating the final parallax of each pixel point in the first viewpoint image according to the initial parallax of the reliable point based on the image blocks.
In one embodiment, the constraint is:
where l1 is the distance from the current pixel point p to the search point q, l2 is the distance from p to the sampling point ei, Olab(p, q) is the Euclidean distance in color between p and q, Olab(q, ei) is the Euclidean distance in color between the search point q and the sampling point ei, and k1, k2, k3, k4, w1, w2 are user-defined parameters with k1 > k2, k4 > k3 and w2 > w1.
In an embodiment, the predetermined rule is that the distance between each sampling point and the four adjacent sampling points on the upper, lower, left and right sides is a predetermined distance.
In an embodiment, after reading in the first viewpoint image and the second viewpoint image, before selecting the sample points, the method further includes: and performing epipolar line correction on the first viewpoint image and the second viewpoint image.
In an embodiment, after image blocking is performed on the first viewpoint image and the second viewpoint image and before the final parallax is calculated, the method further includes marking occlusion regions in the image, specifically: for each block of the first viewpoint image, in each line, take the first reliable point L(p) from the left end and, according to its parallax dp, calculate the corresponding point R(p - dp) in the second viewpoint image; starting from the point R(p - dp - 1) in the second viewpoint image, search leftward for the first reliable point R(q) and, from its parallax dq, calculate the corresponding point L(q + dq) in the first viewpoint image; the points horizontally between L(p) and L(q + dq) are the occlusion points.
In an embodiment, the initial disparity is calculated by using a fast belief propagation global algorithm according to the sum of the matching costs of all the points in the first matching space and the sum of the matching costs of all the points in the second matching space.
In an embodiment, image blocking the first view image and the second view image includes:
dividing the first viewpoint image and the second viewpoint image into a plurality of image blocks;
merging image blocks according to colors: merging the image blocks with the pixel points less than the preset value with the image blocks with the closest colors in the adjacent image blocks; and/or when the colors of two adjacent image blocks are close and the sum of the pixel points of the two image blocks is smaller than a preset value, combining the two image blocks;
merging the image blocks according to the parallax: merging the image blocks with the reliable points with the quantity less than the preset value with the image blocks with the closest colors in the adjacent image blocks, wherein the reliable points are obtained by screening according to the initial parallax of each pixel point in the original image; and/or judging whether the parallax change of two adjacent image blocks is smooth or not, and if so, merging the two image blocks.
In an embodiment, the dividing the first view image and the second view image into a plurality of image blocks specifically includes: the image is divided into image blocks based on superpixel color blocking.
In an embodiment, the determining whether the disparity change of two adjacent image blocks is smooth includes:
finding the boundary-adjacent point pairs PS(i), PSk(i) of the current image block S and its adjacent image block Sk, where PS(i) and PSk(i) are the i-th adjacent point pair of block S and block Sk;
with PS(i) as the center, searching an a x b rectangular box and calculating the mean VS(i) of the parallaxes of the reliable points in the box that belong to block S; with PSk(i) as the center, searching an a x b rectangular box and calculating the mean VSk(i) of the parallaxes of the reliable points in the box that belong to block Sk, where a and b are preset pixel widths;
when max |VS(i) - VSk(i)| < j, judging that the parallax change of the current image block S and the adjacent image block Sk is smooth, where i ∈ WS,Sk, WS,Sk is the set of indices of all boundary-adjacent point pairs of block S and block Sk, and j is a preset value.
According to a second aspect of the present application, there is also provided a global disparity estimation system, comprising:
the image reading module is used for reading a first viewpoint image and a second viewpoint image, wherein the first viewpoint image is an image of a target acquired from a first viewpoint, and the second viewpoint image is an image of the target acquired from a second viewpoint;
the matching space calculation module is used for selecting sampling points on the first viewpoint image according to a preset rule, then sequentially selecting pixel points on the first viewpoint image as current pixel points, taking the current pixel points as original points, searching in the positive direction and the negative direction of the first axis by taking the pixel points one by one as search points until the point which does not meet the preset constraint condition is searched, and taking all the searched points which meet the constraint condition as first matching points; respectively taking each first matching point as an origin, searching by taking pixel points one by one as search points along the positive direction and the negative direction of the second axis until a point which does not meet a preset constraint condition is searched, and taking all the searched points which meet the constraint condition as second matching points; taking the first matching point and the second matching point as a first matching space of the current pixel point;
taking the current pixel point as an origin, searching by taking the pixel points one by one as search points along the positive direction and the negative direction of the second axis until a point which does not meet a preset constraint condition is searched, and taking all the searched points which meet the constraint condition as third matching points; respectively taking each third matching point as an origin, searching by taking pixel points one by one as a searching point along the positive direction and the negative direction of the first axis until a point which does not meet a preset constraint condition is searched, and taking all the searched points which meet the constraint condition as fourth matching points; taking the third matching point and the fourth matching point as a second matching space of the current pixel point;
the constraint conditions comprise linear constraint conditions and space constraint conditions based on the sampling points, the linear constraint conditions are constraints of Euclidean distances between the current pixel point and the search point on the color, the space constraint conditions are constraints of Euclidean distances between the search point and the sampling points on the color, and the first axis is perpendicular to the second axis;
the matching cost calculation module is used for calculating the sum of the matching costs of all the points in the first matching space and calculating the sum of the matching costs of all the points in the second matching space;
the initial parallax calculation module is used for calculating the initial parallax according to the sum of the matching costs of all the points in the first matching space and the sum of the matching costs of all the points in the second matching space, and screening to obtain reliable points;
an image blocking module for performing image blocking on the first viewpoint image and the second viewpoint image;
and the final parallax calculation module is used for calculating the final parallax of each pixel point in the first viewpoint image according to the initial parallax of the reliable point based on the image blocks.
According to the global parallax estimation method and system above, when the matching space is calculated, sampling points are selected on the image according to a preset rule, and the first matching space and the second matching space are then calculated under the constraint conditions. The constraint conditions include a linear constraint and a spatial constraint based on the sampling points; the linear constraint bounds the Euclidean distance in color between the current pixel point and the search point, and the spatial constraint bounds the Euclidean distance in color between the search point and the sampling point. Because the two constraints are applied simultaneously, the calculated matching space lies closer to the edges of objects in the image, so the accuracy of the matching space calculation is improved and the accuracy of the final parallax calculation is guaranteed.
Drawings
Fig. 1 is a schematic flowchart of a global disparity estimation method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating sampling points selected in a matching space calculation method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a calculation of a first matching space in a matching space calculation method according to an embodiment of the present application;
fig. 4 is a block diagram of a global disparity estimation system according to an embodiment of the present disclosure;
fig. 5 is a test result of the global disparity estimation method on the Middlebury test platform according to the embodiment of the present application.
Detailed Description
The present application will be described in further detail below with reference to the accompanying drawings by way of specific embodiments.
Referring to fig. 1, the present embodiment provides a global disparity estimation method, including the following steps:
s00: a first viewpoint image and a second viewpoint image are read, the first viewpoint image being an image of a target acquired from a first viewpoint, and the second viewpoint image being an image of the target acquired from a second viewpoint. For convenience of explanation of the present application, the first view image is a left view image (hereinafter, referred to as a left image) and the second view image is a right view image (hereinafter, referred to as a right image) as an example. The left and right images may be images in a binocular sequence captured by a binocular camera or two images captured by a monocular camera at a certain horizontal displacement. Typically, the left and right images are color images, and in some embodiments may be non-color images.
In some embodiments, the read-in left image and right image are images that have undergone epipolar line correction, that is, the epipolar lines of the two images are horizontally parallel, so as to facilitate the subsequent matching cost calculation.
S10: and calculating initial parallax and screening to obtain reliable points.
When calculating the initial parallax, firstly, a matching space of a pixel point in an image needs to be calculated, in this embodiment, the matching space includes a first matching space and a second matching space, and the calculation method is as follows:
and selecting sampling points according to a preset rule. Firstly, a sampling point e is selected in the left image space, specifically, the distance between each sampling point and four adjacent sampling points on the upper, lower, left and right sides of each sampling point is a preset distance d, and all the sampling points form a grid shape as shown in fig. 2. In other embodiments, the selection of the sampling points may also adopt other agreed modes, that is, the preset rule for selecting the sampling points may be formulated according to actual requirements.
And calculating a first matching space and a second matching space according to constraint conditions, wherein the constraint conditions comprise linear constraint conditions and space constraint conditions based on the sampling points, the linear constraint conditions are constraints of Euclidean distances between the current pixel point and the search point on the color, and the space constraint conditions are constraints of Euclidean distances between the search point and the sampling points on the color.
For a point p in the left image, arms are extended from p in both directions along the X axis (first axis) and the Y axis (second axis) according to color differences; these arms are used to calculate the matching space.
Pixel points on the left image are selected in turn as the current pixel point p. With p as the origin, search pixel by pixel along the positive and negative directions of the X axis, stopping when a point that does not meet the preset constraint condition is reached, and take all searched points that meet the constraint condition as first matching points. Then, with each first matching point as an origin, search pixel by pixel along the positive and negative directions of the Y axis until a point that does not meet the preset constraint condition is reached, and take all searched points that meet the constraint condition as second matching points. The first matching points and the second matching points form the first matching space S1 of point p. Fig. 3 is a schematic diagram of the calculation process of the first matching space S1.
Then, with point p as the origin, search pixel by pixel along the positive and negative directions of the Y axis until a point that does not meet the preset constraint condition is reached, and take all searched points that meet the constraint condition as third matching points. With each third matching point as an origin, search pixel by pixel along the positive and negative directions of the X axis until a point that does not meet the preset constraint condition is reached, and take all searched points that meet the constraint condition as fourth matching points. The third matching points and the fourth matching points form the second matching space S2 of point p.
With point p as the origin, the points meeting the constraint conditions that are searched along the positive X, negative X, positive Y and negative Y directions form the right arm, left arm, upper arm and lower arm shown in fig. 2, respectively.
In a specific embodiment, the constraints are:
where l1 is the distance from the current pixel point p to the search point q, l2 is the distance from p to the sampling point ei, and l1 and l2 are related by condition ②: k3*l1 < l2 < k4*l1. Olab(p, q) is the Euclidean distance between p and q in Lab color, Olab(q, ei) is the Euclidean distance between the search point q and the sampling point ei in Lab color, and k1, k2, k3, k4, w1, w2 are user-defined parameters with k1 > k2, k4 > k3 and w2 > w1, e.g. k1 = 15, k2 = 5, k3 = 1.5, k4 = 3, w1 = 10, w2 = 100. It should be noted that for the i-th sampling point ei, suitable values of k3 and k4 make the value of i unique, so that a unique sampling point is determined.
In constraint (1), condition ① is the linear constraint and condition ③ is the spatial constraint based on the sampling points. When the matching space is calculated, colors change at different speeds in different pictures, and even in different areas of the same picture, so an algorithm relying on a single linear constraint is difficult to keep stable. In this embodiment, the introduced spatial constraint mainly improves the points in the boundary regions of objects in the image, so that the calculated matching space lies closer to object edges, and the stability of the algorithm is enhanced by referring to more reasonable color information. Therefore, on the premise of the linear constraint, combining the spatial constraint based on sampling points better guarantees the accuracy and stability of stereo matching. In other embodiments, the above constraint conditions may be changed appropriately according to actual needs.
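The arm-growing search itself can be pictured with the following Python sketch. It is not the patent's exact formula: the way the thresholds k1, k2, w1, w2 enter the linear constraint, and the threshold reused for the spatial constraint, are assumptions made for illustration, and sample_of is a hypothetical helper implementing condition ②:

```python
import numpy as np

def color_dist(lab, p, q):
    """Euclidean distance in Lab color between pixels p and q."""
    return float(np.linalg.norm(lab[p].astype(float) - lab[q].astype(float)))

def grow_arm(lab, p, step, sample_of, k1=15.0, k2=5.0, w1=10, w2=100):
    """Grow one arm of the matching space from pixel p along `step`
    ((0, 1), (0, -1), (1, 0) or (-1, 0)). Thresholding scheme (looser
    k1 near p, tighter k2 farther out, hard stop at w2, k1 reused for
    the sampling-point constraint) is an assumed reading of (1);
    sample_of(q) returns the sampling point e_i associated with q."""
    h, w = lab.shape[:2]
    arm = []
    q = (p[0] + step[0], p[1] + step[1])
    while 0 <= q[0] < h and 0 <= q[1] < w:
        l1 = abs(q[0] - p[0]) + abs(q[1] - p[1])      # distance p -> q
        if l1 > w2:
            break                                     # arm length limit
        thr = k1 if l1 <= w1 else k2                  # linear constraint ①
        if color_dist(lab, p, q) >= thr:
            break
        if color_dist(lab, q, sample_of(q)) >= k1:    # spatial constraint ③ (assumed threshold)
            break
        arm.append(q)
        q = (q[0] + step[0], q[1] + step[1])
    return arm
```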
After the matching space of the points is calculated, the method also comprises the step of calculating the matching cost of the points.
For a point Lp in the left image, matching is carried out within a specified range Ω in the right image, and the matching cost between Lp and every point in that range is calculated. The range Ω is the search range, i.e., the value range of the disparity, and lies on the same scan line (epipolar line) as the point Lp; since the left and right images have undergone epipolar rectification and the epipolar lines are horizontally parallel, the search range Ω is a horizontal line segment. For each disparity d in the range Ω, the first matching space S1 of the point Lp is matched against the points of the right image shifted by d; the matching cost of each point pair is obtained from a hybrid cost function, and the final matching cost is the sum C1 of the matching costs of all point pairs. The sum of matching costs C2 is calculated in the same way from the second matching space S2 of the point Lp.
The matching cost function of each point pair consists of three parts: a gray-space census transform, a color-space absolute value difference (denoted AD), and a bidirectional gradient. The specific calculation of each part is as follows:
(1) Census transform. The census transform is performed on a gray image, so the color image is first converted to grayscale. Let GS(p) denote the gray value of point p in the original image. For every point q other than p in a 7x9 window centered on p, a census value x(p, q) is generated; the calculation formula is:
x(p, q) = 1 if GS(q) < GS(p), and 0 otherwise …………(2)
The values x(p, q) are concatenated into a binary string B(p) according to the relative positions of p and q. After the left and right images are processed separately, two corresponding bit strings are obtained, and their difference is described by the Hamming distance, giving the cost value:
h(p, d) = Ham(BL(p), BR(p - d)) …………(3)
where d represents the parallax between the corresponding pixels.
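As an illustrative sketch (assuming the census comparison of formula (2) and ignoring image borders), the census cost h(p, d) of formulas (2)-(3) can be computed as:

```python
import numpy as np

def census_cost(gray_l, gray_r, p, d, win=(7, 9)):
    """Census cost h(p, d): build the bit string B(p) over a 7x9 window
    in each gray image and take the Hamming distance. Window orientation
    (7 rows x 9 columns) is an assumption; no border handling."""
    hy, hx = win[0] // 2, win[1] // 2
    y, x = p

    def census_bits(gray, cy, cx):
        patch = gray[cy - hy:cy + hy + 1, cx - hx:cx + hx + 1]
        return (patch < gray[cy, cx]).flatten()   # x(p, q), formula (2)

    b_l = census_bits(gray_l, y, x)
    b_r = census_bits(gray_r, y, x - d)           # corresponding point p - d
    return int(np.count_nonzero(b_l != b_r))      # Hamming distance, formula (3)
```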
(2) AD value
The absolute value difference is a common measure of the similarity between two points. This embodiment uses the AD value of the two points in color space; the cost value obtained from the AD value is:
CAD(p, d) = ||cL(p) - cR(p - d)|| …………(4)
where cL(p) is the RGB color of point p in the left image, cR(p - d) is the RGB color of the point in the right image corresponding to p through the parallax d, and || · || denotes the Euclidean distance between the two colors.
(3) Gradient of gradient
The gradient is selected as a cost term; this embodiment uses the bidirectional gradient, i.e., the gradients in the horizontal and vertical directions. Let Nx and Ny be the derivatives (gradients) in the x and y directions respectively, IL(p) the gray value of the point to be calculated in the left image, IR(p - d) the gray value of the corresponding point in the right image, and d the parallax between the two points. Then
CGDx(p, d) = ||Nx(IL(p)) - Nx(IR(p - d))||
CGDy(p, d) = ||Ny(IL(p)) - Ny(IR(p - d))|| …………(5)
CGD = CGDx + CGDy
(4) Hybrid cost function
The final cost function is a weighted mixture of the three cost terms, as shown in formula (6), where a, b and g are the weights of the terms, representing their contribution to the final cost value:
C(x, y, d) = a*Ccensus + b*CAD(p, d) + g*CGD …………(6)
where x and y are coordinate values, d is the parallax of the point (x, y), and Ccensus is the value h(p, d) obtained at the corresponding point from formula (3).
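A compact Python sketch of the hybrid cost of formula (6) follows. The default weights a = b = g = 1 are placeholders, since the patent does not give their values, and census_cost is the sketch from the census section above:

```python
import numpy as np

def hybrid_cost(p, d, gray_l, gray_r, rgb_l, rgb_r, a=1.0, b=1.0, g=1.0):
    """Hybrid cost C(x, y, d) = a*Ccensus + b*CAD + g*CGD, formula (6)."""
    y, x = p
    c_census = census_cost(gray_l, gray_r, p, d)            # formula (3)
    c_ad = float(np.linalg.norm(rgb_l[y, x].astype(float)
                                - rgb_r[y, x - d].astype(float)))  # formula (4)
    # Bidirectional gradients, formula (5); np.gradient returns (d/dy, d/dx).
    # Recomputed here per call for brevity; precompute them in practice.
    gy_l, gx_l = np.gradient(gray_l.astype(float))
    gy_r, gx_r = np.gradient(gray_r.astype(float))
    c_gd = (abs(gx_l[y, x] - gx_r[y, x - d])
            + abs(gy_l[y, x] - gy_r[y, x - d]))
    return a * c_census + b * c_ad + g * c_gd
```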
Preferably, after the sum C1 of the matching costs of all points in the first matching space and the sum C2 of the matching costs of all points in the second matching space are calculated, the initial parallax is calculated from C1 and C2 using a fast belief propagation global algorithm, to improve the accuracy and stability of stereo matching. The specific calculation is as follows:
the relationship between the confidence B and the energy function E is:
B=e-E…………(7)
At this point, maximizing the confidence B is equivalent to minimizing the energy function E, and the energy of point p taking parallax dp can be expressed as:
E(dp) = Dp(dp) + Σ_{r∈N(p)} M^T_{r→p}(dp) …………(8)
where N(p) is the set of the 4 points above, below, left of and right of point p, M^T_{r→p}(dp) is the energy transmitted from point r to point p after T iterations when the parallax of p is dp, and Dp(dp) is the local matching cost:
Dp(dp) = [C1(p, dp) + C2(p, dp)]/2 …………(9)
The energy transmitted from point p to point q after t iterations can then be calculated as:
M^t_{p→q}(dq) = min over dp of ( V(dp, dq) + Dp(dp) + Σ_{r∈N(p)\q} M^{t-1}_{r→p}(dp) ) …………(10)
where V(dp, dq) is the discontinuity (smoothness) cost between the parallaxes dp and dq, and N(p)\q is the set of the 4 points adjacent to point p excluding point q.
The optimal parallax d*p of point p (i.e., the initial parallax) is then obtained by minimizing the energy function E:
d*p = argmin over dp ∈ Ω of E(dp) …………(11)
where Ω is the value range of the parallax.
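The following Python sketch shows a plain min-sum belief propagation of formulas (8)-(11) on a 4-connected grid. The coarse-to-fine scheduling that makes the cited Felzenszwalb-Huttenlocher algorithm "fast" is omitted, the smoothness term V is passed in by the caller, and border wrap-around from np.roll is ignored:

```python
import numpy as np

def bp_initial_disparity(data_cost, smooth, n_iter=5):
    """data_cost[y, x, dp] holds D_p(dp) of formula (9); smooth(dp, dq)
    is the discontinuity cost V of formula (10)."""
    H, W, L = data_cost.shape
    offs = {'down': (1, 0), 'up': (-1, 0), 'right': (0, 1), 'left': (0, -1)}
    opp = {'down': 'up', 'up': 'down', 'right': 'left', 'left': 'right'}
    # msg[k][y, x] = message that travelled in direction k into pixel (y, x)
    msg = {k: np.zeros((H, W, L)) for k in offs}
    V = np.array([[smooth(dp, dq) for dq in range(L)] for dp in range(L)])

    for _ in range(n_iter):
        new = {}
        for k, (dy, dx) in offs.items():
            # Sender energy, excluding the message received from the target
            # pixel: the sum over N(p)\q in formula (10).
            e = data_cost + sum(msg[j] for j in offs if j != opp[k])
            m = (e[:, :, :, None] + V[None, None, :, :]).min(axis=2)
            new[k] = np.roll(m, (dy, dx), axis=(0, 1))   # deliver to neighbor
        msg = new

    belief = data_cost + sum(msg.values())               # formula (8)
    return belief.argmin(axis=2)                         # formula (11)
```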
S20: further screening for reliable points
Since many points of the calculated initial disparity are unreliable and would affect the final result, this embodiment performs further screening of reliable points by matching the left and right disparity maps. Let dL(p) denote the parallax of point p in the left image and dR(p) that of point p in the right image. The screening formula is the left-right consistency check:
match(p) = 1 if dL(p) = dR(p - dL(p)), and 0 otherwise …………(12)
match(p) equal to 1 indicates that point p is reliable; 0 indicates that it is unreliable.
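A sketch of this screening; the agreement tolerance tol is an assumption (formula (12) as written uses strict equality, i.e., tol = 0):

```python
import numpy as np

def screen_reliable(disp_l, disp_r, tol=0):
    """match(p) per formula (12): p is reliable when the left disparity
    agrees with the right disparity at the matched position."""
    h, w = disp_l.shape
    match = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            xr = x - int(disp_l[y, x])
            if 0 <= xr < w and abs(int(disp_l[y, x]) - int(disp_r[y, xr])) <= tol:
                match[y, x] = 1
    return match
```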
The global disparity estimation further includes a step of blocking the image. The image is first divided into a large number of very small fragments (image blocks); preferably, this division is based on superpixel color blocking, and the image blocks are then merged according to color and according to parallax. Superpixel-based color blocking means taking a (usually large) number of superpixel points in space and assigning every pixel to the nearest superpixel point, judged from spatial and color information. Each superpixel point and the pixels closest to it form one block, so the number of superpixel points equals the number of generated blocks. With enough superpixel points, superpixel-based color blocking delineates object boundaries well, but the large number of generated blocks burdens the subsequent computation.
S30: merging image blocks according to colors: merging the image blocks with the pixel points less than the preset value with the image blocks with the closest colors in the adjacent image blocks; and/or when the colors of two adjacent image blocks are close and the sum of the pixel points of the two image blocks is smaller than a preset value, combining the two image blocks.
In this embodiment, it is assumed that for the image block s, the number of pixels is p(s), and the number of reliable points is r(s).
(1) The fragments obtained by the division are extremely small, so their number is large and the memory required by subsequent processing is huge; blocks with very few pixels are therefore merged with surrounding blocks. When p(s) < k1 (k1 a preset value), the block is merged with the neighboring block whose color is closest to it; the closeness of colors can be judged by any method in the prior art.
(2) If the colors of two adjacent blocks are close enough, they are also merged, to improve the stability of the partitioning, while ensuring that the merged block does not become too large: for blocks s1 and s2, when p(s1) + p(s2) < k2 (k2 a preset value), s1 and s2 are merged.
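Merge rule (1) can be sketched as follows; the block adjacency graph, mean Lab colors and pixel counts per block are assumed inputs, and the default k1 is illustrative:

```python
import numpy as np

def merge_small_blocks(blocks, adj, k1=20):
    """Merge rule (1): a block with fewer than k1 pixels is absorbed by
    its closest-colored neighbor. blocks: id -> (mean Lab color, pixel
    count); adj: id -> set of neighbor ids."""
    for s in list(blocks):
        if s not in blocks:
            continue                        # already merged away
        color, npix = blocks[s]
        if npix >= k1 or not adj[s]:
            continue
        # Neighbor with the closest mean color
        t = min(adj[s], key=lambda n: np.linalg.norm(color - blocks[n][0]))
        c_t, n_t = blocks[t]                # absorb s into t (weighted mean color)
        blocks[t] = ((c_t * n_t + color * npix) / (n_t + npix), n_t + npix)
        for n in adj[s]:                    # rewire s's neighbors to t
            adj[n] = (adj[n] - {s}) | ({t} if n != t else set())
        adj[t] = (adj[t] | adj[s]) - {s, t}
        del blocks[s], adj[s]
    return blocks, adj
```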
S40: merging the image blocks according to the parallax: merging the image blocks with the reliable points with the quantity less than the preset value with the image blocks with the closest colors in the adjacent image blocks, wherein the reliable points are obtained by screening according to the initial parallax of each pixel point in the original image; and/or judging whether the parallax change of two adjacent image blocks is smooth or not, and if so, merging the two image blocks.
Since the image blocks are used for the final disparity estimation (calculation of final disparity), the initial disparity has already been calculated in the previous step. Therefore, merging the blocks according to the disparity helps to make the last block more suitable for disparity estimation, and improves the accuracy.
(1) After the preceding reliable-point screening, some blocks contain so few reliable points that merging them according to parallax would harm accuracy, so these blocks must first be merged with other blocks. In this embodiment, when r(s) < k3 (k3 a preset value), the block is merged with the block whose color is closest. To find the block with the color closest to the current block, any method in the prior art can be used, for example comparing the color of the current block with its surrounding blocks.
(2) According to the characteristics of disparity estimation, the place where disparity changes smoothly needs to be classified into one block, so whether the two blocks are combined or not can be decided by judging whether the disparity changes smoothly between adjacent blocks, if so, the two blocks are combined, and if not, the two blocks are not combined.
In this embodiment, to judge whether the parallax change of two adjacent image blocks is smooth, first find the boundary-adjacent point pairs PS(i), PSk(i) of the current image block S and its adjacent image block Sk, where PS(i) and PSk(i) are the i-th adjacent point pair of block S and block Sk. Then, with PS(i) as the center, search an a x b rectangular box and calculate the mean VS(i) of the parallaxes of the reliable points in the box belonging to block S; with PSk(i) as the center, search an a x b rectangular box and calculate the mean VSk(i) of the parallaxes of the reliable points in the box belonging to block Sk, where a and b are preset pixel widths. When max |VS(i) - VSk(i)| < j, the parallax change of the current image block S and the adjacent image block Sk is judged to be smooth, where i ∈ WS,Sk, WS,Sk is the set of indices of all boundary-adjacent point pairs of block S and block Sk, and j is a preset value.
The following quantity can accordingly be defined:
th[S][Sk] = max over i ∈ WS,Sk of |VS(i) - VSk(i)| …………(13)
When th[S][Sk] < j, blocks S and Sk are merged.
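A sketch of this smoothness test (formula (13)); the box size a x b and threshold j are preset values whose defaults below are illustrative only:

```python
import numpy as np

def blocks_smooth(pairs, disp, reliable, label, s, sk, a=5, b=5, j=2.0):
    """th[s][sk] < j test of formula (13). pairs is the list of
    boundary-adjacent point pairs (P_S(i), P_Sk(i)); label[y, x] is the
    block id of each pixel; reliable is the match(p) map."""
    def local_mean(center, block):
        y, x = center
        ys = slice(max(y - a // 2, 0), y + a // 2 + 1)
        xs = slice(max(x - b // 2, 0), x + b // 2 + 1)
        mask = (label[ys, xs] == block) & (reliable[ys, xs] == 1)
        return disp[ys, xs][mask].mean() if mask.any() else None

    diffs = []
    for p_s, p_sk in pairs:
        v_s, v_sk = local_mean(p_s, s), local_mean(p_sk, sk)
        if v_s is not None and v_sk is not None:
            diffs.append(abs(v_s - v_sk))
    return bool(diffs) and max(diffs) < j   # th[s][sk] = max_i |V_S(i) - V_Sk(i)|
```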
In the image blocking method for global disparity estimation provided by this embodiment, not only color information but also disparity information is used for blocking, which further improves the accuracy of the final calculated disparity.
Since the left and right images are observed from different viewing angles, some parts visible in the left image do not appear in the right image, and some parts visible in the right image do not appear in the left image; these parts belong to the occlusion regions. Because such regions exist in only one image, the disparities calculated for them by the foregoing method are essentially all erroneous, and these errors would affect the final estimation result. It is therefore necessary to find the occlusion regions via color segmentation and mark them as unreliable points, to improve the final accuracy.
Taking the left image as an example, as one can verify by observing with the left and right eyes, the occlusion regions of the left image lie in the part of each color block that adjoins other blocks at its right end, while the part adjoining at the left end is non-occluded. For the right image, the occlusion regions lie in the part of each color block that adjoins other blocks at its left end, while the part adjoining at the right end is non-occluded.
In this embodiment, after the image color blocking is performed and before the final parallax is calculated, the method further includes marking the occlusion regions in the image, specifically: for each block of the left image, in each line, take the first reliable point L(p) from the left end and, according to its parallax dp, calculate the corresponding point R(p - dp) in the right image; starting from the point R(p - dp - 1) in the right image, search leftward for the first reliable point R(q) and, from its parallax dq, calculate the corresponding point L(q + dq) in the left image; the points horizontally between L(p) and L(q + dq) are the occlusion points.
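An illustrative per-row sketch of this marking procedure for the left image; the block/row bookkeeping (blocks_l yielding, for each block, each row index and the block's x-range on that row) is an assumed input, and disparities are integer arrays:

```python
def mark_occlusions(reliable_l, reliable_r, disp_l, disp_r, blocks_l):
    """Mark left-image occlusion points as described above."""
    occluded = set()
    for y, row_pixels in blocks_l:
        xs = [x for x in row_pixels if reliable_l[y, x]]
        if not xs:
            continue
        p = xs[0]                          # first reliable point L(p)
        dp = int(disp_l[y, p])
        xr = p - dp - 1                    # start at R(p - d_p - 1)
        while xr >= 0 and not reliable_r[y, xr]:
            xr -= 1                        # scan left for first reliable R(q)
        if xr < 0:
            continue
        q_back = xr + int(disp_r[y, xr])   # L(q + d_q) in the left image
        occluded.update((y, x) for x in range(q_back + 1, p))
    return occluded
```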
To further improve the accuracy, this embodiment also includes the step of median-filtering the existing reliable points based on the color blocks and removing some of the reliable points again. That is, when S20 is performed after S30, S20 may use the information from S30 when further screening reliable points. Note that some steps in fig. 1 are not bound to a strict execution order; the order may be determined according to specific requirements.
Taking the left image as an example, for each reliable point p in the left image, its disparity gradients along the X axis and the Y axis are first estimated. The estimation method is: select some reliable points lying in the same color block as p along the X axis, calculate the gradient each of them forms with p, and take the median as the X-axis gradient derivationX(p) estimated at point p; derivationY(p) is obtained in the same way in the Y direction. Then, for each point p in the left image, take all reliable points qi in the same block as p within a surrounding a x b box, and use their parallaxes d(qi), X-direction gradients derivationX(qi) and Y-direction gradients derivationY(qi) to estimate candidate parallaxes d(pi) for point p. The specific formula is:
d(pi) = d(qi) + (x[p] - x[qi]) * derivationX[qi] + (y[p] - y[qi]) * derivationY[qi]
…………(14)
All the d(pi) are sorted, the median is taken and rounded, and the result is compared with d(p); if they are not equal, the point is filtered out.
S50: the final disparity is calculated.
In this embodiment, taking the left image as an example, for each point p in the left image, take all reliable points qi in the same block as p within a surrounding e x f box, where e and f are preset pixel widths. Using their parallaxes d(qi) (i.e., the initial parallaxes calculated in the previous step), X-direction gradients derivationX(qi) and Y-direction gradients derivationY(qi), candidate parallaxes d(pi) of point p are estimated by the same formula:
d(pi) = d(qi) + (x[p] - x[qi]) * derivationX[qi] + (y[p] - y[qi]) * derivationY[qi]
All the d(pi) are sorted; the median, rounded, is the final disparity d(p) of point p. In other embodiments, any method in the prior art may be used to obtain the final parallax.
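Applying formula (14) inside an e x f window gives the final disparity; a sketch follows, where derivationX and derivationY are the per-point gradient estimates from the previous step and the default e, f are illustrative:

```python
import numpy as np

def final_disparity(p, disp, reliable, label, derivationX, derivationY,
                    e=15, f=15):
    """Final disparity d(p): extrapolate each reliable same-block point
    q_i through its gradients (formula (14)), take the rounded median."""
    y, x = p
    y0, y1 = max(y - e // 2, 0), min(y + e // 2 + 1, disp.shape[0])
    x0, x1 = max(x - f // 2, 0), min(x + f // 2 + 1, disp.shape[1])
    cand = []
    for qy in range(y0, y1):
        for qx in range(x0, x1):
            if reliable[qy, qx] and label[qy, qx] == label[y, x]:
                cand.append(disp[qy, qx]
                            + (x - qx) * derivationX[qy, qx]   # formula (14)
                            + (y - qy) * derivationY[qy, qx])
    return int(round(float(np.median(cand)))) if cand else None
```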
Referring to fig. 4, for the global disparity estimation method provided in the present embodiment, the present embodiment further provides a global disparity estimation system, which includes an image reading module 1000, a matching space calculation module 1001, a matching cost calculation module 1002, an initial disparity calculation module 1003, an image blocking module 1004, and a final disparity calculation module 1005.
The image reading module 1000 is configured to read a first viewpoint image and a second viewpoint image, where the first viewpoint image is an image of a target acquired from a first viewpoint, and the second viewpoint image is an image of the target acquired from a second viewpoint.
The matching space calculation module 1001 is configured to select sampling points on the first viewpoint image according to a preset rule, then sequentially select pixel points on the first viewpoint image as current pixel points, search the current pixel points as original points along a positive direction and a negative direction of a first axis by using the pixel points as search points one by one until a point which does not satisfy a preset constraint condition is searched, and use all the searched points which satisfy the constraint condition as first matching points; respectively taking each first matching point as an origin, searching by taking pixel points one by one as search points along the positive direction and the negative direction of the second axis until a point which does not meet the preset constraint condition is searched, and taking all the searched points which meet the constraint condition as second matching points; and taking the first matching point and the second matching point as a first matching space of the current pixel point. The matching space calculation module 1001 is further configured to search, with the current pixel point as an origin, in a positive direction and a negative direction along a second axis, with pixel points one by one as search points until a point that does not satisfy a preset constraint condition is searched, and use all the searched points that satisfy the constraint condition as third matching points; respectively taking each third matching point as an origin, searching by taking pixel points one by one as a searching point along the positive direction and the negative direction of the first axis until a point which does not meet the preset constraint condition is searched, and taking all the searched points which meet the constraint condition as fourth matching points; and taking the third matching point and the fourth matching point as a second matching space of the current pixel point. The constraint conditions comprise linear constraint conditions and space constraint conditions based on the sampling points, the linear constraint conditions are constraints of Euclidean distances between the current pixel point and the search point on the color, the space constraint conditions are constraints of Euclidean distances between the search point and the sampling points on the color, and the first axis is perpendicular to the second axis.
The matching cost calculation module 1002 is configured to calculate a sum of matching costs of all the points in the first matching space, and calculate a sum of matching costs of all the points in the second matching space.
The initial disparity calculating module 1003 is configured to calculate an initial disparity according to the sum of the matching costs of all the points in the first matching space and the sum of the matching costs of all the points in the second matching space, and filter the initial disparity to obtain reliable points.
The image blocking module 1004 is configured to perform image blocking on the original image by using any one of the above embodiments.
The final disparity calculating module 1005 is configured to calculate a final disparity of each pixel point in the first viewpoint image based on the image blocks.
The global disparity estimation system provided in this embodiment corresponds to the global disparity estimation method, and the working principle thereof is not described herein again.
Referring to fig. 5, which shows experimental results of the global disparity estimation method provided in the embodiment of the present application on the Middlebury data set, the test results on the Middlebury test platform show that the results obtained with this method (row 2) are superior to most current methods. In fig. 5, "non-occlusion region (nonocc)", "all regions (all)" and "discontinuous region (disc)" are used as evaluation indexes, and the error-rate threshold is set to 1.0, i.e., a point whose disparity differs from the true parallax (ground truth) by more than 1 is marked as an error point.
According to the global parallax estimation method and system, the mixed cost of the pixel points is obtained based on a robust mixed cost function, and the single-point cost is aggregated by adopting an improved aggregation space; then, performing optimal calculation of the global cost by adopting a rapid belief propagation global algorithm; finally, image blocks specially aiming at parallax estimation and marks of shielding points are adopted, so that the accuracy of final parallax calculation can be greatly improved.
Those skilled in the art will appreciate that all or part of the steps of the various methods in the above embodiments may be implemented by instructions associated with hardware via a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read-only memory, random access memory, magnetic or optical disk, and the like.
The foregoing is a more detailed description of the present application in connection with specific embodiments thereof, and it is not intended that the present application be limited to the specific embodiments thereof. It will be apparent to those skilled in the art from this disclosure that many more simple derivations or substitutions can be made without departing from the inventive concepts herein.
Claims (10)
1. A global disparity estimation system, comprising:
the image reading module is used for reading a first viewpoint image and a second viewpoint image, wherein the first viewpoint image is an image of a target acquired from a first viewpoint, and the second viewpoint image is an image of the target acquired from a second viewpoint;
the matching space calculation module is used for selecting sampling points on the first viewpoint image according to a preset rule, then sequentially selecting pixel points on the first viewpoint image as current pixel points, taking the current pixel points as original points, searching in the positive direction and the negative direction of the first axis by taking the pixel points one by one as search points until the point which does not meet the preset constraint condition is searched, and taking all the searched points which meet the constraint condition as first matching points; respectively taking each first matching point as an origin, searching by taking pixel points one by one as search points along the positive direction and the negative direction of the second axis until a point which does not meet a preset constraint condition is searched, and taking all the searched points which meet the constraint condition as second matching points; taking the first matching point and the second matching point as a first matching space of the current pixel point;
taking the current pixel point as an origin, searching by taking the pixel points one by one as search points along the positive direction and the negative direction of the second axis until a point which does not meet a preset constraint condition is searched, and taking all the searched points which meet the constraint condition as third matching points; respectively taking each third matching point as an origin, searching by taking pixel points one by one as a searching point along the positive direction and the negative direction of the first axis until a point which does not meet a preset constraint condition is searched, and taking all the searched points which meet the constraint condition as fourth matching points; taking the third matching point and the fourth matching point as a second matching space of the current pixel point;
the constraint conditions comprise linear constraint conditions and space constraint conditions based on the sampling points, the linear constraint conditions are constraints of Euclidean distances between the current pixel point and the search point on the color, the space constraint conditions are constraints of Euclidean distances between the search point and the sampling points on the color, and the first axis is perpendicular to the second axis;
the matching cost calculation module is used for calculating the sum of the matching costs of all the points in the first matching space and calculating the sum of the matching costs of all the points in the second matching space;
the initial parallax calculation module is used for calculating the initial parallax according to the sum of the matching costs of all the points in the first matching space and the sum of the matching costs of all the points in the second matching space, and screening to obtain reliable points;
an image blocking module for performing image blocking on the first viewpoint image and the second viewpoint image;
and the final parallax calculation module is used for calculating the final parallax of each pixel point in the first viewpoint image according to the initial parallax of the reliable point based on the image blocks.
2. The system of claim 1, wherein the constraints are:
where l1 is the distance from the current pixel point p to the search point q, l2 is the distance from p to the sampling point ei, Olab(p, q) is the Euclidean distance in color between p and q, Olab(q, ei) is the Euclidean distance in color between the search point q and the sampling point ei, and k1, k2, k3, k4, w1, w2 are user-defined parameters with k1 > k2, k4 > k3 and w2 > w1.
3. The system of claim 1, wherein the predetermined rule is such that each sample point is a predetermined distance from its upper, lower, left, and right four adjacent sample points.
4. The system of claim 1, wherein the image reading module is further configured to perform epipolar line correction on the first viewpoint image and the second viewpoint image after reading in the first viewpoint image and the second viewpoint image.
5. The system of claim 1, further comprising an occlusion region marking module for marking occlusion regions in the image after the image blocking module blocks the image and before the final parallax calculation module calculates the final parallax, specifically: for each block of the first viewpoint image, in each line, taking the first reliable point L(p) from the left end and, according to its parallax dp, calculating the corresponding point R(p - dp) in the second viewpoint image; starting from the point R(p - dp - 1) in the second viewpoint image, searching leftward for the first reliable point R(q) and, from its parallax dq, calculating the corresponding point L(q + dq) in the first viewpoint image; the points horizontally between L(p) and L(q + dq) being the occlusion points.
6. The system of claim 1, wherein the initial disparity calculation module calculates the initial disparity using a fast belief propagation global algorithm based on a sum of matching costs for all points in the first matching space and a sum of matching costs for all points in the second matching space.
7. The system of any one of claims 1 to 6, wherein the image blocking module, when image-blocking the first view image and the second view image:
the image partitioning module divides the first viewpoint image and the second viewpoint image into a plurality of image blocks;
merging image blocks according to colors: merging the image blocks with the pixel points less than the preset value with the image blocks with the closest colors in the adjacent image blocks; and/or when the colors of two adjacent image blocks are close and the sum of the pixel points of the two image blocks is smaller than a preset value, combining the two image blocks;
merging the image blocks according to the parallax: merging the image blocks with the reliable points with the quantity less than the preset value with the image blocks with the closest colors in the adjacent image blocks, wherein the reliable points are obtained by screening according to the initial parallax of each pixel point in the original image; and/or judging whether the parallax change of two adjacent image blocks is smooth or not, and if so, merging the two image blocks.
8. The system of claim 7, wherein when the image blocking module divides the first view image and the second view image into the image blocks: the image blocking module divides an image into image blocks based on superpixel color blocking.
9. The system of claim 7, wherein the image blocking module, when judging whether the disparity change of two adjacent image blocks is smooth:
finds the boundary-adjacent point pairs P_S(i), P_Sk(i) of the current image block S and an adjacent image block Sk, where P_S(i) and P_Sk(i) form the i-th adjacent point pair of block S and block Sk;
takes P_S(i) as the center of an a×b rectangular search box and calculates the mean disparity V_S(i) of the reliable points in the box belonging to block S; takes P_Sk(i) as the center of an a×b rectangular search box and calculates the mean disparity V_Sk(i) of the reliable points in the box belonging to block Sk, where a and b are preset pixel widths;
and, when max over i ∈ W_{S,Sk} of |V_S(i) - V_Sk(i)| < j, determines that the disparity change of the current image block S and the adjacent image block Sk is smooth, where W_{S,Sk} is the set of indices of all boundary-adjacent point pairs of block S and block Sk, and j is a preset value.
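A hedged sketch of the claim-9 boundary test follows. The window sizes a, b and the threshold j are parameters the claim leaves open; treating a as the horizontal width and b as the vertical width, restricting to horizontal adjacencies, and the function names are all illustrative assumptions.

```python
import numpy as np

def disparity_change_is_smooth(labels, disp, reliable, s, sk, a=7, b=7, j=2.0):
    """Claim-9 style smoothness test between adjacent blocks s and sk.

    labels: (H, W) block ids; disp / reliable: per-pixel disparity and flags.
    For each boundary-adjacent pair (P_S(i), P_Sk(i)), compare the mean
    reliable-point disparity of each block inside an a-by-b box centered on
    its point; smooth iff the largest difference stays below j.
    """
    H, W = labels.shape
    ha, hb = a // 2, b // 2

    def box_mean(y, x, block):
        ys = slice(max(0, y - hb), min(H, y + hb + 1))
        xs = slice(max(0, x - ha), min(W, x + ha + 1))
        sel = (labels[ys, xs] == block) & reliable[ys, xs]
        return disp[ys, xs][sel].mean() if sel.any() else None

    worst, found = 0.0, False
    for y in range(H):                        # horizontal adjacencies only
        for x in range(W - 1):
            l0, l1 = labels[y, x], labels[y, x + 1]
            if {l0, l1} != {s, sk}:
                continue
            x_s, x_sk = (x, x + 1) if l0 == s else (x + 1, x)
            v_s, v_sk = box_mean(y, x_s, s), box_mean(y, x_sk, sk)
            if v_s is None or v_sk is None:
                continue
            found = True
            worst = max(worst, abs(v_s - v_sk))
    return found and worst < j
```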
10. A global disparity estimation method, comprising:
reading in a first viewpoint image and a second viewpoint image, wherein the first viewpoint image is an image of a target acquired from a first viewpoint, and the second viewpoint image is an image of the target acquired from a second viewpoint;
selecting sampling points on the first viewpoint image according to a preset rule;
sequentially selecting pixel points on the first viewpoint image as the current pixel point; taking the current pixel point as the origin, searching pixel by pixel along the positive and negative directions of a first axis until a point which does not meet the preset constraint conditions is reached, and taking all searched points which meet the constraint conditions as first matching points; taking each first matching point in turn as the origin, searching pixel by pixel along the positive and negative directions of a second axis until a point which does not meet the preset constraint conditions is reached, and taking all searched points which meet the constraint conditions as second matching points; and taking the first matching points and the second matching points as the first matching space of the current pixel point;
taking the current pixel point as the origin, searching pixel by pixel along the positive and negative directions of the second axis until a point which does not meet the preset constraint conditions is reached, and taking all searched points which meet the constraint conditions as third matching points; taking each third matching point in turn as the origin, searching pixel by pixel along the positive and negative directions of the first axis until a point which does not meet the preset constraint conditions is reached, and taking all searched points which meet the constraint conditions as fourth matching points; and taking the third matching points and the fourth matching points as the second matching space of the current pixel point;
wherein the constraint conditions comprise a linear constraint condition and a sampling-point-based spatial constraint condition: the linear constraint condition bounds the Euclidean distance in color between the current pixel point and the search point, the spatial constraint condition bounds the Euclidean distance in color between the search point and the sampling points, and the first axis is perpendicular to the second axis (a sketch of this matching-space construction follows this claim);
calculating the sum of the matching costs of all the points in the first matching space, and calculating the sum of the matching costs of all the points in the second matching space;
calculating initial parallax according to the sum of the matching costs of all the points in the first matching space and the sum of the matching costs of all the points in the second matching space, and screening to obtain reliable points;
performing image blocking on the first viewpoint image and the second viewpoint image;
and respectively calculating the final parallax of each pixel point in the first viewpoint image according to the initial parallax of the reliable point based on the image blocks.
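To tie the steps of claim 10 together, here is a hedged sketch of the two-stage matching-space construction and the cost sum. The color thresholds tau_line and tau_samp, the choice of the nearest sampling point for the spatial constraint, and the cost-volume access are illustrative assumptions; the second matching space is the same construction with the axes swapped (first_axis=0).

```python
import numpy as np

def color_dist(img, p, q):
    """Euclidean color distance between pixels p and q, given as (y, x)."""
    return float(np.linalg.norm(img[p].astype(float) - img[q].astype(float)))

def extend(img, origin, axis, ok):
    """Walk from origin along +/- axis (0 = rows, 1 = cols) while ok(point)
    holds; returns all accepted points, origin included."""
    H, W = img.shape[:2]
    pts = [origin]
    for step in (1, -1):
        y, x = origin
        while True:
            y, x = (y + step, x) if axis == 0 else (y, x + step)
            if not (0 <= y < H and 0 <= x < W) or not ok((y, x)):
                break
            pts.append((y, x))
    return pts

def matching_space(img, p, samples, tau_line=20.0, tau_samp=30.0, first_axis=1):
    """First matching space of pixel p: a first-axis arm, then second-axis
    arms grown from every arm point. 'samples' are the sampling points."""
    def ok(q):
        near = min(samples, key=lambda s: abs(s[0] - q[0]) + abs(s[1] - q[1]))
        return (color_dist(img, p, q) < tau_line and     # linear constraint
                color_dist(img, near, q) < tau_samp)     # spatial constraint
    arm = extend(img, p, first_axis, ok)
    space = set(arm)
    for m in arm:
        space.update(extend(img, m, 1 - first_axis, ok))
    return space

def cost_sum(cost_volume, space, d):
    """Sum of matching costs at disparity d over all points in the space."""
    return sum(cost_volume[y, x, d] for (y, x) in space)
```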
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410604055.3A CN104408710B (en) | 2014-10-30 | 2014-10-30 | Global parallax estimation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104408710A CN104408710A (en) | 2015-03-11 |
CN104408710B true CN104408710B (en) | 2017-05-24 |
Family
ID=52646339
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410604055.3A Active CN104408710B (en) | 2014-10-30 | 2014-10-30 | Global parallax estimation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104408710B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016065578A1 (en) * | 2014-10-30 | 2016-05-06 | 北京大学深圳研究生院 | Global disparity estimation method and system |
GB2553782B (en) * | 2016-09-12 | 2021-10-20 | Niantic Inc | Predicting depth from image data using a statistical model |
CN110223338A (en) * | 2019-06-11 | 2019-09-10 | 中科创达(重庆)汽车科技有限公司 | Depth information calculation method, device and electronic equipment based on image zooming-out |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101976455A (en) * | 2010-10-08 | 2011-02-16 | 东南大学 | Color image three-dimensional reconstruction method based on three-dimensional matching |
CN102999913A (en) * | 2012-11-29 | 2013-03-27 | 清华大学深圳研究生院 | Local three-dimensional matching method based on credible point spreading |
CN103996202A (en) * | 2014-06-11 | 2014-08-20 | 北京航空航天大学 | Stereo matching method based on hybrid matching cost and adaptive window |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7720282B2 (en) * | 2005-08-02 | 2010-05-18 | Microsoft Corporation | Stereo image segmentation |
Non-Patent Citations (6)
Title |
---|
A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms; Daniel Scharstein et al.; International Journal of Computer Vision; 2002-12-31; Vol. 47, Issues 1-3; full text *
Cross-Based Local Stereo Matching Using Orthogonal Integral Images; Ke Zhang et al.; IEEE Transactions on Circuits and Systems for Video Technology; 2009-07-31; Vol. 19, No. 7; full text *
Efficient Belief Propagation for Early Vision; Pedro F. Felzenszwalb et al.; International Journal of Computer Vision; 2004-12-31; Vol. 70, No. 1; full text *
On Building an Accurate Stereo Matching System on Graphics Hardware; Xing Mei et al.; IEEE International Conference on Computer Vision Workshops; 2011-12-31; Vol. 21, No. 5; full text *
SLIC Superpixels Compared to State-of-the-Art Superpixel Methods; Radhakrishna Achanta et al.; IEEE Transactions on Pattern Analysis and Machine Intelligence; 2012-11-30; Vol. 34, No. 11; full text *
Stereo matching algorithm based on image region segmentation and belief propagation; Zhang Jinglei et al.; Computer Engineering; 2013-07-31; Vol. 39, No. 7; full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10846913B2 (en) | System and method for infinite synthetic image generation from multi-directional structured image array | |
US10818029B2 (en) | Multi-directional structured image array capture on a 2D graph | |
CN104331890B (en) | A kind of global disparity method of estimation and system | |
EP3869797B1 (en) | Method for depth detection in images captured using array cameras | |
CN102930530B (en) | Stereo matching method of double-viewpoint image | |
CN102665086B (en) | Method for obtaining parallax by using region-based local stereo matching | |
CN103345736B (en) | A kind of virtual viewpoint rendering method | |
US9237330B2 (en) | Forming a stereoscopic video | |
KR100793076B1 (en) | Edge-adaptive stereo/multi-view image matching apparatus and its method | |
KR100776649B1 (en) | A depth information-based Stereo/Multi-view Stereo Image Matching Apparatus and Method | |
KR100745691B1 (en) | Binocular or multi-view stereo matching apparatus and its method using occlusion area detection | |
CN103310421B (en) | The quick stereo matching process right for high-definition image and disparity map acquisition methods | |
KR101869605B1 (en) | Three-Dimensional Space Modeling and Data Lightening Method using the Plane Information | |
CN106408596A (en) | Edge-based local stereo matching method | |
CN106530336A (en) | Stereo matching algorithm based on color information and graph-cut theory | |
CN103679739A (en) | Virtual view generating method based on shielding region detection | |
CN104408710B (en) | Global parallax estimation method and system | |
CN104778673B (en) | A kind of improved gauss hybrid models depth image enhancement method | |
CN107155100A (en) | A kind of solid matching method and device based on image | |
CN107578419B (en) | Stereo image segmentation method based on consistency contour extraction | |
CN102567992B (en) | Image matching method of occluded area | |
CN108062765A (en) | Binocular image processing method, imaging device and electronic equipment | |
CN117726747A (en) | Three-dimensional reconstruction method, device, storage medium and equipment for complementing weak texture scene | |
CN107204013B (en) | Method and device for calculating pixel point parallax value applied to binocular stereo vision | |
WO2016065579A1 (en) | Global disparity estimation method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||