CN107270875B - Visual feature three-dimensional reconstruction method under motion blur effect - Google Patents

Visual feature three-dimensional reconstruction method under motion blur effect

Info

Publication number
CN107270875B
Authority
CN
China
Prior art keywords
image
motion
coding
point
coding mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710321151.0A
Other languages
Chinese (zh)
Other versions
CN107270875A (en)
Inventor
张丽艳
陈明军
周含策
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201710321151.0A priority Critical patent/CN107270875B/en
Publication of CN107270875A publication Critical patent/CN107270875A/en
Application granted granted Critical
Publication of CN107270875B publication Critical patent/CN107270875B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a method for three-dimensional reconstruction of visual features under motion blur, comprising the following steps: calibrating the cameras to be used; arranging coded marker points on the surface of the measured object; acquiring motion-blurred images; identifying the identity of each coded marker point in the images; for each coded marker point, coarsely locating its spatial positions at the different shooting moments from the time-sequence images and fitting these positions with a spline curve, which serves as the initial value of the spatial motion trajectory; constructing a blurred imaging model of the moving coded marker points; and, within each exposure time, solving by optimization, according to the blurred imaging model, for the motion path and attitude. Under motion blur, the method recovers the center position and attitude of each coded marker point during the exposure, yielding both the three-dimensional information of the measured surface and the motion information within the exposure time. The invention extends vision-based measurement to dynamic settings and plays an important role in the analysis, design and reverse engineering of high-speed moving parts.

Description

Visual feature three-dimensional reconstruction method under motion blur effect
Technical field:
The invention relates to a method for three-dimensional reconstruction of visual features under motion blur, and belongs to the field of machine vision measurement.
Background art:
Coded marker points are widely used in machine-vision-based industrial measurement and reverse engineering. Before measurement, coded marker points are arranged on the surface of the measured object. From a set of images of the object taken by one or more calibrated cameras, the spatial positions of the coded marker points can be reconstructed, yielding the three-dimensional parameters of the measured object.
When the measured object moves at high speed, the acquired images are blurred, and the conventional methods for identifying coded marker points fail. Likewise, existing methods for locating the centers of coded marker points in sharp images are unsuitable both for identifying coded marker points in blurred images and for locating their centers there.
Summary of the invention:
To solve the above problems in the prior art, the invention provides a method for three-dimensional reconstruction of visual features under motion blur, which can recover the accurate center position of a coded marker point at any moment during the exposure even when motion blur degrades its image.
The technical scheme adopted by the invention is as follows: a method for three-dimensional reconstruction of visual features under motion blur, comprising the following steps:
Step one: calibrating the cameras to be used;
Step two: arranging coded marker points on the surface of the measured object;
Step three: acquiring motion-blurred images;
Step four: identifying the identity of each coded marker point in the images;
Step five: for the same coded marker point, coarsely locating its spatial positions at different moments from the time-sequence images taken at those moments, and fitting them with a spline curve, which serves as the initial value of the spatial motion trajectory;
Step six: constructing a blurred imaging model of the moving coded marker points;
Step seven: within each exposure time, solving by optimization, according to the blurred imaging model, for the motion path and attitude.
The invention has the following beneficial effects: under motion blur, the invention recovers the center position and attitude of each coded marker point within the exposure time, thereby obtaining the three-dimensional information of the measured surface together with the motion information within the exposure. The invention extends vision-based measurement to dynamic settings, and plays an important role in the analysis, design and reverse engineering of high-speed moving parts.
Description of the drawings:
FIG. 1 is a schematic diagram of the exposure timing.
FIG. 2 is a schematic diagram of a sharp (unblurred) coded point.
FIG. 3 is a schematic diagram of a motion-blurred coded point.
Detailed description of the embodiments:
the invention will be further described with reference to the accompanying drawings.
The invention relates to a method for three-dimensional reconstruction of visual features under motion blur, comprising the following steps:
1. Calibrate a pair of cameras, denoted the left camera C_0 and the right camera C_1; their imaging matrices are denoted P_0 and P_1, respectively. The distortion coefficient vectors of the two camera lenses are denoted d^{(c)}, c = 0, 1.
2. Select a threshold T_p for epipolar-constraint checking (see the sketch below).
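The patent does not prescribe any particular calibration tool. As a minimal sketch, assuming OpenCV and chessboard correspondences (the function names and the value of T_p below are illustrative assumptions, not part of the original text), steps 1-2 could look like:

```python
# Hypothetical sketch of steps 1-2 using OpenCV stereo calibration.
import cv2
import numpy as np

def calibrate_stereo_pair(obj_pts, img_pts0, img_pts1, image_size):
    """obj_pts: list of (M,3) board coordinates; img_pts0/1: matching corner lists."""
    # Calibrate each camera individually first.
    _, K0, d0, _, _ = cv2.calibrateCamera(obj_pts, img_pts0, image_size, None, None)
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts1, image_size, None, None)
    # Refine the relative pose (R, t) of camera C_1 with respect to C_0.
    _, K0, d0, K1, d1, R, t, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts0, img_pts1, K0, d0, K1, d1, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    # Imaging matrices P_0, P_1 in the coordinate frame of the left camera.
    P0 = K0 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K1 @ np.hstack([R, t])
    return P0, P1, d0, d1

T_p = 1.5  # epipolar-distance threshold in pixels (illustrative value)
```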
3. Select the identity numbers of the coded marker points to be used. An identity number is a natural number from 1 to N_0, where N_0 is the total number of marker points in the full code set. The selected set of coded marker points is denoted ID_0 = {id_1, id_2, ..., id_N}, where N is the total number of selected coded marker points.
4. For each id_n in ID_0, n = 1, 2, ..., N, prepare an image M_n of the coded point. All images have the same pixel size; the number of pixels along both the width and the height is denoted z.
5. From M_n, manufacture an actual coded-marker-point sticker with side length l.
6. Paste the actual coded marker points onto the surface of the measured object. A coded marker point with the same identity number appears at most once in one measurement.
7. Acquire a group of motion-blurred images, i.e., pairs of images captured by the pair of cameras at multiple moments. Because of the motion blur effect, the coded marker points in each image are blurred to different degrees. Denote by I_k^{(c)} the k-th image captured by the left (c = 0) and right (c = 1) cameras, k = 1, 2, ..., K, where K is the total number of shots. Each exposure lasts ΔT, and the k-th exposure starts at time T_k; the exposures of adjacent shots do not overlap, and the interval from the end of one exposure to the start of the next is constant, denoted δT.
8. Using the lens distortion coefficient vectors d^{(c)}, correct the lens distortion of each I_k^{(c)}; the corrected images are denoted Ĩ_k^{(c)}.
9. Segment each image Ĩ_k^{(c)} so that each patch obtained after segmentation contains exactly the complete blurred image of one coded marker point (see the sketch below). The number of patches in Ĩ_k^{(c)} is denoted S_k^{(c)}, and the patches themselves are denoted B_{k,s}^{(c)}, where c = 0, 1 corresponds to the left and right cameras, k = 1, 2, ..., K corresponds to the shooting order, and s = 1, ..., S_k^{(c)} indexes the patches of Ĩ_k^{(c)}. The center of B_{k,s}^{(c)} has pixel coordinates c_{k,s}^{(c)} = (u_{k,s}^{(c)}, v_{k,s}^{(c)})^T in its parent image.
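As a minimal sketch of steps 8-9, assuming dark coded points on a lighter background and OpenCV connected-component analysis (the thresholding strategy, area filter and margin are assumptions, not prescribed by the patent):

```python
# Hypothetical patch extraction: undistort, then cut one patch per blurred point.
import cv2
import numpy as np

def extract_patches(image, K, dist, margin=8):
    undistorted = cv2.undistort(image, K, dist)        # correct with d^(c)
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    patches, centers = [], []
    for i in range(1, n):                              # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 50:                                  # reject speckle noise
            continue
        x0, y0 = max(x - margin, 0), max(y - margin, 0)
        patches.append(undistorted[y0:y + h + margin, x0:x + w + margin])
        centers.append(centroids[i])                   # center c_{k,s}^{(c)}
    return patches, centers
```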
10. To identify the identity id of each blurred coded marker point in the images, the patches B_{k,s}^{(c)} must first be preprocessed. Identification uses a deep convolutional network, MBCNet, as follows: (1) generate by computer simulation a variety of motion-blurred images of the coded points; (2) construct a deep convolutional neural network; (3) train the network on the simulated images; (4) apply the trained network to the actually captured motion-blurred images of the coded marker points to obtain their identities. When the network is constructed, the width and height of its input layer must be specified; w denotes this width and height, in pixels (a hedged sketch of such a network follows).
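The text does not disclose the architecture of MBCNet. The following is a sketch of a patch classifier in its spirit, in PyTorch; the layer sizes, the input width w = 64 and the number of identities are illustrative assumptions:

```python
# Hypothetical classifier for blurred coded-point identities (not the actual MBCNet).
import torch
import torch.nn as nn

class MBCNetSketch(nn.Module):
    def __init__(self, w=64, num_ids=100):           # w and num_ids assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * (w // 8) ** 2, num_ids)

    def forward(self, x):                             # x: (batch, 1, w, w) patches
        f = self.features(x)
        return self.classifier(f.flatten(1))          # logits over identity ids

# Training would minimize cross-entropy on simulated motion-blurred patches:
# loss = nn.CrossEntropyLoss()(model(batch), labels)
```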
11. Preprocess the size of each patch. Let a patch be w_H pixels wide and w_V pixels high. (1) If w_H = w_V = w, no processing is required. (2) Otherwise, scale the patch so that max{w_H, w_V} = w, and then symmetrically add blank areas of the same gray level as the background on the top and bottom, or on the left and right, of the patch so that the resulting image is w × w pixels. The preprocessed patch is denoted B̃_{k,s}^{(c)} (see the sketch below).
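A minimal sketch of this size preprocessing, assuming grayscale patches and OpenCV; estimating the background gray level from the border pixels is an assumption:

```python
# Scale the longer side to w, then pad symmetrically with the background gray.
import cv2
import numpy as np

def preprocess_patch(patch, w):
    hV, hH = patch.shape[:2]                       # height w_V, width w_H
    if (hH, hV) == (w, w):
        return patch                               # case (1): already w x w
    scale = w / max(hH, hV)                        # case (2): longer side -> w
    resized = cv2.resize(patch, (max(1, round(hH * scale)),
                                 max(1, round(hV * scale))))
    bg = int(np.median(np.concatenate([resized[0, :], resized[-1, :]])))
    pad_v, pad_h = w - resized.shape[0], w - resized.shape[1]
    return cv2.copyMakeBorder(resized,
                              pad_v // 2, pad_v - pad_v // 2,
                              pad_h // 2, pad_h - pad_h // 2,
                              cv2.BORDER_CONSTANT, value=bg)
```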
12. For each B̃_{k,s}^{(c)}, identify, using the method of step 10, the identity of the blurred coded marker point it contains; the result is denoted id_{k,s}^{(c)}. Let ID denote the set of all identified coded marker point identities.
13. Screen the identified coded marker points as follows (the epipolar check is sketched below):
a) Screen all id ∈ ID: if for some k ∈ {1, 2, ..., K} the point id is not found in both the left and right images of the k-th shot, mark this id as invalid.
b) Screen all id ∈ ID not currently marked invalid: if for some k ∈ {1, 2, ..., K} the corresponding left and right image points fail the epipolar constraint at the threshold T_p, mark this id as invalid.
c) Remove all invalid identities from ID.
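A sketch of the epipolar check in step 13 b), assuming the fundamental matrix is assembled from the calibrated stereo geometry (K_0, K_1, R, t); the variable names are assumptions:

```python
# Point-to-epipolar-line distance test against the threshold T_p.
import numpy as np

def fundamental_from_calibration(K0, K1, R, t):
    tx = np.array([[0, -t[2], t[1]],
                   [t[2], 0, -t[0]],
                   [-t[1], t[0], 0]])               # skew-symmetric [t]_x
    return np.linalg.inv(K1).T @ tx @ R @ np.linalg.inv(K0)

def satisfies_epipolar(p_left, p_right, F, T_p):
    pl = np.array([p_left[0], p_left[1], 1.0])
    pr = np.array([p_right[0], p_right[1], 1.0])
    line = F @ pl                                   # epipolar line in right image
    dist = abs(pr @ line) / np.hypot(line[0], line[1])
    return dist < T_p
```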
14. For each coded marker point identity id ∈ ID, compute initial values of the fitted start and end points a_{id,k}, b_{id,k}, as follows (a code sketch follows this list):
a) For each k ∈ {1, 2, ..., K} and each c ∈ {0, 1}, there is some s such that id_{k,s}^{(c)} = id. From the patch centers c_{k,s}^{(0)}, c_{k,s}^{(1)} and the two camera matrices P_0, P_1, reconstruct the initial spatial position M_{id,k} of the coded marker point id at the k-th moment; its three-dimensional coordinates are (x_{id,k}, y_{id,k}, z_{id,k})^T.
b) Interpolate the K points M_{id,k} to generate a B-spline curve SP_id of order K-1. SP_id passes through each M_{id,k}.
c) Compute the arc length of SP_id, denoted σ_id, and reparameterize SP_id by approximate arc length. The reparameterized curve is denoted V_id(t), t ∈ [0, σ_id]; then V_id(0) = M_{id,1} and V_id(σ_id) = M_{id,K}.
d) On SP_id, the parameter corresponding to each M_{id,k} is t_{id,k}, i.e., M_{id,k} = V_id(t_{id,k}), k = 1, 2, ..., K.
e) Calculate the half-window size h (the span of the curve parameter corresponding to half of one exposure).
f) For k = 1, calculate a_{id,1} = V_id(0) and b_{id,1} = V_id(t_{id,1} + h).
g) For k = 2, 3, ..., K-1, calculate a_{id,k} = V_id(t_{id,k} - h) and b_{id,k} = V_id(t_{id,k} + h).
h) For k = K, calculate a_{id,K} = V_id(t_{id,K} - h) and b_{id,K} = V_id(σ_id).
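A sketch of step 14 under stated assumptions: SciPy's interpolating spline is used in place of the order-(K-1) B-spline of the text, and the half-window size h is derived here from the exposure timing ΔT and δT, a formula this text does not reproduce and which is therefore an assumption:

```python
# Triangulate the K patch centers, fit a spline, reparameterize by arc length,
# and read off the window endpoints a_{id,k}, b_{id,k}.
import numpy as np
import cv2
from scipy.interpolate import splprep, splev

def fit_trajectory(P0, P1, centers0, centers1, dT, delta_t):
    """centers0/1: (K,2) arrays of patch centers c_{k,s}^{(0)}, c_{k,s}^{(1)}."""
    X = cv2.triangulatePoints(P0, P1, centers0.T.astype(float),
                              centers1.T.astype(float))
    M = (X[:3] / X[3]).T                              # (K,3) positions M_{id,k}
    tck, u = splprep(M.T, s=0, k=min(3, len(M) - 1))  # interpolating spline
    # Approximate arc-length parameterization by dense sampling.
    dense = np.linspace(0, 1, 2000)
    pts = np.array(splev(dense, tck)).T
    arc = np.concatenate([[0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
    sigma = arc[-1]                                   # total arc length sigma_id
    t_k = np.interp(u, dense, arc)                    # parameters t_{id,k}
    h = (sigma / (len(M) - 1)) * dT / (2 * (dT + delta_t))  # assumed formula for h
    def V(t):                                         # curve point at arc length t
        return np.array(splev(np.interp(np.clip(t, 0, sigma), arc, dense), tck))
    a = [V(max(t - h, 0)) for t in t_k]               # start points a_{id,k}
    b = [V(min(t + h, sigma)) for t in t_k]           # end points b_{id,k}
    return a, b
```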
15. Construct a static virtual imaging model of a coded point in space, as follows (see the sketch after this list):
a) Place the camera C and the coded point E in the same three-dimensional coordinate system.
b) Let the imaging matrix of camera C be P, with no distortion.
c) The image of the coded point id is denoted M; it is a binary image with gray values 0 or 1 whose width and height are both l pixels. In the plane of the image, the homogeneous coordinates of its four vertices, taken counterclockwise, are m_1, m_2, m_3, m_4.
d) The coded point E to be imaged is a square plane with side length l; the pattern M is pasted onto one side, filling the square without distortion.
e) M(u) is the image M expressed in functional form, where the parameter u is a homogeneous coordinate (u, v, s)^T corresponding to the non-homogeneous coordinates (u/s, v/s)^T. M(u) returns the gray value of the pixel at that position of the image. If the coordinates are non-integer, the gray value is generated by interpolation; if the coordinates fall outside the image, the function returns gray value 0.
f) The position of E in space is determined entirely by the coordinates of its four vertices. When the image of the coded point faces the viewer, the four vertices, counterclockwise, are Q_1(v,x), Q_2(v,x), Q_3(v,x), Q_4(v,x), where the parameter vectors v = (α, β, γ)^T and x = (x, y, z)^T determine the attitude and the position, respectively.
g) Each Q_i(v,x) is obtained from the corresponding m_i by a coordinate transformation: α, β, γ determine a rotation matrix R(v); x, y, z determine a translation matrix T(x); and for i = 1, 2, 3, 4, Q_i(v,x) = T(x) R(v) m̂_i, where m̂_i is m_i embedded as a three-dimensional homogeneous point in the plane of E.
h) The homogeneous coordinates of the image points of the four vertices of E in the image plane of camera C are z_i = P Q_i(v,x).
i) Construct the homography matrix H such that, in the sense of homogeneous coordinates, H z_i = m_i, i = 1, 2, 3, 4.
j) In this position and attitude, the image of the coded point in camera C is I_{M,v,x,P}, in functional form I_{M,v,x,P}(u) = M(Hu), where u = (u, v, 1)^T are the homogeneous coordinates of the pixel position (u, v)^T.
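A sketch of this static virtual imaging model, assuming the four vertices are the corners of an l × l square in the z = 0 plane and that the rotation is composed as R = R_z(γ) R_y(β) R_x(α); both conventions are assumptions consistent with, but not fixed by, the text:

```python
# Project the square's vertices, build H mapping camera pixels to pattern
# pixels, and evaluate I(u) = M(Hu) by an inverse-map perspective warp.
import numpy as np
import cv2

def render_static(M_img, v, x, P, out_shape):
    l = M_img.shape[0]                              # pattern is l x l pixels
    alpha, beta, gamma = v
    Rx, _ = cv2.Rodrigues(np.array([alpha, 0.0, 0.0]))
    Ry, _ = cv2.Rodrigues(np.array([0.0, beta, 0.0]))
    Rz, _ = cv2.Rodrigues(np.array([0.0, 0.0, gamma]))
    R = Rz @ Ry @ Rx                                # rotation from (alpha, beta, gamma)
    corners = np.array([[0, 0, 0], [l, 0, 0], [l, l, 0], [0, l, 0]], float)
    Q = (R @ corners.T).T + np.asarray(x, float)    # vertices Q_i(v, x)
    z = (P @ np.hstack([Q, np.ones((4, 1))]).T).T   # image points z_i = P Q_i
    z = z[:, :2] / z[:, 2:3]
    m = np.array([[0, 0], [l, 0], [l, l], [0, l]], np.float32)
    H = cv2.getPerspectiveTransform(z.astype(np.float32), m)   # H z_i = m_i
    # With WARP_INVERSE_MAP, each output pixel u samples M at Hu, i.e. I(u) = M(Hu).
    return cv2.warpPerspective(M_img, H, out_shape[::-1], flags=cv2.WARP_INVERSE_MAP)
```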
16. Using the I_{M,v,x,P} constructed in the previous step, construct the blurred imaging model of a moving coded point, as follows (see the sketch after this list):
a) Choose the discretization granularity N as a natural number, generally not less than 100 and not more than 1000; the larger N is, the closer the synthesized blur is to the real effect.
b) On the premise of short exposure, the attitude angles v = (α, β, γ)^T of the coded point are assumed to remain unchanged during the motion.
c) On the premise of short exposure, the motion is modeled as uniform straight-line motion from the start point x_1 = (x_1, y_1, z_1)^T to the end point x_2 = (x_2, y_2, z_2)^T.
d) The result of blurred imaging is the average of N static images uniformly sampled along the segment: I^{blur}_{M,v,x_1,x_2,P}(u) = (1/N) Σ_{j=1}^{N} I_{M,v,x_j,P}(u), where x_j = x_1 + ((j-1)/(N-1)) (x_2 - x_1).
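Step 16 d) then reduces to averaging N static renderings along the segment; a direct sketch follows (it reuses render_static from the step-15 sketch; the endpoint sampling convention is an assumption):

```python
# Time-averaged (blurred) image over the straight, constant-speed path x1 -> x2.
import numpy as np
# assumes render_static from the step-15 sketch is in scope

def render_blurred(M_img, v, x1, x2, P, out_shape, N=200):
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    acc = np.zeros(out_shape, float)
    for j in range(N):                        # uniform samples on the segment
        xj = x1 + (j / (N - 1)) * (x2 - x1)   # constant-speed straight motion
        acc += render_static(M_img, v, xj, P, out_shape)
    return acc / N
```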
17. For each coded marker point identity id ∈ ID and each exposure index k = 1, 2, ..., K, solve for the motion path and attitude by optimization, as follows (see the sketch after this list):
a) Denote by M the pattern image corresponding to the coded point id; by Ĩ_k^{(c)}, c = 0, 1, the left and right images of the k-th shot; and by B_{k,s}^{(c)} the image patches containing the motion-blurred image of the coded point, where c = 0, 1 corresponds to the left and right cameras and s is the index of the patch within its parent image.
b) Select the optimization variables θ_1, θ_2, θ_3, λ_1, λ_2, λ_3, μ_1, μ_2, μ_3, ω_1, ω_2.
c) The initial values of θ_1, θ_2, θ_3 are chosen at random within 0 to 2π.
d) The initial value of (λ_1, λ_2, λ_3) is the start point a_{id,k} computed in step 14.
e) The initial value of (μ_1, μ_2, μ_3) is the end point b_{id,k} computed in step 14.
f) ω_1 is the image gain; its initial value is 1.
g) ω_2 is the image bias; its initial value is 0.
h) Put v = (θ_1, θ_2, θ_3)^T, x_1 = (λ_1, λ_2, λ_3)^T, x_2 = (μ_1, μ_2, μ_3)^T.
i) Define the masking function χ_{k,s}^{(c)}(u), where c, k, s have the same meanings as in B_{k,s}^{(c)} and the parameter u = (u, v, s)^T is a homogeneous pixel coordinate: χ_{k,s}^{(c)}(u) returns 1 when the pixel coordinate falls within the pixel range occupied by the patch B_{k,s}^{(c)} in its parent image, and 0 otherwise.
j) Compute the optimization objective f = Σ_{c=0}^{1} ‖ χ_{k,s}^{(c)} · ( ω_1 · I^{blur}_{M,v,x_1,x_2,P_c} + ω_2 - Ĩ_k^{(c)} ) ‖², where ‖·‖² denotes the squared norm; when W is an image, ‖W‖² is the sum of the squares of the gray values of all its pixels.
k) Optimize the parameters θ_1, θ_2, θ_3, λ_1, λ_2, λ_3, μ_1, μ_2, μ_3, ω_1, ω_2 so that f attains its minimum.
l) Assign θ_1, θ_2, θ_3 new random values each time and repeat steps b) to k), taking the smallest optimized value of f as the final result; the number of repetitions is not less than 16.
m) After optimization, within the exposure time the motion trajectory of the coded point is the straight-line segment from (λ_1, λ_2, λ_3) to (μ_1, μ_2, μ_3), and its attitude parameters are (θ_1, θ_2, θ_3).
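A sketch of this optimization with random restarts; the patent fixes neither the optimizer nor the restart strategy beyond at least 16 repetitions, so the Nelder-Mead choice and the rectangular mask ROI below are assumptions (render_blurred is the sketch from step 16):

```python
# Minimize the photometric residual over attitude, path endpoints, gain and bias.
import numpy as np
from scipy.optimize import minimize
# assumes render_blurred from the step-16 sketch is in scope

def solve_exposure(M_img, rois, images, P_list, a_init, b_init,
                   out_shape, restarts=16, rng=np.random.default_rng(0)):
    def objective(p):
        theta, lam, mu, w1, w2 = p[0:3], p[3:6], p[6:9], p[9], p[10]
        f = 0.0
        for c in range(2):                    # sum over left and right cameras
            synth = render_blurred(M_img, theta, lam, mu, P_list[c], out_shape)
            y0, y1, x0, x1 = rois[c]          # mask chi: pixel range of patch B
            resid = w1 * synth[y0:y1, x0:x1] + w2 - images[c][y0:y1, x0:x1]
            f += float(np.sum(resid ** 2))    # squared norm over the patch
        return f
    best = None
    for _ in range(restarts):                 # random restarts on the attitude
        p0 = np.concatenate([rng.uniform(0, 2 * np.pi, 3),
                             a_init, b_init, [1.0], [0.0]])
        res = minimize(objective, p0, method='Nelder-Mead',
                       options={'maxiter': 2000})
        if best is None or res.fun < best.fun:
            best = res
    return best                               # best.x = (theta, lambda, mu, w1, w2)
```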
The foregoing is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make modifications without departing from the principle of the invention, and such modifications should also be considered within the protection scope of the invention.

Claims (9)

1. A method for three-dimensional reconstruction of visual features under motion blur, characterized by comprising the following steps:
Step one: calibrating the cameras to be used;
Step two: arranging coded marker points on the surface of the measured object;
Step three: acquiring motion-blurred images;
Step four: identifying the identity of each coded marker point in the images;
Step five: for the same coded marker point, coarsely locating its spatial positions at different moments from the time-sequence images taken at those moments, and fitting them with a spline curve, which serves as the initial value of the spatial motion trajectory of the coded marker point;
Step six: constructing a blurred imaging model of the moving coded marker points;
Step seven: within each exposure time, solving by optimization, according to the blurred imaging model, for the motion path and attitude of the coded marker point within that exposure;
in the third step:
(a) acquiring a motion-blurred image group, namely pairs of images captured by a pair of cameras at multiple moments, wherein, owing to the motion blur effect, the coded marker points in each image are imaged with different degrees of blur; denoting by I_k^{(c)} the k-th image captured by the left (c = 0) and right (c = 1) cameras, k = 1, 2, ..., K, where K is the total number of shots; each exposure lasting ΔT and the k-th exposure starting at time T_k, the exposures of adjacent shots not overlapping, and the interval from the end of one exposure to the start of the next being constant, denoted δT;
(b) according to the distortion coefficient vectors d^{(c)} of the camera lenses, correcting the lens distortion of each I_k^{(c)}, the corrected images being denoted Ĩ_k^{(c)};
(c) segmenting each image Ĩ_k^{(c)} so that each patch obtained after segmentation contains exactly the complete blurred image of one coded marker point, the number of patches in Ĩ_k^{(c)} being denoted S_k^{(c)} and the patches being denoted B_{k,s}^{(c)}, where c = 0, 1 corresponds to the left and right cameras, k = 1, 2, ..., K corresponds to the shooting order, and s indexes the patches of Ĩ_k^{(c)}; the center of B_{k,s}^{(c)} having pixel coordinates c_{k,s}^{(c)} = (u_{k,s}^{(c)}, v_{k,s}^{(c)})^T in its parent image.
2. The method for three-dimensional reconstruction of visual features under motion blur according to claim 1, characterized in that: in step one, a pair of cameras is calibrated, denoted the left camera C_0 and the right camera C_1, and their imaging matrices are denoted P_0 and P_1, respectively; the distortion coefficient vectors of the two camera lenses are denoted d^{(c)}, c = 0, 1; a threshold T_p is selected for epipolar-constraint checking.
3. The method for three-dimensional reconstruction of visual features under the effect of motion blur according to claim 2, characterized in that: in the second step:
(a) selecting the identity numbers of the coded marker points to be used, each identity number being a natural number from 1 to N_0, where N_0 is the total number of coded marker points in the full code set; the selected set of coded marker points being denoted ID_0 = {id_1, id_2, ..., id_N}, where N is the total number of selected coded marker points;
(b) for each id_n, n = 1, 2, ..., N, preparing an image M_n of the coded marker point, all images having the same pixel size, the number of pixels along both the width and the height being denoted z;
(c) according to M_n, manufacturing an actual coded marker point with side length l;
(d) pasting the actual coded marker points onto the surface of the measured object.
4. A method for three-dimensional reconstruction of visual features under the effect of motion blur according to claim 3, characterized in that: in the fourth step:
(a) preprocessing the image patches B_{k,s}^{(c)}, a deep convolutional network MBCNet being used to identify the identity of a motion-blurred coded marker point; when the network is constructed, the width and height of the input layer must be specified, w denoting the width and height, in pixels, of the images required by the input layer;
(b) preprocessing the size of each patch, a patch being w_H pixels wide and w_V pixels high: (1) if w_H = w_V = w, no processing is required; (2) otherwise, the patch is scaled so that max{w_H, w_V} = w, and blank areas of the same gray level as the background are then added symmetrically on the top and bottom, or on the left and right, so that the image is w × w pixels, the preprocessed patch being denoted B̃_{k,s}^{(c)};
(c) for each B̃_{k,s}^{(c)}, identifying the identity of the blurred coded marker point it contains, denoted id_{k,s}^{(c)}, with ID representing the set of all identified coded marker points.
5. The method for three-dimensional reconstruction of visual features under the effect of motion blur according to claim 4, characterized in that: in the fifth step:
(a) screening the identified coded marker points;
(b) for each coded marker point identity id ∈ ID, calculating initial values of the fitted start and end points a_{id,k}, b_{id,k}.
6. The method for three-dimensional reconstruction of visual features under motion blur according to claim 5, characterized in that: the identified coded marker points are screened as follows:
(a) screening all id ∈ ID: if for some k ∈ {1, 2, ..., K} the point id is not found in both the left and right images of the k-th shot, marking this id as invalid;
(b) screening all id ∈ ID not currently marked invalid: if for some k ∈ {1, 2, ..., K} the corresponding left and right image points fail the epipolar constraint at the threshold T_p, marking this id as invalid;
(c) removing all invalid identities from ID.
7. The method for three-dimensional reconstruction of visual features under motion blur according to claim 6, characterized in that: for each coded marker point identity id ∈ ID, the initial values of the fitted start and end points a_{id,k}, b_{id,k} are calculated as follows:
(a) for each k ∈ {1, 2, ..., K} and each c ∈ {0, 1}, there is some s such that id_{k,s}^{(c)} = id; from the patch centers c_{k,s}^{(0)}, c_{k,s}^{(1)} and the two camera matrices P_0, P_1, the initial spatial position M_{id,k} of the coded marker point id at the k-th moment is reconstructed, its three-dimensional coordinates being (x_{id,k}, y_{id,k}, z_{id,k})^T;
(b) a B-spline curve SP_id of order K-1 is generated by interpolating the K points M_{id,k}; SP_id passes through each M_{id,k};
(c) the arc length of SP_id, denoted σ_id, is computed, and SP_id is reparameterized by approximate arc length; the reparameterized curve is denoted V_id(t), t ∈ [0, σ_id]; then V_id(0) = M_{id,1} and V_id(σ_id) = M_{id,K};
(d) on SP_id, the parameter corresponding to each M_{id,k} is t_{id,k}, i.e., M_{id,k} = V_id(t_{id,k}), k = 1, 2, ..., K;
(e) the half-window size h is calculated;
(f) for k = 1, a_{id,1} = V_id(0) and b_{id,1} = V_id(t_{id,1} + h) are calculated;
(g) for k = 2, 3, ..., K-1, a_{id,k} = V_id(t_{id,k} - h) and b_{id,k} = V_id(t_{id,k} + h) are calculated;
(h) for k = K, a_{id,K} = V_id(t_{id,K} - h) and b_{id,K} = V_id(σ_id) are calculated.
8. The method for three-dimensional reconstruction of visual features under motion blur according to claim 7, characterized in that: in step six, the blurred imaging model is constructed as follows:
(a) the camera C and the coded marker point E are placed in the same three-dimensional coordinate system;
(b) the imaging matrix of camera C is P, with no distortion;
(c) the image of the coded marker point id is denoted M, a binary image with gray values 0 or 1 whose width and height are both l; in its own plane, the homogeneous coordinates of its four vertices, taken counterclockwise, are m_1, m_2, m_3, m_4;
(d) the coded point E to be imaged is a square plane with side length l, the pattern M being pasted onto one side and filling the square without distortion;
(e) M(u) is the image M expressed in functional form, where the parameter u is a homogeneous coordinate (u, v, s)^T corresponding to the non-homogeneous coordinates (u/s, v/s)^T; M(u) denotes the gray value of the pixel at that position of the image;
(f) the position of E is determined entirely by the coordinates of its four vertices; when the image of the coded point faces the viewer, the four vertices, counterclockwise, are Q_1(v,x), Q_2(v,x), Q_3(v,x), Q_4(v,x), where the parameter vectors v = (α, β, γ)^T and x = (x, y, z)^T determine the attitude and the position, respectively;
(g) each Q_i(v,x) is obtained from the corresponding m_i by a coordinate transformation: α, β, γ determine a rotation matrix R(v), x, y, z determine a translation matrix T(x), and for i = 1, 2, 3, 4, Q_i(v,x) = T(x) R(v) m̂_i, where m̂_i is m_i embedded as a three-dimensional homogeneous point in the plane of E;
(h) the homogeneous coordinates of the image points of the four vertices of E in the image plane of camera C are z_i = P Q_i(v,x);
(i) the homography matrix H is constructed such that, in the sense of homogeneous coordinates, H z_i = m_i, i = 1, 2, 3, 4;
(j) in this position and attitude, the image of the coded point in camera C is I_{M,v,x,P}, in functional form I_{M,v,x,P}(u) = M(Hu), where u = (u, v, 1)^T are the homogeneous coordinates of the pixel position (u, v)^T.
9. The method for three-dimensional reconstruction of visual features under motion blur according to claim 8, characterized in that: in step seven:
(a) using the I_{M,v,x,P} constructed in the above step, the blurred imaging model of the moving coded point is constructed;
(b) for each coded marker point identity id ∈ ID and each exposure index k = 1, 2, ..., K, the motion path and attitude are solved by optimization.
CN201710321151.0A 2017-05-09 2017-05-09 Visual feature three-dimensional reconstruction method under motion blur effect Active CN107270875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710321151.0A CN107270875B (en) 2017-05-09 2017-05-09 Visual feature three-dimensional reconstruction method under motion blur effect

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710321151.0A CN107270875B (en) 2017-05-09 2017-05-09 Visual feature three-dimensional reconstruction method under motion blur effect

Publications (2)

Publication Number Publication Date
CN107270875A CN107270875A (en) 2017-10-20
CN107270875B true CN107270875B (en) 2020-04-24

Family

ID=60073863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710321151.0A Active CN107270875B (en) 2017-05-09 2017-05-09 Visual feature three-dimensional reconstruction method under motion blur effect

Country Status (1)

Country Link
CN (1) CN107270875B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299172B (en) 2021-12-31 2022-07-08 广东工业大学 Planar coding target for visual system and real-time pose measurement method thereof
CN114757993B (en) * 2022-06-13 2022-09-09 中国科学院力学研究所 Motion and parameter identification method and system for schlieren image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8319798B2 (en) * 2008-12-17 2012-11-27 Disney Enterprises, Inc. System and method providing motion blur to rotating objects

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995004331A1 (en) * 1993-08-03 1995-02-09 Apple Computer, Inc. Three-dimensional image synthesis using view interpolation
US7023922B1 (en) * 2000-06-21 2006-04-04 Microsoft Corporation Video coding system and method using 3-D discrete wavelet transform and entropy coding with motion information
CN101750029A (en) * 2008-12-10 2010-06-23 中国科学院沈阳自动化研究所 Characteristic point three-dimensional reconstruction method based on trifocal tensor
CN106254722A (en) * 2016-07-15 2016-12-21 北京邮电大学 A kind of video super-resolution method for reconstructing and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Three-dimensional reconstruction based on SURF feature registration and deblurring of multi-viewpoint images; Shi Yu et al.; Optics & Optoelectronic Technology; 2016-02-28; p. 28, right column, paragraph 2; p. 29, right column; p. 30, left column; Figs. 1-2 *

Also Published As

Publication number Publication date
CN107270875A (en) 2017-10-20

Similar Documents

Publication Publication Date Title
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
Mueggler et al. Continuous-time trajectory estimation for event-based vision sensors
CN111339870B (en) Human body shape and posture estimation method for object occlusion scene
CN111489394B (en) Object posture estimation model training method, system, device and medium
CN107622257A (en) A kind of neural network training method and three-dimension gesture Attitude estimation method
CN111833237B (en) Image registration method based on convolutional neural network and local homography transformation
CN111768452B (en) Non-contact automatic mapping method based on deep learning
CN111524233A (en) Three-dimensional reconstruction method for dynamic target of static scene
CN108564120B (en) Feature point extraction method based on deep neural network
CN104537705B (en) Mobile platform three dimensional biological molecular display system and method based on augmented reality
CN106780546B (en) The personal identification method of motion blur encoded point based on convolutional neural networks
WO2021219835A1 (en) Pose estimation method and apparatus
CN113947589A (en) Missile-borne image deblurring method based on countermeasure generation network
CN108805987A (en) Combined tracking method and device based on deep learning
CN112734832B (en) Method for measuring real size of on-line object in real time
CN113614735A (en) Dense 6-DoF gesture object detector
CN113159158A (en) License plate correction and reconstruction method and system based on generation countermeasure network
CN107270875B (en) Visual feature three-dimensional reconstruction method under motion blur effect
CN110599588A (en) Particle reconstruction method and device in three-dimensional flow field, electronic device and storage medium
CN118154770A (en) Single tree image three-dimensional reconstruction method and device based on nerve radiation field
CN114862866B (en) Calibration plate detection method and device, computer equipment and storage medium
CN116758149A (en) Bridge structure displacement detection method based on unmanned aerial vehicle system
CN113674360B (en) Line structure light plane calibration method based on covariate
CN115731351A (en) Vehicle explicit three-dimensional reconstruction method under rough camera pose labeling condition
CN108830804A (en) Virtual reality fusion Fuzzy Consistent processing method based on line spread function standard deviation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant