CN107270875B - Visual feature three-dimensional reconstruction method under motion blur effect
- Publication number: CN107270875B (application CN201710321151.0A)
- Authority: CN (China)
- Prior art keywords: image, motion, coding, point, coding mark
- Prior art date: 2017-05-09
- Legal status: Active
Classifications

- G — PHYSICS
- G01 — MEASURING; TESTING
- G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00 — Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying
Abstract
The invention relates to a visual feature three-dimensional reconstruction method under a motion blur effect, comprising the following steps: calibrating the cameras to be used; arranging coding mark points on the surface of the measured object; acquiring motion-blurred images; identifying the identities of the coding mark points in the images; for each coding mark point, coarsely locating its spatial positions at different moments by means of time-series images taken at those moments, and fitting the positions into a spline curve that serves as the initial value of its spatial motion trajectory; constructing a blurred imaging model of the coding mark point's motion; and, within each exposure time, optimally solving for the motion path and attitude according to the blurred imaging model. Under motion blur, the center position and attitude of a coding mark point during the exposure time are recovered, yielding both the three-dimensional information of the measured object's surface and its motion information during the exposure. The invention extends vision-based measurement to dynamic settings and plays an important role in the analysis, design, and reverse engineering of high-speed moving parts.
Description
Technical field:
The invention relates to a visual feature three-dimensional reconstruction method under a motion blur effect, and belongs to the field of machine vision measurement.
Background art:
Coded mark points are widely used in machine-vision-based industrial measurement and reverse engineering. Before measurement, the coding mark points are arranged on the surface of the measured object. From a group of images of the measured object taken by one or more calibrated cameras, the spatial positions of the coding mark points can be reconstructed, and thus the three-dimensional parameters of the measured object obtained.
When the measured object moves at high speed, the acquired images are blurred. In this case, conventional methods for identifying the identities of coding mark points fail. Likewise, existing methods for locating the centers of coding mark points in sharp images are suitable neither for identifying the identities of coding mark points in blurred images nor for locating their centers in blurred images.
Summary of the invention:
To solve the above problems in the prior art, the invention provides a visual feature three-dimensional reconstruction method under a motion blur effect, which can still recover the accurate center position of a coding mark point at any time during the exposure even when motion blur has blurred the image of the coding mark point.
The technical scheme adopted by the invention is as follows: a visual feature three-dimensional reconstruction method under the motion blur effect comprises the following steps:
step one: calibrating the cameras to be used;
step two: arranging coding mark points on the surface of the measured object;
step three: acquiring motion-blurred images;
step four: identifying the identities of the coding mark points in the images;
step five: for each coding mark point, coarsely locating its spatial positions at different moments by means of time-series images taken at those moments, and fitting the positions into a spline curve as the initial value of its spatial motion trajectory;
step six: constructing a blurred imaging model of the coding mark point's motion;
step seven: within each exposure time, optimally solving for the motion path and attitude according to the blurred imaging model.
The invention has the following beneficial effects: under motion blur, the invention recovers the center position and attitude of a coding mark point during the exposure time, thereby obtaining the three-dimensional information of the measured object's surface together with its motion information during the exposure. The invention extends vision-based measurement to dynamic settings and plays an important role in the analysis, design, and reverse engineering of high-speed moving parts.
Description of the drawings:
Fig. 1 is a schematic diagram of the exposure timing.
Fig. 2 is a schematic diagram of a sharp coding point.
Fig. 3 is a schematic diagram of a motion-blurred coding point.
Detailed description of the embodiments:
The invention will be further described with reference to the accompanying drawings.
The visual feature three-dimensional reconstruction method under a motion blur effect according to the invention comprises the following steps:
1. Calibrate a pair of cameras, denoted the left camera C_0 and the right camera C_1; their imaging matrices are denoted P_0 and P_1, and the distortion coefficient vectors of the two camera lenses are denoted d^(c), c = 0, 1.
2. Select a threshold T_p for epipolar constraint detection.
3. Select the identity numbers of the coding mark points to be used. An identity number is a natural number from 1 to N_0, where N_0 is the total number of mark points in the full code set. The set of selected coding mark points is denoted IDs, and N is the total number of selected coding mark points.
4. For each id_n ∈ IDs, n = 1, 2, ..., N, prepare an image M_n of the coding point. All images have the same pixel size; the number of pixels in width and in height is denoted z.
5. According to M_n, manufacture an actual coding-mark-point sticker with side length l.
6. Paste the actual coding mark points onto the surface of the measured object. A coding mark point with a given number appears at most once in one measurement.
7. Acquire a motion-blurred image group, i.e., a pair of images captured by the pair of cameras at each of several moments. Owing to the motion blur effect, the coding mark points in each picture are imaged with different degrees of blur. Let I^(c)_k denote the k-th image captured by the left (c = 0) and right (c = 1) cameras, where K is the total number of shots. Each shot has exposure duration ΔT and start time T_k, k = 1, 2, ..., K; the exposure times of two adjacent shots do not overlap, and the interval from the end of one exposure to the start of the next is the same for all shots, denoted δT.
8. Using the distortion coefficient vectors d^(c) of the camera lenses, correct the lens distortion of each I^(c)_k; the corrected image is denoted I′^(c)_k.
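A minimal sketch of the distortion correction in step 8, assuming the coefficient vectors d^(c) follow OpenCV's (k1, k2, p1, p2, k3) convention; the intrinsic matrix and the file name in the usage comment are hypothetical placeholders, not part of the patent.

```python
import cv2
import numpy as np

def correct_distortion(image, K, d):
    """Undistort one captured frame using intrinsics K and distortion vector d."""
    # cv2.undistort inverts the radial/tangential lens model encoded in d.
    return cv2.undistort(image, K, d)

# Hypothetical usage for the k-th left-camera frame:
# K0 = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=float)
# d0 = np.array([k1, k2, p1, p2, k3], dtype=float)
# frame = cv2.imread("left_frame_k.png", cv2.IMREAD_GRAYSCALE)
# corrected = correct_distortion(frame, K0, d0)
```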
9. Segment each corrected image I′^(c)_k so that each patch obtained after segmentation contains exactly the complete blurred image of one coding mark point. The number of patches contained in I′^(c)_k is denoted S^(c)_k, and the patches are denoted B^(c)_{k,s}, where c = 0, 1 corresponds to the left and right cameras, k = 1, 2, ..., K corresponds to the shooting order, and s indexes the patches of I′^(c)_k. The pixel coordinates of the center of B^(c)_{k,s} in its parent image are denoted (u^(c)_{k,s}, v^(c)_{k,s})^T.
10. Identify the identity id of the blurred coding mark point in each patch using the method in the remark (remark: 1. generate, by computer simulation, motion-blurred images of the various coding points; 2. construct a deep convolutional neural network; 3. train the deep convolutional neural network with the simulated images; 4. use the trained network to identify the actually captured motion-blurred coding mark points and obtain the identity id of the blurred coding mark point in the image). Before identification, the image patches must be preprocessed. The method uses a deep convolutional network, MBCNet, to identify the identity of a motion-blurred coding mark point; when the network is constructed, the width and height of its input layer must be specified, and w denotes the image width and height required by the input layer, in pixels.
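The patent names the network MBCNet but does not disclose its layers, so the sketch below only illustrates the remark's pipeline with an assumed small classifier (the layer sizes and the class count are placeholders): a w-by-w grayscale patch in, one of N_0 identity numbers out.

```python
import torch
import torch.nn as nn

class MBCNetSketch(nn.Module):
    """Assumed stand-in for MBCNet: conv/pool feature extractor + linear classifier."""
    def __init__(self, w=64, n_ids=100):           # w assumed divisible by 4 here
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * (w // 4) * (w // 4), n_ids)

    def forward(self, x):                          # x: (batch, 1, w, w) patches
        return self.classifier(self.features(x).flatten(1))

# Per the remark, such a network would be trained with cross-entropy loss on
# simulated motion-blurred code-point images, then applied to real patches.
```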
11. Perform size preprocessing on each image patch. Let the patch be w_H pixels wide and w_V pixels high. (1) If w_H = w_V = w, no processing is required. (2) Otherwise, scale the patch so that max{w_H, w_V} = w, and then symmetrically add blank areas of the same gray level as the background to the top and bottom, or to the left and right, of the patch so that its width and height are both w pixels. The preprocessed patch is denoted B′^(c)_{k,s}.
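A sketch of step 11's size preprocessing under stated assumptions (grayscale patches, OpenCV/NumPy conventions): scale the longer side to w, then pad symmetrically with the background gray level.

```python
import cv2
import numpy as np

def preprocess_patch(patch, w, background):
    """Scale so max{w_H, w_V} = w, then pad symmetrically to a w-by-w patch."""
    h0, w0 = patch.shape[:2]
    if h0 == w0 == w:
        return patch.copy()                      # case (1): already w-by-w
    scale = w / max(h0, w0)                      # case (2): longer side -> w
    resized = cv2.resize(patch, (max(1, round(w0 * scale)),
                                 max(1, round(h0 * scale))))
    h1, w1 = resized.shape[:2]
    out = np.full((w, w), background, dtype=patch.dtype)
    top, left = (w - h1) // 2, (w - w1) // 2     # symmetric blank margins
    out[top:top + h1, left:left + w1] = resized
    return out
```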
12. For each B′^(c)_{k,s}, identify the identity of the blurred coding mark point it contains using the method in the remark; the result is recorded as id^(c)_{k,s}.
13. Screen the identified coding mark points as follows (an illustrative sketch of the epipolar test follows this list):
a) Screen all id ∈ IDs: if there exists some k ∈ {1, 2, ..., K} for which the id is not identified in both the left and the right image, mark the id invalid.
b) Screen all id ∈ IDs not currently marked invalid: if there exists some k ∈ {1, 2, ..., K} for which the identified points in the left and right images do not satisfy the epipolar constraint within the threshold T_p, mark the id invalid.
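An illustrative sketch of the epipolar test in step 13 b), using the standard construction of the fundamental matrix F from the two calibrated projection matrices P_0 and P_1; a left/right point pair passes when its point-to-epipolar-line distance stays below T_p. The helper names are illustrative only.

```python
import numpy as np

def fundamental_from_projections(P0, P1):
    """Standard F = [e1]x P1 P0^+ construction from two projection matrices."""
    _, _, vt = np.linalg.svd(P0)
    c0 = vt[-1]                                  # camera center of P0 (null vector)
    e1 = P1 @ c0                                 # epipole in the right image
    e1x = np.array([[0.0, -e1[2], e1[1]],
                    [e1[2], 0.0, -e1[0]],
                    [-e1[1], e1[0], 0.0]])       # cross-product matrix [e1]x
    return e1x @ P1 @ np.linalg.pinv(P0)

def passes_epipolar(F, u0, u1, t_p):
    """True if right point u1 lies within t_p pixels of the epipolar line of u0."""
    x0 = np.array([u0[0], u0[1], 1.0])
    x1 = np.array([u1[0], u1[1], 1.0])
    line = F @ x0                                # epipolar line in the right image
    return abs(x1 @ line) / np.hypot(line[0], line[1]) < t_p
```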
14. For each coding mark point identity id not marked invalid, compute initial values for fitting the start and end points as follows (a code sketch follows step d)):
a) For each k ∈ {1, 2, ..., K} and each c ∈ {0, 1}, there is some s such that id^(c)_{k,s} = id. From the center coordinates of the corresponding patches and the two camera matrices P_0 and P_1, reconstruct the initial spatial position M_{id,k} of coding mark point id at the k-th moment; its three-dimensional coordinates are (x_{id,k}, y_{id,k}, z_{id,k})^T.
b) Interpolate the K points M_{id,k} to generate a B-spline curve SP_id of order K − 1. SP_id passes through each M_{id,k}.
c) Compute the arc length of SP_id, denoted σ_id, and reparameterize SP_id by approximate arc length. The reparameterized curve is denoted V_id(t), t ∈ [0, σ_id]; then V_id(0) = M_{id,1} and V_id(σ_id) = M_{id,K}.
d) On SP_id, each M_{id,k} corresponds to a parameter t_{id,k}, i.e., M_{id,k} = V_id(t_{id,k}), k = 1, 2, ..., K.
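A sketch of step 14 under stated assumptions: the positions M_{id,k} are reconstructed by standard linear (DLT) triangulation, and SciPy's cubic interpolating spline stands in for the patent's order-(K−1) B-spline (a practical substitution, assuming K ≥ 4 distinct positions) before approximate arc-length reparameterization.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def triangulate(P0, P1, u0, u1):
    """Linear (DLT) reconstruction of M_{id,k} from one left/right center pair."""
    A = np.vstack([u0[0] * P0[2] - P0[0],
                   u0[1] * P0[2] - P0[1],
                   u1[0] * P1[2] - P1[0],
                   u1[1] * P1[2] - P1[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                          # inhomogeneous 3D point

def arc_length_curve(points, samples=1000):
    """Interpolating spline through the K positions, reparameterized by arc length."""
    points = np.asarray(points, dtype=float)     # K x 3 array of M_{id,k}
    t = np.arange(len(points), dtype=float)
    sp = CubicSpline(t, points)                  # stands in for the B-spline SP_id
    ts = np.linspace(t[0], t[-1], samples)
    seg = np.linalg.norm(np.diff(sp(ts), axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])  # cumulative chord length
    sigma = s[-1]                                # approximate arc length sigma_id
    return CubicSpline(s, sp(ts)), sigma         # V_id(t), t in [0, sigma]
```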
15. Construct a static virtual imaging model of a coding point moving in space as follows (an illustrative rendering sketch follows this list):
a) Place the camera C and the coding point E in the same three-dimensional spatial coordinate system.
b) Let the imaging matrix of camera C be P, with no distortion.
c) The image of coding point id is denoted M; it is a binary image with gray values 0 or 1 whose width and height are both l pixels. In its own plane, the homogeneous coordinates of its four vertices, in counterclockwise order, are fixed reference positions.
d) The coding point E to be imaged is a square plane with side length l; one side carries the pattern M, which fills the square without distortion.
e) M(u) is the image M of the coding point expressed in functional form, where the parameter u is a homogeneous coordinate (u, v, s)^T corresponding to the non-homogeneous coordinates (u/s, v/s)^T; M(u) is the gray value of the pixel at that position in the image. If the coordinates are non-integer, the gray value is generated by interpolation; if the coordinates fall outside the image, the function returns the gray value 0.
f) The position of E in space is determined entirely by the coordinates of its four vertices. When the image on the coding point faces the viewer, the four vertices are, in counterclockwise order, Q_1(v, x), Q_2(v, x), Q_3(v, x), Q_4(v, x), where the parameter vector v = (α, β, γ)^T determines the attitude and x = (x, y, z)^T determines the position.
g) Each Q_i(v, x), i = 1, 2, 3, 4, is obtained from the corresponding reference vertex by a coordinate transformation: α, β, γ determine a rotation matrix, and x, y, z determine a translation.
h) The homogeneous coordinates of the image points of the four vertices of E in the image plane of camera C are z_i = P·Q_i(v, x).
j) In this position and attitude, the coding point is imaged in camera C as I_{M,v,x,P}; in functional form, I_{M,v,x,P}(u) = M(Hu), where u = (u, v, 1)^T is the homogeneous coordinate of pixel position (u, v)^T and H is the plane homography determined by the four vertex correspondences.
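A sketch of the static imaging model I_{M,v,x,P} of step 15: the four vertices of E are posed by a rotation and translation, projected by P, and the homography H fitted to the four vertex correspondences drives the lookup M(Hu). The reference-vertex placement at (±l/2, ±l/2, 0) and the XYZ Euler convention are assumptions, since the original coordinates are not reproduced in this text.

```python
import cv2
import numpy as np

def rotation(alpha, beta, gamma):
    """Assumed XYZ Euler rotation built from the attitude angles."""
    ca, cb, cg = np.cos([alpha, beta, gamma])
    sa, sb, sg = np.sin([alpha, beta, gamma])
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return rz @ ry @ rx

def render_static(M, v, x, P, out_shape):
    """Render I_{M,v,x,P}: pattern M (l-by-l, float32/uint8) seen by camera P."""
    l = M.shape[0]
    base = np.array([[-0.5, -0.5, 0], [0.5, -0.5, 0],
                     [0.5, 0.5, 0], [-0.5, 0.5, 0]]) * l   # assumed Q_i references
    world = (rotation(*v) @ base.T).T + np.asarray(x)      # posed vertices Q_i(v, x)
    homog = (P @ np.hstack([world, np.ones((4, 1))]).T).T  # z_i = P Q_i(v, x)
    z = (homog[:, :2] / homog[:, 2:]).astype(np.float32)
    corners = np.array([[0, 0], [l - 1, 0],
                        [l - 1, l - 1], [0, l - 1]], dtype=np.float32)
    H = cv2.getPerspectiveTransform(z, corners)            # image pixel -> pattern pixel
    # With WARP_INVERSE_MAP, dst(u) = src(H u), i.e. exactly M(Hu); pixels whose
    # H u falls outside M get gray value 0, matching step e).
    return cv2.warpPerspective(M, H, (out_shape[1], out_shape[0]),
                               flags=cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR)
```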
16. Using I_{M,v,x,P} constructed in the previous step, construct a blurred imaging model of the coding point's motion as follows (a synthesis sketch follows this list):
a) Select the discretization granularity N as a natural number, generally between 100 and 1000; the larger N is, the closer the blur effect is to the real effect.
b) Under the premise of a short exposure, the attitude angles v = (α, β, γ)^T of the coding point are assumed to remain unchanged during the motion.
c) Under the premise of a short exposure, the motion is restricted to uniform straight-line motion from the start point x_1 = (x_1, y_1, z_1)^T to the end point x_2 = (x_2, y_2, z_2)^T.
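A sketch of the blurred imaging model of step 16 under the stated premises: with the attitude v held fixed, N static images are rendered at positions sampled uniformly along the segment from x_1 to x_2 (constant speed) and averaged. It reuses render_static from the sketch above.

```python
import numpy as np

def render_blurred(M, v, x1, x2, P, out_shape, N=200):
    """Time-averaged image of the pattern moving uniformly from x1 to x2."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    acc = np.zeros(out_shape, dtype=np.float64)
    for i in range(N):
        t = i / (N - 1)                       # uniform speed along the segment
        acc += render_static(M, v, (1 - t) * x1 + t * x2, P, out_shape)
    return acc / N                            # discrete motion-blur average
```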
17. For each coding mark point identity id not marked invalid, and for each exposure number k = 1, 2, ..., K, solve for the motion path and attitude as follows (an optimization sketch follows this list):
a) The image corresponding to coding point id is denoted M. Let B^(c)_{k,s} denote the image patches of the k-th left (c = 0) and right (c = 1) shots that contain the motion-blurred image of coding point id, where s is the index of the patch within its parent image.
b) Select the optimization variables θ_1, θ_2, θ_3, λ_1, λ_2, λ_3, μ_1, μ_2, μ_3, ω_1, ω_2.
c) The initial values of θ_1, θ_2, θ_3 are chosen randomly within 0 to 2π.
f) ω_1 is the image gain; its initial value is 1.
g) ω_2 is the image bias; its initial value is 0.
h) Put v = (θ_1, θ_2, θ_3)^T, x_1 = (λ_1, λ_2, λ_3)^T, x_2 = (μ_1, μ_2, μ_3)^T.
i) Define the mask function χ^(c)_{k,s}(u), where c, k, s have the same meanings as in B^(c)_{k,s} and the parameter u = (u, v, s)^T is a homogeneous pixel coordinate: the function returns 1 when the coordinate falls within the pixel range occupied by the patch B^(c)_{k,s} in its parent image, and returns 0 otherwise.
j) Compute the optimization objective function f, which sums over both cameras the masked squared difference between the synthesized blurred image — the gain ω_1 times the blurred imaging model of step 16, plus the bias ω_2 — and the observed image, where ‖·‖² denotes the squared norm: when W is an image, ‖W‖² is the sum of the squares of the gray values of all pixels in the image.
k) Optimize over the parameters θ_1, θ_2, θ_3, λ_1, λ_2, λ_3, μ_1, μ_2, μ_3, ω_1, ω_2 so that f attains its minimum.
l) Assign different random values to θ_1, θ_2, θ_3 each time and repeat steps b) through k), taking the smallest optimized value of f as the final optimization result. The number of repetitions is not less than 16.
m) On completion, the motion trajectory of the coding point during the exposure time is the straight-line segment from (λ_1, λ_2, λ_3) to (μ_1, μ_2, μ_3), and its attitude parameters during the exposure time are (θ_1, θ_2, θ_3).
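A sketch of step 17's search, assuming SciPy's derivative-free Nelder-Mead optimizer for a single camera and patch; the residual instantiates the masked, gain-and-bias objective described in step j), and the smallest f over at least 16 random restarts of θ is kept. The initial position x_init would come from the spline initial values (the steps d) and e) not reproduced in this text), and all helper names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def objective(params, M, P, patch, mask, out_shape):
    """f = || mask * (omega1 * blurred_model + omega2 - observed) ||^2."""
    theta, lam, mu = params[0:3], params[3:6], params[6:9]
    w1, w2 = params[9], params[10]
    synth = render_blurred(M, theta, lam, mu, P, out_shape)
    return np.sum((mask * (w1 * synth + w2 - patch)) ** 2)

def solve_pose(M, P, patch, mask, x_init, restarts=16, rng=None):
    """Random-restart search over (theta, lambda, mu, omega1, omega2)."""
    if rng is None:
        rng = np.random.default_rng()
    best = None
    for _ in range(restarts):                    # step l): at least 16 restarts
        p0 = np.concatenate([rng.uniform(0, 2 * np.pi, 3),  # theta in [0, 2*pi)
                             x_init, x_init, [1.0, 0.0]])   # lambda, mu, w1, w2
        res = minimize(objective, p0, method="Nelder-Mead",
                       args=(M, P, patch, mask, patch.shape))
        if best is None or res.fun < best.fun:
            best = res                           # keep the smallest f
    return best
```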
The foregoing is only a preferred embodiment of the invention. It should be noted that a person skilled in the art can make modifications without departing from the principle of the invention, and such modifications shall also fall within the protection scope of the invention.
Claims (9)
1. A visual feature three-dimensional reconstruction method under a motion blur effect, characterized by comprising the following steps:
step one: calibrating the cameras to be used;
step two: arranging coding mark points on the surface of the measured object;
step three: acquiring motion-blurred images;
step four: identifying the identities of the coding mark points in the images;
step five: for each coding mark point, coarsely locating its spatial positions at different moments by means of time-series images taken at those moments, and fitting the positions into a spline curve as the initial value of the spatial motion trajectory of the coding mark point;
step six: constructing a blurred imaging model of the coding mark point's motion;
step seven: within each exposure time, optimally solving for the motion path and attitude of the coding mark point during the exposure time according to the blurred imaging model;
in the third step:
(a) acquiring a motion-blurred image group, i.e., a pair of images acquired by a pair of cameras at each of several moments, wherein, owing to the motion blur effect, the coding mark points in each image are imaged with different degrees of blur; I^(c)_k denotes the k-th image captured by the left (c = 0) and right (c = 1) cameras; K is the total number of shots; each shot has exposure duration ΔT and start time T_k, k = 1, 2, ..., K; the exposure times of two adjacent shots do not overlap, and the interval from the end of one exposure to the start of the next is the same for all shots, denoted δT;
(b) according to the distortion coefficient vectors d^(c) of the camera lenses, correcting the lens distortion of I^(c)_k, the corrected image being denoted I′^(c)_k;
(c) segmenting each corrected image I′^(c)_k so that each patch obtained after segmentation contains exactly the complete blurred image of one coding mark point; the number of patches contained in I′^(c)_k is denoted S^(c)_k, and the patches are denoted B^(c)_{k,s}, where c = 0, 1 corresponds to the left and right cameras, k = 1, 2, ..., K corresponds to the shooting order, and s indexes the patches of I′^(c)_k; the pixel coordinates of the center of B^(c)_{k,s} in its parent image are denoted (u^(c)_{k,s}, v^(c)_{k,s})^T.
2. The method for three-dimensional reconstruction of visual features under the effect of motion blur according to claim 1, characterized in that: in the first step, a pair of cameras is calibrated, denoted the left camera C_0 and the right camera C_1; their imaging matrices are denoted P_0 and P_1, and the distortion coefficient vectors of the two camera lenses are denoted d^(c), c = 0, 1; a threshold T_p is selected for epipolar constraint detection.
3. The method for three-dimensional reconstruction of visual features under the effect of motion blur according to claim 2, characterized in that: in the second step:
(a) the identity numbers of the coding mark points to be used are selected; an identity number is a natural number from 1 to N_0, where N_0 is the total number of mark points in the full code set; the set of selected coding mark points is denoted IDs, and N is the total number of selected coding mark points;
(b) for each id_n ∈ IDs, n = 1, 2, ..., N, an image M_n of the coding mark point is prepared; all images have the same pixel size, and the number of pixels in width and in height is denoted z;
(c) according to M_n, an actual coding mark point with side length l is manufactured;
(d) the actual coding mark points are pasted onto the surface of the measured object.
4. The method for three-dimensional reconstruction of visual features under the effect of motion blur according to claim 3, characterized in that: in the fourth step:
(a) the image patches B^(c)_{k,s} are preprocessed, and a deep convolutional network MBCNet is used to identify the identity of a motion-blurred coding mark point; when the network is constructed, the width and height of its input layer must be specified, and w denotes the image width and height required by the input layer, in pixels;
(b) size preprocessing is performed on each image patch; let the patch be w_H pixels wide and w_V pixels high; (1) if w_H = w_V = w, no processing is required; (2) otherwise, the patch is scaled so that max{w_H, w_V} = w, and blank areas of the same gray level as the background are then symmetrically added to the top and bottom, or to the left and right, of the patch so that its width and height are both w pixels; the preprocessed patch is denoted B′^(c)_{k,s}.
5. The method for three-dimensional reconstruction of visual features under the effect of motion blur according to claim 4, characterized in that: in the fifth step:
(a) screening the identified coding mark points;
6. The method for three-dimensional reconstruction of visual features under the effect of motion blur according to claim 5, characterized in that: the identified coding mark points are screened as follows:
(a) all id ∈ IDs are screened: if there exists some k ∈ {1, 2, ..., K} for which the id is not identified in both the left and the right image, the id is marked invalid; (b) all id ∈ IDs not currently marked invalid are screened: if there exists some k ∈ {1, 2, ..., K} for which the identified points in the left and right images do not satisfy the epipolar constraint within the threshold T_p, the id is marked invalid.
7. The method for three-dimensional reconstruction of visual features under the motion blur effect according to claim 6, characterized in that: for each coding mark point identity id not marked invalid, initial values for fitting the start and end points are calculated as follows:
(a) for each k ∈ {1, 2, ..., K} and each c ∈ {0, 1}, there is some s such that id^(c)_{k,s} = id; from the center coordinates of the corresponding patches and the two camera matrices P_0 and P_1, the initial spatial position M_{id,k} of coding mark point id at the k-th moment is reconstructed, its three-dimensional coordinates being (x_{id,k}, y_{id,k}, z_{id,k})^T;
(b) the K points M_{id,k} are interpolated to generate a B-spline curve SP_id of order K − 1; SP_id passes through each M_{id,k};
(c) the arc length of SP_id, denoted σ_id, is computed, and SP_id is reparameterized by approximate arc length; the reparameterized curve is denoted V_id(t), t ∈ [0, σ_id], and then V_id(0) = M_{id,1} and V_id(σ_id) = M_{id,K};
(d) on SP_id, each M_{id,k} corresponds to a parameter t_{id,k}, i.e., M_{id,k} = V_id(t_{id,k}), k = 1, 2, ..., K.
8. The method for three-dimensional reconstruction of visual features under the motion blur effect according to claim 7, characterized in that: in the sixth step, the blurred imaging model is constructed as follows:
(a) the camera C and the coding mark point E are placed in the same three-dimensional spatial coordinate system;
(b) the imaging matrix of camera C is set to P, with no distortion;
(c) the image of the coding mark point id is denoted M; it is a binary image with gray values 0 or 1 whose width and height are both l, and in its own plane the homogeneous coordinates of its four vertices, in counterclockwise order, are fixed reference positions;
(d) the coding point E to be imaged is a square plane with side length l; one side carries the pattern M, which fills the square without distortion;
(e) M(u) is the image M of the coding point expressed in functional form, where the parameter u is a homogeneous coordinate (u, v, s)^T corresponding to the non-homogeneous coordinates (u/s, v/s)^T, and M(u) is the gray value of the pixel at that position in the image;
(f) the position of E in space is completely determined by the coordinates of its four vertices; when the image on the coding point faces the viewer, the four vertices are, in counterclockwise order, Q_1(v, x), Q_2(v, x), Q_3(v, x), Q_4(v, x), where the parameter vector v = (α, β, γ)^T determines the attitude and x = (x, y, z)^T determines the position;
(g) each Q_i(v, x), i = 1, 2, 3, 4, is obtained from the corresponding reference vertex by a coordinate transformation, in which α, β, γ determine a rotation matrix and x, y, z determine the translation;
(h) the homogeneous coordinates of the image points of the four vertices of E in the image plane of camera C are z_i = P·Q_i(v, x);
(j) in this position and attitude, the coding point is imaged in camera C as I_{M,v,x,P}; in functional form, I_{M,v,x,P}(u) = M(Hu), where u = (u, v, 1)^T is the homogeneous coordinate of pixel position (u, v)^T and H is the plane homography determined by the four vertex correspondences.
9. The method for three-dimensional reconstruction of visual features under the effect of motion blur according to claim 8, characterized in that: in the seventh step:
(a) using I_{M,v,x,P} constructed in the above step, a blurred imaging model of the coding point's motion is constructed;
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201710321151.0A | 2017-05-09 | 2017-05-09 | Visual feature three-dimensional reconstruction method under motion blur effect
Publications (2)

Publication Number | Publication Date
---|---
CN107270875A | 2017-10-20
CN107270875B | 2020-04-24
Family

Family ID: 60073863

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201710321151.0A | Visual feature three-dimensional reconstruction method under motion blur effect (granted as CN107270875B, active) | 2017-05-09 | 2017-05-09
Country Status (1)

Country | Link
---|---
CN | CN107270875B (en)
Families Citing this family (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN114299172B | 2021-12-31 | 2022-07-08 | Guangdong University of Technology | Planar coding target for visual system and real-time pose measurement method thereof
CN114757993B | 2022-06-13 | 2022-09-09 | Institute of Mechanics, Chinese Academy of Sciences | Motion and parameter identification method and system for schlieren image
Family Cites Families (1)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US8319798B2 | 2008-12-17 | 2012-11-27 | Disney Enterprises, Inc. | System and method providing motion blur to rotating objects
Patent Citations (4)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
WO1995004331A1 | 1993-08-03 | 1995-02-09 | Apple Computer, Inc. | Three-dimensional image synthesis using view interpolation
US7023922B1 | 2000-06-21 | 2006-04-04 | Microsoft Corporation | Video coding system and method using 3-D discrete wavelet transform and entropy coding with motion information
CN101750029A | 2008-12-10 | 2010-06-23 | Shenyang Institute of Automation, Chinese Academy of Sciences | Characteristic point three-dimensional reconstruction method based on trifocal tensor
CN106254722A | 2016-07-15 | 2016-12-21 | Beijing University of Posts and Telecommunications | Video super-resolution reconstruction method and device
Non-Patent Citations (1)

Title
---
Three-dimensional reconstruction based on SURF feature registration and deblurring of multi-viewpoint images; Shi Yu et al.; Optics & Optoelectronic Technology; 2016-02-28; p. 28 right column para. 2, p. 29 right column, p. 30 left column, figs. 1-2 *
Also Published As

Publication number | Publication date
---|---
CN107270875A | 2017-10-20
Legal Events

Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination
| GR01 | Patent grant