CN104318559A - Quick feature point detecting method for video image matching - Google Patents
- Publication number: CN104318559A
- Application number: CN201410563502.5A
- Authority: CN (China)
- Legal status: Pending (as listed by Google Patents; not a legal conclusion)
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/10—Segmentation; Edge detection › G06T7/12—Edge-based segmentation
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/70—Determining position or orientation of objects or cameras › G06T7/73—Using feature-based methods › G06T7/74—Involving reference images or patches
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/10—Image acquisition modality › G06T2207/10016—Video; Image sequence
Abstract
The invention relates to the technical field of image processing and discloses a fast feature point detection method for video image matching. The method has a degree of robustness, detecting feature points even when the image undergoes grayscale changes, rotation, or interference noise, and its high detection efficiency makes it suitable for video systems. The technical scheme comprises the following steps: excluding non-feature points; computing gradients; computing feature point response parameters; and eliminating interfering candidate feature points. The method is mainly used in image processing.
Description
Technical field
The present invention relates to a method in the technical field of image processing, and in particular to a fast feature point detection method that can be used for video image matching.
Technical background
Feature points are the points of local curvature maxima on an object's contour; they play a decisive role in characterizing the target's contour features, so once the contour features of a target have been found, its shape is roughly known. Feature points can sketch the shape of a region and convey most of the image information; analyzing them provides a basis for image matching as well as theory and methods for virtual view synthesis. A feature point has no strict mathematical definition, but it is generally taken to be a point where the two-dimensional image brightness changes sharply, or a point of maximum curvature on an image edge curve; feature points are among the important local features of an image. While preserving the key graphical features of the image, they effectively reduce the amount of data, so the information content is high; this speeds up computation, favors reliable image matching, and makes real-time processing possible. Feature points play a very important role in computer vision fields such as three-dimensional scene reconstruction, motion estimation, target tracking, target recognition, and image registration and matching.
At present, feature points are mathematically defined in several ways: the positions of the maxima of the first derivative of the image grayscale; the intersections of two or more edges in an image; or positions where the two-dimensional image brightness changes sharply. These different definitions have led to feature point detection methods based on different principles. Conventional methods fall into two classes: methods based on image edges and methods based on image grayscale information. Edge-based methods mainly extract and detect the edges of the image as feature points, in three steps: image pre-segmentation, contour chain code extraction, and feature point detection. Such methods are quite sensitive to local variations in the region under examination, and their computational load is large, mainly because the edges of the whole image must be encoded, so the detection process depends heavily on image segmentation and edge extraction; the range of application of this class of methods is therefore relatively small. Methods based on image grayscale information, by contrast, use the gradient and curvature of the image directly as the criterion for the existence of a feature point, avoiding the structural defects of the former class, and are widely applicable.
The Harris feature point detector is a widely used detection method based on image grayscale information. It was proposed on the basis of the Moravec detector: Moravec computes the grayscale change at each image pixel by studying the displacement of a local window in four directions (horizontal, vertical, diagonal, and anti-diagonal), whereas Harris extended this idea with a Taylor series expansion to compute the grayscale change when the window moves in an arbitrary direction, and used an analytic formulation to determine feature points, achieving high localization accuracy. A smoothing factor was also introduced, strengthening the robustness of the algorithm. The whole algorithm is inspired by the autocorrelation function of signal processing and introduces a matrix M associated with the autocorrelation function. The eigenvalues of M represent the extremal curvatures of the image grayscale autocorrelation function at a given point; if both curvature extrema are large, the pixel is taken to be a feature point. The principle of the Harris detector is shown in Fig. 1, where λ1 and λ2 are the two eigenvalues of M. Although Harris feature point localization is accurate, the convolution operations involved make the computational load large and the time complexity high, which limits its application in video image matching.
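For reference, the standard Harris quantities that Fig. 1 classifies can be written as follows (a common textbook rendering supplied here for clarity, not reproduced from the original text):

```latex
M=\sum_{(x,y)\in w}\omega(x,y)
\begin{bmatrix} I_x^{2} & I_xI_y \\ I_xI_y & I_y^{2} \end{bmatrix},
\qquad
R=\det(M)-k\bigl(\operatorname{trace}(M)\bigr)^{2}
 =\lambda_1\lambda_2-k(\lambda_1+\lambda_2)^{2}.
```

When λ1 and λ2 are both small the window lies in a flat region; when one is much larger than the other it lies on an edge; when both are large the point is a feature point.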
Summary of the invention
To overcome the deficiencies of the prior art, a fast feature point detection method for video image matching is proposed. The method has a degree of robustness, detecting feature points even when the image undergoes grayscale changes, rotation, or interference noise, and at the same time has very high detection efficiency, so that it can be applied in video systems. To this end, the technical scheme adopted by the present invention is a fast feature point detection method for video image matching comprising, in order: a non-feature-point exclusion step; a gradient computation step; a feature point response parameter computation step; and an interfering-candidate elimination step.
The non-feature-point exclusion step is specifically as follows:
The detection method first chooses a discretized Bresenham circle of radius 3 pixels; 16 pixels, numbered 1 to 16, lie uniformly on the circle around the point under test, pixel P. Pixels 1 and 9 are examined first: if the grayscale values of both lie within the range [I(p)−t, I(p)+t], where I(p) is the grayscale value of point P and t is a threshold, then P is not a feature point and is excluded. If point p may still be a feature point, pixels 5 and 13 are examined next; if at least three of these four pixels have grayscale values greater than I(p)+t or less than I(p)−t, then p is a candidate feature point and is retained for subsequent processing, as shown in formula one:
N = Σₙ [ |I(n) − I(p)| > t ] (formula one)
where n ranges over pixels 1, 5, 9, and 13, and each bracketed term contributes 1 when the inequality holds and 0 otherwise. If N is greater than or equal to 3, point P is a candidate feature point; otherwise it is excluded.
The gradient computation step is specifically as follows:
The gradient computation step filters the image with a horizontal difference operator and a vertical difference operator, respectively, to obtain the image gradients in the horizontal and vertical directions, and computes the gradient products used to generate the elements of the autocorrelation matrix M. The gradient computation is as shown in formulas two and three:
Ix = fx ⊗ I, fx = [−1 0 1; −1 0 1; −1 0 1] (formula two)
Iy = fy ⊗ I, fy = [−1 −1 −1; 0 0 0; 1 1 1] (formula three)
where I denotes the original image, ⊗ denotes convolution, and Ix and Iy denote the image gradients in the x and y directions, obtained by convolving each pixel of the image with the 3 × 3 horizontal and vertical difference operators of formulas two and three.
The gradient products are then computed: Ix2 = Ix × Ix, the square of the gradient in the x direction; Iy2 = Iy × Iy, the square of the gradient in the y direction; and Ixy = Ix × Iy, the product of the gradients in the x and y directions.
The feature point response parameter computation step is specifically as follows:
The feature point response parameter is obtained from the feature point response function, which computes the response parameter of each pixel from the determinant and trace of the autocorrelation matrix; the autocorrelation matrix is obtained by Gaussian filtering of the squares and product of the gradients, as shown in formulas four and five:
R = det(M) − k × (trace(M))² (formula four)
M = ω(p) ⊗ [Ix2, Ixy; Ixy, Iy2] = [A, C; C, B] (formula five)
where ω(p) is the Gaussian filter function; Gaussian smoothing is applied by convolution to the squares and product of the gradients, respectively, yielding the elements A, B, and C of the autocorrelation matrix M. The detection method uses a Gaussian window of size 5 × 5 with standard deviation 2; det(M) = A × B − C × C denotes the determinant of M, trace(M) = A + B denotes the trace of M, k is an empirical value in the range [0.04, 0.06], and w is a (2n+1) × (2n+1) image window.
Finally, the response parameter values of all candidate feature points are compared to find a maximum value Rmax, which is used as the threshold basis in the interfering-candidate elimination step.
The interfering-candidate elimination step is specifically as follows: non-maximum suppression is used to eliminate interfering candidate feature points. The whole image is searched for candidate feature points from left to right and from top to bottom, and non-maximum suppression begins as soon as a candidate feature point is found. The first candidate found necessarily satisfies the following: all pixels above it are non-candidates, and the pixels to its left in the same row are non-candidates. The detection method performs non-maximum suppression with a 3 × 3 window: in the 3 × 3 neighborhood centered on a point P with coordinates (x, y) there are the following eight points: (x−1, y−1), (x−1, y), (x−1, y+1), (x, y−1), (x, y+1), (x+1, y−1), (x+1, y), (x+1, y+1). Starting from the pixel (x, y+1) to the right of P, the response parameters are compared, proceeding clockwise through the other pixels in turn. If the response parameter of P is greater than that of pixel (x, y+1) and that pixel is a candidate feature point, that pixel is deleted from the candidate array; this continues in turn until the response of P is found to be less than that of some point in the neighborhood, at which time P itself is deleted from the candidate array.
If the response of P is greater than that of every other point in the neighborhood and greater than the threshold 0.01 × Rmax, then P is a real feature point and is marked in the original image, and all other candidate feature points in the neighborhood are deleted from the candidate array. This continues in the same way until all candidate feature points of the image have been traversed.
The non-feature-point exclusion step specifically further comprises: if there are only two adjacent pixels, for example pixels 1 and 5, whose grayscale values are both markedly greater than I(p)+t or markedly less than I(p)−t, then point p may still be a feature point. That is, given a threshold th, if the grayscale values of pixels 1 and 5 are both greater than I(p)+th or both less than I(p)−th, then point p may still be a feature point, as shown in Fig. 4: although the grayscale values of pixels 9 and 13 are approximately equal to that of point p, the grayscale values of pixels 1 and 5 are both far greater than that of point p, and in this case point p remains a feature point.
The detection method chooses to filter the whole image, and since the computation of the image gradient in the x direction and that in the y direction involve no data dependence or data sharing, the method decomposes the gradient computation into two independent parts and computes them in a spatially parallel manner; that is, two computational resources carry out the gradient computations in the two directions simultaneously, thereby increasing the detection speed.
Compared with the prior art, the technical features and effects of the present invention are:
Setting the threshold t reasonably excludes a large number of non-feature points while retaining the real feature points, so the method has a degree of robustness and can detect feature points even when the image undergoes grayscale changes, rotation, or interference noise.
At the same time, non-maximum suppression with a (2n+1) × (2n+1) window makes the extracted feature points evenly and reasonably distributed.
The non-feature-point exclusion step rules out a large number of non-feature points, so the subsequent steps process only the candidate feature points; the method therefore has very high detection efficiency and can be applied in video systems.
Brief description of the drawings
Fig. 1 is a schematic diagram of the principle of the Harris feature point detector.
Fig. 2 is the flow chart of the present invention.
Fig. 3 is a schematic diagram of the discretized Bresenham circle of radius 3.
Fig. 4 is a schematic diagram of the pixel array around point p in the special case.
Fig. 5 is a schematic diagram of the neighborhoods of candidate feature points.
Fig. 6 is a schematic diagram of the pixel array around the first candidate feature point.
Embodiment
The present invention comprises the following steps: a non-feature-point exclusion step; a gradient computation step; a feature point response parameter computation step; and an interfering-candidate elimination step. The flow chart of the present invention is shown in Fig. 2. In accordance with the practical demands of video image matching, the image region handled by each step of the detection method is the region excluding the outermost 5 rows and 5 columns.
Step one: exclusion of non-feature points.
The detection method first chooses a discretized Bresenham circle of radius 3 pixels; the circle passes through 16 pixels, and its sample point configuration is shown in Fig. 3, where pixel P is the point under test and pixels 1 to 16 are the 16 points of the circle. Pixels 1 and 9 are examined first: if the grayscale values of both lie within the range [I(p)−t, I(p)+t], then P is not a feature point and is excluded, where I(p) is the grayscale value of point P and t is a threshold. The larger the threshold t, the more pixels are excluded; setting t reasonably is the key to excluding a large number of non-feature points while retaining the feature points. If point p may still be a feature point, pixels 5 and 13 are examined next; if at least three of these four pixels have grayscale values greater than I(p)+t or less than I(p)−t, then p is a candidate feature point and is retained for subsequent processing, as shown in formula one:
N = Σₓ [ |I(x) − I(p)| > t ] (formula one)
where x ranges over pixels 1, 5, 9, and 13, and each bracketed term contributes 1 when the inequality holds and 0 otherwise. If N is greater than or equal to 3, point P is a candidate feature point; otherwise it is excluded.
It should be noted that if there are only two adjacent pixels, for example pixels 1 and 5, whose grayscale values are both markedly greater than I(p)+t or markedly less than I(p)−t, then point p may still be a feature point. That is, given a threshold th, if the grayscale values of pixels 1 and 5 are both greater than I(p)+th or both less than I(p)−th, then point p may still be a feature point, as shown in Fig. 4: although the grayscale values of pixels 9 and 13 are approximately equal to that of point p, the grayscale values of pixels 1 and 5 are both far greater than that of point p, and in this case point p remains a feature point.
The non-feature-point exclusion step can rule out a large number of non-feature points, so the detection method is noticeably accelerated. The remaining points are referred to as candidate feature points.
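The following Python sketch illustrates this exclusion step. The mapping of circle pixels 1, 5, 9, and 13 to the top, right, bottom, and left compass positions, and the default threshold values, are assumptions for illustration rather than details fixed by the text.

```python
import numpy as np

def exclude_non_feature_points(img, t=25, th=50):
    """Step one (sketch): quickly reject non-feature points.

    For each interior pixel P, examine the four compass pixels of the
    discretized Bresenham circle of radius 3 (numbered 1, 5, 9, 13 in the
    text; the top/right/bottom/left assignment here is assumed).
    """
    img = img.astype(np.int32)            # avoid uint8 wrap-around in differences
    h, w = img.shape
    candidates = []
    for y in range(5, h - 5):             # the outermost 5 rows/columns are skipped
        for x in range(5, w - 5):
            p = img[y, x]
            n1, n5 = img[y - 3, x], img[y, x + 3]    # pixels 1 and 5 (assumed)
            n9, n13 = img[y + 3, x], img[y, x - 3]   # pixels 9 and 13 (assumed)
            # special case: two adjacent circle pixels both much brighter or
            # both much darker than P, judged with the larger threshold th
            adjacent_strong = (min(n1, n5) > p + th) or (max(n1, n5) < p - th)
            # fast rejection: pixels 1 and 9 both within [I(p)-t, I(p)+t]
            if abs(n1 - p) <= t and abs(n9 - p) <= t and not adjacent_strong:
                continue
            # formula one: N counts the four pixels differing from I(p) by more than t
            N = sum(abs(n - p) > t for n in (n1, n5, n9, n13))
            if N >= 3 or adjacent_strong:
                candidates.append((y, x))
    return candidates
```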
Step two: gradient computation.
The gradient computation step filters the image with a horizontal difference operator and a vertical difference operator, respectively, to obtain the image gradients in the x and y directions, and computes the gradient products used to generate the elements of the autocorrelation matrix M. The gradient computation is as shown in formulas two and three:
Ix = fx ⊗ I, fx = [−1 0 1; −1 0 1; −1 0 1] (formula two)
Iy = fy ⊗ I, fy = [−1 −1 −1; 0 0 0; 1 1 1] (formula three)
where I denotes the original image, ⊗ denotes convolution, and Ix and Iy denote the image gradients in the x and y directions, obtained by convolving each pixel of the image with the 3 × 3 horizontal and vertical difference operators of formulas two and three.
The gradient products are then computed: Ix2 = Ix × Ix, the square of the gradient in the x direction; Iy2 = Iy × Iy, the square of the gradient in the y direction; and Ixy = Ix × Iy, the product of the gradients in the x and y directions.
Unlike the feature point response computation of the next step, the gradient computation step filters the whole image rather than filtering neighborhoods centered on the candidate feature points, because the former has lower complexity. Two or more candidate feature point neighborhoods may overlap, as shown in Fig. 5, where the two white pixels are candidate feature points, the black pixels are their 5 × 5 neighborhoods, and pixels 1 to 8 form the overlapping region; filtering the neighborhoods would inevitably cause repeated computation over the overlap. In practice many candidate feature points are often adjacent, so the repeated computation over overlapping regions would exceed the cost of filtering the whole image.
The detection method therefore chooses to filter the whole image, and since the computation of the image gradient in the x direction and that in the y direction involve no data dependence or data sharing, the method decomposes the gradient computation into two independent parts and computes them in a spatially parallel manner; that is, two computational resources carry out the gradient computations in the two directions simultaneously, thereby increasing the detection speed.
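A sketch of this step, filtering the whole image with the difference operators of formulas two and three (SciPy's convolve2d stands in for the filtering here):

```python
import numpy as np
from scipy.signal import convolve2d

# horizontal and vertical difference operators from formulas two and three
FX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float64)
FY = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=np.float64)

def gradients(img):
    """Step two (sketch): convolve the whole image with the horizontal and
    vertical difference operators, then form the gradient products Ix2, Iy2,
    Ixy that feed the autocorrelation matrix M."""
    img = img.astype(np.float64)
    ix = convolve2d(img, FX, mode="same", boundary="symm")
    iy = convolve2d(img, FY, mode="same", boundary="symm")
    return ix * ix, iy * iy, ix * iy     # Ix2, Iy2, Ixy
```

The two convolve2d calls share no data, so in the spatially parallel arrangement described above they could equally be issued to two compute resources (for example two threads or two hardware filter units) at the same time.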
Step three: feature point response parameter computation.
The feature point response parameter is obtained from the feature point response function. The response function computes the response parameter of each pixel from the determinant and trace of the autocorrelation matrix; the autocorrelation matrix is obtained by Gaussian filtering of the squares and product of the gradients, as shown in formulas four and five:
R = det(M) − k × (trace(M))² (formula four)
M = ω(p) ⊗ [Ix2, Ixy; Ixy, Iy2] = [A, C; C, B] (formula five)
where ω(p) is the Gaussian filter function; Gaussian smoothing is applied by convolution to the squares and product of the gradients, respectively, yielding the elements A, B, and C of the autocorrelation matrix M. The detection method uses a Gaussian window of size 5 × 5 with standard deviation 2. det(M) = A × B − C × C denotes the determinant of M, and trace(M) = A + B denotes the trace of M. k is an empirical value in the range [0.04, 0.06]; the detection method takes k = 0.04. w is a (2n+1) × (2n+1) image window whose size is consistent with that of the Gaussian window.
The original Harris feature point detection algorithm computes the response parameter for every pixel, inevitably performing extra computation for non-feature points and greatly increasing the complexity. On the basis of the non-feature-point exclusion step, the present detection method computes the response parameter only for the candidate feature points; in Fig. 5, for example, only the responses of the two white pixels are computed, so the method is greatly accelerated. Since the exclusion step does not discard real feature points, the method loses nothing in localization accuracy.
Finally, the response parameter values of all candidate feature points are compared to find a maximum value Rmax, which is used as the threshold basis in the interfering-candidate elimination step.
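A sketch of this step, evaluating formulas four and five only at the candidates; the per-candidate 5 × 5 windowed smoothing below is one straightforward reading of the description, with window size 5 × 5, standard deviation 2, and k = 0.04 as stated:

```python
import numpy as np

def gaussian_window(size=5, sigma=2.0):
    """5 x 5 Gaussian window with standard deviation 2, as in the text."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    kernel = np.outer(g, g)
    return kernel / kernel.sum()

def response_parameters(ix2, iy2, ixy, candidates, k=0.04):
    """Step three (sketch): Gaussian-smooth the gradient products over a 5x5
    window around each candidate to get the elements A, B, C of M (formula
    five), then evaluate formula four. Returns the responses and Rmax."""
    w = gaussian_window(5, 2.0)
    r = {}
    for (y, x) in candidates:             # candidates lie >= 5 pixels from the border
        a = (w * ix2[y - 2:y + 3, x - 2:x + 3]).sum()   # A = w (*) Ix2
        b = (w * iy2[y - 2:y + 3, x - 2:x + 3]).sum()   # B = w (*) Iy2
        c = (w * ixy[y - 2:y + 3, x - 2:x + 3]).sum()   # C = w (*) Ixy
        r[(y, x)] = (a * b - c * c) - k * (a + b) ** 2  # R = det(M) - k*trace(M)^2
    r_max = max(r.values()) if r else 0.0
    return r, r_max
```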
Step four: elimination of interfering candidate feature points.
The detection method uses non-maximum suppression to eliminate interfering candidate feature points.
Non-maximum suppression can be viewed as a local maximum search: a local maximum is greater than all elements in its neighborhood. In the original Harris detector, if the response parameter of a pixel is the maximum in its neighborhood and greater than a given threshold, the point is taken to be a feature point; the neighborhood is normally a (2n+1) × (2n+1) region centered on the point under test. Once a local maximum is found, all other pixels in its neighborhood can be skipped, since their response parameters are necessarily smaller.
In the interfering-candidate elimination step of the present detection method, the whole image is searched for candidate feature points from left to right and from top to bottom, and non-maximum suppression begins as soon as a candidate feature point is found. The first candidate found necessarily satisfies the following: all pixels above it are non-candidates, and the pixels to its left in the same row are non-candidates. As shown in Fig. 6, the gray pixels are non-candidates, the white pixels are candidates, and P is the first candidate feature point. The detection method performs non-maximum suppression with a 3 × 3 window: in the 3 × 3 neighborhood centered on P (with coordinates (x, y)) there are the following eight points: (x−1, y−1), (x−1, y), (x−1, y+1), (x, y−1), (x, y+1), (x+1, y−1), (x+1, y), (x+1, y+1). Starting from the pixel (x, y+1) to the right of P, the response parameters are compared, proceeding clockwise through the other pixels in turn. If the response parameter of P is greater than that of pixel (x, y+1) and that pixel is a candidate feature point, that pixel is deleted from the candidate array; this continues in turn until the response of P is found to be less than that of some point in the neighborhood, at which time P itself is deleted from the candidate array.
If the response of P is greater than that of every other point in the neighborhood and greater than the threshold 0.01 × Rmax, then P is a real feature point and is marked in the original image, and all other candidate feature points in the neighborhood are deleted from the candidate array. This continues in the same way until all candidate feature points of the image have been traversed.
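A sketch of this step; the row-major scan and the deletion rules follow the description above, while the handling of a point whose response is maximal but below the threshold (it is simply dropped) is an assumption:

```python
def suppress_non_maxima(r, r_max):
    """Step four (sketch): 3x3 non-maximum suppression over the candidates.

    `r` maps candidate coordinates (y, x) to response parameters and `r_max`
    is the maximum response; a candidate survives only if its response
    exceeds every remaining candidate in its 3x3 neighborhood and the
    threshold 0.01 * r_max.
    """
    alive = dict(r)
    features = []
    for (y, x) in sorted(alive):                     # top-to-bottom, left-to-right
        if (y, x) not in alive:                      # already suppressed
            continue
        resp = alive[(y, x)]
        neighbors = [(y + dy, x + dx)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
        cand_neighbors = [q for q in neighbors if q in alive]
        if resp > 0.01 * r_max and all(resp > alive[q] for q in cand_neighbors):
            features.append((y, x))                  # P is a real feature point
            for q in cand_neighbors:                 # delete dominated candidates
                del alive[q]
        else:
            del alive[(y, x)]                        # P is dominated; delete P
    return features
```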
To make the objects, technical solutions, and advantages of the present invention clearer, a specific description of an embodiment is given below with an example. In the non-feature-point exclusion step, the threshold t takes a value in [10, 30]; a value of 25 gives a good balance between accuracy and running time. The threshold th takes a value in [40, 60]. In the gradient computation step, the horizontal difference operator is [−1 0 1; −1 0 1; −1 0 1] and the vertical difference operator is [−1 −1 −1; 0 0 0; 1 1 1]; that is, for a 3 × 3 block of the image, subtracting the three pixels of the first column from the three pixels of the third column and summing the results gives the gradient in the horizontal direction, and subtracting the three pixels of the first row from the three pixels of the third row and summing gives the gradient in the vertical direction. In the feature point response function, the response equals the determinant of the autocorrelation matrix minus k times the square of the trace of the autocorrelation matrix, where k takes a value in [0.04, 0.06]; a value of 0.04 gives good results. In the non-maximum suppression step, suppression with a 3 × 3 neighborhood effectively excludes interfering candidates while retaining the real feature points.
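Assuming the function sketches given above, the four steps chain together under these parameter values as follows (the driver and the synthetic test image are illustrative only):

```python
import numpy as np

def detect_feature_points(img, t=25, th=50, k=0.04):
    """End-to-end sketch with the embodiment's parameters t = 25, k = 0.04."""
    candidates = exclude_non_feature_points(img, t=t, th=th)        # step one
    ix2, iy2, ixy = gradients(img)                                  # step two
    r, r_max = response_parameters(ix2, iy2, ixy, candidates, k=k)  # step three
    return suppress_non_maxima(r, r_max)                            # step four

if __name__ == "__main__":
    test_img = (np.random.rand(64, 64) * 255).astype(np.uint8)
    print(detect_feature_points(test_img))
```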
Claims (7)
1. A fast feature point detection method for video image matching, characterized in that it comprises, in order: a non-feature-point exclusion step; a gradient computation step; a feature point response parameter computation step; and an interfering-candidate elimination step.
2. The fast feature point detection method for video image matching as claimed in claim 1, characterized in that the non-feature-point exclusion step is specifically: the detection method first chooses a discretized Bresenham circle of radius 3 pixels, with 16 pixels, numbered 1 to 16, lying uniformly on the circle around the point under test, pixel P; pixels 1 and 9 are examined first, and if the grayscale values of both lie within the range [I(p)−t, I(p)+t], where I(p) is the grayscale value of point P and t is a threshold, then P is not a feature point and is excluded; if point p may still be a feature point, pixels 5 and 13 are examined next, and if at least three of these four pixels have grayscale values greater than I(p)+t or less than I(p)−t, then p is a candidate feature point and is retained for subsequent processing, as shown in formula one:
N = Σₙ [ |I(n) − I(p)| > t ] (formula one)
where n ranges over pixels 1, 5, 9, and 13, and each bracketed term contributes 1 when the inequality holds and 0 otherwise; if N is greater than or equal to 3, point P is a candidate feature point, and otherwise it is excluded.
3. The fast feature point detection method for video image matching as claimed in claim 1, characterized in that the gradient computation step is specifically: the gradient computation step filters the image with a horizontal difference operator and a vertical difference operator, respectively, to obtain the image gradients in the horizontal and vertical directions, and computes the gradient products used to generate the elements of the autocorrelation matrix M; the gradient computation is as shown in formulas two and three:
Ix = fx ⊗ I, fx = [−1 0 1; −1 0 1; −1 0 1] (formula two)
Iy = fy ⊗ I, fy = [−1 −1 −1; 0 0 0; 1 1 1] (formula three)
where I denotes the original image, ⊗ denotes convolution, and Ix and Iy denote the image gradients in the x and y directions, obtained by convolving each pixel of the image with the 3 × 3 horizontal and vertical difference operators of formulas two and three; the gradient products are then computed: Ix2 = Ix × Ix, the square of the gradient in the x direction; Iy2 = Iy × Iy, the square of the gradient in the y direction; and Ixy = Ix × Iy, the product of the gradients in the x and y directions.
4. The fast feature point detection method for video image matching as claimed in claim 1, characterized in that the feature point response parameter computation step is specifically: the feature point response parameter is obtained from the feature point response function, which computes the response parameter of each pixel from the determinant and trace of the autocorrelation matrix, the autocorrelation matrix being obtained by Gaussian filtering of the squares and product of the gradients, as shown in formulas four and five:
R = det(M) − k × (trace(M))² (formula four)
M = ω(p) ⊗ [Ix2, Ixy; Ixy, Iy2] = [A, C; C, B] (formula five)
where ω(p) is the Gaussian filter function; Gaussian smoothing is applied by convolution to the squares and product of the gradients, respectively, yielding the elements A, B, and C of the autocorrelation matrix M; the detection method uses a Gaussian window of size 5 × 5 with standard deviation 2; det(M) = A × B − C × C denotes the determinant of M, trace(M) = A + B denotes the trace of M, k is an empirical value in the range [0.04, 0.06], and w is a (2n+1) × (2n+1) image window;
finally, the response parameter values of all candidate feature points are compared to find a maximum value Rmax, which is used as the threshold basis in the interfering-candidate elimination step.
5. The fast feature point detection method for video image matching as claimed in claim 1, characterized in that the interfering-candidate elimination step is specifically: non-maximum suppression is used to eliminate interfering candidate feature points; the whole image is searched for candidate feature points from left to right and from top to bottom, and non-maximum suppression begins as soon as a candidate feature point is found; the first candidate found necessarily satisfies the following: all pixels above it are non-candidates, and the pixels to its left in the same row are non-candidates; the detection method performs non-maximum suppression with a 3 × 3 window, that is, in the 3 × 3 neighborhood centered on a point P with coordinates (x, y) there are the following eight points: (x−1, y−1), (x−1, y), (x−1, y+1), (x, y−1), (x, y+1), (x+1, y−1), (x+1, y), (x+1, y+1); starting from the pixel (x, y+1) to the right of P, the response parameters are compared, proceeding clockwise through the other pixels in turn; if the response parameter of P is greater than that of pixel (x, y+1) and that pixel is a candidate feature point, that pixel is deleted from the candidate array, and so on in turn, until the response of P is found to be less than that of some point in the neighborhood, at which time P itself is deleted from the candidate array;
if the response of P is greater than that of every other point in the neighborhood and greater than the threshold 0.01 × Rmax, then P is a real feature point and is marked in the original image, and all other candidate feature points in the neighborhood are deleted from the candidate array; this continues in the same way until all candidate feature points of the image have been traversed.
6. The fast feature point detection method for video image matching as claimed in claim 2, characterized in that the non-feature-point exclusion step further comprises: if there are only two adjacent pixels, for example pixels 1 and 5, whose grayscale values are both markedly greater than I(p)+t or markedly less than I(p)−t, then point p may still be a feature point; that is, given a threshold th, if the grayscale values of pixels 1 and 5 are both greater than I(p)+th or both less than I(p)−th, then point p may still be a feature point, as shown in Fig. 4: although the grayscale values of pixels 9 and 13 are approximately equal to that of point p, the grayscale values of pixels 1 and 5 are both far greater than that of point p, and in this case point p remains a feature point.
7. The fast feature point detection method for video image matching as claimed in claim 3, characterized in that
the detection method chooses to filter the whole image, and since the computation of the image gradient in the x direction and that in the y direction involve no data dependence or data sharing, the method decomposes the gradient computation into two independent parts and computes them in a spatially parallel manner; that is, two computational resources carry out the gradient computations in the two directions simultaneously, thereby increasing the detection speed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410563502.5A CN104318559A (en) | 2014-10-21 | 2014-10-21 | Quick feature point detecting method for video image matching |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410563502.5A CN104318559A (en) | 2014-10-21 | 2014-10-21 | Quick feature point detecting method for video image matching |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104318559A true CN104318559A (en) | 2015-01-28 |
Family
ID=52373785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410563502.5A Pending CN104318559A (en) | 2014-10-21 | 2014-10-21 | Quick feature point detecting method for video image matching |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104318559A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102054269A (en) * | 2009-10-27 | 2011-05-11 | Huawei Technologies Co., Ltd. | Method and device for detecting feature point of image |
US20120045135A1 (en) * | 2010-08-19 | 2012-02-23 | Sharp Laboratories Of America, Inc. | System for feature detection for low contrast images |
Non-Patent Citations (2)
Title |
---|
Edward Rosten et al., "Faster and Better: A Machine Learning Approach to Corner Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence. |
Wang Huiyong, "Research on a fast adaptive Harris corner detection method," Video Engineering (《电视技术》). |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108369650A (en) * | 2015-11-30 | 2018-08-03 | 德尔福技术有限责任公司 | The method that candidate point in the image of calibrating pattern is identified as to the possibility characteristic point of the calibrating pattern |
CN108369650B (en) * | 2015-11-30 | 2022-04-19 | 德尔福技术有限责任公司 | Method for identifying possible characteristic points of calibration pattern |
CN106023692A (en) * | 2016-05-13 | 2016-10-12 | 广东博士早教科技有限公司 | AR interest learning system and method based on entertainment interaction |
CN106056046A (en) * | 2016-05-20 | 2016-10-26 | 北京集创北方科技股份有限公司 | Method and device of extracting features from image |
CN106056046B (en) * | 2016-05-20 | 2019-01-18 | 北京集创北方科技股份有限公司 | The method and apparatus of feature are extracted from image |
CN107123105A (en) * | 2017-01-20 | 2017-09-01 | 南京理工大学 | Images match defect inspection method based on FAST algorithms |
CN107247953A (en) * | 2017-05-31 | 2017-10-13 | 大连理工大学 | A kind of characteristic point type selection method based on edge rate |
CN107247953B (en) * | 2017-05-31 | 2020-05-19 | 大连理工大学 | Feature point type selection method based on edge rate |
CN107507208A (en) * | 2017-07-12 | 2017-12-22 | 天津大学 | A kind of characteristics of image point extracting method based on Curvature Estimation on profile |
CN108305226A (en) * | 2018-01-19 | 2018-07-20 | 河南城建学院 | A kind of processing method of unmanned plane aerial photography three-dimensional imaging setting |
CN109117851A (en) * | 2018-07-06 | 2019-01-01 | 航天星图科技(北京)有限公司 | A kind of video image matching process based on lattice statistical constraint |
CN111060948A (en) * | 2019-12-14 | 2020-04-24 | 深圳市优必选科技股份有限公司 | Positioning method, positioning device, helmet and computer readable storage medium |
CN111060948B (en) * | 2019-12-14 | 2021-10-29 | 深圳市优必选科技股份有限公司 | Positioning method, positioning device, helmet and computer readable storage medium |
US11416719B2 (en) | 2019-12-14 | 2022-08-16 | Ubtech Robotics Corp Ltd | Localization method and helmet and computer readable storage medium using the same |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | C06 | Publication |
 | PB01 | Publication |
 | C10 | Entry into substantive examination |
 | SE01 | Entry into force of request for substantive examination |
 | WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20150128