CN109272536A - A lane line vanishing point tracking method based on Kalman filtering - Google Patents
A lane line vanishing point tracking method based on Kalman filtering
- Publication number: CN109272536A (application CN201811110435.6A)
- Authority: CN (China)
- Legal status: Granted (as listed by Google Patents; an assumption, not a legal conclusion)
Classifications
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06T7/13—Edge detection
- G06T7/168—Segmentation; Edge detection involving transform domain methods
- G06T2207/20061—Hough transform
- G06T2207/20081—Training; Learning
- G06T2207/30256—Lane; Road marking
Abstract
The invention discloses a lane line vanishing point tracking method based on Kalman filtering, belonging to the field of image processing. The method takes an image sequence acquired by a vehicle-mounted camera as input, detects straight line segments in each image, and uses a machine learning method to exclude line segments that contain no or few lane-line blocks, so that the vanishing point is determined only by genuine lane edge lines. Compared with the prior art, the invention reduces the influence of the various interfering objects appearing in the field of view on the accuracy of lane line vanishing point estimation.
Description
Technical Field
The invention relates to the field of image processing, and in particular to a lane line vanishing point tracking method based on Kalman filtering.
Background
An Advanced Driver Assistance System (ADAS) uses various sensors mounted on the automobile to sense the surrounding environment while driving, collects data, identifies, detects and tracks static and dynamic objects, and lets the driver perceive possible danger in advance, thereby effectively increasing the comfort and safety of driving. In the images acquired by the cameras of an ADAS, parallel lines on the road surface converge at one point in the image, the vanishing point. For applications such as lane line detection, lane departure warning, and leading vehicle detection, the position of the vanishing point in the image is very important input information.
Chinese patent 201610492617.9 discloses a vanishing point calibration method based on horizon line search, which requires defining a plurality of horizon position templates and determining the position of the horizon in a verification manner; Chinese patent 201710651702.X discloses a method that takes a video as input, constructs a line-stacked image, searches for the maximum in the stacked image, determines the lane lines from the position of the maximum, and determines the vanishing point from the intersection of the lane lines. Besides the lane markings, however, the field of view observed by the camera of an ADAS often contains objects such as vehicles, shadows cast by vehicles, and lane separation guardrails; the intersections formed by straight lines along the edges of these objects usually deviate considerably from the lane vanishing point, and using such intersections to estimate the vanishing point seriously degrades estimation accuracy.
Disclosure of Invention
The invention provides a lane line vanishing point tracking method based on Kalman filtering, which takes an image sequence acquired by a vehicle-mounted camera as input, detects straight line segments in each image, and eliminates segments that contain no or few lane-line blocks by a machine learning method, so that the lane line vanishing point is determined only by genuine lane edge lines; the vanishing point coordinates are modeled as the state of a discrete dynamic system evolving over time, and the vanishing point is tracked with a Kalman filtering algorithm.
The technical scheme adopted by the invention is as follows:
a lane line vanishing point tracking method based on Kalman filtering comprises the following steps:
step one, detecting edge pixels of the input image, and detecting line segments from the edge pixels by a Hough transform algorithm;
step two, for each detected line segment, taking points on the segment as anchor points, extracting image blocks at multiple scales and offsets, and identifying with a pre-trained classifier whether each extracted image block is a lane-line block;
step three, counting the number of points on each line segment whose image blocks are identified as lane-line blocks; if the count is greater than a preset threshold, adding the segment to a candidate line segment set, the weight of the segment being that count divided by the segment length;
step four, extending each line segment in the candidate set into a straight line, computing the intersection of every pair of non-parallel lines, and, taking the two-dimensional coordinates of each intersection as a sample, computing the weighted mean and covariance matrix of all samples;
step five, with the current frame denoted as the t-th frame, estimating the lane line vanishing point by a Kalman filtering algorithm from the weighted mean and covariance of the current frame's intersection samples computed in steps one to four and the lane line vanishing point tracked continuously from frame 0 to frame t-1, and outputting the estimate as the tracking result of the t-th frame.
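The geometry of step four can be sketched concretely: each candidate segment is extended to an infinite line, and non-parallel pairs are intersected. Below is a minimal sketch using homogeneous coordinates; the helper names and sample coordinates are illustrative, not from the patent.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product of points)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2, eps=1e-9):
    """Intersection of two homogeneous lines; None if (near) parallel."""
    x = np.cross(l1, l2)
    if abs(x[2]) < eps:          # parallel lines meet at infinity
        return None
    return (x[0] / x[2], x[1] / x[2])

# two lane-edge segments extended to lines, meeting at a common point
l1 = line_through((0, 480), (320, 240))    # left lane edge
l2 = line_through((640, 480), (320, 240))  # right lane edge
vp = intersect(l1, l2)
```

With these sample segments `vp` recovers the shared endpoint (320, 240), while a pair of horizontal lines returns `None` and would be skipped when accumulating intersection samples.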
The steps in the above technical scheme can be realized in the following specific manner.
Taking points on a line segment as anchor points, extracting image blocks at multiple scales and offsets, and identifying with a pre-trained classifier whether each extracted image block is a lane-line block, comprises:
let (X, Y) be a point on the line segment; the image block extracted with this point as anchor is I(X - δ_X, Y - δ_Y, W/s, H/s), denoting the rectangular image region with upper-left corner (X - δ_X, Y - δ_Y) and size W/s × H/s, where W and H are the preset reference window width and height respectively, δ_X and δ_Y are the offsets in the horizontal and vertical directions, and s is a preset scale coefficient;
the classifier for identifying whether the extracted image block is a line block is a cascade classifier, wherein each stage is a strong classifier formed by combining a plurality of weak classifiers;
each weak classifier corresponds to one feature and is calculated according to the following formula:
h(x, f, p, θ) = +1 if p·f(x) < p·θ, and -1 otherwise
wherein x is the image block to be detected, p = ±1 controls the direction of the inequality, θ is a threshold, and f is a feature value calculation function;
in the training process, the weighted misclassification loss of each candidate weak classifier is calculated according to the following formula,
ε_t = min_{f,p,θ} Σ_i w_i |h(x_i, f, p, θ) - y_i|
wherein x_i and y_i are a sample and its corresponding label; if x_i is a positive sample then y_i = 1, otherwise y_i = -1; h(·) is the weak classifier, and the weak classifier misclassifies a sample when its output disagrees in sign with the label; the weak classifier with the smallest misclassification loss is selected as the optimal weak classifier, and such classifiers are combined into a strong classifier.
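As a concrete illustration, the decision stump and its weighted misclassification loss can be written as follows. This is a sketch: the loss here accumulates the summed weight of misclassified samples, which is the convention matched by the ε_t > 0.5 stopping test used during training; the names are illustrative.

```python
def weak_classifier(feature_value, p, theta):
    """Decision stump h(x): +1 if p*f(x) < p*theta, else -1."""
    return 1 if p * feature_value < p * theta else -1

def weighted_error(feature_values, labels, weights, p, theta):
    """Summed weight of the samples the stump misclassifies."""
    return sum(w for f, y, w in zip(feature_values, labels, weights)
               if weak_classifier(f, p, theta) != y)

# four samples with uniform weights; the stump (p=1, theta=0.5) separates them
feats = [0.2, 0.8, 0.4, 0.9]
labels = [1, -1, 1, -1]
err = weighted_error(feats, labels, [0.25] * 4, p=1, theta=0.5)  # -> 0.0
```

During training, the (f, p, θ) triple minimizing this loss over all candidate features would be kept as the stage's optimal weak classifier.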
The feature corresponding to a weak classifier may be a Haar-like feature, calculated as follows: take a rectangular area of the image block x to be detected and divide it into 2, 3 or 4 equally sized sub-regions. If divided into 2 sub-regions (arranged left-right or top-bottom), the feature value is the difference between the accumulated pixel sums of the two sub-regions. If divided into 3 sub-regions (arranged left-middle-right or top-middle-bottom), the feature value is the accumulated pixel sum of the two outer sub-regions minus that of the middle sub-region. If divided into 4 sub-regions (the rectangle halved both horizontally and vertically), the feature value is the accumulated pixel sum of one diagonal pair of sub-regions minus that of the other diagonal pair.
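These rectangle sums are typically evaluated in constant time with an integral image (summed-area table). A sketch of the two-rectangle, left-right case follows; the helper names are illustrative.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row and column prepended."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Pixel sum of the rectangle with top-left (x, y) and size w x h."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_horizontal(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Each `rect_sum` costs four lookups regardless of window size, which is what makes evaluating many scales and offsets per anchor point affordable.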
In step four, calculating the intersection of every pair of non-parallel straight lines, taking the two-dimensional coordinates of each intersection as a sample, and calculating the weighted mean and covariance matrix of all samples, comprises:
for any two line segments L_i and L_j with weights η_i and η_j respectively, if the straight lines obtained by extending L_i and L_j intersect, the intersection is assigned the weight η_i + η_j;
let the intersection sample set be {(X_k, Y_k)}, k = 1, …, N, where (X_k, Y_k) are the coordinates of the k-th intersection; let the corresponding weight set be {η_k}, where η_k is the weight of the k-th intersection and N is the total number of intersection samples; the weighted mean of the samples is u = (u_X, u_Y), where u_X = (Σ_k η_k X_k)/(Σ_k η_k) and u_Y = (Σ_k η_k Y_k)/(Σ_k η_k);
the covariance matrix of the samples is:
Σ = (Σ_k η_k (v_k - u)(v_k - u)^T)/(Σ_k η_k), where v_k = (X_k, Y_k)^T.
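The weighted mean and covariance computation maps directly onto a few array operations. A sketch with illustrative names:

```python
import numpy as np

def weighted_mean_cov(points, weights):
    """Weighted mean u and covariance Sigma of 2-D intersection samples."""
    pts = np.asarray(points, dtype=float)   # shape (N, 2)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                         # normalize the eta_k
    u = w @ pts                             # u = sum_k eta_k * v_k
    d = pts - u                             # centered samples v_k - u
    cov = (w[:, None] * d).T @ d            # sum_k eta_k (v_k-u)(v_k-u)^T
    return u, cov
```

The pair (u, cov) is exactly what step five consumes as the measurement and its covariance.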
In the fifth step, estimating the lane line vanishing point by a Kalman filtering algorithm from the mean and covariance of the current frame's intersection samples computed in steps one to four and the lane line vanishing point tracked continuously from frame 0 to frame t-1 comprises:
firstly, the vanishing point coordinates are modeled as the state of a discrete dynamic system evolving over time; denoting the vanishing point of the t-th frame V_t, its relation to the vanishing point V_{t-1} at the previous moment is:
V_t = V_{t-1} + z
wherein z represents the process noise of the system, normally distributed with mean 0 and covariance matrix Q;
secondly, the vanishing point at time t is predicted from the previous vanishing point V_{t-1} as V̂_t^- = V_{t-1}, and the state-error covariance at time t is predicted from the previous covariance P_{t-1} and the matrix Q:
P_t^- = P_{t-1} + Q
wherein V̂_t^- represents the predicted vanishing point at time t, and Q represents the covariance matrix of the system process noise;
the Kalman gain is calculated according to the following formula:
K_t = P_t^- (P_t^- + Σ)^{-1}
wherein Σ is the sample covariance matrix described in step four;
thirdly, the vanishing point is updated, the calculation formula being:
V̂_t = V̂_t^- + K_t (u - V̂_t^-)
wherein u is the weighted sample mean described in step four, and V̂_t is the updated vanishing point at time t;
finally, the state-error covariance at time t is updated, the calculation formula being:
P_t = (D - K_t) P_t^-
wherein D is a 2 × 2 identity matrix, and P_t is the updated state-error covariance at time t.
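The five update equations above amount to one predict/correct cycle per frame. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def kalman_vp_step(v_prev, p_prev, u, sigma, q):
    """One predict/update cycle of the vanishing-point tracker.

    v_prev : previous vanishing point estimate, shape (2,)
    p_prev : previous state-error covariance, shape (2, 2)
    u      : weighted mean of this frame's intersections, shape (2,)
    sigma  : covariance of this frame's intersections, shape (2, 2)
    q      : process-noise covariance Q, shape (2, 2)
    """
    v_pred = v_prev                              # V_t^- = V_{t-1} (random walk)
    p_pred = p_prev + q                          # P_t^- = P_{t-1} + Q
    k = p_pred @ np.linalg.inv(p_pred + sigma)   # K_t = P_t^-(P_t^- + Sigma)^-1
    v_new = v_pred + k @ (u - v_pred)            # state correction
    p_new = (np.eye(2) - k) @ p_pred             # P_t = (I - K_t) P_t^-
    return v_new, p_new
```

With a large measurement covariance Σ (a cluttered frame), the gain K_t shrinks and the tracker trusts its prediction; with a tight Σ it follows the new intersections closely.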
The invention discloses a lane line vanishing point tracking method based on Kalman filtering, which takes an image sequence acquired by a vehicle-mounted camera as input, detects straight line segments in each image, and eliminates segments that contain no or few lane-line blocks by a machine learning method, so that the lane line vanishing point is determined only by genuine lane edge lines. Compared with the prior art, the method reduces the influence of the various interfering objects appearing in the field of view on the accuracy of lane line vanishing point estimation.
Drawings
FIG. 1 is a schematic flow chart of the lane line vanishing point tracking method based on Kalman filtering according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of Haar-like feature calculation;
FIG. 3 is a schematic diagram of lane line marking and positive sample extraction;
fig. 4 is a schematic flow chart of training a classifier by using the Adaboost algorithm.
Detailed Description
The invention provides a lane line vanishing point tracking method based on Kalman filtering, which takes an image sequence acquired by a vehicle-mounted camera as input, detects straight line segments in each image, and eliminates segments that contain no or few lane-line blocks by a machine learning method, so that the lane line vanishing point is determined only by genuine lane edge lines; the vanishing point coordinates are modeled as the state of a discrete dynamic system evolving over time, and the vanishing point is tracked with a Kalman filtering algorithm.
As shown in FIG. 1, the flow of the lane line vanishing point tracking method based on Kalman filtering of the present invention may include the following steps 101-105:
step 101, detect edge pixels of the input image, and detect line segments from the edge pixels by a Hough transform algorithm;
step 102, for each detected line segment, take points on the segment as anchor points, extract image blocks at multiple scales and offsets, and identify with a pre-trained classifier whether each extracted image block is a lane-line block;
step 103, count the number of points on each line segment whose image blocks are identified as lane-line blocks; if the count is greater than a preset threshold, add the segment to a candidate line segment set, the weight of the segment being that count divided by the segment length;
step 104, extend each line segment in the candidate set into a straight line, compute the intersection of every pair of non-parallel lines, and, taking the two-dimensional coordinates of each intersection as a sample, compute the weighted mean and covariance matrix of all samples;
step 105, with the current frame denoted as the t-th frame, estimate the lane line vanishing point by a Kalman filtering algorithm from the weighted mean and covariance of the t-th frame's intersection samples computed in steps 101 to 104 and the lane line vanishing point tracked continuously from frame 0 to frame t-1, and output the estimate as the result of the t-th frame.
The specific implementation of the above steps in this embodiment will be described below with reference to the drawings.
In step 102, for each detected line segment, points on the segment are taken as anchor points, image blocks are extracted at multiple scales and offsets, and a pre-trained classifier identifies whether each extracted block is a lane-line block. Specifically, let (X, Y) be a point on the segment; the image block extracted with this point as anchor is I(X - δ_X, Y - δ_Y, W/s, H/s), the rectangular image region with upper-left corner (X - δ_X, Y - δ_Y) and size W/s × H/s, where W and H are the preset reference window width and height respectively, δ_X and δ_Y are the offsets in the horizontal and vertical directions, and s is a preset scale coefficient. One embodiment of the invention takes W = 24, H = 10, δ_X, δ_Y ∈ {-8, -4, 0, +4, +8}, and s ∈ {0.8, 0.9, 1.0, 1.1, 1.25}.
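With the embodiment's parameters, each anchor point yields 5 × 5 × 5 = 125 candidate windows. A sketch of the enumeration; rounding W/s and H/s to integer pixel sizes is an assumption here, since the patent does not specify the rounding rule:

```python
def patch_windows(X, Y, W=24, H=10,
                  offsets=(-8, -4, 0, 4, 8),
                  scales=(0.8, 0.9, 1.0, 1.1, 1.25)):
    """Enumerate (left, top, width, height) windows anchored at (X, Y)."""
    wins = []
    for s in scales:
        w, h = round(W / s), round(H / s)   # scaled window size (assumed rounding)
        for dx in offsets:
            for dy in offsets:
                wins.append((X - dx, Y - dy, w, h))
    return wins
```

Each window would then be cropped, rescaled to the reference size, and passed to the cascade classifier.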
In step 102, a pre-trained classifier is used to identify whether an extracted image block is a lane-line block. The classifier is a cascade classifier in which each stage is a strong classifier formed by combining several weak classifiers; each weak classifier corresponds to one feature and is calculated according to the following formula:
h(x, f, p, θ) = +1 if p·f(x) < p·θ, and -1 otherwise (1)
where x is the image block to be detected, p = ±1 controls the direction of the inequality, θ is a threshold, and f is a feature value calculation function. The embodiment of the present invention adopts Haar-like features, see FIG. 2, calculated as follows: take a rectangular area of x and divide it into 2, 3 or 4 equally sized sub-regions. If divided into 2 sub-regions (left-right or top-bottom), the feature value is the accumulated pixel sum of the white sub-region minus that of the black sub-region. If divided into 3 sub-regions (left-middle-right or top-middle-bottom), the feature value is the accumulated pixel sum of the two outer white sub-regions minus that of the middle black sub-region. If divided into 4 sub-regions (the rectangle halved both horizontally and vertically), the feature value is the accumulated pixel sum of one diagonal (white) pair of sub-regions minus that of the other diagonal (black) pair.
Referring to FIG. 3, a positive sample is a rectangular image block of a certain width and height; L in the figure is a horizontal straight line, the intersection points of this line with the edges of a lane line are A and B, the length of segment AB is w, and the lane line marking approximately occupies the middle part of the block. Negative samples are road surface images without lane markings. Positive and negative samples are scaled to a preset size.
In the embodiment of the present invention, the strong classifiers are trained with the Adaboost algorithm; specifically, referring to FIG. 4, training may include the following steps:
step 401, initialize the weight of each sample: each positive sample has weight 1/(2N_p) and each negative sample has weight 1/(2N_f), where N_p is the number of positive samples and N_f is the number of negative samples; the strong classifier to be trained is initialized to contain no weak classifiers;
step 402, t iterates from 1 to T, where T is the maximum number of weak classifiers the strong classifier is allowed to contain; one weak classifier is selected in each iteration;
step 403, select the optimal weak classifier: first, the weighted misclassification loss of each candidate weak classifier is calculated according to the following formula,
ε_t = min_{f,p,θ} Σ_i w_i |h(x_i, f, p, θ) - y_i| (2)
wherein x_i and y_i are a sample and its corresponding label; if x_i is a positive sample then y_i = 1, otherwise y_i = -1; h(·) is the weak classifier of formula (1), and the weak classifier misclassifies a sample when its output disagrees in sign with the label; secondly, the weak classifier with the smallest misclassification loss is selected as the optimal weak classifier, denoted h_t;
step 404, if ε_t > 0.5, end the iteration; otherwise, go to step 405;
step 405, update the weight of each sample according to the following formula,
w_{t+1,i} = w_{t,i} β_t^{1-e_i}, with β_t = ε_t/(1 - ε_t) (3)
wherein e_i = 0 if sample x_i is correctly classified and e_i = 1 otherwise; the weights are then normalized to sum to one;
Step 406, calculating the weight of the current weak classifier in the strong classifier according to the following formula,
and combining each weak classifier according to the weight of the weak classifier to form a strong classifier:
F(x)=sign(∑tαtht(x)) (5)
step 407, classifying the test samples by using the current strong classifier, if the result of the classifier reaches the expected target, ending the iteration, and outputting the strong classifier shown in formula (5); otherwise, go to step 402.
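Steps 401-407 condense into the following training loop, a sketch of standard discrete AdaBoost stump selection with illustrative names (the cascade structure and the test-set check of step 407 are omitted):

```python
import math

def adaboost_train(stumps, samples, labels, T):
    """Select up to T weak classifiers by weighted error.

    stumps  : candidate weak classifiers, callables x -> +1/-1
    samples : training samples; labels is a parallel list in {+1, -1}
    Returns a list of (alpha_t, h_t) pairs forming the strong classifier.
    """
    n = len(samples)
    w = [1.0 / n] * n                       # uniform initial weights
    strong = []
    for _ in range(T):
        errs = [sum(wi for wi, x, y in zip(w, samples, labels) if h(x) != y)
                for h in stumps]
        eps, h_best = min(zip(errs, stumps), key=lambda p: p[0])
        if eps > 0.5:                       # worse than chance: stop (step 404)
            break
        beta = eps / (1.0 - eps) if eps > 0 else 1e-10
        w = [wi * (beta if h_best(x) == y else 1.0)  # down-weight correct samples
             for wi, x, y in zip(w, samples, labels)]
        z = sum(w)
        w = [wi / z for wi in w]            # renormalize (step 405)
        strong.append((math.log(1.0 / beta), h_best))
    return strong

def strong_classify(strong, x):
    """Weighted vote of the selected stumps, formula (5)."""
    return 1 if sum(a * h(x) for a, h in strong) >= 0 else -1

# toy 1-D example: two threshold stumps on scalar samples
h1 = lambda x: 1 if x < 0.5 else -1
h2 = lambda x: 1 if x < 0.8 else -1
strong = adaboost_train([h1, h2], [0.1, 0.4, 0.6, 0.9], [1, 1, -1, -1], T=1)
```

In the toy run the perfectly separating stump h1 is selected, and the resulting strong classifier reproduces the labels.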
In step 104, each line segment in the candidate set is extended into a straight line; for non-parallel straight lines, the intersection of every pair is calculated, and, taking the two-dimensional coordinates of each intersection as a sample, the weighted mean and covariance matrix of all samples are calculated, specifically:
for any two line segments L_i and L_j with weights η_i and η_j respectively, if the straight lines obtained by extending L_i and L_j intersect, the intersection is assigned the weight η_i + η_j;
let the intersection sample set be {(X_k, Y_k)}, k = 1, …, N, where (X_k, Y_k) are the coordinates of the k-th intersection; let the corresponding weight set be {η_k}, where η_k is the weight of the k-th intersection and N is the total number of intersection samples; the weighted mean of the samples is u = (u_X, u_Y), where u_X = (Σ_k η_k X_k)/(Σ_k η_k) and u_Y = (Σ_k η_k Y_k)/(Σ_k η_k);
the covariance matrix of the samples is:
Σ = (Σ_k η_k (v_k - u)(v_k - u)^T)/(Σ_k η_k), where v_k = (X_k, Y_k)^T.
In step 105, estimating the lane line vanishing point by a Kalman filtering algorithm from the mean and covariance of the t-th frame's intersection samples computed in steps 101 to 104 and the lane line vanishing point tracked continuously from frame 0 to frame t-1 may specifically include:
firstly, the vanishing point coordinates are modeled as the state of a discrete dynamic system evolving over time; denoting the vanishing point of the t-th frame V_t, the following relation holds with the vanishing point V_{t-1} at the previous moment:
V_t = V_{t-1} + z (6)
wherein z represents the process noise of the system, normally distributed with mean 0 and covariance matrix Q;
secondly, the vanishing point at time t is predicted from the previous vanishing point V_{t-1}:
V̂_t^- = V_{t-1} (7)
and the state-error covariance at time t is predicted from the previous covariance P_{t-1} and the matrix Q:
P_t^- = P_{t-1} + Q (8)
wherein V̂_t^- represents the predicted vanishing point at time t, and Q represents the covariance matrix of the system process noise;
the Kalman gain is calculated according to the following formula:
K_t = P_t^- (P_t^- + Σ)^{-1} (9)
wherein Σ is the sample covariance matrix of step 104;
thirdly, the vanishing point is updated, the calculation formula being:
V̂_t = V̂_t^- + K_t (u - V̂_t^-) (10)
wherein u is the weighted sample mean described in step 104, and V̂_t is the updated vanishing point at time t;
finally, the state-error covariance at time t is updated, the calculation formula being:
P_t = (D - K_t) P_t^- (11)
wherein D is a 2 × 2 identity matrix.
Through the processing flow, the aim of accurately determining the vanishing point of the road line by the edge straight line of the road line can be achieved.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any modification or replacement within the spirit and principle of the present invention should be covered within the scope of the present invention.
Claims (5)
1. A lane line vanishing point tracking method based on Kalman filtering, characterized by comprising the following steps:
step one, detecting edge pixels of the input image, and detecting line segments from the edge pixels by a Hough transform algorithm;
step two, for each detected line segment, taking points on the segment as anchor points, extracting image blocks at multiple scales and offsets, and identifying with a pre-trained classifier whether each extracted image block is a lane-line block;
step three, counting the number of points on each line segment whose image blocks are identified as lane-line blocks; if the count is greater than a preset threshold, adding the segment to a candidate line segment set, the weight of the segment being that count divided by the segment length;
step four, extending each line segment in the candidate set into a straight line, computing the intersection of every pair of non-parallel lines, and, taking the two-dimensional coordinates of each intersection as a sample, computing the weighted mean and covariance matrix of all samples;
step five, with the current frame denoted as the t-th frame, estimating the lane line vanishing point by a Kalman filtering algorithm from the weighted mean and covariance of the current frame's intersection samples computed in steps one to four and the lane line vanishing point tracked continuously from frame 0 to frame t-1, and outputting the estimate as the tracking result of the t-th frame.
2. The Kalman filtering-based lane line vanishing point tracking method according to claim 1, wherein in step two, taking points on a line segment as anchor points, extracting image blocks at multiple scales and offsets, and identifying with a pre-trained classifier whether each extracted image block is a lane-line block, comprises:
let (X, Y) be a point on the line segment; the image block extracted with this point as anchor is I(X - δ_X, Y - δ_Y, W/s, H/s), denoting the rectangular image region with upper-left corner (X - δ_X, Y - δ_Y) and size W/s × H/s, where W and H are the preset reference window width and height respectively, δ_X and δ_Y are the offsets in the horizontal and vertical directions, and s is a preset scale coefficient;
the classifier for identifying whether the extracted image block is a line block is a cascade classifier, wherein each stage is a strong classifier formed by combining a plurality of weak classifiers;
each weak classifier corresponds to one feature and is calculated according to the following formula:
h(x, f, p, θ) = +1 if p·f(x) < p·θ, and -1 otherwise
wherein x is the image block to be detected, p = ±1 controls the direction of the inequality, θ is a threshold, and f is a feature value calculation function;
in the training process, the weighted misclassification loss of each candidate weak classifier is calculated according to the following formula,
ε_t = min_{f,p,θ} Σ_i w_i |h(x_i, f, p, θ) - y_i|
wherein x_i and y_i are a sample and its corresponding label; if x_i is a positive sample then y_i = 1, otherwise y_i = -1; h(·) is the weak classifier; the weak classifier with the smallest misclassification loss is selected as the optimal weak classifier, and such classifiers are combined into a strong classifier.
3. The Kalman filtering-based lane line vanishing point tracking method according to claim 2, wherein the feature corresponding to a weak classifier is a Haar-like feature, calculated as follows: take a rectangular area of the image block x to be detected and divide it into 2, 3 or 4 equally sized sub-regions; if divided into 2 sub-regions (arranged left-right or top-bottom), the feature value is the difference between the accumulated pixel sums of the two sub-regions; if divided into 3 sub-regions (arranged left-middle-right or top-middle-bottom), the feature value is the accumulated pixel sum of the two outer sub-regions minus that of the middle sub-region; if divided into 4 sub-regions (the rectangle halved both horizontally and vertically), the feature value is the accumulated pixel sum of one diagonal pair of sub-regions minus that of the other diagonal pair.
4. The Kalman-filtering-based lane line vanishing point tracking method according to claim 1, wherein, in step four, computing the intersection point of every pair of non-parallel straight lines and, taking the two-dimensional coordinates of the intersection points as samples, computing the weighted mean and covariance matrix of all samples comprises:
for any two line segments L_i and L_j with weights η_i and η_j respectively, the straight lines obtained by extending L_i and L_j have an intersection point, whose weight is η_i + η_j;

let the intersection sample set be {(X_k, Y_k), k = 1, …, N}, wherein (X_k, Y_k) are the coordinates of the k-th intersection point; the corresponding weight set is {η_k, k = 1, …, N}, wherein η_k is the weight of the k-th intersection point and N is the total number of intersection samples; the weighted mean of the samples is u = (u_X, u_Y), wherein

u_X = Σ_k η_k X_k / Σ_k η_k,  u_Y = Σ_k η_k Y_k / Σ_k η_k;

the covariance matrix of the samples is

Σ = [ σ_XX  σ_XY ; σ_XY  σ_YY ]

wherein

σ_XX = Σ_k η_k (X_k − u_X)² / Σ_k η_k,
σ_YY = Σ_k η_k (Y_k − u_Y)² / Σ_k η_k,
σ_XY = Σ_k η_k (X_k − u_X)(Y_k − u_Y) / Σ_k η_k.
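The pairwise-intersection and weighted-statistics computation of claim 4 can be sketched as follows. Illustrative Python under stated assumptions: segments are given as endpoint pairs, and the helper names are invented for the sketch.

```python
import numpy as np

def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through segments (p1,p2) and (p3,p4).
    Returns None for (near-)parallel lines."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-9:
        return None                      # parallel: no intersection sample
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def weighted_mean_cov(points, weights):
    """Weighted mean u and covariance matrix of 2-D intersection samples,
    with each sample weighted as in the claim (eta_i + eta_j per pair)."""
    pts = np.asarray(points, float)
    w = np.asarray(weights, float)
    u = (w[:, None] * pts).sum(0) / w.sum()
    d = pts - u
    cov = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(0) / w.sum()
    return u, cov
```

The mean u and covariance Σ returned here are exactly the measurement and measurement covariance consumed by the Kalman update of claim 5.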
5. The Kalman-filtering-based lane line vanishing point tracking method according to claim 1, wherein the step five of estimating the lane line vanishing point by the Kalman filtering algorithm, according to the mean and covariance of the current frame's intersection coordinate samples computed in step four and the lane line vanishing point continuously tracked from frame 0 to frame t−1, comprises:

first, the coordinates of the lane line vanishing point are modeled as the state of a discrete dynamic system evolving over time; the vanishing point V_t of the t-th frame is related to the vanishing point V_{t−1} at the previous time by

V_t = V_{t−1} + z

wherein z represents the process noise of the system, following a normal distribution with mean 0 and covariance matrix Q;

second, the vanishing point at time t is predicted from the lane line vanishing point V_{t−1} at the previous time, and the state error covariance matrix P_t^- at time t is predicted from the state error covariance matrix P_{t−1} at the previous time and the matrix Q:

V_t^- = V_{t−1}
P_t^- = P_{t−1} + Q

wherein V_t^- represents the predicted lane line vanishing point at time t, and Q represents the covariance matrix of the system process noise;

the Kalman gain is then calculated according to the following formula:

K_t = P_t^- (P_t^- + Σ)^(−1)

wherein Σ is the sample covariance matrix described in step four;

third, the lane line vanishing point is updated according to the following formula:

V_t = V_t^- + K_t (u − V_t^-)

wherein u is the weighted mean of the samples described in step four, and V_t is the updated lane line vanishing point at time t;

finally, the state error covariance matrix at time t is updated according to the following formula:

P_t = (D − K_t) P_t^-

wherein D is the 2×2 identity matrix and P_t is the updated state error covariance matrix at time t.
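The predict/update cycle of claim 5 can be sketched as a single function. Illustrative Python: because the state transition and measurement matrices are both the identity (the measurement is the weighted intersection mean u with covariance Σ), the standard Kalman equations reduce to the forms stated in the claim.

```python
import numpy as np

def kalman_vp_step(V_prev, P_prev, Q, u, Sigma):
    """One Kalman cycle for the vanishing-point tracker.
    V_prev, P_prev: previous state and state error covariance;
    Q: process noise covariance; u, Sigma: measurement mean and covariance."""
    # predict: identity transition, so the state carries over
    V_pred = V_prev
    P_pred = P_prev + Q
    # Kalman gain: K = P^- (P^- + Sigma)^-1
    K = P_pred @ np.linalg.inv(P_pred + Sigma)
    # update state and state error covariance
    V = V_pred + K @ (u - V_pred)
    P = (np.eye(2) - K) @ P_pred
    return V, P
```

With equal prediction and measurement covariances the gain is 0.5·I, so the updated vanishing point lands halfway between the prediction and the measured intersection mean, which matches the intuition of the filter as a confidence-weighted blend.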
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811110435.6A CN109272536B (en) | 2018-09-21 | 2018-09-21 | Lane line vanishing point tracking method based on Kalman filtering |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109272536A true CN109272536A (en) | 2019-01-25 |
CN109272536B CN109272536B (en) | 2021-11-09 |
Family
ID=65198756
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811110435.6A Active CN109272536B (en) | 2018-09-21 | 2018-09-21 | Lane line vanishing point tracking method based on Kalman filtering |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109272536B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111968038A (en) * | 2020-10-23 | 2020-11-20 | 网御安全技术(深圳)有限公司 | Method and system for rapidly searching vanishing points in image |
US11373063B2 (en) * | 2018-12-10 | 2022-06-28 | International Business Machines Corporation | System and method for staged ensemble classification |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103366156A (en) * | 2012-04-09 | 2013-10-23 | 通用汽车环球科技运作有限责任公司 | Road structure detection and tracking |
CN103839264A (en) * | 2014-02-25 | 2014-06-04 | 中国科学院自动化研究所 | Detection method of lane line |
CN104318258A (en) * | 2014-09-29 | 2015-01-28 | 南京邮电大学 | Time domain fuzzy and kalman filter-based lane detection method |
CN106228125A (en) * | 2016-07-15 | 2016-12-14 | 浙江工商大学 | Method for detecting lane lines based on integrated study cascade classifier |
CN106529443A (en) * | 2016-11-03 | 2017-03-22 | 温州大学 | Method for improving detection of lane based on Hough transform |
CN106529415A (en) * | 2016-10-16 | 2017-03-22 | 北海益生源农贸有限责任公司 | Characteristic and model combined road detection method |
CN106682586A (en) * | 2016-12-03 | 2017-05-17 | 北京联合大学 | Method for real-time lane line detection based on vision under complex lighting conditions |
CN107316331A (en) * | 2017-08-02 | 2017-11-03 | 浙江工商大学 | For the vanishing point automatic calibration method of road image |
US20180060669A1 (en) * | 2016-08-30 | 2018-03-01 | Canon Kabushiki Kaisha | Method, system and apparatus for processing an image |
CN107796373A (en) * | 2017-10-09 | 2018-03-13 | 长安大学 | A kind of distance-finding method of the front vehicles monocular vision based on track plane geometry model-driven |
Non-Patent Citations (6)
Title |
---|
JINJIN SHI ET AL.: "Fast and Robust Vanishing Point Detection for Unstructured Road Following", 《IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS》 * |
QING XU ET AL.: "Real-time Rear of Vehicle Detection from a Moving Camera", 《2014 CCDC》 * |
付永春: "Research on Monocular-Vision-Based Lane Line Detection and Tracking for Structured Roads", China Masters' Theses Full-text Database, Information Science and Technology * |
李佳旺: "Research on Forward Vehicle Detection and Ranging Based on Computer Vision", China Masters' Theses Full-text Database, Engineering Science and Technology II * |
陈茜: "Implementation of Lane Line Detection Technology on the Android Platform", China Masters' Theses Full-text Database, Information Science and Technology * |
黄惠迪: "Research and Implementation of a Machine-Vision-Based Driving Safety Early-Warning System", China Masters' Theses Full-text Database, Information Science and Technology * |
Also Published As
Publication number | Publication date |
---|---|
CN109272536B (en) | 2021-11-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Marzougui et al. | A lane tracking method based on progressive probabilistic Hough transform | |
Keller et al. | The benefits of dense stereo for pedestrian detection | |
Alon et al. | Off-road path following using region classification and geometric projection constraints | |
Yang et al. | Robust superpixel tracking | |
Zhou et al. | Efficient road detection and tracking for unmanned aerial vehicle | |
EP3223196B1 (en) | A method and a device for generating a confidence measure for an estimation derived from images captured by a camera mounted on a vehicle | |
US8890951B2 (en) | Clear path detection with patch smoothing approach | |
CN102087703B (en) | The method determining the facial pose in front | |
Kim et al. | A Novel On-Road Vehicle Detection Method Using $\pi $ HOG | |
CN106778712B (en) | Multi-target detection and tracking method | |
Shi et al. | Fast and robust vanishing point detection for unstructured road following | |
Tian et al. | A two-stage character segmentation method for Chinese license plate | |
CN102598057A (en) | Method and system for automatic object detection and subsequent object tracking in accordance with the object shape | |
EP2357614A1 (en) | Method and terminal for detecting and tracking moving object using real-time camera motion estimation | |
US20170032676A1 (en) | System for detecting pedestrians by fusing color and depth information | |
EP2725520A2 (en) | Method and apparatus for detecting road | |
CN115240130A (en) | Pedestrian multi-target tracking method and device and computer readable storage medium | |
CN109284664B (en) | Driver assistance system and guardrail detection method | |
CN110458158B (en) | Text detection and identification method for assisting reading of blind people | |
Morris et al. | Improved vehicle classification in long traffic video by cooperating tracker and classifier modules | |
CN105654516B (en) | Satellite image based on target conspicuousness is to ground weak moving target detection method | |
JP2012088881A (en) | Person motion detection device and program thereof | |
CN107066968A (en) | The vehicle-mounted pedestrian detection method of convergence strategy based on target recognition and tracking | |
CN109272536B (en) | Lane line vanishing point tracking method based on Kalman filtering | |
CN110084830A (en) | A kind of detection of video frequency motion target and tracking |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||