CN106682586A - Method for real-time lane line detection based on vision under complex lighting conditions - Google Patents

Method for real-time lane line detection based on vision under complex lighting conditions

Info

Publication number
CN106682586A
Authority
CN
China
Prior art keywords
illumination
image
lane line
value
lane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611098387.4A
Other languages
Chinese (zh)
Inventor
刘宏哲
袁家政
唐正
李超
赵小艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Union University
Original Assignee
Beijing Union University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Union University filed Critical Beijing Union University
Priority to CN201611098387.4A priority Critical patent/CN106682586A/en
Publication of CN106682586A publication Critical patent/CN106682586A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for real-time, vision-based lane line detection under complex lighting conditions, and belongs to the fields of computer vision and unmanned intelligent driving. During image preprocessing, illumination estimation and illumination color correction are applied to images captured under different lighting so that they are restored to standard white light; noise introduced during image acquisition is removed by Gaussian filtering, and the images are then binarized and subjected to edge extraction, with the original image divided into regions during extraction. An improved Hough transform yields candidate lane lines and a dynamic region of interest (ROI) is built; Hough transform based on the dynamic ROI and Kalman filtering track the lane line in real time, constraining and updating the lane line model. A lane line detection failure judgment module is added to the algorithm to improve detection reliability. The method is fast and robust, achieves good lane line detection under complex lighting conditions, improves a vehicle's ability to recognize lane lines dynamically, and improves the safety of unmanned driving.

Description

Method for detecting lane line in real time based on vision under complex illumination condition
Technical Field
The invention relates to a method for detecting lane lines in real time based on vision under a complex illumination condition, and belongs to the technical field of vehicle autonomous driving and computer-assisted driving.
Background
In recent years, with growing road mileage and the continued development of the automobile industry, traffic safety problems have become increasingly serious: roads carry more and more vehicles, accidents increase year by year, and the casualties and property losses they cause are staggering. To reduce traffic accidents, technological means such as computer-aided driving systems are applied to ensure driving safety. The key problem in realizing such a system is the fast and accurate detection of lane lines from vehicle-mounted video images, so that vehicles can drive according to accurate, real-time road conditions and the safety of vehicles and pedestrians is ensured.
Current lane recognition methods mainly fall into two types: image-feature methods and model-matching methods.
1. The basic idea of image-feature-based methods is to detect lane boundaries or marker lines by the differences between them and the surrounding environment in image features, including shape, texture, continuity, gray scale, and contrast. Donald et al. use the geometric information of lane lines to detect them at high speed by constraining the Hough transform parameters; Lee proposes a departure early-warning system that estimates and predicts the lane line direction from an edge distribution function and changes in the vehicle's motion direction; Mastorakis screens out the most likely marker line using the straight-line characteristics of lane lines; Wang and Hu propose recognizing lane lines using, respectively, the opposite gradient directions on lane lines and the color characteristics of lane line regions. These methods use image segmentation, thresholding, and related techniques and are algorithmically simple, but lane recognition fails under shadow occlusion, lighting changes, noise, or discontinuous lane boundaries or marker lines.
2. Model-matching-based methods target the strong geometric characteristics of structured roads and model lane lines with two- or three-dimensional curves; common two-dimensional lane models include the straight-line model and the parabolic model. The B-Snake lane model, after initial positioning, converts lane line detection into determining the control points of a spline curve from a road model. Combining the Hough transform with a parabolic model, primary road-marking parameters are first obtained with a linear model and the lane line is then detected with a hyperbolic model on that basis, giving good detection results. Mechat models lane lines with an SVM-based method and estimates and tracks them with a standard Kalman filter. These methods determine model parameters by analyzing target information in the image on top of an established road parameter model and are insensitive to road surface conditions, but their high computational complexity makes them expensive in time.
Therefore, in practical research, image-feature methods and road-model-matching methods are combined to regularize the lane identification problem.
Disclosure of Invention
The invention addresses the problems that existing lane line detection techniques have a low recognition rate under complex lighting and do not preprocess the image well enough to correct a distorted image to standard white light, and that existing algorithms are complex, inefficient, and poor in real-time performance. It provides a method for real-time, vision-based lane line detection under complex illumination conditions: the image is illumination-corrected to standard white light, and lane line detection and direction judgment are carried out using lane line pixel information. The algorithm has good real-time performance and detects lane lines efficiently.
To achieve the above object, the inventors provide an illumination preprocessing method and a lane line detection method comprising the following steps. During image preprocessing, illumination estimation and illumination color correction are performed on images captured under different illumination so that they are restored to standard white light. Gaussian filtering removes noise introduced during image acquisition; the image is binarized and edges are extracted, with the original image divided into regions during extraction; an improved Hough transform yields candidate lane lines and a dynamic region of interest (ROI) is established; Hough transform based on the dynamic ROI constrains and updates the lane line model while Kalman filtering tracks the lane line in real time; and a lane line detection failure judgment module is added to the algorithm to improve detection reliability.
On a structured road, lane line information is concentrated in the middle and lower part of the image, since camera mounting varies between installations and the vehicle hood may appear in the image.
The method comprises the following steps. The image is down-sampled and a region of interest (ROI) is set: because adjacent images in a video stream are highly correlated, most image information is useless for lane line detection, and restricting processing to an ROI that is useful for lane line detection reduces the computational load of the algorithm and simplifies lane line identification. On a structured road, the useful lane line information is concentrated in the middle-lower part of the image, i.e., the region of interest, since camera mounting varies and the vehicle hood may occupy the bottom of the image (0-0.1 H_image). W_image denotes the width of the image and H_image its height. The effective detection area of the image can thus be reduced.
The lane detection method preprocesses the image of the region of interest, i.e., corrects its colors: first a region-of-interest image ψ is obtained from an image acquisition device such as a monitoring camera, and color correction is applied to ψ to obtain a corrected image ψ1.
The method comprises the following specific steps:
The purpose of illumination estimation is to correct an image taken under unknown illumination to an image under standard white light. Briefly, the illumination color at imaging time is estimated first, and the image is then mapped to standard white light using a Von Kries model, which yields a good white-balance effect. The procedure divides into the following steps:
(1) Sample block extraction. Sample blocks are first extracted from the image, and for each image sample block the effective illumination falling on that block is estimated.
(2) Illumination estimation using existing single-illuminant algorithms. Based on the Grey-Edge color constancy framework, varying the parameters systematically generates several different color constancy feature extraction methods.
(3) Clustering of the sample blocks' illumination estimates. Image blocks under the same illumination are clustered together into a larger block to produce a more accurate illumination estimate, since blocks under the same illumination cluster more readily into the same cluster. All illumination estimates are thus clustered into M classes, where M is the number of illuminants in the scene.
(4) After the block-based illumination estimates are clustered into M classes, the clustering result is mapped back to the original image; pixels belonging to the same sample block belong to the same cluster, so the area covered by each illuminant is obtained. This yields an illumination map in which each pixel belongs to one of the M illuminants. Through this backward mapping, each pixel's illumination estimate and the cluster-center value of its illumination class are obtained.
(5) For areas where illuminants overlap, a Gaussian filter is applied to the classification result of the back-mapped illumination estimates.
(6) Color correction. Using each pixel's illumination estimate, the input image is corrected to standard illumination, producing an output image under standard illumination and removing the influence of scene illumination. The most commonly used diagonal model is applied to correct the image.
In the image color correction method, step (1) assumes each image sample block is 5 × 5 pixels and satisfies the condition that the illumination falling on the block is uniformly distributed (only one color of light falls on the block).
The image-illumination-estimation color correction method selects sample blocks of equal size satisfying the following conditions: each sample block is 5 × 5 pixels and contains illumination color information sufficient to accurately estimate the nature of the illumination falling on it.
Based on the Grey-Edge color constancy framework, varying the parameters n, q, and σ (n is the derivative order, q the Minkowski norm, and σ the kernel size of the Gaussian filter) systematically generates different color constancy feature extraction methods, denoted e^{n,q,σ}:

e^{n,q,σ} = k ( ∫ | ∂^n f_σ(x) / ∂x^n |^q dx )^{1/q}

where f(x) is the image value at point x, f_σ is its Gaussian-smoothed version, and reflectance lies in the range [0,1] (0 means no reflection, 1 means total reflection).
Under this framework, the image is divided into a number of sample blocks. Each sample block is assumed to be 5 × 5 pixels and to satisfy the assumption that illumination is uniformly distributed within it. On each sample block, the illumination is estimated with a common single-illuminant color constancy algorithm.
The image color correction method considers the following five representative instantiations: the candidate color constancy set is {e^{0,1,0}, e^{0,∞,0}, e^{0,∞,1}, e^{1,1,1}, e^{2,1,1}} (e^{0,1,0} is Grey-World, e^{0,∞,0} is White-Patch, and e^{1,1,1} and e^{2,1,1} are first- and second-order Grey-Edge). Each sample block is characterized by the illumination estimate of a selected color constancy algorithm.
In the color correction method, the feature vector of a sample block is first F' = [R, G, B], where R, G, B are the color channels of the image; using normalized illumination estimates r = R/(R+G+B) and g = G/(R+G+B), the feature vector is converted to F = [r, g], a 1 × 2 vector.
After the illumination estimates of all sample blocks are clustered in the chromaticity space they form, the Euclidean distance from the illumination estimate of the jth sample block to the ith cluster center is computed and denoted d_i; d_k denotes the distance to the kth cluster center, k ∈ [0, M]; and Z is the total number of sample blocks. The probability p_{j,i} that sample block j lies in the ith illumination area is then computed from these distances.
The coverage-area probability of the ith illuminant is obtained by accumulating p_{j,i} over all sample blocks, where p_{j,i} is the probability that the jth block is lit by the ith illuminant and Z is the total number of sample blocks in the input image.
To obtain a smooth and continuous illumination distribution, the probability map of each illumination coverage area is filtered. Two filters are used: a Gaussian filter, which takes spatial position into account when computing the per-pixel probability of each estimated illumination range, and a median filter, which preserves edge information well and is used for scenes with sharp illumination changes.
In the color correction method, the illumination estimate of each image pixel is computed as

I_e(x) = Σ_{i=1}^{M} m_i(x) · I_{e,i}

where I_e is the illumination estimate over the scene, I_{e,i} is the estimate of the ith illuminant, m_i(x) is the contribution of the ith illuminant to the pixel at x, and Z denotes the total number of sample blocks. A larger m_i(x) means the ith illuminant strongly affects this pixel; in particular, m_i(x) = 1 means the pixel is entirely under the ith illuminant. The illumination coverage probability map has the same size as the input image.
After the illumination estimate of each pixel is obtained, the image is corrected pixel by pixel according to the diagonal model, where f_u(x) is the pixel value at x under the unknown illuminant, f_c(x) is the pixel value it exhibits under standard illumination after correction, and Λ_{u,c}(x) is the mapping matrix from the unknown illuminant to the standard illuminant at x, as shown in the following formula: f_c(x) = Λ_{u,c}(x) f_u(x).
The diagonal correction model is

Λ_{u,c}(x) = diag( I_R^m / I_R^e(x), I_G^m / I_G^e(x), I_B^m / I_B^e(x) )

where I_R^e(x) is the illumination value estimated for the R channel at a point x in the image space and I_R^m is the corresponding measured (standard) illumination value; each diagonal entry thus compares the measured illumination value of its channel at that point with the estimated value (and likewise for the G and B channels). Λ_{u,c}(x) is the mapping matrix from the unknown illuminant to the standard illuminant at x.
Preprocessing the image of the region of interest also includes graying the color-corrected image, as shown below: Gray = R·0.299 + G·0.587 + B·0.114, where R, G, B are the red, green, and blue channel component values and Gray is the gray value of the converted pixel. On lane lines we mainly want to preserve white and yellow information, so the weight of the B channel component is reduced within the tolerance of lane line extraction error. The gray conversion used is therefore: Gray = R·0.5 + G·0.5.
Selection of the lane line model: most road sections are straight, and the error incurred by using a straight-line model as the lane line model is only about 3 mm. The method therefore adopts a straight-line model for the lane line.
The method extracts lane line edges from the gray image. In a real road environment, lane lines generally have higher luminance than the surrounding road surface, so after graying their gray values are higher. Scanning the gray image row by row, the lane line segment has higher values than the regions on either side of it, forming a peak that rises and then falls from left to right. These characteristics are used to determine lane line edges by computing the variation between adjacent image pixels.
The lane line detection method is based on an improved Hough transform. Hough-transform line detection is highly robust to noise and can link broken edges, making it particularly suitable for detecting discontinuous lane markings. By the duality of image space and Hough parameter space, each feature point in the image is mapped to several cells of an accumulator array in parameter space; counting each cell and detecting extrema determines whether a straight line exists and yields its parameters.
The classical Hough transform maps each point of image space to polar coordinates and then performs voting statistics. The finer ρ and θ_p are quantized, the higher the detection accuracy; coarser quantization yields inaccurate results. To handle the infinite slope of vertical lines, the Hough transform is usually computed with the linear-polar equation ρ = x·cos θ_p + y·sin θ_p. To reduce computational complexity and improve efficiency, conditional constraints are imposed on the classical Hough transform, making it better suited to lane line detection.
The detected lane lines must be constrained by an inter-frame association constraint. In practical acquisition systems and most intelligent vehicle systems, the video stream comes directly from a vehicle-mounted camera, and adjacent frames are highly redundant. Vehicle motion is continuous in time and space; because the camera's sampling rate is high (about 100 fps), the vehicle moves only a short distance within one frame period, the road scene changes very little, and the lane line position changes slowly between consecutive frames, so the previous frame provides strong lane line position information for the next. To improve the stability and accuracy of the lane line identification algorithm, an inter-frame association constraint is introduced.
The method proceeds as follows. Suppose m_l lane lines are detected in the current frame, denoted by the set L_l = {L_1, L_2, …, L_m}; n_l lane lines are stored from history frames, denoted E_l = {E_1, E_2, …, E_n}; and the inter-frame association constraint filter is denoted K_l = {K_1, K_2, …, K_n}.
First, a C_l = m_l × n_l matrix is built, whose element c_ij represents the distance Δd_ij between the ith line L_i in the current frame and the jth line E_j in the history frame, where Δd_ij is computed from the two endpoints A and B of line L_i and T_l is the association distance threshold.
Then, in row i of matrix C_l, the number e_i of entries with Δd_ij < T_l is counted. If e_i < 1, the current lane line matches no previous-frame lane line, is treated as a brand-new lane line, and the history-frame information used by the next frame's inter-frame association constraint is updated.
If e_i = 1, the current-frame lane line L_i and the history-frame lane line E_j are taken to be the same lane line in consecutive frames. If e_i > 1, a vector V_i records the positions in row i that satisfy the condition; among its non-zero elements, the column j with the smallest value is selected, i.e., (Δd_ij)_min = min{V_j} (V_j ≠ 0).
The current-frame lane line L_i and the history-frame lane line E_j achieving this minimum are then the same lane line in consecutive frames. If a lane line detected in the current frame satisfies the inter-frame association constraint, it is considered the same lane line in both frames and its current position is displayed; otherwise the currently detected lane line is discarded. If the accumulated number of inter-frame association matches exceeds T_α (T_α = 3), the history-frame lane line parameters are updated. A sketch of this matching step follows.
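As an illustration, the association step builds the m_l × n_l distance matrix and flags matches below the threshold T_l. Because the patent's Δd_ij formula is garbled in the text, the mean endpoint-to-line distance used in this Python sketch is an assumption:

```python
import numpy as np

def associate_lines(current, history, T_l=10.0):
    """Build the m_l x n_l distance matrix C_l between current-frame lines and
    history-frame lines and flag pairs closer than the threshold T_l.
    Each line is given as ((x1, y1), (x2, y2)).
    """
    def point_line_dist(p, a, b):
        (ax, ay), (bx, by), (px, py) = a, b, p
        num = abs((bx - ax) * (py - ay) - (by - ay) * (px - ax))
        return num / (np.hypot(bx - ax, by - ay) + 1e-8)

    C = np.zeros((len(current), len(history)))
    for i, (A, B) in enumerate(current):
        for j, (Ea, Eb) in enumerate(history):
            # mean distance of both endpoints A, B of L_i to history line E_j
            C[i, j] = 0.5 * (point_line_dist(A, Ea, Eb) + point_line_dist(B, Ea, Eb))
    return C, C < T_l
```

Counting the flagged entries per row of the boolean matrix gives e_i, from which the new-line, unique-match, and multiple-match cases above follow directly.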
Lane line tracking is based on Kalman filtering: on a structured road, the lane line positions in two consecutive frames differ little, so the correlation of lane line positions between adjacent frames lets information from the previous frame guide detection in the next frame, achieving real-time lane line tracking.
Failure judgment: when the road is severely disturbed, for example when vehicles or other objects occlude the lane markings, the road curves, or the vehicle changes lanes, the algorithm can produce large errors or even fail. A failure discrimination mechanism is therefore added to the detection: once the constraint algorithm fails, correct identification of the road markings can be recovered in time.
Drawings
FIG. 1 is a flowchart of the lane detection method according to an embodiment of the present invention;
FIG. 2 is a flowchart of the method for correcting image color using illumination estimation according to an embodiment of the present invention;
FIG. 3 is the lane line model according to an embodiment of the present invention;
FIG. 4 is the region of interest according to an embodiment of the present invention;
FIG. 5 is an edge detection diagram according to an embodiment of the present invention;
FIG. 6 shows the lane line filtering effect according to an embodiment of the present invention;
FIG. 7 shows a lane marking detection result on a soiled road surface according to an embodiment of the present invention;
FIG. 8 shows a lane marking detection result with oncoming headlights in foggy weather according to an embodiment of the present invention;
FIG. 9 shows a lane marking detection result under interference from common pavement markers according to an embodiment of the present invention;
FIG. 10 shows a lane marking detection result while driving at night according to an embodiment of the present invention.
Detailed Description
To explain in detail the technical content and structural features of the solution and the objects and effects it achieves, a detailed description is given below with reference to the accompanying drawings.
First, general idea
To improve the real-time performance and reliability of lane line identification, a real-time vision-based lane line detection algorithm for complex illumination conditions is proposed. The original image is divided into regions during extraction; in preprocessing, illumination estimation and illumination color correction are applied to images under different illumination so that they are restored to standard white light. Gaussian filtering removes noise introduced during image acquisition; the image is binarized and edges are extracted; an improved Hough transform yields candidate lane lines and a dynamic ROI is established; Hough transform based on the dynamic ROI constrains and updates the lane line model; and a lane line detection failure judgment module is added to the algorithm to improve detection reliability. See FIG. 1.
Secondly, determining the region of interest
Since adjacent images in a video stream are highly correlated, most image information is useless for lane line detection; finding a region of interest useful for lane line detection reduces the computational load of the algorithm and simplifies lane line identification, as shown in FIG. 4.
On a structured road, the useful lane line information is concentrated in the middle-lower part of the image, i.e., the region of interest, since camera mounting varies and the vehicle hood may appear in the image. W_image denotes the width of the image and H_image its height. The effective detection area of the image can thus be reduced.
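As an illustration, the ROI crop is only a few lines of code. The band fractions below are assumptions for illustration, since apart from the 0-0.1·H_image hood region the patent does not fix numeric bounds:

```python
def extract_roi(frame, top_frac=0.5, hood_frac=0.1):
    """Keep the middle-lower band of the frame where lane lines concentrate.

    top_frac and hood_frac are illustrative assumptions; the patent only
    notes that the bottom 0-0.1*H_image may contain the vehicle hood.
    frame is an H x W (x C) NumPy array, e.g. from cv2.imread().
    """
    h_image, w_image = frame.shape[:2]
    top = int(h_image * top_frac)               # skip sky and far background
    bottom = int(h_image * (1.0 - hood_frac))   # skip the vehicle hood
    return frame[top:bottom, 0:w_image]
```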
Thirdly, the image of the region of interest is preprocessed, i.e., color-corrected, as follows: first a region-of-interest image ψ is obtained from an image acquisition device such as a monitoring camera, and color correction is applied to ψ to obtain a corrected image ψ1. As shown in FIG. 2, the specific steps are as follows:
The purpose of illumination estimation is to correct an image taken under unknown illumination to an image under standard white light. Briefly, the illumination color at imaging time is estimated first, and the image is then mapped to standard white light using a Von Kries model, which yields a good white-balance effect. The procedure divides into the following steps:
(1) Sample block extraction. Sample blocks are first extracted from the image, and for each image sample block the effective illumination falling on that block is estimated.
(2) Illumination estimation using existing single-illuminant algorithms. Based on the Grey-Edge color constancy framework, varying the parameters systematically generates several different color constancy feature extraction methods.
(3) Clustering of the sample blocks' illumination estimates. Image blocks under the same illumination are clustered together into a larger block to produce a more accurate illumination estimate, since blocks under the same illumination cluster more readily into the same cluster. All illumination estimates are thus clustered into M classes, where M is the number of illuminants in the scene.
(4) After the block-based illumination estimates are clustered into M classes, the clustering result is mapped back to the original image; pixels belonging to the same sample block belong to the same cluster, so the area covered by each illuminant is obtained. This yields an illumination map in which each pixel belongs to one of the M illuminants. Through this backward mapping, each pixel's illumination estimate and the cluster-center value of its illumination class are obtained.
(5) For areas where illuminants overlap, a Gaussian filter is applied to the classification result of the back-mapped illumination estimates.
(6) Color correction. Using each pixel's illumination estimate, the input image is corrected to standard illumination, producing an output image under standard illumination and removing the influence of scene illumination. The most commonly used diagonal model is applied to correct the image.
In the image color correction method, step (1) assumes each image sample block is 5 × 5 pixels and satisfies the condition that the illumination falling on the block is uniformly distributed (only one color of light falls on the block).
The sample blocks selected for image illumination estimation are of the same size and satisfy the following conditions: each sample block is 5 × 5 pixels and contains illumination color information sufficient to accurately estimate the nature of the illumination falling on it.
Based on the Grey-Edge color constancy framework, varying the parameters n, q, and σ (n is the derivative order, q the Minkowski norm, and σ the kernel size of the Gaussian filter) systematically generates different color constancy feature extraction methods, denoted e^{n,q,σ}:

e^{n,q,σ} = k ( ∫ | ∂^n f_σ(x) / ∂x^n |^q dx )^{1/q}

where f(x) is the image value at point x, f_σ is its Gaussian-smoothed version, and reflectance lies in the range [0,1] (0 means no reflection, 1 means total reflection).
Under this framework, the image is divided into a number of sample blocks. Each sample block is assumed to be 5 × 5 pixels and to satisfy the assumption that illumination is uniformly distributed within it. On each sample block, the illumination is estimated with a common single-illuminant color constancy algorithm.
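A compact sketch of the framework follows. It instantiates e^{n,q,σ} per channel; the per-axis gradient-magnitude approximation of the nth derivative and the use of a mean instead of an integral are assumptions of this sketch, not the patent's own code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def grey_edge_estimate(img, n=0, q=1.0, sigma=0.0):
    """Illuminant estimate e^{n,q,sigma} per the Grey-Edge framework.

    img: float RGB array scaled to [0, 1]; n: derivative order; q: Minkowski
    norm (np.inf gives the max); sigma: Gaussian scale. For example,
    (n, q, sigma) = (0, 1, 0) is Grey-World, (0, inf, 0) is White-Patch,
    and (1, 1, 1) is first-order Grey-Edge.
    """
    est = np.zeros(3)
    for c in range(3):
        ch = img[..., c].astype(float)
        if sigma > 0:
            ch = gaussian_filter(ch, sigma)
        for _ in range(n):
            gy, gx = np.gradient(ch)      # derivative magnitude, applied n times
            ch = np.hypot(gx, gy)
        ch = np.abs(ch)
        if np.isinf(q):
            est[c] = ch.max()
        else:
            est[c] = (ch ** q).mean() ** (1.0 / q)
    norm = np.linalg.norm(est)
    return est / norm if norm > 0 else est
```

Running this on each 5 × 5 sample block yields the per-block illumination estimate used in the clustering step below.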
The image color correction method considers the following five representative instantiations: the candidate color constancy set is {e^{0,1,0}, e^{0,∞,0}, e^{0,∞,1}, e^{1,1,1}, e^{2,1,1}}. Each sample block is characterized by the illumination estimate of a selected color constancy algorithm.
In the color correction method, the feature vector of a sample block is first F' = [R, G, B], where R, G, B are the color channels of the image; using normalized illumination estimates r = R/(R+G+B) and g = G/(R+G+B), the feature vector is converted to F = [r, g], a 1 × 2 vector.
in the method for correcting the image color by utilizing the image illumination estimation, after the illumination estimation value of each sample block is clustered in the chromaticity space formed by the illumination estimation values, the distance from the illumination estimation value of the jth sample block to the ith clustering center can be calculated by using the Euclidean distance which is diIt is shown that,
dkrepresents k [0, M]Distance of cluster center of the kth sample block, Z is the total sample block, then probability p that the sample block is located in the ith illumination areaj,iThe following calculations were made:
probability of coverage area of ith illuminationWherein p isj,iRepresenting the probability that the jth block is illuminated by the ith illumination and p is the total number of sample blocks in the input image.
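The clustering and soft-assignment steps can be sketched as below. Because the patent's p_{j,i} formula appears only as an image, the inverse-distance normalisation and the use of k-means are assumptions of this sketch:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def soft_illumination_assignment(block_estimates, M, eps=1e-8):
    """Cluster per-block illuminant estimates into M classes and return the
    soft membership probability p[j, i] of block j under illuminant i.
    block_estimates: Z x 2 array of normalized (r, g) estimates.
    """
    centers, _ = kmeans2(block_estimates, M, minit='++', seed=0)
    # Euclidean distances from each block estimate to each cluster centre
    d = np.linalg.norm(block_estimates[:, None, :] - centers[None, :, :], axis=2)
    inv = 1.0 / (d + eps)
    p = inv / inv.sum(axis=1, keepdims=True)  # rows sum to 1 over M illuminants
    return centers, p
```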
To obtain a smooth and continuous illumination distribution, the probability map of each illumination coverage area is filtered. Two filters are used: a Gaussian filter, which takes spatial position into account when computing the per-pixel probability of each estimated illumination range, and a median filter, which preserves edge information well and is used for scenes with sharp illumination changes.
The illumination estimate of each image pixel is computed as

I_e(x) = Σ_{i=1}^{M} m_i(x) · I_{e,i}

where I_e is the illumination estimate over the scene, I_{e,i} is the estimate of the ith illuminant, m_i(x) is the contribution of the ith illuminant to the pixel at x, and Z denotes the total number of sample blocks.
A larger m_i(x) means the ith illuminant strongly affects this pixel; in particular, m_i(x) = 1 means the pixel is entirely under the ith illuminant. The illumination coverage probability map has the same size as the input image.
After the illumination estimate of each pixel is obtained, the image is corrected pixel by pixel according to the diagonal model, where f_u(x) is the pixel value at x under the unknown illuminant and f_c(x) is the pixel value it exhibits under standard illumination after correction.
Λ_{u,c}(x) is the mapping matrix from the unknown illuminant to the standard illuminant at x, as shown in the following formula: f_c(x) = Λ_{u,c}(x) f_u(x).
The diagonal correction model is

Λ_{u,c}(x) = diag( I_R^m / I_R^e(x), I_G^m / I_G^e(x), I_B^m / I_B^e(x) )

where I_R^e(x) is the illumination value estimated for the R channel at a point x in the image space and I_R^m is the corresponding measured (standard) illumination value; each diagonal entry thus compares the measured illumination value of its channel at that point with the estimated value (and likewise for the G and B channels). Λ_{u,c}(x) is the mapping matrix from the unknown illuminant to the standard illuminant at x.
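A per-pixel von Kries correction following the diagonal model might look like this sketch; the uniform-white canonical illuminant is an assumption:

```python
import numpy as np

def diagonal_correct(img, illum_map, canonical=(1.0, 1.0, 1.0)):
    """Apply f_c(x) = Lambda_{u,c}(x) f_u(x) pixel by pixel.

    img: float RGB image in [0, 1]; illum_map: H x W x 3 per-pixel illuminant
    estimate I_e(x), e.g. the m_i-weighted blend of the M cluster estimates;
    canonical: the standard illuminant, here uniform white by assumption.
    """
    scale = np.asarray(canonical, dtype=float) / np.clip(illum_map, 1e-8, None)
    return np.clip(img * scale, 0.0, 1.0)
```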
Thirdly, preprocessing the image of the region of interest also includes graying the color-corrected image.
As shown below: Gray = R·0.299 + G·0.587 + B·0.114, where R, G, B are the red, green, and blue channel component values and Gray is the gray value of the converted pixel. On lane lines we mainly want to preserve white and yellow information, so the weight of the B channel component is reduced within the tolerance of lane line extraction error. The gray conversion used is therefore: Gray = R·0.5 + G·0.5.
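The modified conversion is a one-liner; channel order is assumed RGB (swap indices for OpenCV's BGR):

```python
import numpy as np

def to_gray(img_rgb):
    """Gray = 0.5*R + 0.5*G: the B channel is dropped so that white and
    yellow lane paint stays bright after conversion."""
    r = img_rgb[..., 0].astype(np.float32)
    g = img_rgb[..., 1].astype(np.float32)
    return (0.5 * r + 0.5 * g).astype(np.uint8)
```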
Fourth, lane line model
The lane line model, shown in FIG. 3, is characterized as follows: most road sections are straight, and the error incurred by using a straight-line model as the lane line model is only about 3 mm. The method therefore adopts a straight-line model for the lane line.
Here (x1, y1), (x2, y2), (x3, y3), (x4, y4) are coordinates on the lane lines, p is the lateral offset of the line from the central perpendicular, and d is the distance of the line's vanishing point from the lower edge. The slope of a lane line is k = (y2 - y1)/(x2 - x1), its angle is θ = arctan k, and its intercept is b_τ = y - kx.
The method extracts lane line edges from the gray image. In a real road environment, lane lines generally have higher luminance than the surrounding road surface, so after graying their gray values are higher. Scanning the gray image row by row, the lane line segment has higher values than the regions on either side of it, forming a peak that rises and then falls from left to right. These characteristics are used to determine lane line edges by computing the variation between adjacent image pixels.
The method comprises the following specific steps:
Let a point be (x, y) with y ∈ [0, H_image) and x ∈ [2, W_image - 2), where x and y are the column and row of the pixel, W_image is the width of the image, and H_image its height.
Step 1: compute the mean f̄(x, y) of the pixels on the horizontal line around (x, y), taken over a window whose width is set by t, where t ∈ {1, 3, 5, 7, …}; t = 5 gives good results.
Step 2: compute the edge extraction threshold T.
Step 3: compute the rising edge point e_p and the falling edge point e_v:
e_p ∈ {f(x+2, y) - f(x, y) > T}
e_v ∈ {f(x+2, y) - f(x, y) < -T}
Step 4: rising and falling change points of a lane line appear in pairs in the image and satisfy a certain spacing. Compare the width between the rising and falling change points and eliminate pairs that do not satisfy it: Δw = e_p(x) - e_v(x).
If Δw > W_max, the pair cannot be a lane line and is discarded, where e_p(x) and e_v(x) are the column coordinates of the rising and falling change points and W_max is the maximum number of pixels a lane line occupies in the image. A row-scan sketch of these steps follows.
Fifth, edge extraction
The lane line detection method is based on an improved Hough transform. Hough-transform line detection is highly robust to noise and can link broken edges, making it particularly suitable for detecting discontinuous lane markings. By the duality of image space and Hough parameter space, each feature point in the image is mapped to several cells of an accumulator array in parameter space; counting each cell and detecting extrema determines whether a straight line exists and yields its parameters.
The classical Hough transform maps each point of image space to polar coordinates and then performs voting statistics. The finer ρ and θ_p are quantized, the higher the detection accuracy; coarser quantization yields inaccurate results. To handle the infinite slope of vertical lines, the Hough transform is usually computed with the linear-polar equation ρ = x·cos θ_p + y·sin θ_p. To reduce computational complexity and improve efficiency, conditional constraints are imposed on the classical Hough transform, making it better suited to lane line detection, as shown in FIG. 5.
Given the distance error limit d_h for the neighborhood of a line, a set of Hough transform parameters, and a mean-error threshold ε_h, the improved Hough transform proceeds as follows:
Step 1: under the given parameters, perform a probabilistic Hough transform on the lane line features to obtain straight lines.
Step 2: for each line detected by the Hough transform, find, among all feature points in the set S, the points whose distance to the line is no more than d_h; these form the set E_h.
Step 3: fit the regression line parameters k_h and b_h of the set E_h by least squares, along with the mean square error e_h.
Step 4: for every feature point (x_i, y_i) in E_h, the points satisfying k_h·x_i + b_h > y_i form the subset E_pos, and the points satisfying k_h·x_i + b_h < y_i form the subset E_neg.
Step 5: in the sets E_pos and E_neg, find the points with the largest error, P_p and P_n, where d_h(P) denotes the distance of point P from the regression line.
Step 6: remove points P_p and P_n, update the sets E_pos, E_neg, and E_h, and repeat from Step 3 until the error e_h is less than ε_h.
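Steps 3-6 amount to an iteratively trimmed least-squares fit. A sketch follows, assuming the feature points within d_h of the Hough line have already been gathered into E_h; the iteration cap is an added safeguard:

```python
import numpy as np

def refine_line(points, eps_h=1.0, max_iter=50):
    """Fit y = k_h*x + b_h to the point set E_h, then repeatedly drop the
    worst point above (P_n) and below (P_p) the regression line until the
    mean square error e_h falls below eps_h.
    """
    pts = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        x, y = pts[:, 0], pts[:, 1]
        k_h, b_h = np.polyfit(x, y, 1)
        resid = y - (k_h * x + b_h)
        e_h = float(np.mean(resid ** 2))
        if e_h < eps_h or len(pts) <= 2:
            break
        keep = np.ones(len(pts), dtype=bool)
        below = np.where(resid < 0)[0]          # E_pos: k_h*x + b_h > y
        above = np.where(resid > 0)[0]          # E_neg: k_h*x + b_h < y
        if below.size:
            keep[below[np.argmin(resid[below])]] = False   # remove P_p
        if above.size:
            keep[above[np.argmax(resid[above])]] = False   # remove P_n
        pts = pts[keep]
    return k_h, b_h, e_h
```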
Sixthly, carrying out constraint on lane lines, namely inter-frame association constraint
In practical acquisition systems and most intelligent vehicle systems, the video stream comes directly from a vehicle-mounted camera, and adjacent frames are highly redundant. Vehicle motion is continuous in time and space; because the camera's sampling rate is high (about 100 fps), the vehicle moves only a short distance within one frame period, the road scene changes very little, and the lane line position changes slowly between consecutive frames, so the previous frame provides strong lane line position information for the next. To improve the stability and accuracy of the lane line identification algorithm, an inter-frame association constraint is introduced.
The designed inter-frame smoothing model is Line = Σ_{i=-z+1}^{0} ω_i · l_i, where Line is the accepted detection result of the current frame, ω_i are weights with values in (0, 1), l_i is the intra-frame detection result of the ith frame, and z is the number of associated frames. The accepted detection result of the current frame is thus obtained by weighting the intra-frame detection results of the current frame and the previous z - 1 frames. From this model an inter-frame detection algorithm is obtained.
An inter-frame buffer is set; with buffer size z, it stores the intra-frame detection results of the current frame and the previous z - 1 frames. By this property, increasing z improves the detection accuracy of the current frame and reduces the false-detection and missed-detection rates. If z is too large, however, the accepted detection no longer represents the true information of the current frame, detection fails, the failure routine interrupts the algorithm, and the program restarts. The size of z therefore directly affects the accuracy of the current frame's detected lane line.
When z = 1, the result is equivalent to intra-frame detection and inter-frame smoothing loses its meaning. When z = 15, road conditions 14 frames in the past affect the current result, and the larger buffer slows the algorithm and degrades the inter-frame smoothing performance. Experimental analysis shows the CPU takes about 40 ms per image, i.e., 25 frames per second, so a value z ∈ [1, 25] optimizes the detection effect; the parameter is set adaptively, and the weights ω_i in the inter-frame smoothing model are related to the noise threshold R_th. The weights satisfy: ω_{-z+1} ≤ ω_{-z+2} ≤ … ≤ ω_{-1} ≤ ω_0.
The noise threshold R_th gives the rejection criterion: the ratio of the total weighted sum of the tth lane line feature over the z frames to the total frame count must exceed R_th; otherwise the line is treated as a noise lane line.
R_th is computed from a correction factor c with 0.2 < c < 0.3 (chosen to preserve sharp edges and image details), N_c, the number of pixels in the image, and η, the variance of the noise.
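The buffer logic of the smoothing model can be sketched as follows; the linear weight ramp and the normalisation by the weight sum are assumptions chosen to respect ω_{-z+1} ≤ … ≤ ω_0:

```python
from collections import deque
import numpy as np

class InterFrameSmoother:
    """Weighted inter-frame smoothing of lane line parameters over a buffer
    of z frames (the current frame plus the previous z - 1)."""

    def __init__(self, z=5):
        self.buf = deque(maxlen=z)
        self.weights = np.linspace(0.5, 1.0, z)   # older -> newer, illustrative

    def update(self, line_params):
        """Push the current intra-frame result l_0 and return the accepted
        result Line as the normalized weighted sum over the buffer."""
        self.buf.append(np.asarray(line_params, dtype=float))
        w = self.weights[-len(self.buf):]
        stacked = np.stack(self.buf)              # rows l_{-z+1} ... l_0
        return (w[:, None] * stacked).sum(axis=0) / w.sum()
```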
Seventh, based on Kalman filtering lane line tracking
For a structured road, the lane line positions in two consecutive frames differ little, so the correlation of lane line positions between adjacent frames lets information from the previous frame guide detection in the next frame, achieving real-time lane line tracking, as shown in FIG. 6.
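One way to realise the tracker is a constant-velocity Kalman filter over the (ρ, θ_p) line parameters; the state layout and noise covariances below are assumptions, not values given by the patent:

```python
import cv2
import numpy as np

def make_lane_kalman():
    """Kalman filter with state [rho, theta, d_rho, d_theta] and measurement
    (rho, theta); the prediction can bound the dynamic ROI for the next frame."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3      # assumed
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed
    return kf

# per frame: pred = kf.predict() gates the dynamic ROI, then
# kf.correct(np.array([[rho], [theta]], np.float32)) with the detected line
```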
Failure judgment: when the road is severely disturbed, for example when vehicles or other objects occlude the lane markings, the road curves, or the vehicle changes lanes, the algorithm can produce large errors or even fail. A failure discrimination mechanism is therefore added to the detection: once the constraint algorithm fails, correct identification of the road markings can be recovered in time. If the detected lane line parameters satisfy any one of the following conditions, the algorithm is judged to have failed, the routine is interrupted, and the program is executed again.
(1) In the dynamic region of interest, the number of the straight lines detected by Hough transformation is zero.
(2) The number of frames that do not satisfy the lane line constraint exceeds T_β (T_β = 5).
(3) The lane line parameters detected in the current frame change abruptly relative to the previous frame: the slope change should not exceed 10 degrees and the intercept change should not exceed 15 pixels.
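The three conditions reduce to a small predicate:

```python
def detection_failed(num_hough_lines, unmatched_frames,
                     d_slope_deg, d_intercept_px, T_beta=5):
    """Failure judgment per the conditions above: no Hough line in the
    dynamic ROI, more than T_beta frames violating the lane line constraint,
    or an abrupt jump in the line parameters between frames."""
    return (num_hough_lines == 0
            or unmatched_frames > T_beta
            or d_slope_deg > 10
            or d_intercept_px > 15)
```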
Fig. 6-10 are graphs of lane line detection effects.

Claims (10)

1. A method for real-time, vision-based lane line detection under complex lighting conditions, comprising: determining a region to be detected from a camera image and detecting lane markings within it to obtain a detection result, namely: image down-sampling and region-of-interest setting, image preprocessing, lane line model establishment, Hough transform for candidate lane lines, Kalman filtering, and a failure judgment module;
(1) preprocessing images of the region of interest, namely performing color correction;
firstly, extracting sample blocks from the region-of-interest image ψ, and for each image sample block estimating the effective illumination falling on that block;
secondly, performing illumination estimation with an existing single-illuminant illumination estimation algorithm, generating a plurality of different color constancy feature extraction methods by varying the parameters of the Grey-Edge color constancy framework;
thirdly, clustering the illumination estimates of the sample blocks, i.e., clustering image blocks from the same illuminant together into a larger block to produce a more accurate illumination estimate, blocks under the same illumination being more easily clustered into the same cluster; all illumination estimates are clustered into M classes, where M is the number of illuminants in the scene;
fourthly, after the block-based illumination estimates are clustered into M classes, mapping the clustering result back to the original image, pixels belonging to the same sample block belonging to the same cluster, so as to obtain the area covered by each illuminant, giving an illumination map in which each pixel belongs to one of the M illuminants; obtaining, through backward mapping, each pixel's illumination estimate and the cluster-center value of its illumination class;
fifthly, for areas of overlapping illumination, applying a Gaussian filter to the classification result of the back-mapped illumination estimates;
sixthly, correcting color, namely correcting the input image to standard illumination using each pixel's illumination estimate to obtain an output image under standard illumination;
(2) graying the color-corrected image as shown below, where R, G, B are the red, green, and blue channel component values and Gray is the gray value of the converted pixel: Gray = R·0.5 + G·0.5;
(3) extracting lane line edges from the gray image and then performing the improved Hough transform, specifically:
Step 1: under the given parameters, performing a probabilistic Hough transform on the lane line features to obtain straight lines;
Step 2: for each line detected by the Hough transform, finding, among all feature points in the set S, the points whose distance to the line is no more than d_h, these forming the set E_h;
Step 3: fitting the regression line parameters k_h and b_h of the set E_h by least squares, where k_h is the slope and b_h the intercept of the line, together with the mean square error e_h;
Step 4: for every feature point (x_i, y_i) in E_h, the points satisfying k_h·x_i + b_h > y_i forming the subset E_pos and the points satisfying k_h·x_i + b_h < y_i forming the subset E_neg;
Step 5: in the sets E_pos and E_neg, finding the points with the largest error, P_p and P_n;
Step 6: removing points P_p and P_n, updating the sets E_pos, E_neg, and E_h, and repeating Step 3 until the error e_h is less than ε_h;
(4) detecting lane lines and tracking them based on Kalman filtering;
(5) applying the inter-frame association relationship of lane lines;
(6) if the detected lane line parameters satisfy any one of the following conditions, judging that the algorithm has failed, interrupting the routine, and executing the program from the beginning:
1) in the dynamic region of interest, the number of straight lines detected by the Hough transform is zero;
2) the number of frames not satisfying the lane line constraint exceeds T_β, with T_β = 5;
3) the lane line parameters detected in the current frame change abruptly relative to the previous frame, i.e., the slope change exceeds 10 degrees or the intercept change exceeds 15 pixels.
2. The method for correcting image color using image illumination estimation according to claim 1, wherein the selected sample blocks are of the same size and satisfy the following conditions: each sample block is 5 × 5 pixels and contains illumination color information sufficient to accurately estimate the nature of the illumination falling on it.
3. The method of claim 1, further comprising: five candidate color constancy calculations forming the set {e^{0,1,0}, e^{0,∞,0}, e^{0,∞,1}, e^{1,1,1}, e^{2,1,1}}; each sample block is characterized by the illumination estimate of a selected color constancy algorithm.
4. The method of claim 1, wherein: the feature vector of a sample block is described as F' = [R, G, B], R, G, B being the color channels of the image; using normalized illumination estimates r = R/(R+G+B) and g = G/(R+G+B), the feature vector of the sample block is converted to F = [r, g], a 1 × 2 vector.
5. The method of claim 1, wherein: in the chromaticity space formed by the illumination estimates, after the illumination estimates of all sample blocks are clustered, the Euclidean distance from the illumination estimate of the jth sample block to the ith cluster center is computed and denoted d_i, d_k denoting the distance to the kth cluster center with k ∈ [0, M] and Z being the total number of sample blocks; the probability p_{j,i} that the sample block lies in the ith illumination area is then computed from these distances;
the coverage-area probability of the ith illuminant is obtained by accumulating p_{j,i} over all sample blocks, where p_{j,i} is the probability that the jth block is lit by the ith illuminant and Z is the total number of sample blocks in the input image.
6. The method of claim 1, wherein: the illumination estimate of each pixel of the image is calculated as I_e(x) = Σ_{i=1}^{M} m_i(x) · I_{e,i},
where I_e is the illumination estimate over the scene, I_{e,i} is the estimate of the ith illuminant, m_i(x) is the contribution of the ith illuminant to the pixel at x, and Z denotes the total number of sample blocks.
7. The method of claim 1, wherein: after the illumination estimate of each pixel is obtained, the image is corrected pixel by pixel according to the diagonal model, where f_u(x) is the pixel value at x under the unknown illuminant, f_c(x) is the pixel value it exhibits under standard illumination after correction, and Λ_{u,c}(x) is the mapping matrix from the unknown illuminant to the standard illuminant at x, as shown in the following formula: f_c(x) = Λ_{u,c}(x) f_u(x).
8. The method of claim 1, further comprising the diagonal correction model Λ_{u,c}(x) = diag( I_R^m / I_R^e(x), I_G^m / I_G^e(x), I_B^m / I_B^e(x) ),
where I_R^e(x) is the illumination value estimated for the R channel at a point x in the image space and I_R^m is the corresponding measured illumination value; each diagonal entry compares the measured illumination value of its channel at that point with the estimated value (and likewise for the G and B channels), and Λ_{u,c}(x) is the mapping matrix from the unknown illuminant to the standard illuminant at x.
9. The method of claim 1, wherein, for the edge extraction of lane lines in the grayscale image, let a point be (x, y) satisfying $y \in [0, h_{image})$ and $x \in [2, w_{image} - 2)$, where x and y are the column and row of the pixel, $w_{image}$ is the width of the image, and $h_{image}$ is the height of the image;
Step 1: calculate the mean value around the horizontal line through the point (x, y), $\bar{f}(x,y) = \frac{1}{2t+1}\sum_{i=-t}^{t} f(x+i,\, y)$, where t = 5;
Step 2: calculate the edge extraction threshold T;
Step 3: calculate the rising point $e_p$ and falling point $e_v$ of the edge:
$e_p \in \{f(x+2,y) - f(x,y) > T\}$
$e_v \in \{f(x+2,y) - f(x,y) < -T\}$
Step 4: the rising and falling change points of a lane line appear in pairs in the image and must satisfy a distance constraint; compare the width between the rising and falling change points, $\Delta w = e_p(x) - e_v(x)$, and eliminate points that do not satisfy it:
if $\Delta w > W_{max}$, the pair cannot be a lane line and is discarded, where $e_p(x)$ and $e_v(x)$ denote the column pixel coordinates of the rising and falling change points respectively, and $W_{max}$ is the maximum number of pixels a lane line occupies in the image.
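A compact sketch of the four steps for one image row; the claim leaves the construction of T open, so it is taken here as a fixed fraction of the local mean (an assumption, as is the helper name `lane_edge_pairs`), and the pairing check keeps only rising/falling pairs no wider than $W_{max}$:

```python
import numpy as np

def lane_edge_pairs(gray, y, W_max=30, t=5, k=0.15):
    # gray: 2-D grayscale image; y: row to scan.
    h, w = gray.shape
    rising, falling = [], []
    for x in range(2, w - 2):
        lo, hi = max(0, x - t), min(w, x + t + 1)
        mean = float(gray[y, lo:hi].mean())      # step 1: horizontal mean
        T = k * mean                             # step 2: threshold (assumed
                                                 # proportional to the mean)
        diff = float(gray[y, x + 2]) - float(gray[y, x])
        if diff > T:                             # step 3: rising point e_p
            rising.append(x)
        elif diff < -T:                          # falling point e_v
            falling.append(x)
    # step 4: lane edges come in rising/falling pairs of bounded width
    return [(ep, ev) for ep in rising for ev in falling
            if 0 < ev - ep <= W_max]

row = np.zeros((1, 100)); row[0, 40:52] = 200.0  # bright 12-px "lane line"
print(lane_edge_pairs(row, 0))                   # pairs around x = 38..51
```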
10. The method of claim 1, wherein the designed inter-frame smoothing model is $Line = \sum_{i=-z+1}^{0} \omega_i\, l_i$; in this formula, Line represents the approved detection result of the current frame, $\omega_i$ represents a weight with value range (0, 1), $l_i$ represents the intra-frame detection result of the ith frame, and z represents the number of associated frames; the approved detection result of the current frame is obtained by weighting the intra-frame detection results of the current frame and the previous z frames. An inter-frame detection algorithm is obtained from this model by setting up an inter-frame buffer: if the size of the buffer is z, the buffer stores the intra-frame detection results of the current frame and the previous z − 1 frames; by this property, increasing the value of z increases the detection accuracy of the current frame and reduces the missed-detection and false-detection rates, with $z \in [1, 25]$, and the weights satisfy $\omega_{-z+1} \le \omega_{-z+2} \le \dots \le \omega_{-1} \le \omega_0$. The noise threshold $R_{th}$ gives the rejection criterion: the ratio of the total weighted sum of the tth lane line feature over the z frames to the total frame number must be greater than the threshold $R_{th}$; otherwise the line is considered a noise lane line. $R_{th}$ is calculated from a correction factor c, with 0.2 < c < 0.3 to preserve sharp edges and image details, the number of image pixels $N_c$, and the noise variance η.
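A minimal sketch of the buffer-based smoother: a ring buffer of the last z intra-frame results with non-decreasing normalized weights (linearly increasing here, an assumed concrete choice), so newer frames dominate the approved result. The class name and the (rho, theta) line parameterization are illustrative, not from the patent:

```python
from collections import deque
import numpy as np

class InterFrameSmoother:
    def __init__(self, z=5):
        # Buffer holds the current frame plus the previous z-1 results.
        self.buf = deque(maxlen=z)
        raw = np.arange(1, z + 1, dtype=float)  # w_{-z+1} <= ... <= w_0
        self.weights = raw / raw.sum()          # each weight in (0, 1)

    def update(self, line_params):
        # line_params: per-frame detection result, e.g. (rho, theta) of
        # the Hough line; returns the approved (smoothed) result.
        self.buf.append(np.asarray(line_params, dtype=float))
        w = self.weights[-len(self.buf):]
        return np.average(np.stack(self.buf), axis=0, weights=w / w.sum())

s = InterFrameSmoother(z=5)
for rho_theta in [(100.0, 0.80), (102.0, 0.82), (101.0, 0.81)]:
    approved = s.update(rho_theta)
print(approved)  # weighted toward the most recent frames
```

The claim's noise test would then compare each candidate line's weighted support across the buffer against $R_{th}$ before accepting it.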
CN201611098387.4A 2016-12-03 2016-12-03 Method for real-time lane line detection based on vision under complex lighting conditions Pending CN106682586A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611098387.4A CN106682586A (en) 2016-12-03 2016-12-03 Method for real-time lane line detection based on vision under complex lighting conditions

Publications (1)

Publication Number Publication Date
CN106682586A true CN106682586A (en) 2017-05-17

Family

ID=58867368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611098387.4A Pending CN106682586A (en) 2016-12-03 2016-12-03 Method for real-time lane line detection based on vision under complex lighting conditions

Country Status (1)

Country Link
CN (1) CN106682586A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839264A (en) * 2014-02-25 2014-06-04 中国科学院自动化研究所 Detection method of lane line
CN103940434A (en) * 2014-04-01 2014-07-23 西安交通大学 Real-time lane line detecting system based on monocular vision and inertial navigation unit
CN104866823A (en) * 2015-05-11 2015-08-26 重庆邮电大学 Vehicle detection and tracking method based on monocular vision
CN105260713A (en) * 2015-10-09 2016-01-20 东方网力科技股份有限公司 Method and device for detecting lane line
CN105678791A (en) * 2016-02-24 2016-06-15 西安交通大学 Lane line detection and tracking method based on parameter non-uniqueness property
CN105893949A (en) * 2016-03-29 2016-08-24 西南交通大学 Lane line detection method under complex road condition scene
CN105966314A (en) * 2016-06-15 2016-09-28 北京联合大学 Lane departure pre-warning method based on double low-cost cameras

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ARJAN GIJSENIJ et al.: "Color Constancy for Multiple Light Sources", IEEE Transactions on Image Processing *
杨喜宁 et al.: "Lane line detection technology based on improved Hough transform", Computer Measurement & Control *
董俊鹏: "Research on color constancy algorithms based on illumination analysis", China Masters' Theses Full-text Database, Information Science and Technology *
郭斯羽 et al.: "Line detection combining Hough transform with an improved least-squares method", Computer Science *
陆子辉: "Research on vision-based all-day vehicle-exterior safety detection algorithms", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109002745A (en) * 2017-06-06 2018-12-14 武汉小狮科技有限公司 A kind of lane line real-time detection method based on deep learning and tracking technique
CN107451585B (en) * 2017-06-21 2023-04-18 浙江大学 Potato image recognition device and method based on laser imaging
CN107451585A (en) * 2017-06-21 2017-12-08 浙江大学 Potato pattern recognition device and method based on laser imaging
CN107578012A (en) * 2017-09-05 2018-01-12 大连海事大学 A kind of drive assist system based on clustering algorithm selection sensitizing range
CN107578012B (en) * 2017-09-05 2020-10-27 大连海事大学 Driving assistance system for selecting sensitive area based on clustering algorithm
CN107909007A (en) * 2017-10-27 2018-04-13 上海识加电子科技有限公司 Method for detecting lane lines and device
CN107909007B (en) * 2017-10-27 2019-12-13 上海识加电子科技有限公司 lane line detection method and device
CN108734105A (en) * 2018-04-20 2018-11-02 东软集团股份有限公司 Method for detecting lane lines, device, storage medium and electronic equipment
CN108537224A (en) * 2018-04-23 2018-09-14 北京小米移动软件有限公司 Image detecting method and device
CN109272536A (en) * 2018-09-21 2019-01-25 浙江工商大学 A kind of lane line vanishing point tracking method based on Kalman filter
CN109272536B (en) * 2018-09-21 2021-11-09 浙江工商大学 Lane line vanishing point tracking method based on Kalman filtering
CN111126109B (en) * 2018-10-31 2023-09-05 沈阳美行科技股份有限公司 Lane line identification method and device and electronic equipment
CN111126109A (en) * 2018-10-31 2020-05-08 沈阳美行科技有限公司 Lane line identification method and device and electronic equipment
CN109740550A (en) * 2019-01-08 2019-05-10 哈尔滨理工大学 A kind of lane detection and tracking method based on monocular vision
CN109858438B (en) * 2019-01-30 2022-09-30 泉州装备制造研究所 Lane line detection method based on model fitting
CN109858438A (en) * 2019-01-30 2019-06-07 泉州装备制造研究所 A kind of method for detecting lane lines based on models fitting
CN110084190B (en) * 2019-04-25 2024-02-06 南开大学 Real-time unstructured road detection method under severe illumination environment based on ANN
CN110084190A (en) * 2019-04-25 2019-08-02 南开大学 Unstructured road detection method in real time under a kind of violent light environment based on ANN
CN110765890B (en) * 2019-09-30 2022-09-02 河海大学常州校区 Lane and lane mark detection method based on capsule network deep learning architecture
CN110765890A (en) * 2019-09-30 2020-02-07 河海大学常州校区 Lane and lane mark detection method based on capsule network deep learning architecture
CN111580500A (en) * 2020-05-11 2020-08-25 吉林大学 Evaluation method for safety of automatic driving automobile
CN111580500B (en) * 2020-05-11 2022-04-12 吉林大学 Evaluation method for safety of automatic driving automobile
CN111753749A (en) * 2020-06-28 2020-10-09 华东师范大学 Lane line detection method based on feature matching
CN112115784B (en) * 2020-08-13 2021-09-28 北京嘀嘀无限科技发展有限公司 Lane line identification method and device, readable storage medium and electronic equipment
CN112115784A (en) * 2020-08-13 2020-12-22 北京嘀嘀无限科技发展有限公司 Lane line identification method and device, readable storage medium and electronic equipment
CN112101163A (en) * 2020-09-04 2020-12-18 淮阴工学院 Lane line detection method
CN112767359A (en) * 2021-01-21 2021-05-07 中南大学 Steel plate corner detection method and system under complex background
CN112767359B (en) * 2021-01-21 2023-10-24 中南大学 Method and system for detecting corner points of steel plate under complex background
CN113200052B (en) * 2021-05-06 2021-11-16 上海伯镭智能科技有限公司 Intelligent road condition identification method for unmanned driving
EP4047317A3 (en) * 2021-07-13 2023-05-31 Beijing Baidu Netcom Science Technology Co., Ltd. Map updating method and apparatus, device, server, and storage medium
US12049172B2 (en) 2021-10-19 2024-07-30 Stoneridge, Inc. Camera mirror system display for commercial vehicles including system for identifying road markings
CN115806202A (en) * 2023-02-02 2023-03-17 山东新普锐智能科技有限公司 Self-adaptive learning-based weighing hydraulic unloading device and turnover control system thereof
CN115806202B (en) * 2023-02-02 2023-08-25 山东新普锐智能科技有限公司 Hydraulic unloading device capable of weighing based on self-adaptive learning and overturning control system thereof
CN116029947A (en) * 2023-03-30 2023-04-28 之江实验室 Complex optical image enhancement method, device and medium for severe environment

Similar Documents

Publication Publication Date Title
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN108596129B (en) Vehicle line-crossing detection method based on intelligent video analysis technology
CN110178167B (en) Intersection violation video identification method based on cooperative relay of cameras
CN106652465B (en) Method and system for identifying abnormal driving behaviors on road
CN107679520B (en) Lane line visual detection method suitable for complex conditions
Wu et al. Lane-mark extraction for automobiles under complex conditions
US8670592B2 (en) Clear path detection using segmentation-based method
US8890951B2 (en) Clear path detection with patch smoothing approach
US8634593B2 (en) Pixel-based texture-less clear path detection
CN109657632B (en) Lane line detection and identification method
US8611585B2 (en) Clear path detection using patch approach
CN102509098B (en) Fisheye image vehicle identification method
CN108230254B (en) Automatic detection method for high-speed traffic full lane line capable of self-adapting scene switching
Huang et al. Lane detection based on inverse perspective transformation and Kalman filter
CN110866430B (en) License plate recognition method and device
US20090268948A1 (en) Pixel-based texture-rich clear path detection
CN110210451B (en) Zebra crossing detection method
CN101872546A (en) Video-based method for rapidly detecting transit vehicles
Siogkas et al. Random-walker monocular road detection in adverse conditions using automated spatiotemporal seed selection
Cai et al. Real-time arrow traffic light recognition system for intelligent vehicle
CN111753749A (en) Lane line detection method based on feature matching
Ghahremannezhad et al. Robust road region extraction in video under various illumination and weather conditions
Liu et al. Effective road lane detection and tracking method using line segment detector
CN113239733A (en) Multi-lane line detection method
CN110427979B (en) Road water pit identification method based on K-Means clustering algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170517