CN112348813A - Night vehicle detection method and device integrating radar and vehicle lamp detection - Google Patents

Night vehicle detection method and device integrating radar and vehicle lamp detection

Info

Publication number
CN112348813A
CN112348813A (application number CN202011414956.8A)
Authority
CN
China
Prior art keywords
image
vehicle
determining
processed
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011414956.8A
Other languages
Chinese (zh)
Inventor
沈蓓
袁志宏
韦松
杜一光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Zhitu Technology Co Ltd
Original Assignee
Suzhou Zhitu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Zhitu Technology Co Ltd filed Critical Suzhou Zhitu Technology Co Ltd
Priority to CN202011414956.8A priority Critical patent/CN112348813A/en
Publication of CN112348813A publication Critical patent/CN112348813A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20032 Median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30248 Vehicle exterior or interior
    • G06T 2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a night vehicle detection method and device that fuse radar and vehicle-light detection. First, an image to be processed and the corresponding radar data are acquired. The image is then processed based on an adaptive threshold algorithm and a pre-acquired vehicle-light color threshold range to obtain a first image containing vehicle-light information. Vehicle-light information is determined from the first image and preset constraint conditions, and finally target vehicle information is determined from the vehicle-light information and the radar data. Because the image to be processed is handled with both the adaptive threshold algorithm and the vehicle-light color range, more accurate vehicle-light information is obtained; fusing it with the radar data then yields the vehicle information, which improves the accuracy of vehicle detection and driving safety.

Description

Night vehicle detection method and device integrating radar and vehicle lamp detection
Technical Field
The invention relates to the technical field of image processing, and in particular to a night vehicle detection method and device that fuse radar and vehicle-light detection.
Background
In the related art, a vehicle traveling at night typically captures images of the vehicle ahead with an on-board camera and detects that vehicle by extracting its tail lights from the image. However, street lighting and the ego vehicle's own lights are often insufficient at night, so this detection method has low accuracy.
Disclosure of Invention
In view of the above, the present invention provides a method and an apparatus for detecting a vehicle at night by fusing radar and vehicle light detection, so as to improve the accuracy of detecting vehicle information and improve driving safety.
In a first aspect, an embodiment of the present invention provides a method for detecting a vehicle at night through fusion of radar and vehicle lamp detection, including: acquiring an image to be processed and corresponding radar data; processing an image to be processed based on a self-adaptive threshold algorithm and a vehicle lamp color threshold range to obtain a first image containing vehicle lamp information; determining car light information according to the first image and a preset constraint condition; and determining target vehicle information according to the vehicle light information and the radar data.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the step of processing the image to be processed based on an adaptive threshold algorithm and a vehicle-light color threshold range to obtain a first image including vehicle-light information includes: processing the image to be processed based on the adaptive threshold algorithm to obtain a first preprocessed image comprising a first highlight area; processing the image to be processed based on the vehicle-light color threshold range to obtain a second preprocessed image comprising a second highlight area; determining a third preprocessed image according to the first and second preprocessed images; and performing a closing operation and median filtering on the third preprocessed image to obtain a first image containing vehicle-light information.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the image format of the image to be processed includes an RGB format, and the step of processing the image to be processed based on an adaptive threshold algorithm to obtain a first preprocessed image comprising a first highlight region includes: converting the image format of the image to be processed to obtain a first gray-scale image; calculating the mean value and the standard deviation of the first gray-scale image; obtaining an adaptive threshold based on the mean and the standard deviation; and binarizing the first gray-scale image based on the adaptive threshold to obtain a first preprocessed image comprising a first highlight area.
With reference to the second possible implementation manner of the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the step of processing the to-be-processed image based on the vehicle light color threshold range to obtain a second pre-processed image including a second highlight area includes: converting the image format of the image to be processed to obtain a first preliminary image with the image format HSV; processing an H channel of the first preliminary image based on a pre-acquired vehicle lamp color threshold range to obtain a second preliminary image; converting the image format of the second preliminary image to obtain a second gray scale map; extracting a red channel of the second gray scale map to obtain a third preliminary image; and carrying out binarization processing on the third preliminary image based on the adaptive threshold value to obtain a second preprocessed image comprising a second highlight area.
With reference to the first possible implementation manner of the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the step of determining a third preprocessed image according to the first preprocessed image and the second preprocessed image includes: for each second pixel in the second highlight area, judging whether the first pixel corresponding to that second pixel belongs to the first highlight area, the first pixel belonging to the first preprocessed image and the second pixel belonging to the second preprocessed image; if the corresponding first pixel belongs to the first highlight area, setting the gray value of the second pixel to 255; if it does not, setting the gray value of the second pixel to 0; and determining the image composed of these second pixels together with the remaining pixels of the second preprocessed image as the third preprocessed image.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where determining the vehicle-light information according to the first image and preset constraint conditions includes: obtaining an effective area of the first image based on the shooting conditions of the image to be processed; searching for connected domains in the effective area, of which there may be a plurality; determining an effective connected-domain pair, comprising two connected domains, according to the center distance of any two connected domains; calculating parameter values corresponding to preset parameters of the effective connected-domain pair; if the parameters meet the preset constraint conditions, determining that the effective connected-domain pair contains vehicle-light information; and obtaining the vehicle-light information, which includes the vehicle-light position and the vehicle-light brightness, based on the effective connected domains.
With reference to the fifth possible implementation manner of the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the vehicle-light positions include a left vehicle-light position and a right vehicle-light position, and the step of determining target vehicle information according to the vehicle-light information and the radar data includes: determining the midpoint of the line connecting the left and right vehicle-light positions as the target pixel point; tracking the target pixel point according to a pre-obtained predicted position and determining an effective vehicle-light position; determining the preliminary position of the target vehicle according to the effective vehicle-light position and a preset coordinate conversion model; determining effective target point positions according to the radar data; performing data association between the preliminary position of the target vehicle and the effective target point positions to obtain a matched target point; tracking the target point to obtain the position and speed of the target vehicle; determining the state of the target vehicle, normal running or braking, according to the vehicle-light brightness; and determining the position, speed, and state of the target vehicle as the target vehicle information.
In a second aspect, an embodiment of the present invention further provides a night vehicle detection device with fusion of radar and vehicle lamp detection, including: the data acquisition module is used for acquiring the image to be processed and corresponding radar data; the image processing module is used for processing the image to be processed based on the self-adaptive threshold algorithm and the vehicle lamp color threshold range to obtain a first image containing vehicle lamp information; the vehicle light information determining module is used for determining vehicle light information according to the first image and a preset constraint condition; and the vehicle information determining module is used for determining the target vehicle information according to the vehicle lamp information and the radar data.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor and a memory, where the memory stores machine-executable instructions capable of being executed by the processor, and the processor executes the machine-executable instructions to implement the foregoing method.
In a fourth aspect, embodiments of the present invention also provide a machine-readable storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the above-described method.
The embodiment of the invention has the following beneficial effects:
The embodiment of the invention provides a night vehicle detection method and device fusing radar and vehicle-light detection. First, an image to be processed and the corresponding radar data are acquired; the image is then processed based on an adaptive threshold algorithm and a pre-acquired vehicle-light color threshold range to obtain a first image containing vehicle-light information; vehicle-light information is determined according to the first image and preset constraint conditions; and finally, target vehicle information is determined according to the vehicle-light information and the radar data. Because the image to be processed is handled with both the adaptive threshold algorithm and the vehicle-light color range, more accurate vehicle-light information is obtained, and fusing it with the radar detection data yields the vehicle information, which improves the accuracy of vehicle-information detection and driving safety.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. The following drawings show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a night vehicle detection method fusing radar and vehicle light detection according to an embodiment of the present invention;
Fig. 2 is a flowchart of another night vehicle detection method fusing radar and vehicle light detection according to an embodiment of the present invention;
Fig. 3 is a flowchart of a night vehicle detection method based on the fusion of millimeter-wave radar and vehicle light detection according to an embodiment of the present invention;
Fig. 4 is a flowchart of a data preprocessing process according to an embodiment of the present invention;
Fig. 5 is a flowchart of a vehicle light detection process according to an embodiment of the present invention;
Fig. 6 is a flowchart of probability calculation in a vehicle light region matching process according to an embodiment of the present invention;
Fig. 7 is a flowchart of a pixel tracking process according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of a vehicle position calculation process according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of a radar target point determining process according to an embodiment of the present invention;
Fig. 10 is a flowchart of a comparison process for determining radar points from radar target points according to an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a night vehicle detection device fusing radar and vehicle light detection according to an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With the rapid development of the autonomous driving industry, safety has become increasingly prominent. The safety of intelligent vehicles relies heavily on advances in environment perception technology. Current perception approaches mainly include recognition based on electronic lane markers, perception based on a single sensor, and perception based on multi-sensor information fusion. A combination of sensors such as a camera, millimeter-wave radar, lidar, and ultrasonic radar has become the basic configuration for an intelligent vehicle to perceive its external environment.
Millimeter-wave radar is mostly used for sensing obstacles ahead but cannot identify them. Camera vision sensors are mostly used for detecting and classifying obstacles, but they are affected by light and weather conditions and therefore cannot work around the clock; detecting vehicles at night is particularly difficult. Lidar offers high measurement precision and a wide application range, supports target segmentation and tracking, and can identify and track obstacles, but its cost is too high for mass-produced vehicle models. Environment perception solutions that rely on a single sensor are therefore severely limited, and vehicle detection with multi-sensor fusion is the mainstream trend of current technology.
Detecting the vehicle ahead with an on-board camera depends heavily on the illumination environment, and insufficient lighting is a central difficulty in night vehicle detection research. In a night camera image the positions of the vehicle lights are conspicuous. However, the high brightness of oncoming headlights easily causes image overexposure, guardrails between the left and right lanes introduce interference in real scenes, and road street lamps further increase the difficulty of extracting the vehicle lights.
Currently, there has been some research on the identification of vehicle tail lights. In the related art, a taillight detection and identification method based on an image is provided. In the method, in order to accelerate the detection speed, only the red component of a night image is detected, and different segmentation thresholds are set according to the distance by using the symmetry of the detected tail lamp to realize the extraction of the car light and the recognition of the car.
The related art also provides a vehicle identification method for night highway scenes. The vehicle-light shape is assumed to be circular; reflectors are extracted according to the brightness and area information of the lights, lane boundaries are identified, and the reflectors in the image are removed. The perimeter of the red region around each white light spot is then computed; if most of the surrounding region is red, the spot is considered part of a tail-light pair, and vehicle identification is finally completed from the tail-light pairs.
An analysis of these light-based detection methods shows the following shortcomings: the tail-light extraction threshold adapts poorly to the environment, since a fixed threshold is generally used; vehicle-light shapes are not uniform, so the detection algorithms lack robustness; tail-light extraction is easily disturbed by other light sources; and detection accuracy drops sharply when vehicles occlude one another.
Based on this, the night vehicle detection method, device and electronic device with the integration of radar and vehicle lamp detection provided by the embodiment of the invention can be applied to detection of vehicles in various driving scenes, such as night, cloudy days and the like.
In order to facilitate understanding of the embodiment, a method for detecting a vehicle at night by combining radar and vehicle lamp detection disclosed by the embodiment of the invention is first described in detail.
The embodiment of the invention provides a night vehicle detection method fusing radar and vehicle light detection; as shown in Fig. 1, the method comprises the following steps:
and S100, acquiring an image to be processed and corresponding radar data.
The image to be processed may be captured by a vehicle-mounted camera, typically in RGB (red, green, blue) format. The radar data corresponding to the image to be processed may be acquired by a millimeter-wave radar at the same time the image is captured.
Step S102, processing the image to be processed based on the adaptive threshold algorithm and the pre-acquired vehicle lamp color threshold range to obtain a first image containing vehicle lamp information.
The adaptive threshold algorithm replaces a global threshold with thresholds computed locally from the image, which is particularly suited to pictures with large variations in light and shadow or with subtle color differences within a region. "Adaptive" means the algorithm iteratively computes an average threshold for each image region. The vehicle-light color threshold range can be obtained statistically, and pixels of the image to be processed that fall within this range are retained.
Step S104, determining the vehicle light information according to the first image and a preset constraint condition.
The preset constraint condition may be a geometric condition, such as the shape of the vehicle lamp, the distance between two vehicle lamps, etc.; it is also possible to use a pixel gray condition, such as determining a pixel point whose pixel gray exceeds a certain threshold as part of the car light. When the first image already contains the car light information, the car light information may be extracted from the first image based on a preset constraint condition. The vehicle light information may include vehicle light position and vehicle light brightness.
Step S106, determining target vehicle information according to the vehicle light information and the radar data.
The target vehicle information may include position information and driving-state information. After the vehicle-light information is determined, the position of the target vehicle can be computed from the positions of the lights. However, because this position is derived from the image, which has some distortion, it is inaccurate on its own. The radar data is acquired simultaneously with the image to be processed; the vehicle information obtained from it includes the current position of the target vehicle, and associating the image-derived position with the radar-derived position yields more accurate target vehicle information.
The embodiment of the invention provides a night vehicle detection method fusing radar and vehicle-light detection. First, an image to be processed and the corresponding radar data are obtained; the image is then processed based on an adaptive threshold algorithm and a pre-acquired vehicle-light color threshold range to obtain a first image containing vehicle-light information; vehicle-light information is determined according to the first image and preset constraint conditions; and finally, target vehicle information is determined according to the vehicle-light information and the radar data. Because the image to be processed is handled with both the adaptive threshold algorithm and the vehicle-light color range, more accurate vehicle-light information is obtained, and fusing it with the radar data yields the vehicle information, improving the accuracy of vehicle-information detection and driving safety.
The embodiment of the invention also provides another night vehicle detection method fusing radar and vehicle light detection, implemented on the basis of the method shown in Fig. 1; as shown in Fig. 2, the method comprises the following steps:
and step S200, acquiring the image to be processed and corresponding radar data.
Step S202, processing the image to be processed based on the adaptive threshold algorithm to obtain a first preprocessed image comprising a first highlight area.
Specifically, the image format of the image to be processed includes an RGB format; the image format of the image to be processed can be converted to obtain a first gray-scale image; then calculating the mean value and the standard deviation of the first gray scale image; obtaining an adaptive threshold based on the mean and the standard deviation; and carrying out binarization processing on the first gray-scale image based on the adaptive threshold value to obtain a first preprocessed image comprising a first highlight area.
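The patent does not give the exact threshold formula used in step S202; the sketch below assumes the common choice of thresholding at mean + k * std over the grayscale image. The function names and the factor k are illustrative, not from the patent.

```python
# Hypothetical sketch of step S202 (adaptive threshold + binarization),
# assuming threshold T = mean + k * std over the grayscale image.
def grayscale(rgb_pixels):
    """Convert rows of (R, G, B) tuples to luminance values (ITU-R BT.601 weights)."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row] for row in rgb_pixels]

def adaptive_binarize(gray, k=1.5):
    """Binarize: pixels brighter than mean + k * std become 255 (highlight), else 0."""
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    std = (sum((p - mean) ** 2 for p in flat) / len(flat)) ** 0.5
    threshold = mean + k * std
    return [[255 if p > threshold else 0 for p in row] for row in gray]
```

For a mostly dark night image with a few bright lamp pixels, the threshold lands well above the mean, so only the lamp pixels survive binarization.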
Step S204, processing the image to be processed based on the vehicle light color threshold range to obtain a second preprocessed image comprising a second highlight area.
The above step can be implemented in the HSV (hue, saturation, value) color space. First, the image format of the image to be processed is converted to obtain a first preliminary image in HSV format. The H channel of the first preliminary image is then processed based on the pre-acquired vehicle-light color threshold range to obtain a second preliminary image: an H-channel threshold range is set, H values within the range are retained, and H values outside it are set directly to 0. The image format of the second preliminary image is then converted to obtain a second gray-scale map, whose red channel is extracted to obtain a third preliminary image. Finally, the third preliminary image is binarized based on the adaptive threshold to obtain a second preprocessed image comprising a second highlight area. The vehicle-light color threshold range mainly covers the yellow, purple, and red color ranges.
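A minimal sketch of the hue-gating idea in step S204, using Python's standard `colorsys` module. The hue ranges below (reds/magentas on a 0-1 hue scale) are placeholder assumptions, not the patent's statistically derived lamp-color thresholds.

```python
import colorsys

def hue_mask(rgb_pixels, hue_ranges=((0.0, 0.08), (0.75, 1.0))):
    """Keep pixels whose hue falls inside the given ranges (illustrative stand-ins
    for the vehicle-light color thresholds); zero out all other pixels,
    mirroring the H-channel gating of step S204."""
    out = []
    for row in rgb_pixels:
        new_row = []
        for (r, g, b) in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            keep = any(lo <= h <= hi for lo, hi in hue_ranges)
            new_row.append((r, g, b) if keep else (0, 0, 0))
        out.append(new_row)
    return out
```

A pure red pixel (hue 0) passes the gate, while a green pixel (hue around 1/3) is suppressed.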
Step S206, determining a third preprocessed image according to the first preprocessed image and the second preprocessed image.
In general, only the portions highlighted in both the first and the second preprocessed images should remain highlighted, and all remaining pixels should be set to gray value 0. In a specific operation, for each second pixel in the second highlight area, it is judged whether the first pixel corresponding to that second pixel belongs to the first highlight area (the first pixel belongs to the first preprocessed image and the second pixel to the second preprocessed image). If the corresponding first pixel belongs to the first highlight area, the gray value of the second pixel is set to 255; if it does not, the gray value is set to 0. The image composed of these second pixels together with the remaining pixels of the second preprocessed image is determined as the third preprocessed image.
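For two binary masks, the rule above reduces to a pixel-wise intersection; a sketch (function name is illustrative):

```python
def combine_masks(first, second):
    """Pixel-wise intersection of two binary masks (step S206): a pixel in the
    second highlight area stays 255 only if the matching first-image pixel is
    also highlighted; every other pixel becomes 0."""
    return [[255 if (a == 255 and b == 255) else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(first, second)]
```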
Step S208, performing a closing operation and median filtering on the third preprocessed image to obtain a first image containing vehicle light information.
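A sketch of the median-filtering half of step S208 (the closing operation, i.e. dilation followed by erosion, is omitted for brevity). This simplified version leaves border pixels unchanged; the behavior at borders is an assumption, not specified by the patent.

```python
def median_filter3(img):
    """3x3 median filter: each interior pixel is replaced by the median of its
    3x3 neighborhood, which removes isolated noise pixels from the binary mask."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = sorted(img[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = neigh[4]  # median of the 9 neighborhood values
    return out
```

An isolated bright pixel surrounded by dark ones is suppressed, which is the intended noise-removal effect.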
Step S210, obtaining an effective area of the first image based on the shooting conditions of the image to be processed. The shooting conditions mainly refer to the mounting height of the vehicle-mounted camera when it captures the image, which affects where vehicle-light information appears in the image. Generally, the effective search area (i.e., the effective area described above) is defined as the upper half of the image, with the ego vehicle's hood at the lower edge removed: the hood often reflects the lights of vehicles ahead and causes false detections, so it is filtered out as well. Its height is about 0.18 times the image height.
Step S212, searching for connected domains in the effective area; a plurality of connected domains may be found.
A connected component (Blob) generally refers to an image region composed of adjacent foreground pixels with the same pixel value. The valid size of a connected domain is usually set between 100 and 1000 pixels.
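The connected-domain search of step S212 can be sketched as breadth-first labeling with the size filter applied afterward. The 4-connectivity choice and function names are assumptions; the 100-1000 pixel range comes from the text.

```python
from collections import deque

def connected_components(mask, min_size=100, max_size=1000):
    """4-connected component labeling by BFS (step S212). Components whose
    pixel count falls outside [min_size, max_size] are discarded."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 255 and not seen[y][x]:
                q, comp = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] == 255 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if min_size <= len(comp) <= max_size:
                    comps.append(comp)
    return comps
```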
Step S214, determining an effective connected domain pair according to the center distance of any two connected domains; an active connected domain pair includes two connected domains.
Step S216, calculating the parameter values corresponding to the preset parameters of the effective connected domain pair. The parameter values may include the horizontal distance, vertical distance, overlap degree, height similarity, width similarity, vehicle-body-to-lamp width ratio, and vehicle-body-to-lamp height ratio of the two connected domains.
Step S218, if the parameters meet the preset constraint conditions, determining that the effective connected domain pair comprises vehicle lamp information. The preset constraint conditions may be empirical thresholds obtained through statistics: if a parameter falls within the effective range formed by its empirical threshold, the probability is increased; if it falls outside that range, the probability is decreased. Whether the effective connected domain pair comprises vehicle lamp information is then determined based on a preset probability threshold.
Step S220, obtaining vehicle lamp information based on the effective connected domains; the vehicle lamp information comprises the lamp positions and the lamp brightness. Duplicate vehicle lamps are eliminated in this process.
Step S222, determining target vehicle information according to the vehicle lamp information and the radar data.
Specifically, the lamp positions generally include a left lamp position and a right lamp position. When determining the target vehicle information, the midpoint of the line connecting the left lamp position and the right lamp position may be determined as the target pixel point; the target pixel point is tracked according to a pre-obtained predicted position to determine the effective lamp position; the preliminary position of the target vehicle is determined from the effective lamp position and a preset coordinate conversion model; the effective target point positions are determined from the radar data; data association is performed between the preliminary position of the target vehicle and the effective target point positions to obtain matched target points; the target points are tracked to obtain the position and speed of the target vehicle; the state of the target vehicle is determined from the lamp brightness, the state including normal running or braking; and the position, speed, and state of the target vehicle are determined as the target vehicle information.
According to the method, the image information captured by the vehicle-mounted camera is combined with the millimeter-wave radar information to determine vehicle information, which improves detection accuracy and thus driving safety.
The embodiment of the invention further provides a night vehicle detection method based on the fusion of millimeter-wave radar and vehicle lamp detection (equivalent to the radar-and-lamp-fusion night vehicle detection method above), which can effectively improve the accuracy of night vehicle recognition and night lamp recognition, and effectively realize a lamp classification function.
As shown in fig. 3, the method comprises the steps of:
(1) Camera data acquisition.
(2) Data preprocessing: specifically, from the RGB image acquired by the camera, an adaptive threshold algorithm is applied to the grayscale image to extract the highlight area, the red and yellow areas are extracted based on the HSV (hue, saturation, value) channels and vehicle lamp colour statistics, and the intersection is taken to obtain a binary image. The obtained binary image is then morphologically processed, i.e., a closing operation and median filtering are performed, and a processed image containing only the region of interest is output.
(3) Lamp detection: connected domains of a proper size are searched in the effective region of the preprocessed image, and the matching probability of two candidate lamp regions is calculated based on geometric constraint conditions. The lamp pair with the maximum probability is obtained by screening, and duplicate lamps are removed to obtain the detected lamp pairs.
(4) Pixel tracking: the midpoint coordinate of the lamp pair, i.e., the pixel coordinate of the target point, is obtained from the matched lamp pair; pixel tracking is performed and the effective lamp pair is determined.
(5) Vehicle state classification: and further classifying the effective lamp pairs into brake lamps or tail lamps according to the brightness values of the effective lamp pairs to obtain the state of the target vehicle.
(6) And (3) coordinate conversion: based on the effective car light pair, the target position under the world coordinate system is obtained by utilizing the camera calibration parameters and the coordinate conversion.
(7) Millimeter wave radar data acquisition: this process should be performed simultaneously with the camera data acquisition.
(8) Preprocessing of target points: the target position and speed information output by the millimeter-wave radar is acquired, interference noise points are removed, and effective target points are screened out.
(9) Target matching: according to the target position from step (6), data association with the effective millimeter-wave radar target points within a certain range is performed using a matching criterion.
(10) Target tracking: and (4) tracking the target point matched in the step (9) by using a Kalman filtering algorithm to obtain the final target vehicle position, speed and state information as target vehicle information.
As shown in fig. 4, the specific implementation process of the step (2) is as follows:
a. Inputting an RGB image: the input is the image data of each frame acquired by the camera, in RGB format.
b. Extracting the highlight area with an adaptive threshold: the RGB image data is converted into a grayscale image, and the mean and standard deviation of the grayscale image are calculated. An adaptive threshold is obtained from the mean and standard deviation, and the image is binarized with this threshold: the part above the threshold is set as the highlight area, and the part below it as the non-highlight area. The extraction of the image highlight area is thus completed by the adaptive threshold method.
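A minimal sketch of this mean-plus-standard-deviation thresholding follows; the gain `k` is an illustrative choice, since the text only states that the threshold is derived from the mean and the standard deviation:

```python
import numpy as np

def adaptive_highlight_mask(gray, k=2.0):
    """Binarize a grayscale image with a global adaptive threshold
    T = mean + k * std; pixels above T become highlight (255),
    the rest non-highlight (0). k is an assumed parameter."""
    t = gray.mean() + k * gray.std()
    out = np.zeros_like(gray)
    out[gray > t] = 255
    return out
```

Because the threshold tracks the image statistics, a globally brighter or darker night scene shifts T accordingly, which is the adaptivity the method relies on.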
c. Extracting the red-yellow area based on the HSV channels and statistical data: the original RGB image data is converted into HSV format. Statistics over HSV images of vehicle lamps show that night-time lamp colours are not uniform, but are mainly distributed over the yellow, purple, and red colour ranges. Based on this colour distribution, the HSV image is processed by setting an H-channel threshold range: H values within the range are kept, and H values outside it are set to 0. The processed HSV image is then converted into a grayscale image, the red channel is extracted from it, and the highlight area is extracted from the red channel with the adaptive threshold method above.
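The H-channel gating can be sketched as below. The hue ranges are hypothetical stand-ins (written in the OpenCV convention of H in [0, 180], covering reds/yellows and purples/reds); the patent derives its actual thresholds from statistics over night-time lamp images:

```python
import numpy as np

def keep_lamp_hues(h_channel, ranges=((0, 35), (140, 180))):
    """Zero out H-channel values that fall outside the lamp-colour
    hue ranges; values inside any range are kept unchanged.
    The default ranges are illustrative assumptions."""
    keep = np.zeros(h_channel.shape, dtype=bool)
    for lo, hi in ranges:
        keep |= (h_channel >= lo) & (h_channel <= hi)
    return np.where(keep, h_channel, 0)
```

Using several disjoint ranges rather than a single red band is what lets the method cover yellow and purple lamps as well.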
d. Taking the intersection to obtain a binary image: according to the two processing results, the intersecting part of the two images is taken; that is, an area highlighted in both binary images retains the highlight, and any other area is set as non-highlight.
e. Performing the closing operation and median filtering: the image is further processed with a closing operation and median filtering, and the processed image containing only the region of interest is finally output, i.e., the preprocessing result.
As shown in fig. 5, the specific implementation process of step (3) is as follows:
a. inputting the preprocessed image.
b. Searching for connected areas of a proper size: the effective search area is defined as the upper half of the image, with the hood of the ego vehicle at the lower edge removed. The hood often reflects the lamps of vehicles ahead and causes false detections, so it is filtered out as well; its height is about 0.18 times the image height. Connected domains are searched in the effective area; only areas between 100 and 1000 pixels in size are kept, and areas that are too large or too small are filtered out.
c. Solving the matching probability of two lamp regions using multiple geometric constraint conditions: for each connected region, the matching probability to every other connected region whose centre distance is within 200 pixels is calculated. The flow of the probability calculation is shown in fig. 6. The initial probability value is set to 0, and the parameters of connected domain 1 and connected domain 2 are calculated, including the horizontal distance, vertical distance, overlap degree (overlap), height similarity (similar height), width similarity (similar width), vehicle-body-to-lamp width ratio (near), and vehicle-body-to-lamp height ratio (scale). Using statistical empirical thresholds, each parameter that falls within its valid range (range 1) increases the probability (a positive effect), and each parameter that does not (falling within range 2) decreases it (a negative effect). The constraints of the multiple geometric parameters are considered together to obtain the final matching probability.
The parameters, ranges and action relationships are shown in Table 1:
TABLE 1
(Table 1 is provided as an image in the original publication.)
Where "+" indicates an increasing probability and "-" indicates a decreasing probability.
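The accumulation described in step c can be sketched as follows. The rule table here is hypothetical (Table 1's actual numeric ranges are published only as an image); what the sketch preserves is the scoring logic: start at 0, add for each in-range parameter, subtract for each out-of-range one:

```python
def match_score(params, rules):
    """Accumulate a lamp-pair matching score: +1 when a geometric
    parameter falls inside its empirically valid range, -1 otherwise.
    `rules` maps a parameter name to its (low, high) valid range;
    the names and ranges used in tests are illustrative."""
    score = 0.0
    for name, (lo, hi) in rules.items():
        score += 1.0 if lo <= params[name] <= hi else -1.0
    return score
```

A cumulative score of this form is consistent with step d's success criterion that the final value must exceed 1.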
d. Finding the lamp pair with the maximum matching probability: for each lamp, the other lamp with the maximum matching probability to it is searched; if that probability value is greater than 1, the matching succeeds, otherwise it fails.
e. Removing duplicate lamps: if vehicles in adjacent lanes are too close, the same lamp may appear in two matched lamp pairs; for this case a deduplication operation is adopted. The lamp pair with the larger matching probability value is kept and the low-probability pair is removed, which increases detection accuracy. The matched lamp pairs are finally output. Other similar lamp-pair matching schemes, for example based on lamp symmetry and lamp area, may also be adopted.
As shown in fig. 7, the specific implementation process of the step (4) is as follows:
a. buffering 3 frames: the pixel locations of the target points of 3 frames are buffered.
b. Inter-frame association.
c. Obtaining the predicted value at time K from the optimal estimate at time K-1: the predicted value is obtained by Kalman filtering.
d. Target matching: and performing target matching on the predicted value of the current frame obtained by Kalman filtering and the observed value of the current frame.
e. And (3) updating the state: and if the matching is successful, updating the state of the target point and confirming the target point as the tracked target.
f. And if the target is not matched successfully, judging whether the target is a new target.
g. And if the target is a new target, creating a new tracking target.
h. If not, the lost-frame count is incremented by 1.
i. If the lost-frame count is larger than the maximum acceptable number of lost frames, the target is deleted; otherwise the predicted value is output as the optimal value.
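The track bookkeeping of steps e–i can be sketched as a small state update; the field names (`lost`, `confirmed`, `alive`) and the `max_lost` limit are illustrative, not from the patent:

```python
def update_track(track, matched, max_lost=5):
    """One bookkeeping step per frame for a tracked target: a matched
    track is confirmed and its lost-frame counter reset (step e);
    an unmatched one accumulates lost frames (step h) and is deleted
    once the limit is exceeded (step i)."""
    if matched:
        track["lost"] = 0
        track["confirmed"] = True
    else:
        track["lost"] += 1
        if track["lost"] > max_lost:
            track["alive"] = False
    return track
```

New-target creation (steps f–g) would sit alongside this in the caller, spawning a fresh track dictionary for observations that match no existing track.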
For the tracked target points, the lamps are further divided into brake lamps and tail lamps according to the lamp brightness and a statistically obtained threshold, giving the state information of the vehicle, i.e., whether the vehicle is braking or driving normally. In actual use, the lamp state judgment may also be realized by other lamp-classification methods.
The specific implementation process of step (6) is as follows: from the pixel positions of the lamp pair, the pixel position of the symmetric point (midpoint) of the two lamps is obtained. Segmenting the vehicle from the ground, the symmetric point is projected onto the ground, giving the pixel position of point A (as shown in fig. 8). From the position of the ego vehicle's central axis, the pixel distance from point A to the central axis is obtained, yielding the pixel distances p(AB) for segment AB and p(BC) for segment BC. According to the camera calibration model, p(AB) and p(BC) can be converted into the actual distances S(AB) and S(BC).
The blind-zone distance D3 of the ego vehicle's camera and the distance D2 from the camera to the front end of the vehicle can be measured. The relative position of the forward vehicle to the front end of the ego vehicle is therefore: longitudinal distance D4 = S(BC) + D3 - D2, lateral distance D5 = S(AB). The lateral and longitudinal distances of other vehicles relative to the ego vehicle are obtained in the same way. Taking the vehicle traveling direction as the X axis, the forward vehicle's coordinates are (D4, D5).
The specific implementation process of step (8) is as follows: the target position and speed information output by the millimeter-wave radar is acquired, interference noise points are removed, and effective target points are screened out to obtain vehicle distance information. Data association with the millimeter-wave radar target points within a certain range is then performed using a matching criterion, according to the target position from step (6).
Since the radar has a wide detection range and naturally acquires more target information, an effective target area needs to be defined for the radar. During driving, the area centred on the vehicle's currently expected trajectory and extending one lane width to each side may be regarded as the radar's effective target area. Targets in the adjacent lanes to the left and right of the ego vehicle and in the same lane are the objects to be detected.
Invalid targets and null signals detected by the radar are screened out and filtered, thereby determining the effective radar target point positions.
The specific implementation process of step (9) is as follows: after coordinate conversion, the target position of the lamp detection result in the world coordinate system has a large error, so data fusion with the millimeter-wave target points is required. As shown in fig. 9, for each target vehicle position (x0, y0) obtained in step (6), the millimeter-wave radar target points within the rectangular area [x0 - 5 m, x0 + 5 m] × [y0 - 2.5 m, y0 + 2.5 m] are searched. The values 5 m and 2.5 m are taken from the maximum vehicle length and vehicle width.
Every two radar points in the area are compared to determine whether they are points of the same target vehicle. Let the two radar coordinate points to be compared be (x1, y1) and (x2, y2); the comparison criterion is shown in fig. 10: when the lateral distance or the longitudinal distance is greater than its threshold, they are considered different targets; otherwise they are the same target, and the closer target point is retained.
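The pairwise comparison of fig. 10 reduces to two axis-wise threshold checks; a sketch follows, with the 5 m / 2.5 m defaults echoing the vehicle-length and vehicle-width bounds used for the search box (the exact thresholds in fig. 10 may differ):

```python
def same_target(p1, p2, dx_max=5.0, dy_max=2.5):
    """Decide whether two radar points (x = longitudinal, y = lateral,
    in metres) belong to the same target vehicle: they are different
    targets as soon as either axis gap exceeds its threshold."""
    x1, y1 = p1
    x2, y2 = p2
    return abs(x1 - x2) <= dx_max and abs(y1 - y2) <= dy_max
```

When two points pass this check, the caller keeps only the closer one, as the text describes.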
The tracking principle of step (10) is consistent with that of step (4) and is not described again. In step (10), the fused target point positions (the millimeter-wave radar target points fused with the lamp-detection target points) are tracked in the world coordinate system, and finally more accurate position, speed, and state information of the target vehicle is output.
The method extracts the highlight region by applying an adaptive threshold algorithm to the grayscale image, which adapts well to the environment and is highly robust. Areas likely to be lamp colours are extracted based on the HSV channels and lamp-colour statistics; not only red areas but lamps of more colours are covered, improving the accuracy of lamp position extraction. The matching probability of two candidate lamp regions is calculated from geometric constraints and duplicate lamps are removed by the probability value; this lamp detection algorithm, based on matching-probability calculation and deduplication and using multiple geometric constraints, achieves a lower false detection rate and higher lamp positioning accuracy. After the effective lamp pair is determined, the lamps are further classified into brake lamps or tail lamps by their brightness values, yielding the state of the target vehicle. Fusing the lamp-detection targets with the millimeter-wave radar targets better assists driving decisions; and associating the lamp-detected target position with the millimeter-wave radar target points within a certain range improves the positioning precision of the target vehicle, overcoming the defects of existing methods.
Corresponding to the above method embodiment, an embodiment of the present invention further provides a night vehicle detection apparatus with fusion of radar and vehicle lamp detection, as shown in fig. 11, the apparatus includes:
a data acquisition module 1100, configured to acquire an image to be processed and corresponding radar data;
the image processing module 1102 is configured to process an image to be processed based on an adaptive threshold algorithm and a vehicle light color threshold range, so as to obtain a first image including vehicle light information;
the car light information determining module 1104 is configured to determine car light information according to the first image and a preset constraint condition;
and a vehicle information determining module 1106, configured to determine target vehicle information according to the vehicle light information and the radar data.
The night vehicle detection device with the fusion of the radar and the vehicle lamp detection, which is provided by the embodiment of the invention, has the same technical characteristics as the night vehicle detection method with the fusion of the radar and the vehicle lamp detection, so that the same technical problems can be solved, and the same technical effects can be achieved.
An embodiment of the present invention further provides an electronic device, as shown in fig. 12, the electronic device includes a processor 130 and a memory 131, the memory 131 stores machine executable instructions that can be executed by the processor 130, and the processor 130 executes the machine executable instructions to implement the above-mentioned night vehicle detection method with radar and vehicle lamp detection integrated.
Further, the electronic device shown in fig. 12 further includes a bus 132 and a communication interface 133, and the processor 130, the communication interface 133, and the memory 131 are connected by the bus 132.
The Memory 131 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 133 (which may be wired or wireless), and the internet, a wide area network, a local area network, a metropolitan area network, and the like can be used. The bus 132 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 12, but that does not indicate only one bus or one type of bus.
The processor 130 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 130. The Processor 130 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 131, and the processor 130 reads the information in the memory 131 and completes the steps of the method of the foregoing embodiment in combination with the hardware thereof.
The embodiment of the present invention further provides a machine-readable storage medium, where the machine-readable storage medium stores machine-executable instructions, and when the machine-executable instructions are called and executed by a processor, the machine-executable instructions cause the processor to implement the above method for detecting a vehicle at night by fusing radar and vehicle light detection, and specific implementation may refer to method embodiments, and is not described herein again.
The computer program product of the radar-and-lamp-fusion night vehicle detection method, apparatus, and electronic device provided by the embodiments of the invention includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the method described in the foregoing method embodiments, and specific implementation may refer to the method embodiments, which are not described herein again.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention or a part thereof, which essentially contributes to the prior art, can be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a gateway electronic device, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A night vehicle detection method integrating radar and vehicle lamp detection is characterized by comprising the following steps:
acquiring an image to be processed and corresponding radar data;
processing the image to be processed based on a self-adaptive threshold algorithm and a pre-acquired vehicle lamp color threshold range to obtain a first image containing vehicle lamp information;
determining car light information according to the first image and a preset constraint condition;
and determining target vehicle information according to the vehicle lamp information and the radar data.
2. The method according to claim 1, wherein the step of processing the image to be processed based on an adaptive threshold algorithm and a pre-obtained threshold range of vehicle light color to obtain a first image containing vehicle light information comprises:
processing the image to be processed based on the adaptive threshold algorithm to obtain a first preprocessed image comprising a first highlight area;
processing the image to be processed based on a pre-acquired vehicle lamp color threshold range to obtain a second preprocessed image comprising a second highlight area;
determining a third preprocessed image according to the first preprocessed image and the second preprocessed image;
and performing closing operation and median filtering processing on the third preprocessed image to obtain a first image containing car light information.
3. The method of claim 2, wherein the image format of the image to be processed comprises an RGB format;
the step of processing the image to be processed based on the adaptive threshold algorithm to obtain a first preprocessed image including a first highlight region includes:
converting the image format of the image to be processed to obtain a first gray scale image;
calculating the mean value and the standard deviation of the first gray scale image;
obtaining an adaptive threshold based on the mean and the standard deviation;
and carrying out binarization processing on the first gray-scale image based on the adaptive threshold value to obtain a first preprocessed image comprising a first highlight area.
4. The method according to claim 3, wherein the step of processing the image to be processed based on the vehicle light color threshold range to obtain a second pre-processed image comprising a second highlight region comprises:
converting the image format of the image to be processed to obtain a first preliminary image with an HSV (hue, saturation and value) image format;
processing an H channel of the first preliminary image based on a pre-acquired vehicle lamp color threshold range to obtain a second preliminary image;
converting the image format of the second preliminary image to obtain a second gray scale map;
extracting a red channel of the second gray scale map to obtain a third preliminary image;
and carrying out binarization processing on the third preliminary image based on the adaptive threshold value to obtain a second preprocessed image comprising a second highlight area.
5. The method of claim 2, wherein determining a third pre-processed image from the first pre-processed image and the second pre-processed image comprises:
for each second pixel in the second highlight area, judging whether a first pixel corresponding to the second pixel belongs to a first highlight area; the first pixel belongs to the first pre-processed image; the second pixel belongs to the second pre-processed image;
if the corresponding first pixel belongs to the first highlight area, setting the gray value of the second pixel to be 255;
if the corresponding first pixel does not belong to the first highlight area, setting the gray value of the second pixel to be 0;
and determining an image composed of the second pixel and other pixels except the second pixel in the second preprocessed image as a third preprocessed image.
6. The method according to claim 1, wherein the step of determining the car light information according to the first image and a preset constraint condition comprises:
obtaining an effective area of the first image based on the shooting condition of the image to be processed;
searching the effective area for a connected domain; the connected domain comprises a plurality of;
determining an effective connected domain pair according to the central distance of any two connected domains; the pair of valid connected domains comprises two connected domains;
calculating a parameter value corresponding to a preset parameter of the effective connected domain pair;
if the parameters meet preset constraint conditions, determining that the effective connected domain pair comprises car light information;
obtaining vehicle lamp information based on the effective connected domain; the car light information includes a car light position and a car light brightness.
7. The method of claim 6, wherein the vehicle light positions comprise a left vehicle light position and a right vehicle light position;
the step of determining target vehicle information according to the vehicle light information and the radar data includes:
determining the midpoint of the line connecting the left vehicle lamp position and the right vehicle lamp position as the target pixel point;
tracking the target pixel point according to a pre-obtained predicted position, and determining an effective car light position;
determining a preliminary position of the target vehicle according to the effective vehicle lamp position and a preset coordinate conversion model;
determining the position of an effective target point according to the radar data;
performing data association on the preliminary position of the target vehicle and the effective target point position to obtain a matched target point;
tracking the target point to obtain the position and the speed of a target vehicle;
determining the state of a target vehicle according to the brightness of the vehicle lamp; the state of the target vehicle comprises normal running or braking;
and determining the position, the speed and the state of the target vehicle as target vehicle information.
8. A night vehicle detection apparatus fusing radar and vehicle lamp detection, characterized by comprising:
the data acquisition module is used for acquiring the image to be processed and corresponding radar data;
the image processing module is used for processing the image to be processed based on a self-adaptive threshold algorithm and a vehicle lamp color threshold range to obtain a first image containing vehicle lamp information;
the vehicle light information determining module is used for determining vehicle light information according to the first image and a preset constraint condition;
and the vehicle information determining module is used for determining target vehicle information according to the vehicle lamp information and the radar data.
9. An electronic device comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the method of any one of claims 1 to 7.
10. A machine-readable storage medium having stored thereon machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the method of any of claims 1-7.
CN202011414956.8A 2020-12-03 2020-12-03 Night vehicle detection method and device integrating radar and vehicle lamp detection Pending CN112348813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011414956.8A CN112348813A (en) 2020-12-03 2020-12-03 Night vehicle detection method and device integrating radar and vehicle lamp detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011414956.8A CN112348813A (en) 2020-12-03 2020-12-03 Night vehicle detection method and device integrating radar and vehicle lamp detection

Publications (1)

Publication Number Publication Date
CN112348813A true CN112348813A (en) 2021-02-09

Family

ID=74427415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011414956.8A Pending CN112348813A (en) 2020-12-03 2020-12-03 Night vehicle detection method and device integrating radar and vehicle lamp detection

Country Status (1)

Country Link
CN (1) CN112348813A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103150898A (en) * 2013-01-25 2013-06-12 Datang Mobile Communications Equipment Co., Ltd. Method and device for nighttime vehicle detection and method and device for nighttime vehicle tracking
CN109523555A (en) * 2017-09-18 2019-03-26 Baidu Online Network Technology (Beijing) Co., Ltd. Front-vehicle braking behavior detection method and apparatus for autonomous driving vehicles
CN108037505A (en) * 2017-12-08 2018-05-15 Jilin University Night front vehicle detection method and system
EP3579013A1 (en) * 2018-06-08 2019-12-11 KPIT Technologies Ltd. System and method for detecting a vehicle in night time
US20200125869A1 (en) * 2018-10-17 2020-04-23 Automotive Research & Testing Center Vehicle detecting method, nighttime vehicle detecting method based on dynamic light intensity and system thereof
CN111090096A (en) * 2020-03-19 2020-05-01 Nanjing Zhaoyue Intelligent Technology Co., Ltd. Night vehicle detection method, device and system
CN111965636A (en) * 2020-07-20 2020-11-20 Chongqing University Night target detection method based on millimeter-wave radar and vision fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIN Lisheng; CHENG Lei; CHENG Bo: "Nighttime Front Vehicle Detection Based on Millimeter-Wave Radar and Machine Vision", Journal of Automotive Safety and Energy, No. 02, 30 June 2016 (2016-06-30) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743226A (en) * 2021-08-05 2021-12-03 Wuhan University of Technology Daytime front-vehicle light-signal recognition and early-warning method and system
CN113743226B (en) * 2021-08-05 2024-02-02 Wuhan University of Technology Daytime front-vehicle light-signal recognition and early-warning method and system
CN113784482A (en) * 2021-09-18 2021-12-10 Hefei University of Technology Intelligent vehicle headlamp system
CN115631160A (en) * 2022-10-19 2023-01-20 Wuhan Haiwei Technology Co., Ltd. LED lamp fault detection method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105981042B (en) Vehicle detection system and method
Narote et al. A review of recent advances in lane detection and departure warning system
CN104778444B (en) The appearance features analysis method of vehicle image under road scene
CN104732235B (en) Vehicle detection method for eliminating road reflection interference at night
US10081308B2 (en) Image-based vehicle detection and distance measuring method and apparatus
Alcantarilla et al. Automatic LightBeam Controller for driver assistance
CN110450706B (en) Self-adaptive high beam control system and image processing algorithm
CN112348813A (en) Night vehicle detection method and device integrating radar and vehicle lamp detection
US20150278615A1 (en) Vehicle exterior environment recognition device
CN107891808B (en) Driving reminding method and device and vehicle
KR101511853B1 (en) Night-time vehicle detection and positioning system and method using multi-exposure single camera
CN107886034B (en) Driving reminding method and device and vehicle
CN103208185A (en) Method and system for nighttime vehicle detection on basis of vehicle light identification
KR101840974B1 (en) Lane identification system for autonomous drive
EP2813973B1 (en) Method and system for processing video image
CN109447093B (en) Vehicle tail lamp detection method based on YUV image
CN106803066B (en) Vehicle yaw angle determination method based on Hough transformation
KR101026778B1 (en) Vehicle image detection apparatus
Wang et al. Lane detection based on two-stage noise features filtering and clustering
Bi et al. A new method of target detection based on autonomous radar and camera data fusion
JP2002083301A (en) Traffic monitoring device
KR20080004833A (en) Apparatus and method for detecting a navigation vehicle in day and night according to luminous state
KR20160108344A (en) Vehicle detection system and method thereof
CN107506739B (en) Night forward vehicle detection and distance measurement method
JP4969359B2 (en) Moving object recognition device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination