CN102737236B - Method for automatically acquiring vehicle training sample based on multi-modal sensor data - Google Patents


Info

Publication number: CN102737236B (application CN201210234127.0A)
Authority: CN (China)
Prior art keywords: vehicle, data, laser, candidate, image
Legal status: Expired - Fee Related
Original language: Chinese (zh)
Other versions: CN102737236A
Inventors: 王超, 赵卉菁
Assignee (original and current): Peking University
Application CN201210234127.0A filed by Peking University; priority to CN201210234127.0A
Publication of application CN102737236A; application granted; publication of CN102737236B

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatically acquiring vehicle training samples based on multi-modal sensor data. The method comprises the following steps. A vehicle detection step based on laser and positioning data: from the range and angle of the laser data and the laser sensor calibration parameters, two-dimensional coordinates relative to the data-collection vehicle are computed so as to describe the horizontal contour of each object; by shape analysis and by detecting and tracking moving objects, a time series of parameters such as the position and orientation of each candidate vehicle relative to the data-collection vehicle is extracted. A visual image sample extraction step: according to the position and orientation of each candidate vehicle at each moment, and on the basis of the geometric relation between the laser sensor and the image-acquisition device, the candidate vehicle is projected into the image to produce a region of interest; the region of interest is corrected with a detector; for each candidate vehicle, its viewing angle relative to the camera is computed from parameters such as its position and orientation, image-frame samples with similar viewing angles are removed, and sample pictures of the candidate vehicle under different viewing angles are extracted automatically.

Description

Method for automatically acquiring vehicle training samples based on multi-modal sensor data
Technical field
The present invention relates to the fields of computer vision, robotics and machine learning, and in particular to a method for automatically acquiring vehicle training samples based on multi-modal sensor data.
Background technology
Vehicle detection is a major problem in the field of advanced driver assistance systems (ADAS) for automotive safety. A large body of related work exists, and research has shown that vehicles can be detected with laser, radar, monocular or stereo cameras, and multi-sensor fusion.
Because monocular cameras are inexpensive and simple to calibrate, detection methods based on monocular vision have been studied widely in computer vision and robotics. With a vision sensor, however, the appearance of a vehicle differs greatly between vehicles and between viewing angles of the same vehicle, which makes detection difficult. Recently, more and more researchers have tried to detect vehicles with machine-learning methods.
In these methods, a detector is trained in advance on a set of sample pictures, and many public data sets are available for this purpose. PASCAL provides a number of standardized data sets for object detection.
The UIUC data set is dedicated to vehicle detection and recognition. It contains 550 vehicle pictures at a resolution of 100 × 40 as positive training samples, plus two test sets: 170 single-scale vehicles at the same resolution as the positive samples, and 108 pictures containing 139 vehicles at multiple scales.
Many studies use this data set to report results. However, the UIUC vehicles are all side views, whereas in on-road vehicle detection most detected vehicles are seen from the front or rear, so the data set is not applicable there.
A further drawback is that the UIUC pictures are grayscale, which considerably restricts the feature spaces available to a detector trained on them. Unlike UIUC, the MIT data set contains 516 positive sample pictures at a resolution of 128 × 128, all of them front or rear views.
For the best-performing current methods, the training samples are the key factor determining performance. To study multi-view vehicle detectors, researchers at USC built a data set of multi-view vehicle sample images and test images. It contains 1028 positive sample images of vehicles from all angles at a resolution of 128 × 64, and 196 test images containing 410 vehicles at different scales and angles.
However, none of the samples in this data set carries vehicle pose information, so the training data usually have to be labelled and classified by hand according to the training requirements, which considerably restricts both the number and the appearance diversity of the samples. This has become a bottleneck limiting algorithm development, and detector performance is usually not robust to environmental change.
Summary of the invention
The object of the present invention is to provide a method for automatically acquiring vehicle training samples based on multi-modal sensor data, that is, a method that automatically generates multi-angle vehicle sample images together with their pose information.
The invention discloses a method for automatically acquiring vehicle training samples based on multi-modal sensor data, comprising the following steps:
Vehicle detection step based on laser and positioning data: from the range and angle of the laser data and the laser sensor calibration parameters, compute two-dimensional coordinates relative to the data-collection vehicle so as to describe the horizontal contour of each object; by shape analysis and by detecting and tracking moving objects, extract a time series of parameters such as the position and orientation of each candidate vehicle relative to the data-collection vehicle;
Visual image sample extraction step: according to the position and orientation of each candidate vehicle at each moment, and the geometric relation between the laser sensor and the image-acquisition device, project the candidate vehicle into the image to produce a region of interest; correct the region of interest with a detector; for each candidate vehicle, compute its viewing angle relative to the camera from parameters such as its position and orientation, remove image-frame samples with similar viewing angles, and automatically extract sample pictures of the candidate vehicle under different viewing angles.
Further, preferably, the vehicle detection step based on laser and positioning data further comprises:
Data fusion: fuse the data from the individual laser sensors that share the same or adjacent timestamps;
Clustering: cluster the data from each laser sensor according to the distance between adjacent points;
Labelling: classify each cluster as a static object, a moving object, or uncertain; Map generation: generate a map describing the static environment around the data-collection vehicle's trajectory;
Detection: find candidate vehicles in the current frame of classified, fused laser data;
Tracking: associate the detection results with the previous tracking results, and update the tracking state and the body and motion parameters;
Verification: verify each tracked object by its motion and shape information.
Further, preferably, the visual image sample extraction step further comprises:
Region-of-interest extraction based on laser data: according to the position and orientation of each candidate vehicle at each moment, and the geometric relation between the laser sensor and the image-acquisition device, project the candidate vehicle into the image and extract the region of interest containing it;
Region-of-interest correction based on image techniques: use an image-based detection method to correct the region of interest and find the candidate vehicle within it;
Vehicle sample image extraction and de-duplication: according to the correction results, remove image-frame samples with similar viewing angles and automatically extract sample pictures of the candidate vehicle under different viewing angles.
Further, preferably, the multi-modal sensors comprise multi-view laser sensors, multi-view image-acquisition devices and a positioning system; the multi-view laser sensors and multi-view image-acquisition devices monitor the surroundings of the data-collection vehicle and together cover multiple observation angles.
Further, preferably, the positioning system measures the six-degree-of-freedom pose of the vehicle.
Further, preferably, the detector used to correct the region of interest adopts an image-based detection method.
Further, preferably, the tracking is specifically as follows: a detection result for which no associated track is found is treated as a newly tracked vehicle; a tracked vehicle with no associated detection result is considered to have left the vehicle's monitoring range and is removed from the tracking results.
Further, preferably, the verification is specifically as follows: if a tracked object does not move for some period of time, it is regarded as static and merged into the map; if the motion or shape information of a tracked object changes erratically over a short time, the result is discarded; only tracks whose motion and shape vary normally are regarded as candidate vehicles.
Further, preferably, the region-of-interest correction is specifically as follows: according to the pose of the candidate vehicle in the region of interest, a detector trained on vehicles of that specific pose is selected to correct it.
Further, preferably, the removal of repeated samples is specifically as follows: sample images with identical or similar appearance are filtered according to the tracked vehicle's direction of motion relative to the data-collection vehicle and its heading.
Through multi-modal sensor data, the present invention automatically generates multi-angle vehicle sample images together with their pose information, effectively avoiding manual labelling and giving detection-algorithm research more freedom and fewer restrictions. Moreover, automatic generation of training samples makes online training possible, so that a classifier can be improved automatically to adapt to changes in illumination and other environmental conditions.
The present invention requires no manual intervention and can obtain a large number of vehicle samples, making the training set richer; the training pictures obtained carry the vehicle's pose information, which facilitates training classifiers for different poses.
Brief description of the drawings
The invention will be understood more completely, and many of its attendant advantages more readily appreciated, from the following detailed description considered in conjunction with the accompanying drawings. The drawings are provided for a further understanding of the invention and form a part of this specification; together with the description they serve to explain the invention and do not limit it improperly. In the drawings:
Fig. 1 is a flowchart of an embodiment of the present invention;
Fig. 2 is a flowchart of an embodiment of the vehicle detection step based on laser and positioning data;
Fig. 3 is a flowchart of an embodiment of the visual image sample extraction step.
Detailed description of the embodiments
Embodiments of the invention are described below with reference to Figs. 1 to 3.
To make the above objects, features and advantages more apparent and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
As shown in Fig. 1, a method for automatically acquiring vehicle training samples based on multi-modal sensor data comprises the following steps:
S1, vehicle detection based on laser and positioning data: from the range and angle of the laser data and the laser sensor calibration parameters, compute two-dimensional coordinates relative to the data-collection vehicle so as to describe the horizontal contour of each object; by shape analysis and by detecting and tracking moving objects, extract the candidate vehicles;
S2, visual image sample extraction: according to the position and orientation of each candidate vehicle at each moment, and the geometric relation between the laser sensor and the image-acquisition device, project the candidate vehicle into the image to produce a region of interest; correct the region of interest with a detector; for each candidate vehicle, compute its viewing angle relative to the camera from parameters such as its position and orientation, remove image-frame samples with similar viewing angles, and automatically extract sample pictures of the candidate vehicle under different viewing angles.
The present invention builds a system that automatically generates a vehicle sample data set for training vision-based vehicle detectors. The data set contains multi-angle vehicle sample images, each carrying its pose information, which makes it possible to train vehicle detectors for vehicles at different angles.
The multi-modal sensors comprise laser sensors, image-acquisition devices and a positioning system. Laser sensors include laser scanners, laser rangefinders and the like; the image-acquisition device may be a camera or a camera system integrating one or more cameras. The positioning system is any device that provides the position of the object carrying it, such as GPS, the Galileo satellite positioning system, or the BeiDou satellite positioning system.
Together the sensors cover a multi-view range around the data-collection vehicle; according to practical needs and the limitations of the data-collection platform, different sensor mountings can be chosen to cover different viewing ranges.
Sensor system:
The invention discloses an on-board sensor system comprising three kinds of sensors: laser scanners, a camera and a GPS/IMU. The GPS/IMU is a commercial unit that measures the six-degree-of-freedom pose of the vehicle (three-dimensional position and attitude). The lasers and the camera both monitor the vehicle's surroundings, and each kind of sensor provides omnidirectional coverage. Three Hokuyo UTM-30LX laser scanners, mounted at the front left, front right and rear of the vehicle, form omnidirectional horizontal coverage; since the monitoring range of the UTM-30LX is short, usually only about 25 m in outdoor traffic environments, a SICK LMS291 laser is added at the front center to cover a semicircular area with a radius of up to 45 m. Omnidirectional video coverage could be achieved with multiple cameras; here a Ladybug camera integrating six individual cameras is used, whose output fuses the six pictures into a panoramic image. To reduce occlusion, the Ladybug is mounted on top of the vehicle. Two time-synchronized computers record the sensor data: one collects the laser scanner and GPS/IMU data, the other the Ladybug video. Each data frame is stamped with the recording computer's clock time; the delay introduced by data transmission is treated as a constant that can be measured in advance. After sensor calibration, all laser scanner data are transformed into a common coordinate system, the local frame of the data-collection vehicle, and the laser processing results are projected onto the panoramic image, from which the image samples are extracted.
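The timestamping scheme described above (a recording-computer clock stamp plus a constant, pre-measured transmission delay) can be sketched as follows; the sensor names and delay values are illustrative assumptions, not part of the patent:

```python
# Sketch of the timestamping scheme: each frame is stamped with the
# recording computer's clock, and a constant, pre-measured transmission
# delay is subtracted to recover the capture time.
CONST_DELAY = {"laser": 0.012, "camera": 0.045}  # seconds, measured offline

def capture_time(record_time, sensor_kind):
    """Estimate when a frame was actually captured."""
    return record_time - CONST_DELAY[sensor_kind]

def nearest_frame(frames, t):
    """Pick the frame whose corrected timestamp is closest to time t,
    e.g. to pair a panoramic image with the fused laser data."""
    return min(frames, key=lambda f: abs(capture_time(f["t"], f["kind"]) - t))
```
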
Processing flow:
As shown in Fig. 1, the present invention comprises two steps:
S1, vehicle detection based on laser and positioning data;
S2, visual sample picture extraction.
A laser scanner directly measures the range to objects. From the beam angle and the sensor calibration parameters, two-dimensional coordinates relative to the vehicle can be computed, describing the horizontal contour of each object. By shape analysis, and by detecting and tracking moving objects, a time series of parameters such as the position and orientation of each candidate vehicle relative to the data-collection vehicle can be extracted quickly.
According to the geometric relation between the lasers and the camera, these candidate vehicles are then projected into the panoramic image to produce regions of interest, each of which also carries the vehicle's pose information at the current moment.
However, the laser points obtained on an object are sparse, and reflection can fail on materials of particular colors. In dynamic traffic environments in particular, heavy occlusion means that surrounding vehicles may be observed only partially. This makes laser-data processing very challenging, and the results contain some errors.
Unlike laser data, video data contain rich information that can be used to correct the regions of interest produced from the laser processing results. In the present invention, a detector based on HOG features is used to correct the regions of interest.
In addition, while a candidate vehicle is being tracked, its appearance in the image changes very slowly. Extracting vehicle images from every frame would produce a large number of images with similar viewing angles, so a step that selects vehicle pictures of different poses is needed.
As shown in Fig. 2, the vehicle detection step based on laser and positioning data further comprises:
S11, data fusion: fuse the data from the individual laser scanners that share the same or adjacent timestamps;
S12, clustering: cluster the data from each laser sensor according to the distance between adjacent points;
S13, labelling: classify each cluster as a static object, a moving object, or uncertain;
S14, map generation: generate a map describing the static environment around the data-collection vehicle's trajectory;
S15, detection: find candidate vehicles in the current frame of classified, fused laser data;
S16, tracking: associate the detection results with the previous tracking results, and update the tracking state and the body and motion parameters;
S17, verification: verify each tracked object by its motion and shape information.
The above workflow is explained below.
S1, vehicle detection based on laser and positioning data:
The present invention discloses a method for detecting and tracking road vehicles from multiple single-line laser scanner data and positioning data. From the range and angle of the laser data and the laser sensor calibration parameters, two-dimensional coordinates relative to the data-collection vehicle are computed so as to describe the horizontal contour of each object; by shape analysis and by detecting and tracking moving objects, a time series of parameters such as the position and orientation of each candidate vehicle relative to the data-collection vehicle is extracted. These data supply step S2, visual sample picture extraction, with each candidate vehicle's position and orientation relative to the data-collection vehicle at every moment. The framework of vehicle detection based on laser scanner and positioning data is shown in Fig. 2; its modules are introduced below.
S11, data fusion: fuse the data from the individual laser scanners that share the same or adjacent timestamps. To reduce memory use, the angle information is not stored explicitly; the fused data record the range readings from the different laser sensors in temporal order, and each range reading recovers its angle from its position in the sequence. Using the calibration parameters between the lasers, each reading can then be converted into two-dimensional coordinates relative to the data-collection vehicle. The fused laser data describe the contours of objects as seen from the data-collection vehicle's viewpoint.
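The conversion of a range reading into vehicle-frame coordinates described in S11 can be sketched as a polar-to-Cartesian conversion followed by a rigid transform through the scanner's calibrated mounting pose; the (x, y, yaw) pose representation is an illustrative assumption:

```python
import math

def beam_to_vehicle_frame(r, theta, sensor_pose):
    """Convert one laser reading (range r, beam angle theta) from the
    scanner frame into 2-D coordinates in the data-collection vehicle's
    frame, using the scanner's calibrated mounting pose (x, y, yaw)."""
    sx, sy, syaw = sensor_pose
    # point in the scanner's own frame
    px, py = r * math.cos(theta), r * math.sin(theta)
    # rigid transform (rotate by yaw, then translate) into the vehicle frame
    vx = sx + px * math.cos(syaw) - py * math.sin(syaw)
    vy = sy + px * math.sin(syaw) + py * math.cos(syaw)
    return vx, vy
```
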
S12, clustering: cluster the data from each laser sensor according to the distance between adjacent points. The Euclidean distance between points is used, with the angular interval also taken into account; if the distance exceeds a given threshold, a new cluster is started. Each cluster can be regarded as one observation of an object, which may be moving or static. Clustering is performed only within the data of a single laser sensor.
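The distance-threshold clustering of S12 can be sketched as a single pass over an angle-ordered scan; the 0.5 m gap threshold is an illustrative assumption, as the patent only requires "a given threshold":

```python
import math

def cluster_scan(points, max_gap=0.5):
    """Split an angle-ordered scan into clusters of 2-D points: a new
    cluster starts whenever the Euclidean distance between consecutive
    points exceeds max_gap."""
    clusters = []
    for p in points:
        if clusters and math.dist(clusters[-1][-1], p) <= max_gap:
            clusters[-1].append(p)   # close to the previous point: same object
        else:
            clusters.append([p])     # gap too large: start a new observation
    return clusters
```
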
S13, labelling: classify each cluster as a static object, a moving object, or uncertain. First, using the position recorded by the data-collection vehicle's GPS/IMU, each cluster is projected into the global coordinate system. A cluster that matches the prediction of the static environment from the previous frame is considered static; one that matches the prediction of some moving object is considered moving; otherwise it is considered uncertain. Prior knowledge, such as vehicle size and road geometry, can also assist the classification. The classified laser data are used in the map generation and moving-object detection modules.
S14, map generation: generate a map describing the static environment around the data-collection vehicle's trajectory. The map is represented as a grid; the value of each cell is the probability that it is occupied by an object.
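The occupancy-grid update of S14 can be sketched minimally as follows; the dictionary grid representation and the probability increment are illustrative assumptions, since the patent only specifies that each cell holds an occupancy probability:

```python
def update_grid(grid, static_hits, hit_inc=0.2):
    """Raise the occupancy probability of every cell that received a
    laser return labelled 'static', clamping to [0, 1]. `grid` maps a
    cell index (i, j) to its current occupancy probability."""
    for cell in static_hits:
        grid[cell] = min(1.0, grid.get(cell, 0.0) + hit_inc)
    return grid
```
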
S15, detection: find candidate vehicles in the current frame of classified, fused laser data. Partial observations and overlapping observations are the two difficulties here. To improve the accuracy of parameter estimation and reduce false detections, cluster results must be merged. In the present invention, a simple outline box defines the vehicle model, and an algorithm for cluster merging and model estimation is developed. Predictions from previous detection and tracking results also help reduce merging errors, especially for clusters that belong to the same vehicle but are not contiguous.
S16, tracking: associate the detection results with the previous tracking results, and update the tracking state and the body and motion parameters. A detection result for which no associated track is found is treated as a newly tracked vehicle; a tracked vehicle with no associated detection result is considered to have left the vehicle's monitoring range and is removed from the tracking results.
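The association rule of S16 (unmatched detections start new tracks, unmatched tracks are dropped) can be sketched with a greedy nearest-neighbour matcher; the distance gate and the bare-point track state are illustrative assumptions:

```python
import math

def associate(detections, tracks, gate=2.0):
    """Greedy nearest-neighbour association between current detections
    and existing tracks (both lists of 2-D positions). Each track takes
    its nearest detection within the gate; leftover detections become
    new tracks, and tracks with no match are dropped."""
    new_tracks = []
    unmatched = list(detections)
    for tr in tracks:
        if unmatched:
            d = min(unmatched, key=lambda p: math.dist(p, tr))
            if math.dist(d, tr) <= gate:
                new_tracks.append(d)   # track updated to its detection
                unmatched.remove(d)
                continue
        # no matching detection: the track is assumed to have left view
    new_tracks.extend(unmatched)       # each leftover detection -> new track
    return new_tracks
```
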
S17, verification: verify each tracked object by its motion and shape information. If a tracked object does not move for some period of time, it is regarded as static and merged into the map. If its motion or shape information changes erratically over a short time, the result is discarded. Only tracks whose motion and shape vary normally are regarded as candidate vehicles.
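The three verification rules of S17 can be sketched directly; the field names and threshold values are illustrative assumptions:

```python
def verify(track):
    """Classify a track using the motion/shape rules above. `track` is a
    dict with hypothetical fields: total displacement over the recent
    window (m), and frame-to-frame variation of its estimated length (m)."""
    if track["displacement"] < 0.5:       # barely moved: merge into the map
        return "static"
    if track["length_variation"] > 1.0:   # shape jumps erratically: discard
        return "rejected"
    return "candidate_vehicle"            # smooth motion and shape
```
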
As shown in Fig. 3, the visual image sample extraction step further comprises:
S2, visual sample picture extraction:
The present invention discloses a visual sample picture extraction method: using the time series of parameters such as the position and orientation of each candidate vehicle relative to the data-collection vehicle obtained in S1, and the geometric relation between the laser sensor and the image-acquisition device, each candidate vehicle is projected into the image according to its position and orientation at each moment to produce a region of interest; a detector is used to correct the region of interest; for each candidate vehicle, its viewing angle relative to the camera is computed from parameters such as its position and orientation, image-frame samples with similar viewing angles are removed, and sample pictures of the candidate vehicle under different viewing angles are extracted automatically. The framework of visual sample picture extraction is shown in Fig. 3; its modules are introduced below.
S21, region-of-interest extraction based on laser data: according to the position and orientation of each candidate vehicle at each moment, and the geometric relation between the laser sensor and the image-acquisition device, project the candidate vehicle into the image and extract the region of interest containing it;
S22, region-of-interest correction based on image techniques: use an image-based detection method to correct the region of interest and find the vehicle within it;
S23, vehicle sample image extraction and de-duplication: according to the correction results, remove image-frame samples with similar viewing angles and automatically extract sample pictures of the vehicle under different viewing angles.
The above workflow is explained below.
S2, visual image sample extraction:
Using the candidate vehicles obtained from laser-data processing, the flow of extracting vehicle sample pictures from the video data is shown in Fig. 3.
S21, region-of-interest extraction based on laser data: each candidate vehicle obtained from laser-data processing carries the vehicle's position, size and motion information. Using the calibration relation between the laser scanners and the camera, the candidate vehicle is projected into the image of the corresponding time to obtain the region of interest containing it, and the vehicle's pose inside the region of interest is obtained from the corresponding motion information. Because laser-data processing in dynamic traffic environments is very difficult, the regions of interest often contain errors and need correction.
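The projection of a laser-detected outline box into an image region of interest, as in S21, can be sketched with an ordinary pinhole model; the patent itself projects onto a calibrated panoramic image, so the intrinsics and the fixed vehicle height below are illustrative assumptions:

```python
def box_to_roi(corners_xy, fx, fy, cx, cy, assumed_height=1.6):
    """Project the ground-plane corners of a laser-detected outline box
    (camera frame: x right, y forward/depth) into the image with a
    pinhole model, assuming a fixed vehicle height; returns the bounding
    region of interest as (u0, v0, u1, v1) in pixels."""
    us, vs = [], []
    for x, y in corners_xy:
        us.append(cx + fx * x / y)               # horizontal image coordinate
        vs.append(cy)                            # ground contact line (z = 0)
        vs.append(cy - fy * assumed_height / y)  # roof line (z = height)
    return min(us), min(vs), max(us), max(vs)
```
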
S22, region-of-interest correction based on image techniques: in the present invention, a Ladybug panoramic camera system integrating six cameras monitors the vehicle's surroundings, and the panoramic images it generates exhibit some geometric distortion. The distortion in each region of interest must be removed beforehand; projecting the pixels of the region of interest onto the plane tangent to the viewing sphere at that region effectively removes it.
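Projecting panorama pixels onto the tangent plane, as described above, corresponds to the standard gnomonic projection; a minimal sketch, with angles in radians and the tangent point (lon0, lat0) at the centre of the region of interest:

```python
import math

def tangent_plane(lon, lat, lon0, lat0):
    """Gnomonic projection: map a panorama viewing direction (lon, lat)
    onto the plane tangent to the viewing sphere at (lon0, lat0), which
    removes the spherical distortion inside a small region of interest."""
    c = (math.sin(lat0) * math.sin(lat)
         + math.cos(lat0) * math.cos(lat) * math.cos(lon - lon0))
    x = math.cos(lat) * math.sin(lon - lon0) / c
    y = (math.cos(lat0) * math.sin(lat)
         - math.sin(lat0) * math.cos(lat) * math.cos(lon - lon0)) / c
    return x, y
```
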
Because the pose of the vehicle in the region of interest is known, the correction can be performed with pose-specific detectors. Detectors based on HOG features are used to correct the regions of interest. Classifiers for vehicles at different angles are trained on the USC multi-angle vehicle data, which are classified by hand; 200 pictures per class are chosen as positive samples, and four classifiers are trained. Vehicle detection is then run on the region of interest, and the detection with the highest score above a given threshold is chosen as the sample picture.
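Choosing, within one region of interest, the highest-scoring detection above a threshold can be sketched as follows; the score threshold and the (score, box) tuple shape are illustrative assumptions:

```python
def correct_roi(detections, threshold=0.5):
    """Among the detector hits inside one region of interest, pick the
    highest-scoring one above the threshold; return None when nothing
    passes. Each hit is a (score, box) pair."""
    passing = [d for d in detections if d[0] >= threshold]
    return max(passing, key=lambda d: d[0]) if passing else None
```
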
S23, vehicle sample extraction and de-duplication: while a candidate vehicle is being tracked, its appearance in the image changes slowly; the appearance of the same vehicle usually differs appreciably only after many frames. Moreover, vehicles travelling on a road are often nearly stationary relative to one another, in which case a vehicle's appearance does not change for a long time, so pictures with identical or similar appearance must be filtered out. The vehicle's direction of motion relative to the data-collection vehicle and its heading are the factors with the greatest influence on its appearance in the picture; for each vehicle, its direction of motion α relative to the data-collection vehicle and its heading β can be computed at every moment. These two angles are discretized into a 10° × 10° angle grid, and within each grid cell the image of the vehicle with the highest score in the correction step is extracted as the sample. Furthermore, the difference between the two angles determines the vehicle's pose in the sample picture, and the sample pictures are divided into 8 classes accordingly.
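The 10° × 10° angle grid and the 8 pose classes of S23 can be sketched as follows; the patent states the grid size and the number of classes, while the exact 45° sector layout of `pose_class` is an illustrative assumption:

```python
def angle_bin(alpha_deg, beta_deg, cell=10):
    """Discretise (motion direction alpha, heading beta) into the
    10 x 10 degree grid used for de-duplication; one sample is kept
    per (vehicle, bin)."""
    return (int(alpha_deg % 360) // cell, int(beta_deg % 360) // cell)

def pose_class(alpha_deg, beta_deg):
    """Map the relative angle between heading and motion direction to
    one of 8 pose classes (45-degree sectors)."""
    return int(((beta_deg - alpha_deg) % 360) // 45)
```
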
Embodiments of the invention have been explained above, but many variations that do not depart from the essential inventive concept and effects of the invention will be readily apparent to those skilled in the art. Such variations are therefore also included within the scope of protection of the present invention.

Claims (8)

1., based on a multi-modal sensing data automatic acquisition vehicle training sample method, it is characterized in that, comprise the following steps:
Vehicle detection step based on laser, locator data: according to the distance of laser data, angle and laser sensor calibrating parameters, obtain the two-dimensional coordinate relative to data acquisition vehicle, to describe the profile information of object level; By the analysis of shape, and the detection of mobile object is followed the trail of, and extracts candidate's vehicle relative to the isoparametric time series of the locality of data acquisition vehicle;
Visual pattern sample extraction step: according to the locality of candidate's vehicle in each moment, according to the geometric relationship between laser sensor and image capture device, this candidate's vehicle is projected in image, produce area-of-interest, and use detecting device to revise area-of-interest, to each candidate's vehicle, the relative perspective of this candidate's vehicle relative to video camera is calculated according to parameters such as its localities, remove the picture frame sample that visual angle is close, automatically extract the samples pictures of this candidate's vehicle under different visual angles;
wherein the vehicle detection step based on laser and positioning data further comprises:
Data fusion: fusing the data from the laser scanners acquired at the same or adjacent times;
Clustering: clustering the data from each laser sensor according to the distance between adjacent points;
Labeling: classifying each cluster as one of three types: stationary object, moving object, or uncertain;
Map generation: generating a map describing the static environment around the motion track of the data acquisition vehicle;
Detection: searching the classified and fused laser data of the current frame for candidate vehicles;
Tracking: associating the current detection results with the previous tracking results, and updating the tracking state and the vehicle-body and motion parameters;
Verification: verifying candidate vehicles through the motion and shape information of the tracked objects;
wherein the visual image sample extraction step further comprises:
Region-of-interest extraction based on laser data: projecting the laser tracking results into the image according to the calibration parameters of the camera and the laser, and extracting regions of interest containing candidate vehicles;
Region-of-interest correction based on image techniques: correcting the regions of interest using an image-based detection method to find the vehicles therein;
Vehicle sample image extraction and de-duplication: according to the correction results, removing images in which the pose of the same vehicle is identical or similar, and obtaining vehicle sample images through screening.
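The laser-processing front end of claim 1 (conversion of range/angle readings to two-dimensional coordinates using calibration parameters, followed by distance-based clustering) can be sketched as follows. This is only an illustrative reconstruction, not part of the claims; all function names, parameter names, and the 0.5 m gap threshold are hypothetical:

```python
import math

def polar_to_cartesian(scan, sensor_x, sensor_y, sensor_yaw):
    """Convert laser (range, bearing) readings to 2-D points in the
    data-acquisition vehicle's frame, using the sensor's mounting
    (calibration) parameters: planar offset and yaw."""
    points = []
    for dist, angle in scan:
        a = angle + sensor_yaw
        points.append((sensor_x + dist * math.cos(a),
                       sensor_y + dist * math.sin(a)))
    return points

def cluster_points(points, max_gap=0.5):
    """Group consecutive points whose neighbour distance stays below
    max_gap (metres) into clusters, one cluster per object contour."""
    clusters, current = [], []
    for p in points:
        if current and math.dist(current[-1], p) > max_gap:
            clusters.append(current)  # gap too large: start a new cluster
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    return clusters
```

Each resulting cluster is a candidate object contour that the subsequent labeling, detection, and tracking steps would operate on.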
2. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 1, characterized in that the multi-modal sensors comprise: multi-view laser scanners, multi-view cameras, and a GPS/IMU positioning system; the multi-view laser scanners and multi-view cameras are used to monitor the environment around the data acquisition vehicle, forming omnidirectional coverage.
3. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 2, characterized in that the GPS/IMU is used to measure the six-degree-of-freedom pose of the vehicle.
4. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 1, characterized in that the detector adopts an image-based detection method.
5. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 1, characterized in that the tracking is specifically: detection results for which no associated tracking result is found are regarded as newly tracked vehicles; tracked vehicles that are not associated with any detection result are considered to have left the monitoring range of the vehicle and are removed from the tracking results.
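The association rule of claim 5 (unmatched detections start new tracked vehicles; unmatched tracks are removed as having left the monitoring range) can be illustrated with a minimal nearest-neighbour sketch. The data layout, gating threshold, and track-id scheme are assumptions for illustration, not part of the claims:

```python
import math

def associate(tracks, detections, max_dist=2.0):
    """Match each existing track (id, position) to the nearest unused
    detection within max_dist metres.

    Detections with no associated track become new tracked vehicles;
    tracks with no associated detection are dropped (vehicle left
    the monitoring range)."""
    updated, used = [], set()
    for tid, pos in tracks:
        best, best_d = None, max_dist
        for i, det in enumerate(detections):
            if i not in used and math.dist(pos, det) < best_d:
                best, best_d = i, math.dist(pos, det)
        if best is not None:
            used.add(best)
            updated.append((tid, detections[best]))  # track updated
        # else: track unmatched -> removed from the tracking results
    next_id = max((tid for tid, _ in tracks), default=-1) + 1
    for i, det in enumerate(detections):
        if i not in used:  # unmatched detection -> new tracked vehicle
            updated.append((next_id, det))
            next_id += 1
    return updated
```

A full tracker would also update the vehicle-body and motion parameters mentioned in claim 1; this sketch shows only the association and lifecycle rule of claim 5.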
6. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 1, characterized in that the verification is specifically: if a tracked object does not move within a period of time, it is regarded as static and merged into the map information; if the motion and shape information of a tracked object changes erratically within a short time, the result is removed; only tracking results whose motion and shape vary normally are regarded as candidate vehicles.
7. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 1, characterized in that the correction of the region of interest is specifically: according to the pose of the candidate vehicle in the region of interest, selecting a detector trained on vehicles of that specific pose to correct it.
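Claim 7's pose-specific detector selection can be sketched as picking, from a bank of detectors trained on discrete viewpoint buckets, the one whose bucket is closest to the candidate vehicle's yaw. The bucket layout and mapping are a hypothetical illustration, not part of the claims:

```python
def select_detector(detectors, vehicle_yaw):
    """Pick the detector trained for the viewpoint bucket closest to
    the candidate vehicle's yaw (degrees).  `detectors` maps a
    representative yaw to a pose-specific detector."""
    def circ_diff(a, b):
        # smallest angular difference on a circle, in degrees
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    key = min(detectors, key=lambda y: circ_diff(y, vehicle_yaw))
    return detectors[key]
```

The selected detector would then be run inside the laser-derived region of interest to correct its bounds.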
8. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 1, characterized in that the removal of repeated samples is specifically: screening out sample images with identical or similar appearance according to the direction of motion of the vehicle relative to the data acquisition vehicle and the heading of the vehicle front.
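The viewing-angle de-duplication of claims 1 and 8 can be sketched as keeping at most one frame per angular neighbourhood of the vehicle-to-camera viewing angle. The 15-degree separation threshold and the data layout are illustrative assumptions, not part of the claims:

```python
def relative_view_angle(vehicle_heading, camera_bearing):
    """Viewing angle of the candidate vehicle relative to the camera:
    the difference between the vehicle's heading and the camera-to-
    vehicle bearing, wrapped to [0, 360) degrees."""
    return (vehicle_heading - camera_bearing) % 360.0

def select_distinct_views(samples, min_sep=15.0):
    """Given (frame, viewing_angle_degrees) pairs, keep one frame per
    ~min_sep degrees, discarding frames whose viewing angle is too
    close (on the circle) to one already kept."""
    kept, kept_angles = [], []
    for frame, angle in samples:
        if all(min(abs(angle - a), 360.0 - abs(angle - a)) >= min_sep
               for a in kept_angles):
            kept.append(frame)
            kept_angles.append(angle)
    return kept
```

The surviving frames are the sample pictures of the candidate vehicle under different viewing angles that the method extracts automatically.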
CN201210234127.0A 2012-07-06 2012-07-06 Method for automatically acquiring vehicle training sample based on multi-modal sensor data Expired - Fee Related CN102737236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210234127.0A CN102737236B (en) 2012-07-06 2012-07-06 Method for automatically acquiring vehicle training sample based on multi-modal sensor data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210234127.0A CN102737236B (en) 2012-07-06 2012-07-06 Method for automatically acquiring vehicle training sample based on multi-modal sensor data

Publications (2)

Publication Number Publication Date
CN102737236A CN102737236A (en) 2012-10-17
CN102737236B true CN102737236B (en) 2015-06-24

Family

ID=46992705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210234127.0A Expired - Fee Related CN102737236B (en) 2012-07-06 2012-07-06 Method for automatically acquiring vehicle training sample based on multi-modal sensor data

Country Status (1)

Country Link
CN (1) CN102737236B (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104276102B (en) * 2013-07-10 2016-05-11 德尔福电子(苏州)有限公司 A kind of viewing system caliberating device detecting based on vehicle location
CN106569840B (en) * 2015-10-08 2020-10-30 上海智瞳通科技有限公司 Method for automatically acquiring sample by machine vision driving auxiliary system to improve identification precision
CN105303837A (en) * 2015-11-24 2016-02-03 东南大学 Method and system for detecting following behavior characteristic parameter of driver
CN106291736A (en) * 2016-08-16 2017-01-04 张家港长安大学汽车工程研究院 Pilotless automobile track dynamic disorder object detecting method
CN106529417A (en) * 2016-10-17 2017-03-22 北海益生源农贸有限责任公司 Visual and laser data integrated road detection method
WO2018176000A1 (en) 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
US10097757B1 (en) * 2017-03-24 2018-10-09 Fotonation Limited Method for determining bias in an inertial measurement unit of an image acquisition device
CN106969923B (en) * 2017-05-26 2023-09-15 交通运输部公路科学研究所 Vehicle channel circular track testing system and method
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US10671349B2 (en) 2017-07-24 2020-06-02 Tesla, Inc. Accelerated mathematical engine
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
CN108398083B (en) * 2018-01-29 2021-03-16 湖南三德科技股份有限公司 Carriage positioning method and positioning device
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
CN108535707B (en) * 2018-03-30 2020-11-03 北京润科通用技术有限公司 Radar performance prediction model establishing method and device
US11215999B2 (en) 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11361457B2 (en) 2018-07-20 2022-06-14 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
CA3115784A1 (en) 2018-10-11 2020-04-16 Matthew John COOPER Systems and methods for training machine models with augmented data
CN112805200B (en) * 2018-10-11 2024-10-29 宝马股份公司 Snapshot image of traffic scene
US11196678B2 (en) 2018-10-25 2021-12-07 Tesla, Inc. QOS manager for system on a chip communications
JP6928917B2 (en) * 2018-11-06 2021-09-01 パナソニックIpマネジメント株式会社 Mobile management system, mobile, management device, control method, and program
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11150664B2 (en) 2019-02-01 2021-10-19 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US10997461B2 (en) 2019-02-01 2021-05-04 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US10956755B2 (en) 2019-02-19 2021-03-23 Tesla, Inc. Estimating object properties using visual image data
CN110853160B (en) * 2019-09-29 2022-01-18 广州市凌特电子有限公司 Multi-dimensional recognition system and method for expressway lane
CN111126336B (en) * 2019-12-31 2023-07-21 潍柴动力股份有限公司 Sample collection method, device and equipment
US11023730B1 (en) * 2020-01-02 2021-06-01 International Business Machines Corporation Fine-grained visual recognition in mobile augmented reality
CN113496213B (en) * 2021-06-29 2024-05-28 中汽创智科技有限公司 Method, device, system and storage medium for determining target perception data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388146A (en) * 2008-06-16 2009-03-18 上海高德威智能交通系统有限公司 Image acquisition and treatment apparatus and method, and vehicle monitoring and recording system
CN102147971A (en) * 2011-01-14 2011-08-10 赵秀江 Traffic information acquisition system based on video image processing technology

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9520040B2 (en) * 2008-11-21 2016-12-13 Raytheon Company System and method for real-time 3-D object tracking and alerting via networked sensors

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101388146A (en) * 2008-06-16 2009-03-18 上海高德威智能交通系统有限公司 Image acquisition and treatment apparatus and method, and vehicle monitoring and recording system
CN102147971A (en) * 2011-01-14 2011-08-10 赵秀江 Traffic information acquisition system based on video image processing technology

Also Published As

Publication number Publication date
CN102737236A (en) 2012-10-17

Similar Documents

Publication Publication Date Title
CN102737236B (en) Method for automatically acquiring vehicle training sample based on multi-modal sensor data
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
US8872925B2 (en) Method and device for camera calibration
CN101617197B (en) Feature identification apparatus, measurement apparatus and measuring method
Broggi et al. Self-calibration of a stereo vision system for automotive applications
CN111046743B (en) Barrier information labeling method and device, electronic equipment and storage medium
CN110443898A (en) A kind of AR intelligent terminal target identification system and method based on deep learning
CN109596121B (en) Automatic target detection and space positioning method for mobile station
CN105182320A (en) Depth measurement-based vehicle distance detection method
US12061252B2 (en) Environment model using cross-sensor feature point referencing
CN114761997A (en) Target detection method, terminal device and medium
Rodríguez et al. Obstacle avoidance system for assisting visually impaired people
US20220292747A1 (en) Method and system for performing gtl with advanced sensor data and camera image
Li et al. Durlar: A high-fidelity 128-channel lidar dataset with panoramic ambient and reflectivity imagery for multi-modal autonomous driving applications
Petrovai et al. A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices
Kruber et al. Vehicle position estimation with aerial imagery from unmanned aerial vehicles
CN111256651B (en) Week vehicle distance measuring method and device based on monocular vehicle-mounted camera
CN114503044B (en) System and method for automatically marking objects in a 3D point cloud
KR20160125803A (en) Apparatus for defining an area in interest, apparatus for detecting object in an area in interest and method for defining an area in interest
Philipsen et al. Day and night-time drive analysis using stereo vision for naturalistic driving studies
WO2020194570A1 (en) Sign position identification system and program
Nowak et al. Vision-based positioning of electric buses for assisted docking to charging stations
JP2007011994A (en) Road recognition device
Gao et al. 3D reconstruction for road scene with obstacle detection feedback
Horani et al. A framework for vision-based lane line detection in adverse weather conditions using vehicle-to-infrastructure (V2I) communication

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150624

Termination date: 20180706