CN102737236A - Method for automatically acquiring vehicle training sample based on multi-modal sensor data - Google Patents
- Publication number: CN102737236A
- Legal status: Granted
Abstract
The invention discloses a method for automatically acquiring vehicle training samples based on multi-modal sensor data. The method comprises: a vehicle detection step based on laser and positioning data, in which two-dimensional coordinates relative to the data-acquisition vehicle are computed from the range and angle of the laser data together with the laser sensor calibration parameters so as to describe the horizontal contour of objects, and a time series of parameters such as the position and orientation of each candidate vehicle relative to the data-acquisition vehicle is extracted by shape analysis and by detecting and tracking moving objects; and a visual image sample extraction step, in which each candidate vehicle is projected into the image at every moment, according to its position and orientation and the geometric relation between the laser sensor and the image acquisition device, to produce a region of interest; the region of interest is corrected with a detector; the viewing angle of each candidate vehicle relative to the camera is computed from parameters such as its position and orientation, image frame samples with similar viewing angles are removed, and sample pictures of the candidate vehicle under different viewing angles are extracted automatically.
Description
Technical field
The present invention relates to the fields of computer vision, robotics and machine learning, and in particular to a method for automatically acquiring vehicle training samples based on multi-modal sensor data.
Background technology
Vehicle detection is a major issue in the field of advanced driver assistance systems (ADAS). There is a large body of related work on vehicle detection, which has shown that vehicles can be detected with laser, radar, monocular/stereo cameras, and multi-sensor fusion.
Because monocular cameras are low-cost and simple to calibrate, detection methods based on monocular vision have been studied extensively in computer vision and robotics. With a vision sensor, however, the appearance of vehicles varies greatly from vehicle to vehicle and across viewing angles, which makes detection difficult. Recently, more and more researchers have tried machine learning methods to detect vehicles.
In these methods, the detector is trained in advance on a set of sample pictures. Many open data sets are available for training detectors; PASCAL, for example, provides a number of standardized data sets for object detection.
Among them, the UIUC data set is dedicated to vehicle detection and recognition. It contains 550 vehicle pictures at a resolution of 100 x 40 as positive training samples, together with two test sets: 170 single-scale vehicle pictures at the same resolution as the positive samples, and 108 pictures containing 139 multi-scale vehicles.
Many studies report results on this data set. However, the vehicles in the UIUC data set are all viewed from the side, whereas in on-road vehicle detection most detected vehicles are seen from the front or the rear, so this data set is not well suited to that task.
Another drawback is that the UIUC pictures are black-and-white, which considerably restricts the feature space of a detector trained on them. Unlike the UIUC data set, the MIT data set contains 516 positive sample pictures at a resolution of 128 x 128, all viewed from the front or the rear.
For the best-performing current methods, the training samples are a key factor influencing performance. To study multi-view vehicle detectors, researchers at USC built a data set of multi-view vehicle sample images and test images. It contains 1028 positive vehicle sample images at a resolution of 128 x 64, covering various angles, and 196 test images containing 410 vehicles of different scales and angles.
However, none of the samples in this data set includes vehicle pose information, and training data usually has to be labeled and classified by hand according to the training requirements, which considerably restricts both the quantity of samples and the achievable performance. This has become a bottleneck for algorithm development, and the performance of the resulting detectors is often not robust to environmental change.
Summary of the invention
The object of the present invention is to provide a method for automatically acquiring vehicle training samples based on multi-modal sensor data, i.e. a method that automatically generates multi-angle vehicle sample images containing pose information.
The invention discloses a method for automatically acquiring vehicle training samples based on multi-modal sensor data, comprising the following steps:
A vehicle detection step based on laser and positioning data: two-dimensional coordinates relative to the data-acquisition vehicle are computed from the range and angle of the laser data and the laser sensor calibration parameters, so as to describe the horizontal contour of objects; by shape analysis and by detecting and tracking moving objects, a time series of parameters such as the position and orientation of each candidate vehicle relative to the data-acquisition vehicle is extracted.
A visual image sample extraction step: according to the position and orientation of the candidate vehicle at each moment, and the geometric relation between the laser sensor and the image acquisition device, the candidate vehicle is projected into the image to produce a region of interest, and a detector is used to correct the region of interest; for each candidate vehicle, its viewing angle relative to the camera is computed from parameters such as its position and orientation, image frame samples with similar viewing angles are removed, and sample pictures of the candidate vehicle under different viewing angles are extracted automatically.
Preferably, the vehicle detection step based on laser and positioning data further comprises:
Data fusion: fusing the data from the laser sensors acquired at the same or adjacent times;
Clustering: clustering the data from each laser sensor according to the distance between adjacent points;
Labeling: classifying each cluster as a stationary object, a moving object, or uncertain;
Map generation: generating a map of the static environment along the motion trajectory of the data-acquisition vehicle;
Detection: finding candidate vehicles in the classified, fused laser data of the current frame;
Tracking: associating detection results with previous tracking results, and updating the tracking state and the body and motion parameters;
Verification: verifying each tracked object through its motion and shape information.
Preferably, the visual image sample extraction step further comprises:
Region-of-interest extraction based on laser data: according to the position and orientation of the candidate vehicle at each moment, and the geometric relation between the laser sensor and the image acquisition device, the candidate vehicle is projected into the image and a region of interest containing it is extracted;
Region-of-interest correction based on image techniques: correcting the region of interest with an image-based detection method to find the candidate vehicle inside it;
Vehicle sample image extraction and de-duplication: according to the correction results, image frame samples with similar viewing angles are removed, and sample pictures of the candidate vehicle under different viewing angles are extracted automatically.
Preferably, the multi-modal sensors comprise multi-view laser sensors, a multi-view image acquisition device and a positioning system; the multi-view laser sensors and the multi-view image acquisition device monitor the environment around the data-acquisition vehicle and together cover multiple observation angles.
Preferably, the positioning system measures the six-degree-of-freedom (6-DOF) pose of the vehicle.
Preferably, the detector used to correct the region of interest adopts an image-based detection method.
Preferably, the tracking is specifically: a detection result for which no associated track can be found is regarded as a newly tracked vehicle; a tracked vehicle that is not associated with any detection result is considered to have left the monitoring range of the vehicle and is removed from the tracking results.
Preferably, the verification is specifically: if a tracked object does not move for a period of time, it is regarded as static and merged into the map information; if the motion and shape information of a tracked object changes erratically within a short time, the result is removed; only tracks whose motion and shape vary normally are regarded as candidate vehicles.
Preferably, the correction of the region of interest is specifically: according to the pose of the candidate vehicle in the region of interest, a detector trained on vehicles of that specific pose is selected to correct it.
Preferably, the removal of repeated samples is specifically: sample images with identical or similar appearance are screened out according to the tracked vehicle's direction of motion relative to the data-acquisition vehicle and its heading.
By automatically generating multi-angle vehicle sample images containing pose information from multi-modal sensor data, the present invention effectively avoids manual operation and gives detection-algorithm research more freedom and fewer restrictions. Moreover, automatic generation of training samples makes online training possible, so that the classifier can be improved automatically to adapt to environmental changes such as illumination.
The present invention requires no manual intervention and can obtain a large number of vehicle samples, making the training set richer; since the training pictures contain the pose information of the vehicles, it is convenient to train classifiers for different poses.
Description of drawings
The present invention can be understood more completely, together with many of its attendant advantages, by reference to the following detailed description considered in connection with the accompanying drawings. The drawings described here provide a further understanding of the invention and constitute a part of it; the illustrative embodiments and their description explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a flow chart of an embodiment of the invention;
Fig. 2 is a flow chart of an embodiment of the vehicle detection step based on laser and positioning data;
Fig. 3 is a flow chart of an embodiment of the visual image sample extraction step.
Embodiment
Embodiments of the invention are described with reference to Fig. 1 through Fig. 3.
To make the above objects, features and advantages more apparent and understandable, the present invention is described in further detail below in connection with the drawings and specific embodiments.
As shown in Fig. 1, the method for automatically acquiring vehicle training samples based on multi-modal sensor data comprises the following steps:
S1, vehicle detection based on laser and positioning data: two-dimensional coordinates relative to the data-acquisition vehicle are computed from the range and angle of the laser data and the laser sensor calibration parameters, so as to describe the horizontal contour of objects; candidate vehicles are extracted by shape analysis and by detecting and tracking moving objects;
S2, visual image sample extraction: according to the position and orientation of the candidate vehicle at each moment, and the geometric relation between the laser sensor and the image acquisition device, the candidate vehicle is projected into the image to produce a region of interest, and a detector is used to correct the region of interest; for each candidate vehicle, its viewing angle relative to the camera is computed from parameters such as its position and orientation, image frame samples with similar viewing angles are removed, and sample pictures of the candidate vehicle under different viewing angles are extracted automatically.
The present invention builds a system that automatically generates a vehicle sample data set for training vision-based vehicle detectors. The data set contains multi-angle vehicle sample images, each including its pose information, which makes it possible to train vehicle detectors for vehicles at different angles.
The multi-modal sensors comprise laser sensors, an image acquisition device and a positioning system. The laser sensors include laser scanners, laser range finders, etc.; the image acquisition device may be a camera, or an integrated camera system with one or more cameras. The positioning system provides the position of the object on which it is mounted, e.g. the Global Positioning System (GPS), the Galileo satellite positioning system, or the BeiDou satellite positioning system.
Together the sensors cover multiple viewing angles around the data-acquisition vehicle; depending on the constraints of the acquisition platform, different sensors can be selected as needed to jointly cover different viewing-angle ranges.
Sensing system:
The invention discloses an on-board sensor system comprising three kinds of sensors: laser scanners, cameras and a GPS/IMU. The GPS/IMU is a commercial unit that measures the 6-DOF pose of the vehicle (three-dimensional position and attitude angles). The lasers and cameras both monitor the environment around the vehicle, and each kind of sensor provides omnidirectional coverage. Three Hokuyo UTM-30LX laser scanners are mounted at the front-left, front-right and rear of the vehicle, forming omnidirectional horizontal coverage. Because the monitoring range of the Hokuyo UTM-30LX is relatively short, usually only about 25 m in outdoor traffic environments, a SICK LMS291 laser is used at the front center to cover a semicircular region with a monitoring radius of up to 45 m. Omnidirectional video coverage can be achieved with multiple cameras; here a Ladybug camera, which integrates several cameras, is used, and the acquisition fuses the pictures of its six cameras into a panoramic image. To reduce occlusion, the Ladybug camera is mounted on the roof of the vehicle. Two time-synchronized computers acquire the sensor data: one collects the laser scanner data and the GPS/IMU data, and the other the Ladybug video data. For every data frame, the computer time is recorded as the timestamp. The delay in data transmission is regarded as a constant value, which can be measured in advance. After sensor calibration, all laser scanner data are transformed into the same coordinate system, here the local coordinate system of the data-acquisition vehicle, and the laser processing results are projected onto the panoramic image to extract the image samples.
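The timestamping scheme above, with a constant pre-measured transmission delay per stream, can be sketched as a nearest-frame pairing (a minimal illustration; the function name, delay values, and data layout are assumptions, not taken from the patent):

```python
def align_frames(laser_stamps, image_stamps, laser_delay=0.02, image_delay=0.05):
    """Pair each laser frame with the nearest image frame after subtracting
    a constant, pre-measured transmission delay from each stream's stamps."""
    pairs = []
    for t_laser in laser_stamps:
        t = t_laser - laser_delay  # estimated true acquisition time of the laser frame
        # nearest image frame by delay-corrected timestamp
        best = min(image_stamps, key=lambda s: abs((s - image_delay) - t))
        pairs.append((t_laser, best))
    return pairs
```

With the assumed delays, a laser frame stamped at 1.02 s would be paired with the image frame stamped at 1.05 s, since both correspond to an acquisition time of about 1.00 s.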
Treatment scheme:
As shown in Fig. 1, the present invention includes two steps:
S1, vehicle detection based on laser and positioning data;
S2, visual sample picture extraction.
A laser scanner directly measures distances to objects. From the angle and the sensor calibration parameters, two-dimensional coordinates relative to the vehicle can be obtained to describe the horizontal contour of objects. By shape analysis and by detecting and tracking moving objects, a time series of parameters such as the position and orientation of each candidate vehicle relative to the data-acquisition vehicle can be extracted quickly.
According to the geometric relation between the laser and the camera, these candidate vehicles are then projected into the panoramic image to produce regions of interest; each region of interest also carries the pose information of the vehicle at the current moment.
However, the points a laser obtains on an object are sparse, and reflection can fail on materials of particular colors. In dynamic traffic environments in particular, there are many occlusions, so the observation of a surrounding vehicle may be only partial. This makes laser data processing very challenging, and its results contain some errors.
Unlike laser data, video data contains rich information and can be used to correct the regions of interest given by the laser processing results. In the present invention, a detector based on HOG features is used to correct the regions of interest.
In addition, while a candidate vehicle is being tracked, its appearance in the image changes very slowly. Extracting vehicle images from all frames would produce a large number of images with similar viewing angles, so a process that selects vehicle pictures of different poses is needed.
As shown in Fig. 2, the vehicle detection step based on laser and positioning data further comprises:
S11, data fusion: fusing the data from the laser scanners acquired at the same or adjacent times;
S12, clustering: clustering the data from each laser sensor according to the distance between adjacent points;
S13, labeling: classifying each cluster as a stationary object, a moving object, or uncertain;
S14, map generation: generating a map of the static environment along the motion trajectory of the data-acquisition vehicle;
S15, detection: finding candidate vehicles in the classified, fused laser data of the current frame;
S16, tracking: associating detection results with previous tracking results, and updating the tracking state and the body and motion parameters;
S17, verification: verifying each tracked object through its motion and shape information.
The above working process is described in detail below.
S1, vehicle detection based on laser and positioning data:
The present invention discloses a method for detecting and tracking on-road vehicles from the data of several single-line laser scanners and positioning data. From the range and angle of the laser data and the laser sensor calibration parameters, two-dimensional coordinates relative to the data-acquisition vehicle are obtained to describe the horizontal contour of objects; by shape analysis and by detecting and tracking moving objects, a time series of parameters such as the position and orientation of each candidate vehicle relative to the data-acquisition vehicle is extracted. These data provide step S2, visual sample picture extraction, with the position and orientation of each candidate vehicle relative to the data-acquisition vehicle at every moment. The vehicle detection framework based on laser scanner data and positioning data is shown in Fig. 2; each module is introduced below.
S11, data fusion: the data from the laser scanners acquired at the same or adjacent times are fused. To save memory, the angle information of the data is not stored explicitly; the fused data are recorded in sequence as range values from the different laser sensors. Each range value recovers its angle from its order in the sequence and, using the calibration parameters between the lasers, can be converted into a two-dimensional coordinate (a laser point) relative to the data-acquisition vehicle. The fused laser data describe the contours of the objects as seen from the viewpoint of the data-acquisition vehicle.
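The conversion in S11 — recovering each point's angle from its index in the scan and applying the extrinsic calibration of the laser to land in the vehicle's local frame — might look like the following (a hedged sketch; the 2-D pose parameterization of the calibration and all names are assumptions):

```python
import math

def laser_to_vehicle_frame(ranges, angle_min, angle_step, sensor_pose):
    """Convert one scan (range values only; order encodes angle) into 2-D
    points in the data-acquisition vehicle's local frame.
    sensor_pose = (x, y, yaw) is the assumed extrinsic calibration of this
    laser on the vehicle."""
    sx, sy, syaw = sensor_pose
    points = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_step  # angle recovered from the index
        # point in the sensor frame, then rotated/translated into the vehicle frame
        x, y = r * math.cos(a), r * math.sin(a)
        points.append((sx + x * math.cos(syaw) - y * math.sin(syaw),
                       sy + x * math.sin(syaw) + y * math.cos(syaw)))
    return points
```

For example, a single 1 m return at angle 0 from a sensor mounted 2 m ahead of the vehicle origin lands at roughly (3, 0) in the vehicle frame.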
S12, clustering: the data from each laser sensor are clustered according to the distance between adjacent points. The Euclidean distance between two points is used, and the angular interval is taken into account at the same time; if the distance exceeds a given threshold, a new cluster is started. A cluster can be regarded as one observation of an object, which may be moving or static; clustering here is carried out only within the data of the same laser sensor.
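The adjacent-point clustering of S12 can be illustrated as follows (a simplified sketch that uses only the Euclidean threshold; the patent additionally accounts for the angular interval between points):

```python
import math

def cluster_scan(points, dist_thresh=0.5):
    """Segment an ordered scan into clusters: a new cluster starts whenever
    the Euclidean distance between consecutive points exceeds the threshold."""
    clusters = []
    for p in points:
        if clusters and math.dist(clusters[-1][-1], p) <= dist_thresh:
            clusters[-1].append(p)   # continue the current cluster
        else:
            clusters.append([p])     # gap too large: start a new cluster
    return clusters
```

Points 0.1 m apart stay in one cluster, while a jump of several meters opens a new one — matching the intuition that each cluster is one observation of one object.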
S13, labeling: each cluster is classified as a stationary object, a moving object, or uncertain. First, each cluster is projected into the global coordinate system according to the GPS/IMU position recorded by the data-acquisition vehicle. For each cluster, if it matches the prediction for the static environment of the previous frame, it is considered static; if it matches the prediction of some moving object, it is considered moving; otherwise it is considered uncertain. Prior knowledge, such as vehicle size and road geometry, can also assist the classification. The classified laser data are used by the map generation and moving-object detection modules.
S14, map generation: a map of the static environment along the motion trajectory of the data-acquisition vehicle is generated. The map is described with a grid; the value of each cell represents the probability that the cell is occupied by an object.
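The grid representation of S14 could be maintained as in this sketch (the cell size, increment, and sparse-dictionary layout are illustrative assumptions; the patent does not specify an update rule):

```python
def update_grid(grid, static_points, cell_size=0.2, hit_inc=0.2):
    """Accumulate occupancy evidence: each static laser point raises the
    occupancy value of the grid cell it falls in, clamped to 1.0.
    grid is a sparse dict {(ix, iy): occupancy probability}."""
    for x, y in static_points:
        cell = (int(x // cell_size), int(y // cell_size))
        grid[cell] = min(1.0, grid.get(cell, 0.0) + hit_inc)
    return grid
```

Repeated static returns in the same cell push its occupancy toward 1.0, so persistent structure (walls, parked cars merged back into the map by S17) accumulates while sporadic noise stays low.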
S15, detection: candidate vehicles are found in the classified, fused laser data of the current frame. Partial observations and overlapping observations are the two difficult parts of detection. To improve the accuracy of the estimated characteristic parameters and reduce false detections, merging the clustering results is necessary. In the present invention, a vehicle is modeled simply by its contour box, and an algorithm for cluster merging and model estimation is developed. In addition, predictions from previous detection and tracking results help to reduce mistakes in cluster merging, particularly for clusters that belong to the same vehicle but are not contiguous.
S16, tracking: detection results are associated with previous tracking results, and the tracking state and the body and motion parameters are updated. A detection result for which no associated track can be found is regarded as a newly tracked vehicle; a tracked vehicle that is not associated with any detection result is considered to have left the monitoring range of the vehicle and is removed from the tracking results.
S17, verification: each tracked object is verified through its motion and shape information. If a tracked object does not move for a period of time, it is regarded as static and merged into the map information. If the motion and shape information of a tracked object changes erratically within a short time, the result is removed. Only tracks whose motion and shape vary normally are regarded as candidate vehicles.
As shown in Fig. 3, the visual image sample extraction step further comprises:
S2, visual sample picture extraction:
The present invention discloses a visual sample picture extraction method. Using the time series, obtained in S1, of the position and orientation of each candidate vehicle relative to the data-acquisition vehicle, the candidate vehicle is projected into the image at each moment according to its position and orientation and the geometric relation between the laser sensor and the image acquisition device, producing a region of interest; a detector is used to correct the region of interest; for each candidate vehicle, its viewing angle relative to the camera is computed from parameters such as its position and orientation, image frame samples with similar viewing angles are removed, and sample pictures of the candidate vehicle under different viewing angles are extracted automatically. The visual sample picture extraction framework is shown in Fig. 3; each module is introduced below.
S21, region-of-interest extraction based on laser data: according to the position and orientation of the candidate vehicle at each moment, and the geometric relation between the laser sensor and the image acquisition device, the candidate vehicle is projected into the image and a region of interest containing it is extracted;
S22, region-of-interest correction based on image techniques: the region of interest is corrected with an image-based detection method to find the vehicle inside it;
S23, vehicle sample image extraction and de-duplication: according to the correction results, image frame samples with similar viewing angles are removed, and sample pictures of the vehicle under different viewing angles are extracted automatically.
The above working process is described in detail below.
S2, visual image sample extraction:
The candidate vehicles obtained by processing the laser data are used to extract vehicle sample pictures from the video data; the flow is shown in Fig. 3.
S21, region-of-interest extraction based on laser data: the candidate vehicles obtained by processing the laser data carry the position, size and motion information of each vehicle. According to the calibration relation between the laser scanner and the camera, each candidate vehicle is projected into the image of the corresponding time to obtain a region of interest containing it, and the pose of the candidate vehicle in the region of interest is obtained from the corresponding motion information. Because laser data processing in dynamic traffic environments is very difficult, the regions of interest usually contain errors and need to be corrected.
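The projection of a tracked vehicle into the image of the corresponding time can be sketched with a pinhole model (an assumption for illustration — the patent projects onto a Ladybug panorama, whose geometry differs; all names and the calibration format here are hypothetical):

```python
def project_roi(vehicle_box, cam_calib, img_w, img_h):
    """Project the 3-D corners of a tracked vehicle's bounding box through a
    pinhole model (fx, fy, cx, cy) and return the enclosing image rectangle,
    clipped to the image. Corners are (x_right, y_down, z_forward) in the
    camera frame; points behind the image plane are ignored."""
    fx, fy, cx, cy = cam_calib
    us, vs = [], []
    for x, y, z in vehicle_box:
        if z <= 0:  # behind the camera: cannot be projected
            continue
        us.append(fx * x / z + cx)
        vs.append(fy * y / z + cy)
    if not us:
        return None
    u0, v0 = max(0, min(us)), max(0, min(vs))
    u1, v1 = min(img_w, max(us)), min(img_h, max(vs))
    return (u0, v0, u1, v1)
```

A 2 m-wide, 1.5 m-tall box 10 m ahead of a 500-pixel-focal-length camera projects to a rectangle about 100 pixels wide, which becomes the region of interest handed to S22.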
S22, region-of-interest correction based on image techniques: in the present invention, a Ladybug panoramic camera system integrating six cameras monitors the surroundings of the vehicle, and the generated panoramic image has some geometric distortion. The geometric distortion of each region of interest must be eliminated first; by projecting the pixels of the region of interest onto the plane tangent to the viewing sphere at that region, the distortion can be removed effectively.
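Projecting panoramic pixels onto the tangent plane is, in spherical coordinates, the gnomonic projection; a minimal version for one ray, expressed in longitude/latitude (an assumed parameterization of the viewing sphere), is:

```python
import math

def gnomonic(lon, lat, lon0, lat0):
    """Map a point on the panoramic sphere (longitude/latitude, radians) onto
    the plane tangent to the sphere at (lon0, lat0) -- the standard gnomonic
    projection used here to flatten a spherical region of interest."""
    c = (math.sin(lat0) * math.sin(lat)
         + math.cos(lat0) * math.cos(lat) * math.cos(lon - lon0))
    x = math.cos(lat) * math.sin(lon - lon0) / c
    y = (math.cos(lat0) * math.sin(lat)
         - math.sin(lat0) * math.cos(lat) * math.cos(lon - lon0)) / c
    return x, y
```

The tangent point itself maps to the plane origin, and straight lines in the scene stay straight on the tangent plane, which is why this removes the panoramic distortion inside a region of interest.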
Since the pose of the vehicle in the region of interest is known, the region of interest can be corrected with a detector for the corresponding pose angle. A detector based on HOG features is used. Classifiers for vehicles at different angles are trained on the USC multi-view vehicle data, whose training data are classified by hand; 200 pictures are chosen as positive samples for each class, and training yields 4 classifiers. Vehicle detection is run on the region of interest, and the detected vehicle with the highest score that is above a given threshold is chosen as the sample picture.
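The final selection rule of S22 — keep the detection with the highest score if it exceeds the threshold, otherwise reject the region of interest — reduces to a few lines (the detector itself, e.g. a pose-specific HOG classifier, is outside this sketch; names and the (score, bbox) layout are assumptions):

```python
def best_detection(detections, threshold):
    """Among candidate windows scored by a pose-specific detector, keep the
    highest-scoring one if it exceeds the acceptance threshold; otherwise
    reject the region of interest. detections is a list of (score, bbox)."""
    if not detections:
        return None
    score, bbox = max(detections, key=lambda d: d[0])
    return bbox if score > threshold else None
```

Returning None when nothing clears the threshold is what filters out the erroneous regions of interest produced by the laser processing.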
S23, vehicle sample extraction and de-duplication: while a candidate vehicle is being tracked, its appearance in the pictures changes very slowly; the appearance of the same vehicle in the image usually differs noticeably only after many frames. Moreover, when driving on the road, vehicles are often stationary relative to each other, in which case the appearance of a vehicle does not change for a long time. Pictures of vehicles with identical or similar appearance therefore need to be screened out. The vehicle's direction of motion relative to the data-acquisition vehicle and its heading are the factors that most influence how the candidate vehicle appears in the picture. For each vehicle, its direction of motion α and heading β relative to the data-acquisition vehicle can be computed at every moment. These two angles are discretized into a grid of 10° x 10° cells, and within each cell the image in which the same vehicle obtained the highest score during correction is extracted as the sample. According to the difference of these two angles, the pose of the vehicle in the sample picture is determined, and the sample pictures are divided into 8 categories.
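The 10° x 10° angle-grid de-duplication of S23 can be sketched as follows (the frame layout and score field are assumptions; per cell, the image with the highest correction score for the same vehicle is kept):

```python
def select_views(frames, bin_deg=10):
    """Keep, per 10-degree x 10-degree cell of (motion direction alpha,
    heading beta), only the highest-scoring frame of one tracked vehicle.
    frames is a list of (alpha_deg, beta_deg, score, image_id)."""
    best = {}
    for alpha, beta, score, img in frames:
        cell = (int(alpha % 360 // bin_deg), int(beta % 360 // bin_deg))
        if cell not in best or score > best[cell][0]:
            best[cell] = (score, img)
    return [img for _, img in best.values()]
```

Two frames whose angles fall in the same cell collapse to the better-scored one, so a long tracking sequence yields at most one sample per 10° x 10° pose cell.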
As described above, embodiments of the invention have been explained in detail; however, many variations that do not depart from the inventive point and effect of the invention are possible, as will be readily apparent to those skilled in the art. Such variations are therefore also included within the scope of protection of the present invention.
Claims (10)
1. A method for automatically acquiring vehicle training samples based on multi-modal sensor data, characterized by comprising the following steps:
a vehicle detection step based on laser and positioning data: two-dimensional coordinates relative to the data-acquisition vehicle are computed from the range and angle of the laser data and the laser sensor calibration parameters, so as to describe the horizontal contour of objects; by shape analysis and by detecting and tracking moving objects, a time series of parameters such as the position and orientation of each candidate vehicle relative to the data-acquisition vehicle is extracted;
a visual image sample extraction step: according to the position and orientation of the candidate vehicle at each moment, and the geometric relation between the laser sensor and the image acquisition device, the candidate vehicle is projected into the image to produce a region of interest, and a detector is used to correct the region of interest; for each candidate vehicle, its viewing angle relative to the camera is computed from parameters such as its position and orientation, image frame samples with similar viewing angles are removed, and sample pictures of the candidate vehicle under different viewing angles are extracted automatically.
2. according to claim 1ly a kind ofly obtain vehicle training sample method automatically, it is characterized in that said vehicle detection step based on laser, locator data further comprises based on multi-modal sensing data:
Data fusion: the data fusion that will come from the identical of each laser scanner or close on the time;
Cluster: will come from the data of each laser sensor, and, carry out cluster according to the distance of adjacent point-to-point transmission;
Mark: cluster is divided into perhaps uncertain three types of stationary object, mobile objects;
Map generates: generate the data of description collection vehicle motion track map of static environment on every side;
Detect: find candidate's vehicle in the sorted laser fused data in current carrying out;
tracking: associating the current detection results with the previous tracking results, and updating the tracking state and the vehicle body and motion parameters;
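One common way to realize this association step (including the new-track and lost-track handling elaborated in claim 7) is greedy nearest-neighbour matching with a distance gate. The patent does not fix the association algorithm, so the sketch below is an assumption on our part:

```python
import math

def associate(tracks, detections, gate=2.0):
    """Greedy nearest-neighbour data association. Returns (matches, new, lost):
    `matches` maps detection index -> track index within `gate` metres;
    unmatched detections become new tracks; unmatched tracks are treated as
    having left the monitored range and are dropped."""
    matches, used = {}, set()
    for j, d in enumerate(detections):
        candidates = sorted(
            (math.dist(t, d), i) for i, t in enumerate(tracks) if i not in used
        )
        if candidates and candidates[0][0] <= gate:
            matches[j] = candidates[0][1]
            used.add(candidates[0][1])
    new = [j for j in range(len(detections)) if j not in matches]
    lost = [i for i in range(len(tracks)) if i not in used]
    return matches, new, lost
```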
verification: verifying each tracked object through its motion and shape information.
3. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 1, characterized in that said visual image sample extraction step further comprises:
region-of-interest extraction based on laser data: projecting the laser tracking results into the image according to the calibration parameters of the camera and the laser, and extracting regions of interest that contain candidate vehicles;
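The projection described here can be illustrated with a pinhole camera model: transform the tracked object's corner points into the camera frame (via the laser-camera calibration), project them, and take the padded bounding box as the region of interest. The intrinsics tuple `(fx, fy, cx, cy)` and the padding are illustrative assumptions:

```python
def project_to_image(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3-D point already expressed in the camera
    frame (z pointing forward) to pixel coordinates."""
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera: not visible
    return (fx * x / z + cx, fy * y / z + cy)

def roi_from_corners(corners_cam, intrinsics, pad=10):
    """Axis-aligned bounding box (ROI) of the projected corner points of a
    tracked object, padded by `pad` pixels."""
    pixels = [project_to_image(c, *intrinsics) for c in corners_cam]
    pixels = [p for p in pixels if p is not None]
    if not pixels:
        return None
    us, vs = zip(*pixels)
    return (min(us) - pad, min(vs) - pad, max(us) + pad, max(vs) + pad)
```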
region-of-interest correction based on image techniques: correcting the regions of interest using an image-based detection method to find the vehicles therein;
vehicle sample image extraction and de-duplication: according to the correction results, removing images in which the pose of the same vehicle is identical or similar, and obtaining the vehicle sample images through this screening.
4. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 1, characterized in that said multi-modal sensors comprise: multi-view laser scanners, multi-view cameras, and a GPS/IMU positioning system; said multi-view laser scanners and multi-view cameras are used to monitor the environment around the data acquisition vehicle and together constitute omnidirectional coverage.
5. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 4, characterized in that said GPS/IMU is used to measure the six-degree-of-freedom pose of the vehicle.
6. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 1, characterized in that said detector adopts an image-based detection method.
7. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 2, characterized in that said tracking is specifically: a detection result for which no associable tracking result is found is regarded as a newly tracked vehicle; and a tracked vehicle that is not associated with any detection result is considered to have left the monitoring range of the data acquisition vehicle and is removed from the tracking results.
8. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 2, characterized in that said verification is specifically: if a tracked object has not moved for a period of time, it is regarded as static and merged into the map information; if the motion or shape information of a tracked object changes erratically within a short time, the result is removed; only tracking results whose motion and shape vary normally are regarded as candidate vehicles.
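The verification rule of this claim maps naturally onto a small state check over a track's history of position and box size. The thresholds and the (x, y, length, width) record layout below are placeholders chosen for illustration, not values from the patent:

```python
import math

def verify_track(history, still_eps=0.1, max_jump=1.5):
    """Classify a tracked object from its history of (x, y, length, width):
    'static' if it barely moved over the window, 'invalid' if position or
    box size jumped erratically between consecutive frames, otherwise a
    'candidate' vehicle."""
    positions = [(x, y) for x, y, _, _ in history]
    if max(math.dist(positions[0], p) for p in positions) < still_eps:
        return "static"  # would be merged into the static-environment map
    for (x0, y0, l0, w0), (x1, y1, l1, w1) in zip(history, history[1:]):
        if (math.dist((x0, y0), (x1, y1)) > max_jump
                or abs(l1 - l0) + abs(w1 - w0) > max_jump):
            return "invalid"  # erratic motion/shape change: discard the track
    return "candidate"
```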
9. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 3, characterized in that said region-of-interest correction is specifically: according to the pose of the candidate vehicle in the region of interest, selecting a detector trained on vehicles of that specific pose to correct it.
10. The method for automatically acquiring vehicle training samples based on multi-modal sensor data according to claim 3, characterized in that said removal of repeated samples is specifically: screening out apparently identical or similar sample images according to the vehicle's direction of motion relative to the data acquisition vehicle and the heading of the vehicle front.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210234127.0A CN102737236B (en) | 2012-07-06 | 2012-07-06 | Method for automatically acquiring vehicle training sample based on multi-modal sensor data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102737236A true CN102737236A (en) | 2012-10-17 |
CN102737236B CN102737236B (en) | 2015-06-24 |
Family
ID=46992705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210234127.0A Expired - Fee Related CN102737236B (en) | 2012-07-06 | 2012-07-06 | Method for automatically acquiring vehicle training sample based on multi-modal sensor data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102737236B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101388146A (en) * | 2008-06-16 | 2009-03-18 | 上海高德威智能交通系统有限公司 | Image acquisition and treatment apparatus and method, and vehicle monitoring and recording system |
US20100128110A1 (en) * | 2008-11-21 | 2010-05-27 | Theofanis Mavromatis | System and method for real-time 3-d object tracking and alerting via networked sensors |
CN102147971A (en) * | 2011-01-14 | 2011-08-10 | 赵秀江 | Traffic information acquisition system based on video image processing technology |
Cited By (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104276102A (en) * | 2013-07-10 | 2015-01-14 | 德尔福电子(苏州)有限公司 | Surround view system calibrating device based on vehicle position detection |
CN104276102B (en) * | 2013-07-10 | 2016-05-11 | 德尔福电子(苏州)有限公司 | A kind of viewing system caliberating device detecting based on vehicle location |
CN106569840A (en) * | 2015-10-08 | 2017-04-19 | 上海智瞳通科技有限公司 | Method for machine vision driving assistance system to automatically obtain sample to improve recognition accuracy |
CN106569840B (en) * | 2015-10-08 | 2020-10-30 | 上海智瞳通科技有限公司 | Method for automatically acquiring sample by machine vision driving auxiliary system to improve identification precision |
CN105303837A (en) * | 2015-11-24 | 2016-02-03 | 东南大学 | Method and system for detecting following behavior characteristic parameter of driver |
CN106291736A (en) * | 2016-08-16 | 2017-01-04 | 张家港长安大学汽车工程研究院 | Pilotless automobile track dynamic disorder object detecting method |
CN106529417A (en) * | 2016-10-17 | 2017-03-22 | 北海益生源农贸有限责任公司 | Visual and laser data integrated road detection method |
US12020476B2 (en) | 2017-03-23 | 2024-06-25 | Tesla, Inc. | Data synthesis for autonomous control systems |
US11487288B2 (en) | 2017-03-23 | 2022-11-01 | Tesla, Inc. | Data synthesis for autonomous control systems |
CN108827341A (en) * | 2017-03-24 | 2018-11-16 | 快图有限公司 | The method of the deviation in Inertial Measurement Unit for determining image collecting device |
CN106969923B (en) * | 2017-05-26 | 2023-09-15 | 交通运输部公路科学研究所 | Vehicle channel circular track testing system and method |
CN106969923A (en) * | 2017-05-26 | 2017-07-21 | 交通运输部公路科学研究所 | A kind of porte-cochere Circular test test system and method |
US12086097B2 (en) | 2017-07-24 | 2024-09-10 | Tesla, Inc. | Vector computational unit |
US11681649B2 (en) | 2017-07-24 | 2023-06-20 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting |
US11893393B2 (en) | 2017-07-24 | 2024-02-06 | Tesla, Inc. | Computational array microprocessor system with hardware arbiter managing memory requests |
US11403069B2 (en) | 2017-07-24 | 2022-08-02 | Tesla, Inc. | Accelerated mathematical engine |
US11409692B2 (en) | 2017-07-24 | 2022-08-09 | Tesla, Inc. | Vector computational unit |
CN108398083A (en) * | 2018-01-29 | 2018-08-14 | 湖南三德科技股份有限公司 | A kind of compartment localization method and positioning device |
US11797304B2 (en) | 2018-02-01 | 2023-10-24 | Tesla, Inc. | Instruction set architecture for a vector computational unit |
US11561791B2 (en) | 2018-02-01 | 2023-01-24 | Tesla, Inc. | Vector computational unit receiving data elements in parallel from a last row of a computational array |
CN108535707A (en) * | 2018-03-30 | 2018-09-14 | 北京润科通用技术有限公司 | A kind of radar performance prediction model method for building up and device |
US11734562B2 (en) | 2018-06-20 | 2023-08-22 | Tesla, Inc. | Data pipeline and deep learning system for autonomous driving |
US11841434B2 (en) | 2018-07-20 | 2023-12-12 | Tesla, Inc. | Annotation cross-labeling for autonomous control systems |
US11636333B2 (en) | 2018-07-26 | 2023-04-25 | Tesla, Inc. | Optimizing neural network structures for embedded systems |
US12079723B2 (en) | 2018-07-26 | 2024-09-03 | Tesla, Inc. | Optimizing neural network structures for embedded systems |
US11983630B2 (en) | 2018-09-03 | 2024-05-14 | Tesla, Inc. | Neural networks for embedded devices |
US11562231B2 (en) | 2018-09-03 | 2023-01-24 | Tesla, Inc. | Neural networks for embedded devices |
US11893774B2 (en) | 2018-10-11 | 2024-02-06 | Tesla, Inc. | Systems and methods for training machine models with augmented data |
CN112805200A (en) * | 2018-10-11 | 2021-05-14 | 宝马股份公司 | Snapshot image of traffic scene |
US11665108B2 (en) | 2018-10-25 | 2023-05-30 | Tesla, Inc. | QoS manager for system on a chip communications |
CN111145580B (en) * | 2018-11-06 | 2022-06-14 | 松下知识产权经营株式会社 | Mobile body, management device and system, control method, and computer-readable medium |
CN111145580A (en) * | 2018-11-06 | 2020-05-12 | 松下知识产权经营株式会社 | Mobile body, management device and system, control method, and computer-readable medium |
US11816585B2 (en) | 2018-12-03 | 2023-11-14 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles |
US11537811B2 (en) | 2018-12-04 | 2022-12-27 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
US11908171B2 (en) | 2018-12-04 | 2024-02-20 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
US11610117B2 (en) | 2018-12-27 | 2023-03-21 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform |
US12014553B2 (en) | 2019-02-01 | 2024-06-18 | Tesla, Inc. | Predicting three-dimensional features for autonomous driving |
US11748620B2 (en) | 2019-02-01 | 2023-09-05 | Tesla, Inc. | Generating ground truth for machine learning from time series elements |
US11567514B2 (en) | 2019-02-11 | 2023-01-31 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target |
US11790664B2 (en) | 2019-02-19 | 2023-10-17 | Tesla, Inc. | Estimating object properties using visual image data |
CN110853160A (en) * | 2019-09-29 | 2020-02-28 | 广州市凌特电子有限公司 | Multi-dimensional recognition system and method for expressway lane |
CN111126336B (en) * | 2019-12-31 | 2023-07-21 | 潍柴动力股份有限公司 | Sample collection method, device and equipment |
CN111126336A (en) * | 2019-12-31 | 2020-05-08 | 潍柴动力股份有限公司 | Sample collection method, device and equipment |
CN114902294A (en) * | 2020-01-02 | 2022-08-12 | 国际商业机器公司 | Fine-grained visual recognition in mobile augmented reality |
CN114902294B (en) * | 2020-01-02 | 2023-10-20 | 国际商业机器公司 | Fine-grained visual recognition in mobile augmented reality |
CN113496213B (en) * | 2021-06-29 | 2024-05-28 | 中汽创智科技有限公司 | Method, device, system and storage medium for determining target perception data |
CN113496213A (en) * | 2021-06-29 | 2021-10-12 | 中汽创智科技有限公司 | Method, device and system for determining target perception data and storage medium |
US12136030B2 (en) | 2023-03-16 | 2024-11-05 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform |
Also Published As
Publication number | Publication date |
---|---|
CN102737236B (en) | 2015-06-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102737236B (en) | Method for automatically acquiring vehicle training sample based on multi-modal sensor data | |
CN106919915B (en) | Map road marking and road quality acquisition device and method based on ADAS system | |
CN101617197B (en) | Feature identification apparatus, measurement apparatus and measuring method | |
CA2950791C (en) | Binocular visual navigation system and method based on power robot | |
Gandhi et al. | Vehicle surround capture: Survey of techniques and a novel omni-video-based approach for dynamic panoramic surround maps | |
Broggi et al. | Self-calibration of a stereo vision system for automotive applications | |
CN111046743B (en) | Barrier information labeling method and device, electronic equipment and storage medium | |
US8872925B2 (en) | Method and device for camera calibration | |
JP4717760B2 (en) | Object recognition device and video object positioning device | |
CN108154472B (en) | Parking space visual detection method and system integrating navigation information | |
US20200202175A1 (en) | Database construction system for machine-learning | |
CN110443898A (en) | A kind of AR intelligent terminal target identification system and method based on deep learning | |
EP3594902B1 (en) | Method for estimating a relative position of an object in the surroundings of a vehicle and electronic control unit for a vehicle and vehicle | |
CN109596121B (en) | Automatic target detection and space positioning method for mobile station | |
CN105182320A (en) | Depth measurement-based vehicle distance detection method | |
Rodríguez et al. | Obstacle avoidance system for assisting visually impaired people | |
CN114761997A (en) | Target detection method, terminal device and medium | |
Petrovai et al. | A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices | |
Li et al. | Durlar: A high-fidelity 128-channel lidar dataset with panoramic ambient and reflectivity imagery for multi-modal autonomous driving applications | |
Kruber et al. | Vehicle position estimation with aerial imagery from unmanned aerial vehicles | |
CN111256651B (en) | Week vehicle distance measuring method and device based on monocular vehicle-mounted camera | |
Hu et al. | A high-resolution surface image capture and mapping system for public roads | |
CN114503044B (en) | System and method for automatically marking objects in a 3D point cloud | |
Philipsen et al. | Day and night-time drive analysis using stereo vision for naturalistic driving studies | |
Nowak et al. | Vision-based positioning of electric buses for assisted docking to charging stations |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20150624 Termination date: 20180706 |