CN104952254B - Vehicle identification method, device and vehicle - Google Patents


Info

Publication number
CN104952254B
CN104952254B (application CN201410125721.5A / CN201410125721A)
Authority
CN
China
Prior art keywords
vehicle
image
opponent
distance
opponent vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410125721.5A
Other languages
Chinese (zh)
Other versions
CN104952254A (en)
Inventor
黄忠伟
姜波
芮淑娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BYD Co Ltd
Original Assignee
BYD Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BYD Co Ltd
Priority to CN201410125721.5A
Publication of CN104952254A
Application granted
Publication of CN104952254B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 - Traffic control systems for road vehicles
    • G08G 1/01 - Detecting movement of traffic to be counted or controlled
    • G08G 1/017 - Detecting movement of traffic to be counted or controlled: identifying vehicles

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a vehicle identification method, a vehicle identification device, and a vehicle. The method includes: acquiring a first image and a second image, where the first image is a color image or a luminance image and the second image is a depth image; obtaining highway lane lines from the first image; mapping the lane lines into the second image according to the interleave mapping relationship between the first image and the second image, to generate a vehicle identification range in the second image; and identifying vehicles according to the vehicle identification range. The method of the embodiments of the present invention can rapidly and accurately identify an opponent vehicle and, based on the identified opponent vehicle, quickly and reliably generate warning information, winning precious braking reaction time for the driver of the host vehicle or of the opponent vehicle and further improving driving safety.

Description

Vehicle identification method, device and vehicle
Technical field
The present invention relates to the field of vehicle technology, and more particularly to a vehicle identification method, a vehicle identification device, and a vehicle.
Background technology
In modern traffic, highway transportation has become an indispensable mode of transportation because of its high speed, numerous routes, and large passenger and freight volume. However, precisely because of these characteristics, highway traffic accidents remain frequent, and the resulting casualties and property losses are often severe. Therefore, vehicle recognition technologies for vehicle driving safety continue to emerge, including millimeter-wave radar and laser radar (lidar) technologies.
However, the production cost of millimeter-wave radar and lidar has never been effectively reduced, so in the passenger-car market they remain exclusive to luxury vehicles. The related art therefore discloses using a single inexpensive ordinary camera or infrared camera to image the vehicle in front of or behind the host vehicle, and using pattern recognition technology to identify the vehicle and the vehicle width in the image. However, pattern recognition of vehicles in images from an ordinary visible-light camera or a near-infrared camera in daytime requires a large amount of complex and time-consuming computation, and usually cannot meet the fast-response requirement of vehicle identification. Far-infrared cameras, moreover, are not only expensive but also difficult to use in high-temperature periods and regions.
In addition, to improve vehicle identification accuracy, the related art also discloses technical solutions using two cameras of the same type or of two different types. However, such solutions still cannot avoid the problem of complex pattern-recognition computation; misalignment of the two cameras' optical axes causes a larger identification blind zone, and a mechanical calibration process between the two cameras is required.
Furthermore, to reduce the computation for identifying objects and to reduce the identification blind zone, the related art discloses an object detection system that includes a TOF (Time-Of-Flight) sensor capable of direct ranging and an image sensor for capturing a luminance image of the object to be detected, where the TOF sensor and the image sensor are formed interleaved in the same chip package so that the two kinds of sensors can use the same set of optical lenses (Lens). The related art also discloses an object detection method: when the TOF sensor of the object detection system detects that the distance between the host vehicle and an object is less than a warning distance, the color of the detected object's image is immediately changed in the displayed image captured by the image sensor, and a warning sound or a vibration alarm is issued. However, when the host vehicle is moving, issuing warnings about surrounding objects based only on detected distance will confuse and worry the driver; for example, unnecessary warnings are issued in common situations such as an overtaking vehicle in the adjacent same-direction lane, an oncoming vehicle in the opposite lane, or a guardrail at the lane edge.
Summary of the invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art. Therefore, a first object of the present invention is to propose a vehicle identification method that can rapidly and accurately identify an opponent vehicle and, based on the identified opponent vehicle, quickly and reliably generate warning information, winning precious braking reaction time for the driver of the host vehicle or of the opponent vehicle and further improving driving safety.
A second object of the present invention is to propose a vehicle identification device.
A third object of the present invention is to propose a vehicle.
A fourth object of the present invention is to propose another vehicle.
To achieve the above objects, a vehicle identification method according to an embodiment of the first aspect of the present invention includes: acquiring a first image and a second image, where the first image is a color image or a luminance image and the second image is a depth image; obtaining highway lane lines from the first image; mapping the lane lines into the second image according to the interleave mapping relationship between the first image and the second image, to generate a vehicle identification range in the second image; and identifying vehicles according to the vehicle identification range.
The vehicle identification method according to the embodiments of the present invention has the following advantages. (1) Because the imaging components for the depth image and the color image are in the same chip package and can use the same set of optical lenses, the identification blind zone is reduced and the mechanical calibration process between two separate packages is eliminated, compared with separately packaged imaging components using different lens sets. Moreover, according to Moore's law of the semiconductor industry, the CMOS TOF-sensor and image-sensor interleaved array chip used in the present invention will reach a sufficiently low production cost within a limited period. (2) Even if day-and-night changes in natural lighting cause drastic variation in the color or luminance image that is unfavorable to vehicle identification, natural lighting has little influence on vehicle identification from the depth image, so complex, computation-heavy pattern recognition is not needed; vehicle identification from the depth image is simpler, faster, and more precise than computation on the color or luminance image, warning information can be generated quickly, and a lower-cost computing chip can complete the computation. (3) Because the depth image and the color image are captured almost simultaneously, the time error between the vehicle identification range identified from the color image and the vehicle identified from the depth image is very small, which ensures identification accuracy. (4) Because the lane lines identified from the color or luminance image delimit the identification range, identification interference from vehicles or objects outside the range is eliminated, so the generated warning information is accurate and necessary, and no unnecessary warnings will disturb the driver's normal driving. (5) Vehicles can be identified quickly, reliably, and with high precision, and warning information can be generated quickly and reliably based on the identified vehicle, winning precious braking reaction time for the driver of the host vehicle or of the opponent vehicle and further improving driving safety.
To achieve the above objects, a vehicle identification device according to an embodiment of the second aspect of the present invention includes: an image acquisition module for acquiring a first image and a second image, where the first image is a color image or a luminance image and the second image is a depth image; a lane line acquisition module for obtaining highway lane lines from the first image; an identification range generation module for mapping the lane lines into the second image according to the interleave mapping relationship between the first image and the second image, to generate a vehicle identification range in the second image; and an identification module for identifying vehicles according to the vehicle identification range.
The vehicle identification device according to the embodiments of the present invention has the following advantages. (1) Because the imaging components for the depth image and the color image are in the same chip package and can use the same set of optical lenses, the identification blind zone is reduced and the mechanical calibration process between two separate packages is eliminated, compared with separately packaged imaging components using different lens sets. Moreover, according to Moore's law of the semiconductor industry, the CMOS TOF-sensor and image-sensor interleaved array chip used in the present invention will reach a sufficiently low production cost within a limited period. (2) Even if day-and-night changes in natural lighting cause drastic variation in the color or luminance image that is unfavorable to vehicle identification, natural lighting has little influence on vehicle identification from the depth image, so complex, computation-heavy pattern recognition is not needed; vehicle identification from the depth image is simpler, faster, and more precise than computation on the color or luminance image, warning information can be generated quickly, and a lower-cost computing chip can complete the computation. (3) Because the depth image and the color image are captured almost simultaneously, the time error between the vehicle identification range identified from the color image and the vehicle identified from the depth image is very small, which ensures identification accuracy. (4) Because the lane lines identified from the color or luminance image delimit the identification range, identification interference from vehicles or objects outside the range is eliminated, so the generated warning information is accurate and necessary, and no unnecessary warnings will disturb the driver's normal driving. (5) Vehicles can be identified quickly, reliably, and with high precision, and warning information can be generated quickly and reliably based on the identified vehicle, winning precious braking reaction time for the driver of the host vehicle or of the opponent vehicle and further improving driving safety.
To achieve the above objects, a vehicle according to an embodiment of the third aspect of the present invention includes the vehicle identification device of the embodiment of the second aspect of the present invention.
Because the vehicle according to the embodiments of the present invention is provided with the vehicle identification device, while moving it can rapidly and accurately identify an opponent vehicle and, based on the identified opponent vehicle, quickly and reliably generate warning information, winning precious braking reaction time for the driver of the host vehicle or of the opponent vehicle and further improving driving safety.
To achieve the above objects, a vehicle according to an embodiment of the fourth aspect of the present invention includes: a camera with a TOF-sensor and image-sensor interleaved array chip.
Because the vehicle according to the embodiments of the present invention has a camera with a TOF-sensor and image-sensor interleaved array chip, while moving it can capture the color/luminance image and the depth image of an opponent vehicle almost simultaneously.
Brief description of the drawings
Fig. 1 is the flow chart of vehicle identification method according to an embodiment of the invention;
Fig. 2 is a schematic diagram of the pixel composition of the CMOS TOF-sensor and image-sensor interleaved array chip disclosed in the conference proceedings;
Fig. 3 is a schematic electronic wiring diagram of the CMOS TOF-sensor and image-sensor interleaved array chip disclosed in the conference proceedings;
Fig. 4 is a schematic top view of the camera mounting position on the vehicle according to an embodiment of the invention;
Fig. 5 is a flow chart of obtaining the highway lane lines from the first image according to an embodiment of the invention;
Fig. 6 is a schematic histogram of the statistical probability distribution over quantized brightness, used to find the luminance threshold, according to an embodiment of the invention;
Fig. 7 is a schematic diagram of the created binary image MB0 highlighting the lane lines according to an embodiment of the invention;
Fig. 8 is a schematic diagram of binary image MB1 according to an embodiment of the invention;
Fig. 9 is a schematic diagram of binary image MB2 according to an embodiment of the invention;
Figure 10 is a schematic diagram of binary image MB3 (the highway lane lines) according to an embodiment of the invention;
Figure 11 is a schematic diagram of the error in the vehicle identification range according to an embodiment of the invention;
Figure 12 is a schematic diagram of the segment merging process according to an embodiment of the invention;
Figure 13 is a schematic diagram of the segment merging process according to an embodiment of the invention;
Figure 14 is a schematic diagram of the segment merging process according to an embodiment of the invention;
Figure 15 is a flow chart of identifying vehicles according to the vehicle identification range according to an embodiment of the invention;
Figure 16 is a flow chart of identifying the opponent vehicle according to the second image and the vehicle identification range according to an embodiment of the invention;
Figure 17 is depth image A1 corresponding to shooting time T1 according to an embodiment of the invention;
Figure 18 is depth image A2 corresponding to shooting time T2 according to an embodiment of the invention;
Figure 19 is the time-differential depth image highlighting moving objects according to an embodiment of the invention;
Figure 20 is the time-differential depth image of the opponent vehicle according to an embodiment of the invention;
Figure 21 is a schematic diagram of searching for the edges of the opponent vehicle in depth image A1 at shooting time T1 according to an embodiment of the invention;
Figure 22 is a schematic diagram of searching for the edges of the opponent vehicle in depth image A2 at shooting time T2 according to an embodiment of the invention;
Figure 23 is a schematic structural diagram of a vehicle identification device according to an embodiment of the invention;
Figure 24 is a schematic structural diagram of the lane line acquisition module 200 according to an embodiment of the invention;
Figure 25 is a schematic structural diagram of the identification module 400 according to an embodiment of the invention;
Figure 26 is a schematic structural diagram of the identification module 400 according to an embodiment of the invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below, and examples of the embodiments are shown in the drawings, where the same or similar reference numerals throughout denote the same or similar elements or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present invention; they are not to be construed as limiting the invention.
The vehicle identification method, device, and vehicle according to embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 1 is a flow chart of a vehicle identification method according to an embodiment of the invention. As shown in Fig. 1, the vehicle identification method includes the following steps.
S101: acquire a first image and a second image, where the first image is a color image or a luminance image and the second image is a depth image.
In an embodiment of the present invention, the first image and the second image are acquired by a camera with a TOF-sensor and image-sensor interleaved array chip.
Specifically, the TOF-sensor and image-sensor interleaved array chip is briefly introduced below.
Samsung Electronics of South Korea has developed a CMOS sensor that can simultaneously acquire a depth image and an ordinary RGB color image, at the time the world's first single CMOS sensor to capture both kinds of image; the company published a paper on this CMOS sensor at the "ISSCC 2012" conference in the United States on February 22, 2012. A CMOS sensor of this type belongs to the category of CMOS TOF-sensor and image-sensor interleaved array chips. Fig. 2 shows the composition of the CMOS sensor developed by Samsung Electronics, where each pixel unit includes 1 Z pixel (TOF sensor, for producing the depth image) and 8 RGB pixels (image sensor, for producing the RGB color image); that is, 1 Z pixel corresponds to 2 R pixels, 4 G pixels, and 2 B pixels. Fig. 3 shows the electronic wiring of the CMOS sensor. When the same set of optical lenses is used, the CMOS sensor can almost simultaneously capture a depth image (horizontal resolution 480, vertical resolution 360) and an RGB color image (horizontal resolution 1920, vertical resolution 720).
It should be understood that a captured image is in fact a numerical matrix with a given number of rows and columns (i.e., horizontal and vertical resolution), a given range of numerical values, and a given arrangement of values.
Therefore, once a CMOS TOF-sensor and image-sensor interleaved array chip is determined, the ratio and arrangement relationship between the Z pixels and the RGB (or YUV luminance/chrominance) pixels are determined, and so is the interleave mapping relationship between the color image (or luminance image) and the depth image it captures.
In an embodiment of the present invention, as shown in Fig. 4, the camera C1 with the CMOS TOF-sensor and image-sensor interleaved array chip (illustrated by the small banded box in the figure) is typically mounted on the body centerline of the host vehicle Car1 (illustrated by the large white box in the figure). Camera C1 may be mounted at the front of Car1 and image the area ahead, or at the rear of Car1 and image the area behind, or one camera C1 may be mounted at each end to image both ahead and behind. The installation position and imaging direction of camera C1 are not limited here; the vehicle identification method of the present invention applies to all installation positions and imaging directions, so the following description does not distinguish between them and only illustrates the case of a single camera C1.
S102: obtain highway lane lines from the first image.
Specifically, the highway lane lines are obtained from the color image or luminance image captured by the camera.
In an embodiment of the present invention, as shown in Fig. 5, obtaining the highway lane lines from the first image specifically includes the following steps.
S1021: generate a grayscale image from the first image, and generate a luminance threshold from the grayscale image.
Specifically, ordinary color images can be displayed on a display device using any of several color standards, such as the RGB (Red, Green, Blue) standard or the YUV standard (Y denotes luminance; U and V denote chrominance). Therefore, if the captured color image uses the YUV standard, the Y signal can be extracted directly to create a grayscale (luminance) image; if the captured color image uses the RGB standard (or gamma-corrected R'G'B'), the grayscale image is created by the formula Y = 0.299R' + 0.587G' + 0.114B'.
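As an illustrative sketch of the luminance formula above (assuming a NumPy image array; the function name is ours, not the patent's):

```python
import numpy as np

def rgb_to_luminance(rgb):
    """Convert a gamma-corrected R'G'B' image (H x W x 3) to a grayscale
    luminance image using Y = 0.299 R' + 0.587 G' + 0.114 B'."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return rgb @ np.array([0.299, 0.587, 0.114])

# A white pixel maps to full luminance; a pure red pixel to 0.299 of it.
img = np.array([[[255, 255, 255], [255, 0, 0]]])
y = rgb_to_luminance(img)
```

Note that the weights sum to 1, so luminance stays in the same numeric range as the input channels.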
Further, because the brightness of the lane lines differs markedly from that of the highway surface (the lane lines are brighter), one or more luminance thresholds can be obtained by searching; a luminance threshold can be found with a "histogram statistics - bimodal" algorithm. The grayscale image can also be divided into several sub-images, with the bimodal histogram algorithm applied to each sub-image to find multiple luminance thresholds, in order to cope with variations in the brightness of the road surface or the lane lines.
More specifically, as shown in Fig. 6, suppose the quantized brightness of a pixel in the grayscale image ranges from 0 to 255. The histogram counts the distribution probability (or count) of all pixels of the grayscale image or its sub-image over the quantized brightness: one probability peak lies in the darker set of pixels covering the highway surface, another lies in the brighter set of pixels covering the lane lines, and the quantized brightness at the valley between the two peaks is the luminance threshold (for example, 170 in Fig. 6). Therefore, it suffices to search along the quantized-brightness axis for the two peak values and their positions, then search between the two peaks for the valley value; the valley position is the luminance threshold.
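The valley search between the two histogram peaks can be sketched as follows; the peak-separation heuristic and the synthetic data are illustrative assumptions, not the patent's exact algorithm:

```python
import numpy as np

def bimodal_threshold(gray, bins=256):
    """Find a luminance threshold as the histogram valley between two peaks:
    take the strongest peak, then the strongest bin sufficiently far from it,
    then the minimum bin between the two (the valley)."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, bins))
    p1 = int(np.argmax(hist))                       # strongest peak
    mask = np.abs(np.arange(bins) - p1) >= bins // 8
    p2 = int(np.argmax(np.where(mask, hist, -1)))   # second, separated peak
    lo, hi = sorted((p1, p2))
    return lo + int(np.argmin(hist[lo:hi + 1]))     # valley position

# Synthetic image: dark road surface around 60, bright lane lines around 200.
rng = np.random.default_rng(0)
road = rng.normal(60, 10, 9000).clip(0, 255)
lines = rng.normal(200, 10, 1000).clip(0, 255)
gray = np.concatenate([road, lines])
t = bimodal_threshold(gray)
```

On this synthetic distribution the valley lands between the two clusters, which is the behavior the patent's threshold search relies on.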
S1022: create a binary image from the grayscale image and the luminance threshold.
Specifically, in the grayscale image, pixels whose brightness is higher than the luminance threshold (covering the lane lines) are set to 1, and the other pixels whose brightness is lower than the threshold (covering the highway surface) are set to 0, thereby creating the binary image MB0 that highlights the lane lines, as shown in Fig. 7.
S1023: identify initial highway lane lines from the binary image, and obtain the highway lane lines from the initial lane lines.
In an embodiment of the present invention, obtaining the highway lane lines from the initial lane lines includes: screening the initial lane lines to obtain two straight segments, and extending or merging the two straight segments to obtain the highway lane lines.
Specifically, because lane lines close to the host vehicle Car1 are always nearly straight, the Hough line detection algorithm (Hough transform) can be used to identify the lane lines close to Car1 in the binary image MB0. The Hough line detection algorithm will generally detect many straight segments whose pixel value is 1; as shown in Fig. 7, besides lane lines these may include guardrails, speed bumps, and the like. It is therefore necessary to pick out, from the detected segments, the N long straight segments that form a relatively large acute angle with the horizontal line. Fig. 8 shows the 6 segments after selection (thick solid lines; for segments in the right half of the image, the complement of the angle formed with the horizontal line is taken as the acute angle, with the horizontal line and the acute angle shown dashed). The other segments are discarded, thereby creating the binary image MB1. Further, because there are generally 2 lane lines (left and right) closest to the host vehicle Car1, and among the N long segments the acute angles these 2 lane line segments form with the horizontal vary within a relatively stable range, the 2 left and right lane line segments can be picked out of the N segments, thereby creating the binary image MB2, as shown in Fig. 9.
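The screening of Hough-detected segments by length and acute angle with the horizontal can be sketched as follows (the segment format, thresholds, and function name are illustrative assumptions):

```python
import math

def select_lane_segments(segments, min_len=50.0, min_angle_deg=20.0):
    """Screen Hough-detected segments, given as (x1, y1, x2, y2) tuples:
    keep only long segments that form a sufficiently large acute angle
    with the horizontal, as when building MB1 from MB0."""
    kept = []
    for x1, y1, x2, y2 in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        angle = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        acute = min(angle, 180.0 - angle)   # acute angle with the horizontal
        if length >= min_len and acute >= min_angle_deg:
            kept.append((x1, y1, x2, y2))
    return kept

segments = [
    (0, 300, 200, 100),   # long, steep: a lane line candidate
    (0, 150, 300, 140),   # long but nearly horizontal: e.g. a speed bump
    (10, 10, 20, 30),     # too short: noise
]
lanes = select_lane_segments(segments)
```

Selecting the final 2 left/right segments for MB2 would then further filter by the stable angle range the patent describes.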
More specifically, extending the 2 selected left and right lane segments (i.e., the 2 lane segments shown in Fig. 8) in the grayscale image coordinate system creates the lane line original coordinate set, which contains the coordinates of every grayscale image pixel that the extended 2 segments pass through, thereby creating the binary image MB3, as shown in Fig. 10. Fig. 10 is a schematic diagram of the highway lane lines.
In addition, when the lane line far from the host vehicle Car1 is a curve, extending the 2 selected left and right straight segments as described above to form the lane lines will produce a large error relative to the actual situation. For example, as shown in Fig. 11, the thick solid segments (labeled 1, 2) are the lane lines identified by the Hough line detection algorithm, the thin dashed segments (labeled 3, 4) are the extensions of the 2 selected segments, and segment 5 and dashed segment 6 are the actual highway lane lines. If the vehicle identification range were generated from lane lines formed from segments 1, 2, 3, and 4, a large error would result: the region between segment 3 and segment 5 would be falsely included in the identification range, and the region between segment 4 and segment 6 would be omitted from it. Therefore, when a curve exists, the 2 left and right straight segments (segments 1 and 2) are not extended; instead, straight segments are searched for near the upper ends of the 2 segments and merged with them, and the search-and-merge continues from the upper end of each merged segment until the length of the merged line reaches a set limit. The merging process is shown in Figs. 12, 13, and 14. The set of coordinates of every grayscale image pixel that the 2 merged left and right lane lines pass through is the lane line original coordinate set. Fig. 14 is a schematic diagram of the highway lane lines obtained when a curve exists.
S103: map the highway lane lines into the second image according to the interleave mapping relationship between the first image and the second image, to generate the vehicle identification range in the second image.
Specifically, because the lane lines lie close to the highway surface and are thin, in the depth image their depth values are very close to those of the road surface (it is depth, not brightness, that varies there), making it difficult to distinguish lane lines from the road surface in the depth image. Therefore, the lane lines obtained in step S102 can be mapped into the depth image according to the interleave mapping relationship between the color image (or luminance/grayscale image) and the depth image, to obtain the vehicle identification range.
More specifically, taking the simplest equal-proportion interleave mapping as an example, each row of Z pixels in the depth image can be set to correspond to N rows of Y pixels in the grayscale image of step S1021 (the vertical resolution of the grayscale image is N times that of the depth image), and each column of Z pixels to M columns of Y pixels (the horizontal resolution of the grayscale image is M times that of the depth image). Further, according to this mapping ratio, for each coordinate (row and column) in the lane line original coordinate set, the mapped row coordinate is the original row coordinate divided by N and rounded, and the mapped column coordinate is the original column coordinate divided by M and rounded. From the mapped row and column coordinates, the lane line mapping coordinate set on the depth image (i.e., the vehicle identification range) can be created. The 2 left and right lane lines closest to the host vehicle Car1 are thus mapped from the color image (or luminance/grayscale image) into the depth image, and the region of depth image pixels between the 2 mapped lane lines is the vehicle identification range.
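The equal-proportion coordinate mapping can be sketched as follows; floor division stands in for the rounding, and the names and sample values are ours:

```python
def map_lane_coords(lane_coords, n, m):
    """Map lane-line pixel coordinates from the grayscale image to the depth
    image under an equal-proportion interleave mapping: divide each row
    coordinate by N and each column coordinate by M, rounding down."""
    mapped = set()
    for row, col in lane_coords:
        mapped.add((row // n, col // m))
    return mapped

# Grayscale 720 rows x 1920 cols onto depth 360 x 480: N = 2, M = 4.
lane_coords = [(100, 400), (101, 401), (102, 404)]
depth_coords = map_lane_coords(lane_coords, 2, 4)
```

Using a set mirrors the fact that several grayscale pixels collapse onto one depth pixel, as the first two sample coordinates do here.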
S104: identify vehicles according to the vehicle identification range.
Specifically, when an opponent vehicle other than the host vehicle Car1 appears in the color image (or luminance image) and the depth image, the rear or front of the opponent vehicle is usually closest to Car1 and forms a strong brightness and depth contrast with the highway surface (or other, farther things): the parts within the rear or front of the opponent vehicle have almost the same depth but are clearly above the road surface. Moreover, the depth image directly contains the distance information of the opponent vehicle. Therefore, vehicles can be identified according to the vehicle identification range.
In one embodiment of the invention, as shown in figure 15, vehicle is identified into one according to vehicle identification scope Step includes:
S1041: identifying the opponent vehicle according to the second image and the vehicle identification range.
In one embodiment of the invention, as shown in Fig. 16, identifying the opponent vehicle according to the second image and the vehicle identification range specifically includes:
S201: acquiring two second images shot at different moments, and creating, from the two second images, a time-differential depth image highlighting moving objects.
Specifically, the distance between the present vehicle Car1 and the opponent vehicle usually changes continuously, which appears in the depth image as the depth pixel values of the opponent vehicle, or the position of the opponent vehicle in the depth-image coordinate system, changing over time. Therefore, the opponent vehicle can be identified by a time-differential algorithm applied to the depth images. For example, take two depth images A1, A2 whose shooting times are T1 and T2 respectively (T1 earlier than T2), as shown in Figs. 17 and 18 (in the figures, the region between the two thin dotted lines is the vehicle identification range; for ease of illustration, objects other than the vehicle are not drawn). The depth value of each pixel a1 in A1 is subtracted from the depth value of the pixel a2 in A2 having the same depth-image coordinates, and the absolute value is taken, thereby creating the time-differential depth image MC highlighting moving objects (as shown in Fig. 19).
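The per-pixel differencing can be sketched as below. The tiny 4×6 frames and all depth values are hypothetical stand-ins for real A1, A2; the sketch only illustrates the |A2 − A1| step.

```python
import numpy as np

# Two hypothetical depth frames: road surface at depth 200, vehicle nearer.
A1 = np.full((4, 6), 200, dtype=np.int32)
A1[1:3, 1:3] = 50                 # opponent vehicle at time T1
A2 = np.full((4, 6), 200, dtype=np.int32)
A2[1:3, 2:4] = 40                 # by T2 the vehicle has moved and come closer

# Time-differential depth image: large where something moved, 0 elsewhere.
MC = np.abs(A2 - A1)
```

Static background cancels to 0, while pixels the vehicle entered or left carry large differential values, which is what makes the mover stand out in MC.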
S202: acquiring the time-differential depth image of the opponent vehicle within the vehicle identification range from the time-differential depth image highlighting moving objects.
Specifically, the vehicle identification range of A1 or A2 is applied to the time-differential depth image MC of the highlighted moving objects, and the pixel values in MC outside the vehicle identification range are set to 0, thereby creating the time-differential depth image MD of the opponent vehicle within the vehicle identification range (as shown in Fig. 20). In Fig. 20, the polygonal frames filled with a grid pattern in MD represent the differential values of the depth values of the depth pixels contained by the opponent vehicle in A1 and A2.
S203: projecting the time-differential depth image of the opponent vehicle along the row direction and the column direction, to obtain the row numbers and column numbers of the four edges of the opponent vehicle in the time-differential depth image of the opponent vehicle.
Specifically, by projecting the grid-filled polygonal frames of S202 in MD along the horizontal and vertical directions, the row numbers and column numbers of the top, bottom, left and right edges of the grid-filled polygonal frames, i.e., RowHigh, RowLow, ColLeft and ColRight, can easily be found, as shown in Fig. 20.
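One way to realize the projection is to sum MD along each axis and take the first and last non-zero positions as the four edges. This is a sketch under assumed toy data, not the patent's code.

```python
import numpy as np

# Hypothetical masked differential image MD: non-zero only where the opponent
# vehicle's differential frames lie.
MD = np.zeros((6, 8), dtype=np.int32)
MD[2:5, 3:6] = 150

rows = np.flatnonzero(MD.sum(axis=1))   # projection along the row direction
cols = np.flatnonzero(MD.sum(axis=0))   # projection along the column direction
row_high, row_low = rows[0], rows[-1]   # RowHigh, RowLow
col_left, col_right = cols[0], cols[-1] # ColLeft, ColRight
```

The four numbers bound the union of the vehicle's positions at T1 and T2, which is exactly what steps S204 onward disentangle.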
S204: obtaining, according to the row numbers and column numbers of the four edges, the row numbers and column numbers corresponding to the four edges of the opponent vehicle in each of the two second images.
Specifically, assume that the imaging configuration of the depth image is such that the more distant an object, the larger the value of its depth pixel. RowHigh, RowLow, ColLeft and ColRight are applied to the above A1 and A2, as shown in Figs. 21 and 22 (for convenience, only the opponent vehicle is drawn in the figures). Between ColLeft and ColRight, the top edge of the opponent vehicle is searched for downwards from RowHigh; the top edge is characterized in that the values of the depth pixels above it, which belong to the road surface (or other more distant objects), are significantly larger than the values of the depth pixels below it, which belong to the opponent vehicle. Suppose then that the top edge of the opponent vehicle is found at RowHigh in A1, but is not found at RowHigh in A2, giving RowTop1 = RowHigh. Since the imaging of the depth image also obeys the perspective principle — the farther the same object is from the camera, the nearer the top of the image its row number — the opponent vehicle is farther away at time T1 than at time T2. (Conversely, if the top edge were not found at RowHigh in A1 but were found at RowHigh in A2, the opponent vehicle would be closer at T1 than at T2.) The top edge RowTop2 of the opponent vehicle is then found below RowHigh in A2, as shown in Fig. 22.
More specifically, since RowTop1 and RowTop2 are not in the same row, it follows from the perspective principle that the bottom edges of the opponent vehicle in A1 and A2 are not in the same row either, i.e., RowBottom1 ≠ RowBottom2. The bottom-edge row number RowLow of the grid-filled polygonal frame must equal either RowBottom1 or RowBottom2, and the reasoning above established that the opponent vehicle is farther away at T1 than at T2; it therefore follows that RowLow = RowBottom2, as shown in Fig. 22.
Further, the left edge of the opponent vehicle is searched for rightwards from ColLeft; the left edge is characterized in that the values of the depth pixels on its left, which belong to the road surface (or other more distant objects), are significantly larger than the values of the depth pixels on its right, which belong to the opponent vehicle. Suppose that the left edge of the opponent vehicle is not found at ColLeft in A1, but is found at ColLeft in A2; this gives ColLeft = ColLeft2. The left edge ColLeft1 of the opponent vehicle is then found to the right of ColLeft in A1.
Afterwards, the right edge of the opponent vehicle is searched for leftwards from ColRight; the right edge is characterized in that the values of the depth pixels on its right, which belong to the road surface (or other more distant objects), are significantly larger than the values of the depth pixels on its left, which belong to the opponent vehicle. Suppose that the right edge of the opponent vehicle is not found at ColRight in A1, but is found at ColRight in A2; this gives ColRight = ColRight2. The right edge ColRight1 of the opponent vehicle is then found to the left of ColRight in A1.
In A1, the two complete columns of depth pixels at ColLeft1 and ColRight1 are cut by RowTop1 and RowLow into two line segments, referred to as the ColLeft1 segment and the ColRight1 segment. Part of each segment lies on the opponent vehicle and part lies on the road surface, so there exist pixels BL and BR at which the vehicle part of each segment is separated from its road part; the line through BL and BR is therefore the bottom edge of the opponent vehicle. For each pixel on the ColLeft1 segment, compute its PD (the absolute value of the difference between the two pixels adjacent to it along the segment): on the ColLeft1 segment, the PD of pixels belonging to the vehicle is close to zero, while the PD of pixels belonging to the road surface is much larger than zero, so BL can be found. BR can be found in the same way. The row number of the row containing BL or BR may be taken as RowBottom1.
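A minimal sketch of the PD test follows, assuming (as the passage suggests) that the vehicle's rear has nearly constant depth while the road surface's depth changes from row to row; the depth values are invented, and a forward difference is used as a simplification of the two-neighbor PD.

```python
import numpy as np

# Hypothetical column segment: vehicle pixels (constant depth 80), then road
# pixels whose depth grows row by row as the surface recedes.
segment = np.array([80, 80, 80, 80, 120, 130, 140, 150])

pd = np.abs(np.diff(segment.astype(int)))   # |difference of adjacent pixels|
# BL is where PD first jumps well above zero (threshold 5 is illustrative).
bl_index = int(np.argmax(pd > 5))
```

The boundary pixel BL sits at the transition between the near-zero-PD run (vehicle) and the larger-PD run (road), matching the criterion in the text.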
Thus, RowTop1, RowBottom1, ColLeft1 and ColRight1 are found in A1, and RowTop2, RowBottom2, ColLeft2 and ColRight2 are found in A2; that is, the positions of the opponent vehicle in the depth image at both T1 and T2 have been recognized, realizing the identification of the opponent vehicle.
S1042: obtaining the distance between the opponent vehicle and the present vehicle.
In one embodiment of the invention, obtaining the distance between the opponent vehicle and the present vehicle specifically includes: obtaining, in each of the two second images, the group of pixel values contained by the opponent vehicle; and obtaining the minimum value of each group, the two minima being respectively the first distance and the second distance between the opponent vehicle and the present vehicle at the two different moments. It should be noted that obtaining the distance by choosing the minimum value rests on an assumed condition, namely that the smaller a pixel value in the groups of pixel values contained by the opponent vehicle, the nearer the opponent vehicle is to the present vehicle. Of course, the assumed condition may also take the inverse relationship, in which case the corresponding minimum is replaced by the maximum.
Specifically, since the values of the pixels contained by the opponent vehicle in the depth image directly represent the distance between the opponent vehicle and the present vehicle Car1, the minimum of the pixel values contained by the opponent vehicle in the above images A1 and A2 (the depth value at the position of the opponent vehicle nearest to Car1) can be looked up and taken as the distance values D1 and D2 between the opponent vehicle and Car1 at times T1 and T2, respectively.
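Step S1042 then reduces to a minimum over the identified box. The frame, box coordinates and depth values below are hypothetical, chosen only to show the lookup.

```python
import numpy as np

# Hypothetical depth frame A1: road at 200, opponent vehicle nearer.
A1 = np.full((6, 8), 200, dtype=np.int32)
A1[2:5, 3:6] = np.array([[62, 60, 61],
                         [60, 58, 59],
                         [61, 59, 60]])   # vehicle pixels

# Box found in S204 for this frame (RowTop1, RowBottom1, ColLeft1, ColRight1).
row_top, row_bottom, col_left, col_right = 2, 4, 3, 5
D1 = int(A1[row_top:row_bottom + 1, col_left:col_right + 1].min())
```

Under the stated assumption (smaller pixel value means nearer), the minimum picks out the depth at the point of the opponent vehicle closest to Car1.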
However, there exist moments at which the distance between the present vehicle Car1 and the opponent vehicle remains unchanged. For example, at times T1 and T2 the method described above identifies and yields the distance information D1 and D2 between the opponent vehicle and the present vehicle, but at a later time T3 the distance between Car1 and the opponent vehicle remains unchanged, i.e., the top- or bottom-edge row numbers of the opponent vehicle identified at T2 and T3 satisfy RowTop2 = RowTop3 or RowBottom2 = RowBottom3; in that case, let D3 = D2.
Thus, a time-differential depth image of the opponent vehicle is created based on the two depth images shot at different moments and the vehicle identification range; based on the created time-differential depth image, the edges of the opponent vehicle are searched for in the two depth images shot at the two moments so as to identify the position of the opponent vehicle; and based on the values of the depth pixels contained by the opponent vehicle identified at each shooting time, the distance between the opponent vehicle and the present vehicle is found.
S1043: obtaining the relative velocity between the opponent vehicle and the present vehicle according to the distance between the opponent vehicle and the present vehicle.
In one embodiment of the invention, obtaining the relative velocity between the opponent vehicle and the present vehicle according to their distance specifically includes: obtaining the relative velocity between the opponent vehicle and the present vehicle according to the first distance, the second distance and the two different moments, wherein the two moments are a first moment and a second moment, the first moment is earlier than the second moment, the first moment corresponds to the first distance, and the second moment corresponds to the second distance.
Specifically, the distance values D1 and D2 between the opponent vehicle and the present vehicle Car1 at times T1 and T2 were obtained in S1042; then, at time T2, the relative velocity between Car1 and the opponent vehicle is: V = (D2 - D1)/(T2 - T1).
S1044: obtaining the collision time between the opponent vehicle and the present vehicle according to the distance and the relative velocity between the opponent vehicle and the present vehicle.
In one embodiment of the invention, the collision time between the opponent vehicle and the present vehicle is obtained according to the second distance and the relative velocity.
Specifically, at time T2, the collision time TC between the present vehicle Car1 and the opponent vehicle is:
TC = D2/|V| = D2·|(T2 - T1)/(D2 - D1)|, where the relative velocity V is taken as an absolute value when calculating the collision time.
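Steps S1043 and S1044 can be sketched together as below; the sample times and distances are hypothetical (distances in meters, times in seconds).

```python
# Two distance samples of the opponent vehicle, as produced by step S1042.
T1, T2 = 0.0, 0.5
D1, D2 = 30.0, 25.0          # D2 < D1: the vehicles are closing

V = (D2 - D1) / (T2 - T1)    # relative velocity, negative when approaching
TC = D2 / abs(V)             # collision time at T2: TC = D2 * |(T2-T1)/(D2-D1)|
```

A negative V signals approach; taking |V| in the TC formula matches the text's note that the absolute value of the relative velocity is used.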
S1045: generating indication information and/or warning information according to the distance, the relative velocity and the collision time between the opponent vehicle and the present vehicle.
Specifically, the distance, the relative velocity and the collision time between the present vehicle Car1 and the opponent vehicle have been calculated in the preceding steps; from this information, indication information and/or warning information can be generated, and the driver is then warned according to the indication information and/or warning information.
Specifically, the indication information and/or warning information can be used directly, through a display device of the present vehicle Car1 (such as an instrument panel or a display), to remind the driver of the present vehicle to keep a safe distance from the opponent vehicle, maintain a safe speed, or even reduce speed.
In addition, in one embodiment of the invention, multiple thresholds can be set for the distance D, the relative velocity V and the collision time TC between the present vehicle Car1 and the opponent vehicle, and indication information and/or warning information of different grades is then generated according to which thresholds D, V or TC exceed. For example, when TC is less than a first time threshold, indication information and/or warning information of a first grade is generated; when TC is less than a second time threshold (the second time threshold being less than the first time threshold), indication information and/or warning information of a second grade is generated, the second-grade indication information and/or warning information being more urgent than that of the first grade.
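The graded-warning logic can be sketched as follows. The concrete threshold values (3.0 s and 1.5 s) are assumed for illustration; the patent does not specify them.

```python
# Hypothetical collision-time thresholds, in seconds.
FIRST_TC_THRESHOLD = 3.0     # first grade: early reminder
SECOND_TC_THRESHOLD = 1.5    # second grade: more urgent, smaller threshold

def warning_grade(tc):
    """Return 0 (no warning), 1 (first grade) or 2 (second, more urgent grade)
    according to which collision-time threshold tc falls below."""
    if tc < SECOND_TC_THRESHOLD:
        return 2
    if tc < FIRST_TC_THRESHOLD:
        return 1
    return 0
```

Checking the stricter threshold first guarantees the most urgent applicable grade is reported; further grades for D or V could be added in the same pattern.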
In one embodiment of the invention, warning the driver according to the indication information and/or warning information can be realized by a variety of devices or components on the present vehicle Car1. For example, the instrument panel, display, sound device, vibration device or air-conditioning blower of Car1 can perform distinguishing operations to remind the driver of the present vehicle to keep a safe distance from the opponent vehicle, maintain a safe speed, or even reduce speed. As another example, the indication information and/or warning information can also be used by the control devices of the headlights, taillights or horn of Car1 to perform operations such as flashing the lights or sounding the horn, to remind the driver of the opponent vehicle to keep a safe distance from the present vehicle, maintain a safe speed, or even reduce speed. Since the reaction speed of the electronic equipment of Car1 is always faster than that of a driver, such warning operations will undoubtedly win precious braking reaction time for the driver of the present vehicle or of the opponent vehicle, further improving driving safety.
The vehicle identification method of the embodiment of the present invention has the following advantages. (1) Since the imaging components for the depth image and the color image are in the same packaged chip and can use the same group of optical lenses, the identification blind zone is reduced and the mechanical calibration process between two separately packaged parts is eliminated, compared with the case where the imaging components for the depth image and the color image are packaged separately and use different groups of optical lenses; moreover, according to Moore's law of the semiconductor industry, the CMOS TOF-sensor-and-image-sensor interleave array chip utilized by the present invention will reach a sufficiently low production cost within a limited period. (2) Even if drastic day/night changes of natural lighting cause acute variations of the color image or luminance image that are unfavorable to vehicle identification, natural lighting has little influence on vehicle identification because the imaging of the depth image is used; without resorting to complex, computation-intensive pattern recognition methods, the identification and computation of vehicles can be simpler, faster and more precise than computations using a color image or luminance image, warning information can be generated sooner, and lower-cost computing chips can be used to complete the computation. (3) Since the depth image and the color image are shot almost simultaneously, the time error between the vehicle identification range recognized using the color image and the vehicle identification completed using the depth image is very small, which guarantees identification accuracy. (4) Since the color image or luminance image is used to identify the lane lines and thereby delimit the identification range for vehicles, identification interference from vehicles or objects outside the identification range is eliminated, so the warning information generated can be accurate and necessary, without generating unnecessary warning information that disturbs the driver's normal driving. (5) Vehicles can be identified quickly, reliably and with high precision, and warning information is generated quickly and reliably based on the identified vehicles, which can win precious braking reaction time for the drivers of the present vehicle or the opponent vehicle, further improving driving safety.
In order to realize the above embodiments, the present invention also proposes a vehicle identification device.
Fig. 23 is a structural schematic diagram of a vehicle identification device according to an embodiment of the invention.
As shown in Fig. 23, the vehicle identification device includes: an image acquisition module 100, a lane line acquisition module 200, an identification range generation module 300 and an identification module 400.
The image acquisition module 100 is used to obtain a first image and a second image, wherein the first image is a color image or a luminance image, and the second image is a depth image.
In an embodiment of the present invention, the image acquisition module 100 obtains the first image and the second image through a camera with a TOF-sensor-and-image-sensor interleave array chip.
Specifically, the TOF-sensor-and-image-sensor interleave array chip is briefly introduced below.
Samsung Electronics of South Korea developed a CMOS sensor that can obtain a depth image and an ordinary RGB color image simultaneously; obtaining both kinds of image simultaneously with a single CMOS sensor was a world first at the time, and the company published a paper on this CMOS sensor at the "ISSCC 2012" conference in the United States on February 22, 2012. A CMOS sensor of this type belongs to the category of CMOS TOF-sensor-and-image-sensor interleave array chips. Fig. 2 shows the composition of the CMOS sensor developed by Samsung Electronics, in which each pixel cell includes 1 Z pixel (TOF sensor, for producing the depth image) and 8 RGB pixels (image sensor, for producing the RGB color image); that is, 1 Z pixel corresponds to 2 R pixels, 4 G pixels and 2 B pixels. Fig. 3 shows the electronic wiring of the CMOS sensor: when the same group of optical lenses is used, the CMOS sensor can shoot a depth image (horizontal resolution 480, vertical resolution 360) and an RGB color image (horizontal resolution 1920, vertical resolution 720) almost simultaneously.
It should be understood that a captured image is in fact a numerical matrix with a given row/column length (i.e., horizontal and vertical resolution), a given range of numerical variation and a given arrangement of values.
Therefore, once the CMOS TOF-sensor-and-image-sensor interleave array chip is determined, the ratio and arrangement relation between the Z pixels and the RGB (or YUV luminance/chrominance, etc.) pixels are determined, and the interleave mapping relation between the color image (or luminance image) and the depth image it shoots is thereby also determined.
In an embodiment of the present invention, as shown in Fig. 4, the camera C1 with the CMOS TOF-sensor-and-image-sensor interleave array chip (illustrated by the small banded box in the figure) is typically mounted on the body centerline of the present vehicle Car1 (illustrated by the large white box in the figure). The camera C1 may be mounted at the head of Car1 and image the area ahead of the head, may be mounted at the tail of Car1 and image the area behind the tail, or cameras C1 may be mounted at both the head and the tail of Car1 to image the areas ahead of the head and behind the tail simultaneously. The mounting position and imaging direction of the camera C1 are not limited here; that is, the vehicle identification method of the present invention is applicable to any mounting position and imaging direction, so no distinction is made hereafter, and only the case of a single mounted camera C1 is described.
The lane line acquisition module 200 is used to obtain highway lane lines according to the first image.
Specifically, the lane line acquisition module 200 obtains the highway lane lines according to the color image or luminance image shot by the camera.
In one embodiment of the invention, as shown in Fig. 24, the lane line acquisition module 200 includes: a luminance threshold generation unit 210, a binary image creation unit 220 and an acquisition unit 230.
The luminance threshold generation unit 210 is used to generate a gray-scale image according to the first image, and to generate a luminance threshold according to the gray-scale image.
Specifically, ordinary color images can use multiple color standards to realize color display on a display device, such as the RGB (Red Green Blue) standard or the YUV standard (Y represents luminance, U and V represent chrominance). Therefore, if the shot color image uses the YUV standard, the luminance threshold generation unit 210 can directly extract the Y signal to create the gray-scale image of luminance; if the shot color image uses the RGB standard (or R'G'B' after gamma correction), the luminance threshold generation unit 210 creates the gray-scale image of luminance according to the formula Y = 0.299R' + 0.587G' + 0.114B'.
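The quoted formula (the standard BT.601 luma weighting) can be sketched directly; the two sample pixels are illustrative.

```python
# Luminance of a gamma-corrected R'G'B' pixel: Y = 0.299 R' + 0.587 G' + 0.114 B'.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

y_white = luma(255, 255, 255)   # a bright lane-line pixel
y_dark = luma(40, 40, 40)       # a dark road-surface pixel
```

Because the three weights sum to 1, a neutral gray maps to its own level, and the large gap between y_white and y_dark is what the threshold search below exploits.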
Further, since the brightness of a lane line differs markedly from that of the highway surface (the lane line is brighter), one or more luminance thresholds can be obtained by searching. A luminance threshold can be found with a "histogram statistics / bimodal" algorithm; the gray-scale image can also be divided into multiple sub-images, with the "histogram statistics / bimodal" algorithm executed on each sub-image to find multiple luminance thresholds, so as to cope with changes in the brightness of the highway surface or lane lines.
More specifically, as shown in Fig. 6, assume the quantized brightness range of the pixels in the gray-scale image is 0 to 255. The histogram counts the distribution probabilities (or statistical counts) of all pixels of the gray-scale image, or of one of its sub-images, over the quantized brightness range: one probability peak is distributed over the set of darker pixels comprising the highway surface, another probability peak is distributed over the set of brighter pixels comprising the lane lines, and the quantized brightness at the valley between the two probability peaks is the luminance threshold; for example, as shown in Fig. 6, the luminance threshold is 170. Therefore, it suffices to search along the quantized brightness axis of the histogram for the two peaks and their positions, and then to search for the valley value and its position between the two peaks: that valley position is the luminance threshold.
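A minimal sketch of the two-peaks-then-valley search follows. It is one possible reading of the "histogram statistics / bimodal" step, not the patent's algorithm, and the tiny 8-bin histogram stands in for a real 256-bin one.

```python
def bimodal_threshold(hist):
    """Find local peaks, keep the two tallest, and return the bin index of
    the valley (minimum count) between them - the luminance threshold."""
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    return min(range(p1, p2 + 1), key=lambda i: hist[i])

# Hypothetical counts: road peak at bin 2, lane-line peak at bin 6.
hist = [2, 9, 30, 12, 3, 8, 25, 6]
threshold = bimodal_threshold(hist)
```

Real histograms are noisy, so a production version would smooth the counts first; the valley index then plays the role of the value 170 in Fig. 6.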
The binary image creation unit 220 is used to create a binary image according to the gray-scale image and the luminance threshold.
Specifically, in the gray-scale image, the binary image creation unit 220 sets the values of the gray-scale pixels whose brightness is above the luminance threshold (comprising the lane lines) to 1, and sets the values of the other gray-scale pixels whose brightness is below the luminance threshold (comprising the highway surface) to 0, thereby creating the binary image MB0 highlighting the lane lines, as shown in Fig. 7.
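The binarization is a one-line comparison against the threshold; the 3×4 gray image below and the threshold 170 are illustrative values.

```python
import numpy as np

# Hypothetical gray-scale patch: dark road columns flank two bright
# lane-line columns.
gray = np.array([[60, 200, 210, 55],
                 [58, 195, 205, 50],
                 [62, 190, 198, 57]], dtype=np.uint8)

MB0 = (gray > 170).astype(np.uint8)   # 1 = lane line, 0 = road surface
```

MB0 is the binary image the Hough line detection of the next step operates on.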
The acquisition unit 230 is used to identify initial highway lane lines according to the binary image, and to obtain the highway lane lines according to the initial highway lane lines.
In one embodiment of the invention, the acquisition unit 230 is further used to: after identifying the initial highway lane lines according to the binary image, screen the initial highway lane lines to obtain two straight segments, and extend or merge the two straight segments to obtain the highway lane lines.
Specifically, since the lane lines close to the present vehicle Car1 are always nearly straight, the acquisition unit 230 can use the Hough line detection algorithm (Hough transform) to identify the lane lines close to Car1 in the binary image MB0. The Hough line detection algorithm will usually detect several straight segments of pixels with value 1; among the straight segments shown in Fig. 7, besides lane lines, guardrails, speed bumps and the like may also be present. It is therefore necessary to pick out, from the multiple detected straight segments, the N straight segments that are long and form relatively large acute angles with the horizontal; Fig. 8 shows the 6 straight segments after selection (the straight segments are shown as solid lines; for the right half of the image, the complement of the angle formed with the horizontal is taken as the acute angle; the horizontal line and the acute angle are shown as dashed lines), with the other straight segments discarded, thereby creating the binary image MB1. Further, since the lane lines closest to Car1 usually number 2 — one left and one right — and, among the N long straight segments, the acute angles these 2 lane segments form with the horizontal are the largest and have the most stable range of variation, the 2 lane segments can be selected out of the N straight segments, thereby creating the binary image MB2, as shown in Fig. 9.
More specifically, extending the 2 selected left/right lane straight segments (i.e., the 2 lane straight segments shown in Fig. 8) in the gray-scale image coordinate system creates the lane-line original coordinate set, which comprises the set of coordinates of every gray-scale-image pixel the 2 extended left/right lane straight segments pass through, thereby creating the binary image MB3, as shown in Fig. 10. Fig. 10 is the schematic diagram of the highway lane lines.
In addition, when the lane line relatively far from the present vehicle Car1 is a curve, extending the 2 selected left/right lane straight segments according to the above method to form the highway lane lines will produce a large error with respect to the actual situation. For example, as shown in Fig. 11, the thick solid segments (labeled 1, 2) are the lane lines identified by the Hough line detection algorithm, the thin dashed segments (labeled 3, 4) are the extensions of the 2 selected left/right lane straight segments, and solid segment 5 and dashed segment 6 are the actual highway lane lines. If the vehicle identification range were subsequently generated from lane lines formed from segments 1, 2, 3 and 4, a large error would result: the region between segment 3 and segment 5 would be a mistaken vehicle identification range, and the region between segment 4 and segment 6 an omitted one. Therefore, when a curve exists, the 2 left/right lane straight segments (i.e., segment 1 and segment 2) are not extended; instead, straight segments are searched for near the upper ends of the 2 left/right lane straight segments and merged with them, and the searching and merging of straight segments continues at the upper end of each merged segment until the length of the merged segment reaches a set limit. The merging process of the segments is shown in Figs. 12, 13 and 14. The set of coordinates of every gray-scale-image pixel the 2 merged left/right lane segments pass through is the lane-line original coordinate set. Fig. 14 is the schematic diagram of the highway lane lines obtained when a curve exists.
The identification range generation module 300 is used to map the highway lane lines into the second image according to the interleave mapping relation between the first image and the second image, so as to generate the vehicle identification range in the second image.
Specifically, since a lane line clings to the highway surface and is thin, the depth of the lane line in the depth image is very close to that of the highway surface (unlike their brightness), and it is difficult to distinguish the lane line from the highway surface in the depth image. Therefore, the obtained highway lane lines can be mapped into the depth image according to the interleave mapping relation between the color image (or luminance image, gray-scale image) and the depth image, to obtain the vehicle identification range.
More specifically, taking the simplest equal-proportion interleave mapping relation as an example, it can be set that each row of Z pixels in the depth image corresponds to N rows of Y pixels in the gray-scale image (the vertical resolution of the gray-scale image is N times that of the depth image), and each column of Z pixels corresponds to M columns of Y pixels in the gray-scale image (the horizontal resolution of the gray-scale image is M times that of the depth image). Further, according to the ratio set by the above interleave mapping relation, for each coordinate contained in the lane-line original coordinate set (comprising an original row coordinate and an original column coordinate), the mapped row coordinate is obtained by dividing the original row coordinate by N and rounding, and the mapped column coordinate is obtained by dividing the original column coordinate by M and rounding. From the mapped row and column coordinates, the lane-line mapping coordinate set on the depth image (i.e., the vehicle identification range) can be created. In this way, the two lane lines, left and right, closest to the present vehicle Car1 are mapped from the color image (or luminance image, gray-scale image) onto the depth image, and the region of depth-image pixels between the two mapped lane lines is the vehicle identification range.
The identification module 400 is used to identify vehicles according to the vehicle identification range.
Specifically, when an opponent vehicle other than the present vehicle Car1 appears in the color image (or luminance image) and the depth image, the back or front of the opponent vehicle nearest to Car1 usually forms a strong brightness and depth contrast with the highway surface (or other more distant objects): the parts within the back or front of the opponent vehicle have almost the same depth, but stand clearly above the road surface. Moreover, the depth image directly contains the distance information of the opponent vehicle. Therefore, the vehicle can be identified according to the vehicle identification range.
In one embodiment of the invention, as shown in Fig. 25, the identification module 400 includes: a recognition unit 410, a distance acquisition unit 420, a relative velocity acquisition unit 430, a collision time acquisition unit 440 and an information generation unit 450.
The recognition unit 410 is used to identify the opponent vehicle according to the second image and the vehicle identification range.
In one embodiment of the invention, the recognition unit 410 is specifically used to: obtain two second images shot at different moments, and create, from the two second images, a time-differential depth image highlighting moving objects; and obtain, from the time-differential depth image highlighting moving objects, the time-differential depth image of the opponent vehicle within the vehicle identification range.
Specifically, the distance between the present vehicle Car1 and the opponent vehicle usually changes continuously, which appears in the depth image as the depth pixel values of the opponent vehicle, or the position of the opponent vehicle in the depth-image coordinate system, changing over time. Therefore, the opponent vehicle can be identified by a time-differential algorithm applied to the depth images. For example, take two depth images A1, A2 whose shooting times are T1 and T2 respectively (T1 earlier than T2), as shown in Figs. 17 and 18 (in the figures, the region between the two thin dotted lines is the vehicle identification range; for ease of illustration, objects other than the vehicle are not drawn). The depth value of each pixel a1 in A1 is subtracted from the depth value of the pixel a2 in A2 having the same depth-image coordinates, and the absolute value is taken, thereby creating the time-differential depth image MC highlighting moving objects (as shown in Fig. 19).
More specifically, the vehicle identification range of A1 or A2 is applied to the time differential depth image MC of the moving objects: the pixel values in MC outside the vehicle identification range are set to 0, thereby creating the time differential depth image MD of the opponent vehicle within the vehicle identification range (as shown in Figure 20). In Figure 20, the grid-filled polygonal frame in MD represents the absolute differences of the depth values of the depth pixels the opponent vehicle occupies in A1 and A2.
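A sketch of applying the identification range as a mask (the column span of the range here is hypothetical):

```python
import numpy as np

mc = np.zeros((8, 8), dtype=np.int32)
mc[2:5, 3:6] = 150                  # differences caused by the opponent vehicle
mc[6, 0] = 80                       # motion outside the lane lines

in_range = np.zeros((8, 8), dtype=bool)
in_range[:, 2:7] = True             # hypothetical range between the two lane lines

md = np.where(in_range, mc, 0)      # zero every pixel outside the range
assert md[6, 0] == 0                # out-of-range motion suppressed
assert md[3, 4] == 150              # in-range vehicle differences kept
```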
Further, the recognition unit 410 is also used to: project the time differential depth image of the opponent vehicle along the row direction and the column direction, to obtain the row numbers and column numbers of the four edges of the opponent vehicle in that image; and, from those row and column numbers, obtain the corresponding row and column numbers of the four edges of the opponent vehicle in each of the two second images.
Specifically, by projecting MD horizontally and vertically, the row and column numbers of the top, bottom, left and right edges of the grid-filled polygonal frame are easily found, namely RowHigh, RowLow, ColLeft and ColRight, as shown in Figure 20.
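The projection step amounts to a bounding box over the non-zero region of MD; a sketch with illustrative coordinates:

```python
import numpy as np

md = np.zeros((8, 8), dtype=np.int32)
md[2:5, 3:6] = 150                      # non-zero difference frame in MD

row_proj = md.sum(axis=1)               # project along the row direction
col_proj = md.sum(axis=0)               # project along the column direction
rows = np.nonzero(row_proj)[0]
cols = np.nonzero(col_proj)[0]

row_high, row_low = int(rows.min()), int(rows.max())
col_left, col_right = int(cols.min()), int(cols.max())
assert (row_high, row_low, col_left, col_right) == (2, 4, 3, 5)
```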
Assume the imaging configuration of the depth image is such that farther objects have larger depth pixel values. RowHigh, RowLow, ColLeft and ColRight are applied to the above A1 and A2, as shown in Figures 21 and 22 (for convenience, only the opponent vehicle is drawn). Between ColLeft and ColRight, search downward from RowHigh for the top edge of the opponent vehicle; the top edge is characterized in that the depth pixels above it, belonging to the road surface (or other farther objects), have values significantly greater than those of the depth pixels below it, belonging to the opponent vehicle. Suppose the top edge of the opponent vehicle is found at RowHigh in A1 but not at RowHigh in A2; then RowTop1 = RowHigh. Since depth-image imaging also obeys the perspective principle, the same object appears at a higher image row the farther it is from the camera; therefore the opponent vehicle was farther away at time T1 than at time T2. (If the top edge were not found at RowHigh in A1 but were found at RowHigh in A2, the opponent vehicle would instead be closer at T1 than at T2.) The top edge RowTop2 of the opponent vehicle is then found below RowHigh in A2, as shown in Figure 22.
More specifically, because RowTop1 and RowTop2 are not in the same row, the perspective principle implies that the bottom edges of the opponent vehicle in A1 and A2 are also not in the same row, i.e. RowBottom1 ≠ RowBottom2. The bottom-edge row number RowLow of the grid-filled polygonal frame must equal either RowBottom1 or RowBottom2; since the reasoning above showed the opponent vehicle was farther at time T1 than at time T2, it follows that RowLow = RowBottom2, as shown in Figure 22.
Further, search rightward from ColLeft for the left edge of the opponent vehicle. The left edge is characterized in that the depth pixels on its left, belonging to the road surface (or other farther objects), have values significantly greater than those of the depth pixels on its right, belonging to the opponent vehicle. Suppose the left edge of the opponent vehicle is not found at ColLeft in A1 but is found at ColLeft in A2; then ColLeft = ColLeft2. The left edge ColLeft1 of the opponent vehicle is then found to the right of ColLeft in A1.
Afterwards, search leftward from ColRight for the right edge of the opponent vehicle. The right edge is characterized in that the depth pixels on its right, belonging to the road surface (or other farther objects), have values significantly greater than those of the depth pixels on its left, belonging to the opponent vehicle. Suppose the right edge of the opponent vehicle is not found at ColRight in A1 but is found at ColRight in A2; then ColRight = ColRight2. The right edge ColRight1 of the opponent vehicle is then found to the left of ColRight in A1.
In A1, the two complete columns of depth pixels at ColLeft1 and ColRight1 are cut by RowTop1 and RowLow into two line segments, referred to as the ColLeft1 segment and the ColRight1 segment. Part of each segment lies on the opponent vehicle and part lies on the road surface, so there exist pixels BL and BR that separate the vehicle part of each segment from its road part; the line through BL and BR is therefore exactly the bottom edge of the opponent vehicle. Compute the PD of each pixel on the ColLeft1 segment (i.e. the absolute difference between the values of the two pixels to its left and right): the PD of the pixels on the segment that belong to the vehicle is much larger than zero, while the PD of the pixels that belong to the road surface is close to zero, so BL can be found. BR can be found in the same way. RowBottom1 is then the row number of BL (or of BR).
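A sketch of locating BL on the ColLeft1 segment via the left/right pixel difference PD (the image, column index and PD threshold are illustrative):

```python
import numpy as np

a1 = np.full((8, 8), 200, dtype=np.int32)
a1[1:4, 3:6] = 50                  # opponent vehicle in A1: rows 1..3, cols 3..5
col = 3                            # ColLeft1
seg_rows = range(1, 6)             # from RowTop1 down to RowLow

# PD: absolute difference of the pixels immediately left and right of each
# segment pixel. Vehicle-edge pixels have road on one side (large PD);
# road pixels have road on both sides (PD near zero).
pd = [abs(int(a1[r, col - 1]) - int(a1[r, col + 1])) for r in seg_rows]
bl_row = 1 + max(i for i, p in enumerate(pd) if p > 100)
assert bl_row == 3                 # RowBottom1: bottom edge of the vehicle in A1
```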
Thus, RowTop1, RowBottom1, ColLeft1 and ColRight1 are found in A1, and RowTop2, RowBottom2, ColLeft2 and ColRight2 are found in A2; that is, the position of the opponent vehicle in the depth image has been recognized at both times T1 and T2, which realizes the identification of the opponent vehicle.
The distance acquiring unit 420 is used to obtain the distance between the opponent vehicle and this vehicle.
In one embodiment of the invention, the distance acquiring unit 420 is specifically used to: obtain, in each of the two second images, the group of pixel values the opponent vehicle occupies; and obtain the minimum of each of the two groups of pixel values, the minima being respectively the first distance and the second distance between the opponent vehicle and this vehicle at the two different moments. It should be noted that obtaining the distance by taking the minimum rests on an assumed convention: the smaller a pixel value the opponent vehicle occupies, the nearer the opponent vehicle is to this vehicle. The opposite convention may of course also be adopted, in which case the minimum becomes a maximum.
Specifically, since the values of the pixels the opponent vehicle occupies in the depth image directly represent the distance between the opponent vehicle and this vehicle Car1, the minimum pixel value the opponent vehicle occupies in each of the images A1, A2 (the depth value at the point of the opponent vehicle nearest to Car1) can be taken as the distance values D1, D2 between the opponent vehicle and Car1 at times T1, T2 respectively.
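Under the stated convention (smaller depth value = nearer), the distance readout is just a minimum over the vehicle's pixels; a sketch with illustrative depth codes:

```python
import numpy as np

a1 = np.full((8, 8), 200, dtype=np.int32)        # road / background
a1[2:4, 3:6] = [[60, 55, 60],
                [52, 50, 52]]                    # pixels the vehicle occupies in A1

vehicle_pixels = a1[2:4, 3:6]                    # the identified vehicle frame
d1 = int(vehicle_pixels.min())                   # depth at the nearest point
assert d1 == 50                                  # distance value D1 at time T1
```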
There remain, however, moments at which the distance between this vehicle Car1 and the opponent vehicle stays constant. For example, the method described above yields the distances D1 and D2 between the opponent vehicle and this vehicle at times T1 and T2, but at a later time T3 the distance between Car1 and the opponent vehicle is unchanged, i.e. the top or bottom edge row numbers of the opponent vehicle identified at T2 and T3 satisfy RowTop2 = RowTop3 or RowBottom2 = RowBottom3; in that case, set D3 = D2.
The relative velocity acquiring unit 430 is used to obtain the relative velocity between the opponent vehicle and this vehicle according to the distance between the opponent vehicle and this vehicle.
In one embodiment of the invention, the relative velocity acquiring unit 430 is specifically used to: obtain the relative velocity between the opponent vehicle and this vehicle according to the first distance, the second distance and the two different moments, where the two different moments are a first moment and a second moment, the first moment is earlier than the second moment, the first moment corresponds to the first distance, and the second moment corresponds to the second distance.
Specifically, the distance acquiring unit 420 has already obtained the distance values D1, D2 between the opponent vehicle and this vehicle Car1 at times T1, T2. At time T2, the relative velocity of this vehicle Car1 and the opponent vehicle is then: V = (D2 - D1)/(T2 - T1).
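As a numeric sketch of the relative-velocity formula (the distances and times are illustrative, not from the patent):

```python
d1, d2 = 50.0, 44.0        # distances to the opponent vehicle at T1, T2
t1, t2 = 0.0, 0.5          # shooting times in seconds
v = (d2 - d1) / (t2 - t1)  # V = (D2 - D1)/(T2 - T1)
assert v == -12.0          # negative: the gap between the vehicles is closing
```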
The collision time acquiring unit 440 is used to obtain the collision time of the opponent vehicle and this vehicle according to the distance and relative velocity between the opponent vehicle and this vehicle.
In one embodiment of the invention, the collision time acquiring unit 440 is specifically used to: obtain the collision time of the opponent vehicle and this vehicle according to the second distance and the relative velocity.
Specifically, at time T2, the collision time TC of this vehicle Car1 and the opponent vehicle is:
TC = D2/|V| = D2 * |(T2 - T1)/(D2 - D1)|, where the relative velocity V is taken as an absolute value when calculating the collision time.
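The time-to-collision formula can be sketched with the same kind of illustrative numbers (none of these values come from the patent):

```python
d1, d2 = 50.0, 44.0            # illustrative distances at T1, T2
t1, t2 = 0.0, 0.5              # illustrative shooting times in seconds
v = (d2 - d1) / (t2 - t1)      # relative velocity V
tc = d2 / abs(v)               # TC = D2 / |V| = D2 * |(T2-T1)/(D2-D1)|
assert round(tc, 3) == 3.667   # roughly 3.7 s until the gap closes
```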
The information generating unit 450 is used to generate indication information and/or warning information according to the distance, relative velocity and collision time of the opponent vehicle and this vehicle.
Specifically, having calculated the distance, relative velocity and collision time of this vehicle Car1 and the opponent vehicle, the information generating unit 450 generates indication information and/or warning information from these quantities.
In addition, in one embodiment of the invention, as shown in Figure 26, the identification module 400 also includes an alarm unit 460.
The alarm unit 460 is used to warn the driver according to the indication information and/or warning information.
Specifically, the alarm unit 460 can directly use the display devices of this vehicle Car1 (such as instruments, displays, etc.) to remind the driver of this vehicle to keep a safe distance from the opponent vehicle, keep a safe speed, or even reduce speed.
In addition, in one embodiment of the invention, the alarm unit 460 can set multiple thresholds for the distance D, relative velocity V and collision time TC of this vehicle Car1 and the opponent vehicle, and then generate indication information and/or warning information of different grades according to which thresholds D, V or TC exceed. For example, when TC is less than a first time threshold, indication information and/or warning information of a first grade is generated; when TC is less than a second time threshold (the second time threshold being less than the first time threshold), indication information and/or warning information of a second grade is generated, the second-grade information being more urgent than the first-grade information.
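A sketch of such graded warnings based on TC alone (the threshold values and grade encoding are hypothetical):

```python
def alert_grade(tc: float, first: float = 3.0, second: float = 1.5) -> int:
    """Return 0 (no warning), 1 (first grade) or 2 (second, more urgent grade)."""
    if tc < second:          # second time threshold < first time threshold
        return 2
    if tc < first:
        return 1
    return 0

assert alert_grade(5.0) == 0
assert alert_grade(2.0) == 1
assert alert_grade(1.0) == 2
```

Extending the same scheme to D and V would simply add further threshold comparisons per quantity.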
In one embodiment of the invention, the alarm unit 460 warns the driver according to the indication information and/or warning information, which can be realized by various devices or components on this vehicle Car1. For example, the instruments, displays, sound devices, vibration devices, air-conditioning blowers, etc. of this vehicle Car1 can perform distinctive operations to remind the driver of this vehicle to keep a safe distance from the opponent vehicle, keep a safe speed, or even reduce speed. As another example, the indication information and/or warning information can also be used by the control devices of the headlights, taillights or horn of this vehicle Car1 to perform operations such as flashing lights or honking, warning the driver of the opponent vehicle to keep a safe distance from this vehicle, keep a safe speed, or even reduce speed. Since the reaction speed of the electronic devices of this vehicle Car1 is always faster than that of a driver, such warning operations will undoubtedly win precious braking reaction time for the driver of this vehicle or of the opponent vehicle, further improving driving safety.
The vehicle identification device of the embodiments of the present invention has the following advantages: (1) Because the imaging components for the depth image and the color image are in the same packaged chip and can use the same group of optical lenses, identification blind areas are reduced and the mechanical calibration between two separate packages is eliminated, compared with separately packaged imaging components using different groups of lenses; moreover, according to Moore's law of the semiconductor industry, the CMOS TOF sensor and image sensor interleaved array chip utilized by the present invention will reach a sufficiently low production cost within a limited period. (2) Even though drastic day/night changes in natural lighting affect the color image or luminance image in ways unfavorable to vehicle identification, natural lighting has little influence on vehicle identification from the depth image; without resorting to complex, computation-heavy pattern recognition methods, vehicle identification can be simpler, faster and more precise than with a color or luminance image, warning information can be generated sooner, and lower-cost computing chips can be used. (3) Because the depth image and the color image are shot almost simultaneously, the time error between the vehicle identification range recognized from the color image and the vehicle identification performed on the depth image is very small, ensuring recognition accuracy. (4) Because the color image or luminance image is used to identify the lane lines and delimit the vehicle identification range, identification interference from vehicles or objects outside the range is eliminated, so the generated warning information is accurate and necessary, and unnecessary warning information that would disturb the driver's normal driving is not generated. (5) Vehicles are identified quickly, reliably and with high precision, and warning information is generated quickly and reliably based on the identified vehicle, winning precious braking reaction time for the driver of this vehicle or of the opponent vehicle and further improving driving safety.
In order to realize the above embodiments, the present invention further proposes a vehicle comprising the vehicle identification device of the embodiments of the present invention.
The vehicle of the embodiments of the present invention, being provided with the vehicle identification device, can quickly and accurately identify opponent vehicles while in motion, and generate warning information quickly and reliably based on the identified opponent vehicle, winning precious braking reaction time for the driver of this vehicle or of the opponent vehicle and further improving driving safety.
In order to realize the above embodiments, the present invention further proposes a vehicle having a camera with a TOF sensor and image sensor interleaved array chip.
The vehicle of the embodiments of the present invention, having a camera with a TOF sensor and image sensor interleaved array chip, can shoot the color image/luminance image and the depth image of an opponent vehicle almost simultaneously while in motion.
In the description of the present invention, it should be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", "axial", "radial" and "circumferential", are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the referred devices or elements must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be construed as limiting the present invention.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, features defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "multiple" means at least two, for example two or three, unless otherwise specifically defined.
In the present invention, unless otherwise clearly specified and limited, terms such as "mounted", "connected", "coupled" and "fixed" should be understood broadly: for example, a connection may be fixed, detachable or integral; it may be mechanical or electrical; it may be direct, or indirect through an intermediate medium; and it may be a communication between the interiors of two elements or an interaction between two elements, unless otherwise clearly limited. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific situation.
In the present invention, unless otherwise clearly specified and limited, a first feature being "above" or "below" a second feature may mean that the first and second features are in direct contact, or in indirect contact through an intermediate medium. Moreover, a first feature being "on", "over" or "above" a second feature may mean that the first feature is directly above or obliquely above the second feature, or merely that the first feature is at a higher level than the second feature. A first feature being "under", "below" or "beneath" a second feature may mean that the first feature is directly below or obliquely below the second feature, or merely that the first feature is at a lower level than the second feature.
In the description of this specification, a description referring to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" means that a specific feature, structure, material or characteristic described in combination with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials or characteristics may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine different embodiments or examples described in this specification and the features thereof.
Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and cannot be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions and variations to the above embodiments within the scope of the present invention.

Claims (19)

  1. A vehicle identification method, characterized by comprising:
    obtaining a first image and a second image, wherein the first image is a color image or a luminance image, and the second image is a depth image;
    obtaining highway lane lines according to the first image;
    mapping the highway lane lines into the second image according to the interleaved mapping relationship between the first image and the second image, to generate a vehicle identification range in the second image; and
    identifying a vehicle according to the vehicle identification range, wherein identifying a vehicle according to the vehicle identification range further comprises:
    identifying an opponent vehicle according to the second image and the vehicle identification range, wherein identifying an opponent vehicle according to the second image and the vehicle identification range specifically comprises: obtaining two second images shot at different moments, and creating, from the two second images, a time differential depth image highlighting moving objects; and obtaining, from the time differential depth image of the moving objects, a time differential depth image of the opponent vehicle within the vehicle identification range;
    obtaining the distance between the opponent vehicle and this vehicle;
    obtaining the relative velocity between the opponent vehicle and this vehicle according to the distance between the opponent vehicle and this vehicle;
    obtaining the collision time of the opponent vehicle and this vehicle according to the distance and relative velocity between the opponent vehicle and this vehicle; and
    generating indication information and/or warning information according to the distance, relative velocity and collision time of the opponent vehicle and this vehicle.
  2. The method according to claim 1, characterized in that the first image and the second image are obtained by a camera with a TOF sensor and image sensor interleaved array chip.
  3. The method according to claim 1, characterized in that obtaining highway lane lines according to the first image specifically comprises:
    generating a grayscale image according to the first image, and generating a luminance threshold according to the grayscale image;
    creating a binary image according to the grayscale image and the luminance threshold; and
    identifying initial highway lane lines according to the binary image, and obtaining the highway lane lines according to the initial highway lane lines.
  4. The method according to claim 3, characterized in that obtaining the highway lane lines according to the initial highway lane lines comprises:
    screening the initial highway lane lines to obtain two straight segments, and extending or merging the two straight segments to obtain the highway lane lines.
  5. The method according to claim 1, characterized in that identifying an opponent vehicle according to the second image and the vehicle identification range further comprises:
    projecting the time differential depth image of the opponent vehicle along the row direction and the column direction, to obtain the row numbers and column numbers of the four edges of the opponent vehicle in the time differential depth image of the opponent vehicle; and
    obtaining, according to the row numbers and column numbers of the four edges, the corresponding row numbers and column numbers of the four edges of the opponent vehicle in each of the two second images.
  6. The method according to claim 5, characterized in that obtaining the distance between the opponent vehicle and this vehicle specifically comprises:
    obtaining, in each of the two second images, the group of pixel values the opponent vehicle occupies; and
    obtaining the minimum of each of the two groups of pixel values, the minima being respectively the first distance and the second distance between the opponent vehicle and this vehicle at the two different moments.
  7. The method according to claim 6, characterized in that obtaining the relative velocity between the opponent vehicle and this vehicle according to the distance between the opponent vehicle and this vehicle specifically comprises:
    obtaining the relative velocity between the opponent vehicle and this vehicle according to the first distance, the second distance and the two different moments, wherein the two different moments are a first moment and a second moment, the first moment is earlier than the second moment, the first moment corresponds to the first distance, and the second moment corresponds to the second distance.
  8. The method according to claim 7, characterized in that obtaining the collision time of the opponent vehicle and this vehicle according to the distance and relative velocity between the opponent vehicle and this vehicle specifically comprises:
    obtaining the collision time of the opponent vehicle and this vehicle according to the second distance and the relative velocity.
  9. The method according to claim 8, characterized by further comprising:
    warning a driver according to the indication information and/or warning information.
  10. A vehicle identification device, characterized by comprising:
    an image acquisition module for obtaining a first image and a second image, wherein the first image is a color image or a luminance image, and the second image is a depth image;
    a lane line acquisition module for obtaining highway lane lines according to the first image;
    an identification range generation module for mapping the highway lane lines into the second image according to the interleaved mapping relationship between the first image and the second image, to generate a vehicle identification range in the second image; and
    an identification module for identifying a vehicle according to the vehicle identification range, wherein the identification module comprises:
    a recognition unit for identifying an opponent vehicle according to the second image and the vehicle identification range, wherein the recognition unit is specifically used to obtain two second images shot at different moments, create, from the two second images, a time differential depth image highlighting moving objects, and obtain, from the time differential depth image of the moving objects, a time differential depth image of the opponent vehicle within the vehicle identification range;
    a distance acquiring unit for obtaining the distance between the opponent vehicle and this vehicle;
    a relative velocity acquiring unit for obtaining the relative velocity between the opponent vehicle and this vehicle according to the distance between the opponent vehicle and this vehicle;
    a collision time acquiring unit for obtaining the collision time of the opponent vehicle and this vehicle according to the distance and relative velocity between the opponent vehicle and this vehicle; and
    an information generating unit for generating indication information and/or warning information according to the distance, relative velocity and collision time of the opponent vehicle and this vehicle.
  11. The device according to claim 10, characterized in that the first image and the second image are obtained by a camera with a TOF sensor and image sensor interleaved array chip.
  12. The device according to claim 10, characterized in that the lane line acquisition module comprises:
    a luminance threshold generation unit for generating a grayscale image according to the first image, and generating a luminance threshold according to the grayscale image;
    a binary image creating unit for creating a binary image according to the grayscale image and the luminance threshold; and
    an acquiring unit for identifying initial highway lane lines according to the binary image, and obtaining the highway lane lines according to the initial highway lane lines.
  13. The device according to claim 12, characterized in that the acquiring unit is specifically used to:
    after identifying the initial highway lane lines according to the binary image, screen the initial highway lane lines to obtain two straight segments, and extend or merge the two straight segments to obtain the highway lane lines.
  14. The device according to claim 13, characterized in that the recognition unit is further used to:
    project the time differential depth image of the opponent vehicle along the row direction and the column direction, to obtain the row numbers and column numbers of the four edges of the opponent vehicle in the time differential depth image of the opponent vehicle; and
    obtain, according to the row numbers and column numbers of the four edges, the corresponding row numbers and column numbers of the four edges of the opponent vehicle in each of the two second images.
  15. The device according to claim 14, characterized in that the distance acquiring unit is specifically used to:
    obtain, in each of the two second images, the group of pixel values the opponent vehicle occupies; and
    obtain the minimum of each of the two groups of pixel values, the minima being respectively the first distance and the second distance between the opponent vehicle and this vehicle at the two different moments.
  16. The device according to claim 15, characterized in that the relative velocity acquiring unit is specifically used to:
    obtain the relative velocity between the opponent vehicle and this vehicle according to the first distance, the second distance and the two different moments, wherein the two different moments are a first moment and a second moment, the first moment is earlier than the second moment, the first moment corresponds to the first distance, and the second moment corresponds to the second distance.
  17. The device according to claim 16, characterized in that the collision time acquiring unit is specifically used to:
    obtain the collision time of the opponent vehicle and this vehicle according to the second distance and the relative velocity.
  18. The device according to any one of claims 10 to 17, characterized in that the identification module further comprises:
    an alarm unit for warning a driver according to the indication information and/or warning information.
  19. A vehicle, characterized by comprising the vehicle identification device according to any one of claims 10 to 18.
CN201410125721.5A 2014-03-31 2014-03-31 Vehicle identification method, device and vehicle Active CN104952254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410125721.5A CN104952254B (en) 2014-03-31 2014-03-31 Vehicle identification method, device and vehicle


Publications (2)

Publication Number Publication Date
CN104952254A CN104952254A (en) 2015-09-30
CN104952254B true CN104952254B (en) 2018-01-23

Family

ID=54166874



TWI662484B (en) * 2018-03-01 2019-06-11 國立交通大學 Object detection method
CN110386065B (en) * 2018-04-20 2021-09-21 比亚迪股份有限公司 Vehicle blind area monitoring method and device, computer equipment and storage medium
CN108725318B (en) * 2018-07-28 2020-11-24 惠州华阳通用电子有限公司 Automobile safety early warning method and device and computer readable storage medium
CN109146906B (en) * 2018-08-22 2021-03-23 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN109035841B (en) * 2018-09-30 2020-10-09 上海交通大学 Parking lot vehicle positioning system and method
CN110379191B (en) * 2018-11-08 2020-12-22 北京京东尚科信息技术有限公司 Method and device for pushing road information for unmanned equipment
CN112581484B (en) * 2019-09-29 2024-08-06 比亚迪股份有限公司 Rugged road detection method, rugged road detection device, storage medium, electronic apparatus, and vehicle
CN112758088A (en) * 2019-11-05 2021-05-07 深圳市大富科技股份有限公司 Dangerous source reminding method and advanced driving assistance system
CN110942631B (en) * 2019-12-02 2020-10-27 北京深测科技有限公司 Traffic signal control method based on flight time camera
WO2021237738A1 (en) * 2020-05-29 2021-12-02 深圳市大疆创新科技有限公司 Automatic driving method and apparatus, and distance determination method and apparatus
CN112419750B (en) * 2020-09-11 2022-02-22 博云视觉(北京)科技有限公司 Method for detecting silent low-point outlet channel overflow event
CN114119464B (en) * 2021-10-08 2023-06-16 厦门微亚智能科技有限公司 Deep learning-based lithium battery cell top cover weld appearance detection method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101333A (en) * 2006-07-06 2008-01-09 三星电子株式会社 Apparatus and method for producing assistant information of driving vehicle for driver
CN101391589A (en) * 2008-10-30 2009-03-25 上海大学 Vehicle intelligent alarming method and device
CN103167276A (en) * 2011-12-19 2013-06-19 富泰华工业(深圳)有限公司 Vehicle monitoring system and vehicle monitoring method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010262387A (en) * 2009-04-30 2010-11-18 Fujitsu Ten Ltd Vehicle detection device and vehicle detection method
CN101643053B (en) * 2009-09-01 2011-02-16 长安大学 Integrative monitoring system of automobilism action of driver
CN102194328B (en) * 2010-03-02 2014-04-23 鸿富锦精密工业(深圳)有限公司 Vehicle management system, method and vehicle control device with system
KR101665567B1 (en) * 2010-05-20 2016-10-12 삼성전자주식회사 Temporal interpolation of three dimension depth image method and apparatus
US8686873B2 (en) * 2011-02-28 2014-04-01 Toyota Motor Engineering & Manufacturing North America, Inc. Two-way video and 3D transmission between vehicles and system placed on roadside
CN102509074B (en) * 2011-10-18 2014-01-29 Tcl集团股份有限公司 Target identification method and device
CN103208006B (en) * 2012-01-17 2016-07-06 株式会社理光 Object motion mode identification method and equipment based on range image sequence
CN103366565B (en) * 2013-06-21 2015-04-15 浙江理工大学 Method and system of detecting pedestrian running red light based on Kinect

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101101333A (en) * 2006-07-06 2008-01-09 三星电子株式会社 Apparatus and method for producing assistant information of driving vehicle for driver
CN101391589A (en) * 2008-10-30 2009-03-25 上海大学 Vehicle intelligent alarming method and device
CN103167276A (en) * 2011-12-19 2013-06-19 富泰华工业(深圳)有限公司 Vehicle monitoring system and vehicle monitoring method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A 1.5Mpixel RGBZ CMOS Image Sensor for Simultaneous Color and Range Image Capture; Wonjoo Kim; IEEE International Solid-State Circuits Conference; 2012-04-03; No. 22; entire document *
An RGBD Data Based Vehicle Detection Algorithm for Vehicle Following Systems; Changchen Zhao; 2013 8th IEEE Conference on Industrial Electronics and Applications (ICIEA); 2013-07-25; pp. 1506-1510 *
Large-Scale Multi-resolution Surface Reconstruction from RGB-D Sequences; Frank Steinbrucker; 2013 IEEE International Conference on Computer Vision (ICCV); 2014-03-03; pp. 3264-3271 *
Tracking people within groups with RGB-D data; Matteo Munaro; 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 2012-12-24; p. 2101, col. 1 paras. 1-3, col. 2 paras. 1-2 *

Also Published As

Publication number Publication date
CN104952254A (en) 2015-09-30

Similar Documents

Publication Publication Date Title
CN104952254B (en) Vehicle identification method, device and vehicle
JP5254102B2 (en) Environment recognition device
JP6197291B2 (en) Compound eye camera device and vehicle equipped with the same
JP6519262B2 (en) Three-dimensional object detection device, three-dimensional object detection method, three-dimensional object detection program, and mobile device control system
JP6274557B2 (en) Moving surface information detection apparatus, moving body device control system using the same, and moving surface information detection program
JP3263699B2 (en) Driving environment monitoring device
CN107045619B (en) exterior environment recognition device
CN105825495B (en) Article detection device and object detecting method
CN108734697A (en) shape measuring apparatus and method
JP5591730B2 (en) Environment recognition device
JP5718920B2 (en) Vehicle periphery monitoring device
US7974445B2 (en) Vehicle periphery monitoring device, vehicle, and vehicle periphery monitoring program
CN106295493B (en) exterior environment recognition device
WO2014103433A1 (en) Vehicle periphery monitoring device
JP2007241898A (en) Stopping vehicle classifying and detecting device and vehicle peripheral monitoring device
JP5164351B2 (en) Object detection apparatus and object detection method
CN102303563B (en) System and method for prewarning front vehicle collision
CN106295494A (en) Exterior environment recognition device
JP2015011619A (en) Information detection device, mobile equipment control system, mobile body and program for information detection
CN106461387A (en) Stereo camera device and vehicle provided with stereo camera device
JP5073700B2 (en) Object detection device
JP2010224918A (en) Environment recognition device
JP4813304B2 (en) Vehicle periphery monitoring device
JP6683245B2 (en) Image processing device, image processing method, image processing program, object recognition device, and device control system
JP2015090546A (en) Outside-vehicle environment recognition device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant