CN108528432B - Automatic control method and device for vehicle running - Google Patents
Automatic control method and device for vehicle running
- Publication number
- CN108528432B (grant) · CN201710120713A / CN201710120713.5A (application)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- image
- lane
- target vehicle
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W10/00—Conjoint control of vehicle sub-units of different type or different function
- B60W10/18—Conjoint control including control of braking systems
- B60W30/00—Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
- B60W30/08—Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
- B60W30/09—Taking automatic action to avoid collision, e.g. braking and steering
- B60W30/14—Adaptive cruise control
- B60W30/143—Speed control
- B60W30/146—Speed limiting
- B60W40/00—Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
- B60W40/02—Estimation or calculation related to ambient conditions
- B60W40/04—Traffic conditions
- B60W40/06—Road conditions
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0001—Details of the control system
- B60W2050/0002—Automatic control, details of type of controller or control system architecture
- B60W2050/0014—Adaptive controllers
- B60W2050/0043—Signal treatments, identification of variables or parameters, parameter estimation or state estimation
- B60W2050/0062—Adapting control system settings
- B60W2050/0075—Automatic parameter input, automatic initialising or calibrating means
- B60W2552/00—Input parameters relating to infrastructure
- B60W2554/00—Input parameters relating to objects
- B60W2554/80—Spatial relation or speed relative to objects
- B60W2555/00—Input parameters relating to exterior conditions, not covered by groups B60W2552/00, B60W2554/00
- B60W2555/60—Traffic rules, e.g. speed limits or right of way
- B60W2710/00—Output or target parameters relating to a particular sub-units
- B60W2710/18—Braking system
Landscapes
- Engineering & Computer Science (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Chemical & Material Sciences (AREA)
- Combustion & Propulsion (AREA)
- Traffic Control Systems (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an automatic control method and device for vehicle running. The method comprises: identifying a front target vehicle and a rear target vehicle from a first image and a second image of the environment in front of a subject vehicle, acquired by a front-facing 3D camera; acquiring tunnel information and speed-limit information; changing the settings of the subject vehicle's cruising speed upper limit and cruising safety distance according to the tunnel information and the speed-limit information; and adjusting the focal length of the front-facing 3D camera according to the tunnel information while performing in-tunnel cruise control of the subject vehicle's motion parameters according to the motion parameters of the front target vehicle and of the rear target vehicle. The driving safety of the vehicle in a tunnel is thereby improved.
Description
Technical Field
The invention relates to the technical field of vehicle control, and in particular to an automatic control method and device for vehicle running.
Background
Adaptive cruise control systems have attracted growing interest. In such a system, the user sets a desired vehicle speed, and the system determines the exact position of the preceding vehicle using low-power radar or an infrared beam. If the preceding vehicle decelerates, or a new target is detected, the system sends an actuation signal to the engine or braking system to reduce the vehicle speed so that a safe following distance is maintained. When the road ahead is clear, the vehicle accelerates back to the set speed, and the radar system automatically monitors the next target. Adaptive cruise control thus relieves the user of speed control, avoids the frequent cancelling and resetting of cruise control, suits a wider range of road conditions, and offers the user a more relaxed driving experience.
However, when several target vehicles travel through a tunnel, a ranging sensor such as a laser radar cannot recognize lane lines well. A subject vehicle relying only on lidar is therefore liable to classify a target vehicle in its own lane as being in another lane, and vice versa, which can cause its adaptive cruise system to brake erroneously or brake late, lowering the driving safety of the subject vehicle.
Disclosure of Invention
The object of the present invention is to solve, at least to some extent, one of the technical problems mentioned above.
To this end, a first object of the present invention is to propose an automatic control method for vehicle running that improves the safety of vehicles travelling in tunnels.
A second object of the present invention is to provide an automatic control device for vehicle running.
In order to achieve the above object, an embodiment of the first aspect of the present invention provides an automatic control method for vehicle running, comprising: acquiring a first image and a second image of the environment in front of a subject vehicle from a front-facing 3D camera, the first image being a color or luminance image and the second image a depth image; acquiring front road lane lines from the first image; acquiring a third image and rear road lane lines from the imaging parameters of the first image and the front road lane lines; mapping the front road lane lines into the second image according to the interleaved mapping relationship between the first and second images to generate a plurality of front vehicle identification ranges; identifying a front target vehicle according to all the front vehicle identification ranges; generating a plurality of rear vehicle identification ranges from the third image and the rear road lane lines; acquiring point cloud data from a laser radar and projecting them into the third image according to the radar's installation parameters to obtain rear target point cloud data; identifying rear target vehicles according to all the rear vehicle identification ranges and the rear target point cloud data; acquiring tunnel information and speed-limit information; changing the settings of the subject vehicle's cruising speed upper limit and cruising safety distance according to the tunnel information and the speed-limit information; and adjusting the focal length of the front-facing 3D camera according to the tunnel information while performing in-tunnel cruise control of the subject vehicle's motion parameters according to the motion parameters of the front target vehicle and the rear target vehicle.
With this automatic control method for vehicle running, a first image and a second image of the environment in front of the subject vehicle are acquired from the front-facing 3D camera and the front road lane lines are extracted; a third image and the rear road lane lines are obtained from the imaging parameters of the first image and the front road lane lines; the front road lane lines are mapped into the second image according to the interleaved mapping relationship between the first and second images to generate several front vehicle identification ranges, from which the front target vehicle is identified; several rear vehicle identification ranges are generated from the third image and the rear road lane lines; rear target point cloud data are obtained from the laser radar, and the rear target vehicle is identified from them; finally, tunnel information and speed-limit information are acquired, the settings of the subject vehicle's cruising speed upper limit and cruising safety distance are changed accordingly, the focal length of the front-facing 3D camera is adjusted according to the tunnel information, and in-tunnel cruise control of the subject vehicle's motion parameters is performed according to the motion parameters of the front and rear target vehicles. The subject vehicle can thus control its braking with reference to lane-line information, reduce unnecessary braking adjustments, effectively lower the risk of rear-end collisions, and improve its driving safety in the tunnel.
In order to achieve the above object, a second aspect of the present invention provides an automatic control device for vehicle running, comprising: a first acquisition module for acquiring a first image and a second image of the environment in front of a subject vehicle from a front-facing 3D camera, the first image being a color or luminance image and the second image a depth image; a second acquisition module for acquiring front highway lane lines from the first image; a third acquisition module for acquiring a third image and rear road lane lines from the imaging parameters of the first image and the front road lane lines; a first generation module for mapping the front road lane lines into the second image according to the interleaved mapping relationship between the first and second images to generate a plurality of front vehicle identification ranges; a first identification module for identifying front target vehicles according to all the front vehicle identification ranges; a second generation module for generating a plurality of rear vehicle identification ranges from the third image and the rear road lane lines; a fourth acquisition module for acquiring point cloud data from a laser radar and projecting them into the third image according to the radar's installation parameters to obtain rear target point cloud data; a second identification module for identifying rear target vehicles according to all the rear vehicle identification ranges and the rear target point cloud data; a fifth acquisition module for acquiring tunnel information and speed-limit information; a first adjustment module for changing the settings of the subject vehicle's cruising speed upper limit and cruising safety distance according to the tunnel information and the speed-limit information; a second adjustment module for adjusting the focal length of the front-facing 3D camera according to the tunnel information; and a control module for performing in-tunnel cruise control of the subject vehicle's motion parameters according to the motion parameters of the front target vehicle and the rear target vehicle.
With this automatic control device for vehicle running, a first image and a second image of the environment in front of the subject vehicle are acquired from the front-facing 3D camera and the front road lane lines are extracted; a third image and the rear road lane lines are obtained from the imaging parameters of the first image and the front road lane lines; the front road lane lines are mapped into the second image according to the interleaved mapping relationship between the first and second images to generate several front vehicle identification ranges, from which the front target vehicle is identified; several rear vehicle identification ranges are generated from the third image and the rear road lane lines; rear target point cloud data are obtained from the laser radar, and the rear target vehicle is identified from them; finally, tunnel information and speed-limit information are acquired, the settings of the subject vehicle's cruising speed upper limit and cruising safety distance are changed accordingly, the focal length of the front-facing 3D camera is adjusted according to the tunnel information, and in-tunnel cruise control of the subject vehicle's motion parameters is performed according to the motion parameters of the front and rear target vehicles. The subject vehicle can thus control its braking with reference to lane-line information, reduce unnecessary braking adjustments, effectively lower the risk of rear-end collisions, and improve its driving safety in the tunnel.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a vehicle running automatic control method according to a first embodiment of the invention;
fig. 2 is a flowchart of a vehicle running automatic control method according to a second embodiment of the invention;
fig. 3 is a flowchart of a vehicle running automatic control method according to a third embodiment of the invention;
fig. 4 is a flowchart of a vehicle running automatic control method according to a fourth embodiment of the invention;
fig. 5 is a flowchart of a vehicle running automatic control method according to a fifth embodiment of the invention;
fig. 6 is a flowchart of a vehicle running automatic control method according to a sixth embodiment of the invention;
Fig. 7 is a flowchart of a vehicle running automatic control method according to a seventh embodiment of the invention;
fig. 8 is a scene diagram of a vehicle running automatic control method according to an embodiment of the invention;
fig. 9 is a scene diagram of a vehicle running automatic control method according to another embodiment of the invention;
fig. 10 is a schematic configuration diagram of a vehicle running automatic control apparatus according to a first embodiment of the invention;
fig. 11 is a schematic configuration diagram of an automatic control device for vehicle running according to a second embodiment of the present invention;
fig. 12 is a schematic configuration diagram of an automatic control device for vehicle running according to a third embodiment of the invention;
fig. 13 is a schematic configuration diagram of an automatic control device for vehicle running according to a fourth embodiment of the invention;
fig. 14 is a schematic configuration diagram of a vehicular running automatic control apparatus according to a fifth embodiment of the invention;
fig. 15 is a schematic configuration diagram of an automatic control device for vehicle running according to a sixth embodiment of the invention; and
fig. 16 is a schematic configuration diagram of an automatic control device for vehicle running according to a seventh embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a vehicle running automatic control method and apparatus according to an embodiment of the present invention with reference to the drawings.
Fig. 1 is a flowchart of a vehicle running automatic control method according to an embodiment of the present invention.
Generally, a laser radar is mounted at the front, side, or rear of a vehicle to provide functions such as forward collision avoidance, side collision avoidance, and rear collision avoidance. In practical applications, the laser radar analyses the return of its transmitted signal to determine the distance and relative speed between the preceding vehicle and the current subject vehicle, so that the vehicle speed is adjusted automatically and driving safety is ensured.
Specifically, once a vehicle equipped with the laser radar is on the road, the radar selects a vehicle to follow and monitors it as the target vehicle, so that whether the preceding vehicle accelerates, decelerates, stops, or starts, the subject vehicle learns of it in real time and takes corresponding measures. However, because the laser radar emits single pulses, it cannot accurately determine the type and properties of the detected target vehicle, and when target vehicles travel in a tunnel it cannot recognize lane lines well; recognition delays and the like then easily arise, creating a driving safety hazard.
To solve these problems, the invention provides an automatic control method for vehicle running that ensures the driving safety of the subject vehicle in a tunnel.
The following describes the automatic control method for vehicle running according to the present invention with reference to specific embodiments. As shown in fig. 1, the vehicle travel automatic control method includes:
s101, acquiring a first image and a second image of the environment in front of the main vehicle from the front 3D camera, wherein the first image is a color or brightness image, and the second image is a depth image.
Specifically, a 3D camera is arranged in advance at the front of the current subject vehicle to acquire a first image and a second image of the environment ahead, the first image being a color or luminance image and the second image a depth image.
In practical applications, the first image and the second image of the environment in front of the subject vehicle may be acquired from the front 3D camera in various ways according to the structure of the imaging device of the front 3D camera.
As one possible implementation, a first image of the environment in front of the subject vehicle is acquired from an image sensor of the front 3D camera, and a second image of the environment in front of the subject vehicle is acquired from a Time of Flight (TOF) sensor of the front 3D camera.
An image sensor here is an array or collection of luminance pixel sensors, such as red-green-blue (RGB) or luminance-chrominance (YUV) pixel sensors. Such sensors are commonly used to obtain a luminance image of the environment, but are limited in their ability to measure accurately the distance between the sensor and the inspected object.
A TOF sensor is an array or set of TOF pixel sensors, such as light sensors or phase detectors, which measure the time of flight of light from a pulsed or modulated light source travelling between the TOF pixel sensor and a detected object, thereby detecting the object's distance and acquiring a depth image.
In practical applications, both the image sensor and the TOF sensor can be fabricated in a Complementary Metal Oxide Semiconductor (CMOS) process, with the luminance pixel sensors and TOF pixel sensors laid out proportionally on the same substrate. For example, 8 luminance pixel sensors and 1 TOF pixel sensor fabricated at an 8:1 ratio form one large interleaved pixel, where the light-sensing area of the TOF pixel sensor can equal that of the 8 luminance pixel sensors, and the 8 luminance pixel sensors can be arranged in an array of 2 rows and 4 columns.
For example, an array of 360 rows and 480 columns of such interleaved pixels can be fabricated on a substrate with a 1-inch optical target surface, yielding an active luminance pixel array of 720 rows by 1920 columns and an active TOF pixel array of 360 rows by 480 columns. The same camera, combining image sensor and TOF sensor, can thus acquire a color or luminance image and a depth image simultaneously.
In this way, a single front-facing 3D camera acquiring both the first and second images of the environment in front of the subject vehicle can be manufactured in a CMOS process, and by Moore's law of the semiconductor industry its production cost will become sufficiently low within a limited period.
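To make the interleaved layout concrete, the following minimal sketch (in Python; the 8:1 block shape and the 720x1920 / 360x480 array sizes come from the example above, while the function names are illustrative) shows how a luminance-pixel coordinate determines the TOF pixel covering it and vice versa; this is the interleaved mapping relation relied on later in step S104:

```python
# Sketch of the 8:1 interleaved-pixel coordinate mapping described above.
# Assumes each TOF pixel covers a 2-row x 4-column block of luminance pixels,
# giving a 720x1920 luminance array and a 360x480 TOF array.

LUMA_ROWS, LUMA_COLS = 720, 1920
TOF_ROWS, TOF_COLS = 360, 480

def luma_to_tof(row: int, col: int) -> tuple[int, int]:
    """Map a luminance-pixel coordinate to the TOF pixel that covers it."""
    return row // 2, col // 4

def tof_to_luma_block(row: int, col: int) -> list[tuple[int, int]]:
    """Map a TOF-pixel coordinate to the 8 luminance pixels it covers."""
    return [(row * 2 + dr, col * 4 + dc) for dr in range(2) for dc in range(4)]

assert luma_to_tof(719, 1919) == (359, 479)  # the array corners line up
```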
S102, acquiring a front road lane line according to the first image.
Specifically, since the first image is a color or luminance image, the position of a highway lane line can be identified simply from the brightness difference between the lane line and the road surface. In an actual implementation, therefore, the highway lane line can be obtained from the brightness information of the first image.
Specifically, if the first image is a luminance image, the front road lane line is identified from a luminance difference between the front road lane line and the road surface in the first image.
If the first image is a color image, it is converted into a luminance image, and the front highway lane line is identified from the brightness difference between the front highway lane line and the road surface.
Since the conversion from a color image to a luminance image is familiar to those skilled in the art, its detailed process is not repeated here.
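Where a conversion is needed, one common convention is the BT.601 luminance weighting; the patent does not specify which formula is used, so the sketch below is only an assumption:

```python
import numpy as np

def rgb_to_luminance(rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB image to a luminance image (BT.601 weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```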
S103, acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line.
The imaging parameters of the first image may include the imaging pixel coordinate system of the camera that captured it, the focal length, and the camera's position and orientation in the physical world coordinate system of the subject vehicle. Through these parameters, a projection relationship can be established between any pixel coordinate of the first image and the physical world coordinate system of the subject vehicle; the method of establishing such a projection is familiar to those skilled in the art.
The third image is a top view of all pixel positions of the projected front road lane line, and therefore the position of the front road lane line in the third image is the position of the road lane line in front of the host vehicle relative to the origin of the physical world coordinate system of the host vehicle.
Further, since the rear highway lane line is a continuation of the front highway lane line, it can also be obtained from the already acquired front highway lane line.
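As an illustration of the projection relationship, the sketch below assumes an ideal pinhole camera with known intrinsics `K` and extrinsics `R`, `t` (with x_cam = R·x_world + t) and intersects each pixel ray with the road plane Z = 0; this is one standard way to build the top view, not necessarily the patent's exact method:

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Project image pixel (u, v) onto the road plane Z = 0 of the subject
    vehicle's world frame, given intrinsics K and extrinsics R, t.
    Returns (X, Y) on the ground, i.e., one point of the top-view image."""
    K_inv = np.linalg.inv(K)
    ray_cam = K_inv @ np.array([u, v, 1.0])   # viewing ray in the camera frame
    ray_world = R.T @ ray_cam                 # rotate the ray into the world frame
    cam_center = -R.T @ t                     # camera position in the world frame
    s = -cam_center[2] / ray_world[2]         # scale so the point lands on Z = 0
    X, Y, _ = cam_center + s * ray_world
    return X, Y
```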
S104, mapping the front road lane lines into the second image according to the interleaved mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges.
Specifically, the first and second images are the color or luminance image and the depth image acquired by the same front-facing 3D camera, so an interleaved mapping relationship exists between them. Owing to this relationship, the row-column coordinates of each pixel of the first image determine, after proportional scaling, the row-column coordinates of at least one pixel in the second image. Each edge pixel position of the front highway lane line obtained from the first image therefore determines at least one pixel position in the second image, yielding a proportionally scaled front highway lane line in the second image.
Furthermore, in accordance with the field of view as seen by human eyes, one front vehicle identification range is uniquely created for every two adjacent lane lines of the proportionally scaled front highway lane lines obtained in the second image.
S105, identifying the front target vehicle according to all the front vehicle identification ranges.
Specifically, after the front vehicle identification ranges are acquired, the vehicles located within them are taken as the front target vehicles.
S106, generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane line.
Specifically, in accordance with the field of view as seen by human eyes, one rear vehicle identification range is uniquely created for every two adjacent rear road lane lines in the third image.
S107, acquiring point cloud data from the laser radar, and projecting the point cloud data into the third image according to the installation parameters of the laser radar to acquire rear target point cloud data.
The laser radar's receiver array, with its small beam angle, demodulates and senses the light reflected from objects illuminated by the radar, achieving high-precision distance resolution; assisted by accurate mechanical rotary scanning, it acquires point cloud data with high pitch and azimuth resolution. The point cloud data of the lidar therefore look visually like a contour map.
Specifically, the installation parameters, i.e. the position and orientation of the laser radar in the physical world coordinate system of the subject vehicle, can be obtained and recorded by off-line calibration of the subject vehicle. The distance, pitch angle, azimuth angle, and other information contained in the point cloud data of the environment behind the subject vehicle can then be converted into information relative to the origin of the subject vehicle's physical world coordinate system; that is, the point cloud data are projected into the third image to obtain the rear target point cloud data.
For example, suppose the normal of the laser radar coincides with the Y-axis of the subject vehicle's physical world coordinate system, the origin of the radar's normal lies 2 m behind the origin of that coordinate system (Y = -2 m), and the radar reports a rear target point at a distance of 10 m, a pitch angle of 2°, and an azimuth angle of 30° (the angle between the Y-axis and the projection onto the XY plane of the line joining the target and the origin). The X, Y, Z coordinates of that point in the subject vehicle's physical world coordinate system are (10 m·sin30°·cos2°, -2 m - 10 m·cos30°·cos2°, 10 m·sin2°), i.e. (4.997 m, -10.655 m, 0.349 m); that is, projecting the point cloud data into the third image according to the installation parameters of the laser radar gives the rear target point cloud data (4.997 m, -10.655 m, 0.349 m).
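The worked example above can be reproduced directly in code; the rear-facing geometry and the 2 m mount offset are taken from the text, while the function name is illustrative:

```python
import math

def lidar_point_to_world(dist, pitch_deg, azim_deg, mount_y=-2.0):
    """Convert one lidar return (distance, pitch, azimuth) into the subject
    vehicle's world frame, assuming the lidar normal lies on the Y axis and
    the lidar origin is offset mount_y metres along Y, as in the example."""
    pitch = math.radians(pitch_deg)
    azim = math.radians(azim_deg)
    x = dist * math.sin(azim) * math.cos(pitch)
    y = mount_y - dist * math.cos(azim) * math.cos(pitch)  # rear-facing sensor
    z = dist * math.sin(pitch)
    return x, y, z

print(lidar_point_to_world(10.0, 2.0, 30.0))  # approx (4.997, -10.655, 0.349)
```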
S108, identifying rear target vehicles according to all the rear vehicle identification ranges and the rear target point cloud data.
Specifically, since the set of rear-target returns of the laser radar is projected into the third image according to the radar's installation parameters to obtain a number of rear target point cloud data, the target vehicles whose rear target point cloud data fall within a corresponding rear vehicle identification range are marked as rear target vehicles.
S109, acquiring tunnel information and speed limit information.
It is understood that the tunnel information and the speed limit information may be tunnel entrance information and its corresponding speed limit information, or tunnel exit information and its corresponding speed limit information.
There are therefore various ways to obtain the tunnel information and the speed-limit information, which can be chosen according to the actual application requirements; for example:
in a first example, tunnel entrance information and speed limit information are acquired from a navigation system.
In a second example, tunnel exit information and speed limit information are obtained from a navigation system.
In a third example, tunnel entrance information and speed limit information are identified from the first image and the second image.
In a fourth example, tunnel exit information and speed limit information are identified from the first image and the second image.
S110, changing the settings of the cruising speed upper limit and the cruising safety distance of the subject vehicle according to the tunnel information and the speed limit information.
It is understood that the settings for changing the cruising speed upper limit and cruising safety distance of the subject vehicle are different for different tunnel information.
As an example, when the tunnel information is tunnel exit information, the settings of the cruising speed upper limit and cruising safety distance of the subject vehicle are changed according to the tunnel exit information, speed limit information, and user setting information.
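Purely as an illustration of changing the settings from tunnel information, speed-limit information, and user setting information, a sketch follows; every policy value in it (the min rule and the time gaps) is an assumption, not taken from the patent:

```python
def tunnel_cruise_settings(speed_limit_kph, user_max_kph, entering_tunnel):
    """Illustrative only: clamp the cruise upper limit to the posted speed
    limit and the user's setting, and widen the safe time gap inside the
    tunnel (both gap values are assumptions, not from the patent)."""
    upper_limit = min(speed_limit_kph, user_max_kph)
    safe_gap_s = 3.0 if entering_tunnel else 2.0
    return upper_limit, safe_gap_s
```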
S111, adjusting the focal length of the front-facing 3D camera according to the tunnel information, and performing in-tunnel cruise control of the subject vehicle's motion parameters according to the motion parameters of the front target vehicle and of the rear target vehicle.
It will be appreciated that different tunnel information adjusts the focal length of the front 3D camera differently, for example as follows:
first, when the tunnel information is tunnel entrance information, the focal length of the front 3D camera is reduced.
In a second example, when the tunnel information is tunnel exit information, the focal length of the front 3D camera is increased.
Further, in-tunnel cruise control of the subject vehicle's motion parameters can be performed according to the motion parameters of the front target vehicle, those of the rear target vehicle, and the turn signal.
Thus, in one embodiment of the present invention, a front target vehicle range may also be generated from the front target vehicle and mapped into the first image according to the interleaved mapping relationship between the first and second images to generate a front vehicle-light recognition region.
It can be understood that driving safety is related to the driving state of the front target vehicle: when the front target vehicle is moving straight ahead, the subject vehicle can drive normally, but if the front target vehicle suddenly decelerates and changes lane, the subject vehicle must brake to avoid a rear-end collision. Since the driving state of the front target vehicle is reflected by its lights, in this embodiment the front vehicle-light recognition area must be determined in order to observe those lights.
Specifically, the front vehicle-light recognition area lies within the front target vehicle range, which is generated from the front target vehicle. Owing to the interleaved mapping relationship between the first and second images, the row-column coordinates of each pixel of the front target vehicle range in the second image determine, after proportional scaling, the row-column coordinates of at least one pixel in the first image, and the imaging of the target vehicle's lights is contained within the corresponding front target vehicle range; the front vehicle-light recognition area is thus generated in the first image.
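Under the 8:1 interleave assumed earlier, mapping a front-target-vehicle range from the depth image into the color or luminance image is a pure scaling of row-column coordinates; a minimal sketch:

```python
def depth_roi_to_luma_roi(roi, row_scale=2, col_scale=4):
    """Scale a front-target-vehicle ROI (r0, c0, r1, c1) from the depth
    image into the color/luminance image, using the 8:1 interleave ratio
    assumed earlier (2 rows x 4 columns of luminance pixels per TOF pixel)."""
    r0, c0, r1, c1 = roi
    return r0 * row_scale, c0 * col_scale, r1 * row_scale, c1 * col_scale
```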
In practical applications, the manner of generating the front target vehicle range from the front target vehicle differs with the application scenario; examples follow:
the first example:
and generating a front target vehicle range according to a closed area defined by the target boundary of the front target vehicle.
In this example, as one possible implementation, a boundary detection method from image processing (e.g., Canny, Sobel, or Laplace) is used to detect the target boundary of the front target vehicle for identification.
In the depth image, the depth sub-image formed by light reflected from the back or front of the same target vehicle to the TOF sensor contains consistent distance information, so the distance information of the target vehicle can be obtained simply by locating the depth sub-image that the target vehicle forms in the depth image. A sub-image here means a combination of some of an image's pixels.
Since the depth sub-image formed by light reflected from the back or front of the same target vehicle contains consistent distance information, while the depth sub-image formed by light reflected from the road surface contains continuously varying distance information, an abrupt difference necessarily appears where the two meet, and this junction forms the target boundary of the target vehicle in the depth image.
The second example is:
a forward target vehicle range is generated from an extended enclosed area of the target boundary of the forward target vehicle.
In this example, as a possible implementation manner, a boundary detection method in an image processing algorithm is adopted to detect the target boundary of the front target vehicle for identification.
The third example:
and generating a front target vehicle range according to a closed area formed by connecting a plurality of pixel positions of the front target vehicle.
The vehicle identification range is determined by all pixel positions of the lane line, so detecting the target boundary of the target vehicle within the vehicle identification range reduces the boundary interference created by road facilities such as isolation barriers, light poles, and protection piles.
Further, the turn lights of the corresponding front target vehicle are identified according to the front lamp identification area.
Specifically, after the front vehicle-light recognition area is acquired, the turn signal of the corresponding front target vehicle is identified from it, so that the specific driving state of the front target vehicle is known accurately.
It should be noted that, according to different specific application requirements, the manner of identifying the turn signal of the corresponding front target vehicle according to the front vehicle light identification area is different.
As one possible implementation, the turn signal of the corresponding preceding target vehicle is identified according to the color, the flashing frequency or the flashing sequence of the tail light in the preceding vehicle light identification area.
In this embodiment, at the initial stage of a lane change both the longitudinal and the lateral displacement of the front target vehicle are small, so the size change of its light recognition region is also small; only the brightness of the image formed at the turn signal changes strongly, owing to its flickering.
Therefore, a time-differentiated light recognition-area sub-image of the front target vehicle is created by continuously acquiring several color or luminance images at different instants and applying time differentiation to the vehicle-light recognition area of the front target vehicle. The time-differentiated sub-image highlights the continuously flashing light sub-images of the front target vehicle.
The time-differentiated sub-image is then projected onto the column coordinate axis, and a one-dimensional search yields the start and end column coordinates of the target vehicle's light sub-image. Projecting these back into the time-differentiated sub-image, the start and end row coordinates of the light sub-image are found. The start and end row-column coordinates are then projected into the several color or luminance images at different instants to confirm the color, flashing frequency, or flashing sequence of the front target vehicle's lights, thereby determining the row-column coordinates of the flashing light sub-image.
Further, when the row-column coordinates of the flashing light sub-image lie only on the left side of the front target vehicle's light recognition area, it can be determined that the front target vehicle has switched on its left turn signal; when they lie only on the right side, its right turn signal; and when they lie on both sides, its hazard (double-flash) warning lights.
In addition, during the lane change itself the longitudinal or lateral displacement of the front target vehicle is large, so the size of its light recognition area also changes considerably. Longitudinal or lateral displacement compensation can therefore be applied to the several light recognition areas acquired successively at different instants, scaling them to a common size. Time differentiation is then performed on the adjusted light recognition areas to create time-differentiated sub-images, which are processed as above: projection onto the column axis, one-dimensional search for the start and end column coordinates, back-projection to find the start and end row coordinates, and projection of the start and end row-column coordinates into the color or luminance images at different instants to confirm the color, flashing frequency, or flashing sequence of the lights. The row-column coordinates of the flashing light sub-image are thus determined, completing the identification of the left turn signal, right turn signal, or hazard warning lights.
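A condensed sketch of this time-differentiation scheme is given below; displacement compensation is assumed to have been applied already, and the 50% activity threshold and the left/right split at the region's midline are illustrative choices rather than the patent's:

```python
import numpy as np

def classify_turn_signal(rois):
    """`rois` is a list of equally sized grayscale light-recognition regions
    of the same target vehicle at successive instants. Accumulate frame
    differences, project onto the column axis, and classify the blink side."""
    diff = np.zeros_like(rois[0], dtype=np.float32)
    for a, b in zip(rois, rois[1:]):
        diff += np.abs(b.astype(np.float32) - a.astype(np.float32))
    col_profile = diff.sum(axis=0)                    # project onto columns
    active = col_profile > 0.5 * col_profile.max()    # assumed threshold
    cols = np.flatnonzero(active)
    if cols.size == 0:
        return "none"
    mid = diff.shape[1] / 2
    left, right = cols.min() < mid, cols.max() >= mid
    if left and right:
        return "hazard"                               # both sides flashing
    return "left" if left else "right"
```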
Examples are as follows:
in a first example, the working condition that the target vehicle in the non-self-lane in front decelerates and changes to the self-lane is recognized according to the motion parameters of the target vehicle in front and a steering lamp, so that the motion parameter control system of the main vehicle performs braking adjustment in the tunnel in advance, and the lamp system of the main vehicle reminds the target vehicle behind. Therefore, the motion parameter control system and the vehicle lamp system of the main vehicle can be adjusted earlier to remind the rear target vehicle, more braking or adjusting time is provided for the rear target vehicle, the rear-end collision risk is effectively reduced, and the running safety of the main vehicle and passengers of the main vehicle is improved.
In a second example, the operating condition that the target vehicle in the front road changes into the non-self-road in the front road in a deceleration way is identified according to the motion parameters of the target vehicle in the front and the steering lamp, so that the motion parameter control system of the main vehicle does not perform braking adjustment in the tunnel. Therefore, the motion parameter control system of the host vehicle can reduce unnecessary braking adjustment, thereby reducing the risk of rear-end collision caused by the unnecessary braking adjustment of the host vehicle.
To sum up, in the automatic control method for vehicle running of the embodiment of the present invention, the front-facing 3D camera acquires the first and second images of the environment in front of the subject vehicle; the front road lane lines are extracted; the third image and the rear road lane lines are obtained from the imaging parameters of the first image and the front road lane lines; the front road lane lines are mapped into the second image according to the interleaved mapping relationship to generate several front vehicle identification ranges, from which the front target vehicle is identified; several rear vehicle identification ranges are generated from the third image and the rear road lane lines; rear target point cloud data are obtained from the laser radar and the rear target vehicle is identified from them; finally, tunnel information and speed-limit information are acquired, the settings of the subject vehicle's cruising speed upper limit and cruising safety distance are changed accordingly, the focal length of the front-facing 3D camera is adjusted according to the tunnel information, and in-tunnel cruise control of the subject vehicle's motion parameters is performed according to the motion parameters of the front and rear target vehicles. The subject vehicle can thus control its braking with reference to lane-line information, reduce unnecessary braking adjustments, effectively lower the risk of rear-end collisions, and improve its driving safety in the tunnel.
Based on the above description, it should be noted that, according to different application scenarios, different techniques may be adopted to identify the front road lane line according to the brightness difference between the front road lane line and the road surface in the first image. The following description is made with reference to specific examples.
Fig. 2 is a flowchart of a vehicle running automatic control method according to a second embodiment of the present invention, and as shown in fig. 2, the step S102 includes:
s201, creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold value.
Specifically, in real life highway lane lines include both solid and dashed lane lines; for ease of description, recognition of solid lane lines is described first.
A brightness threshold is preset using the brightness difference between the highway lane line and the road surface in the first image. The preset threshold is found by searching candidate brightness thresholds, which can be done, for example, with a histogram-statistics bimodal algorithm.
Furthermore, a binary image highlighting the highway lane lines is created from the preset brightness threshold and the luminance image. The luminance image can also be divided into several luminance sub-images; the histogram-statistics bimodal algorithm is run on each sub-image to find several brightness thresholds, a binary sub-image highlighting the lane lines is created from each threshold and its corresponding luminance sub-image, and the complete binary image of the highlighted highway lane lines is assembled from these binary sub-images. This accommodates brightness variation across the road surface or lane lines.
The specific implementation steps of finding the luminance threshold and creating the binary image of the highway lane line may be obtained by those skilled in the art based on the prior art, and are not described herein again.
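For reference, one simple stand-in for the histogram-statistics bimodal search is sketched below; the smoothing loop and the fallback value are assumptions, and any two-peak valley finder would serve:

```python
import numpy as np

def bimodal_threshold(gray: np.ndarray) -> int:
    """Find a valley between the two dominant peaks of the brightness
    histogram (a simple stand-in for the histogram-bimodal search)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float32)
    # Smooth the histogram until at most two local maxima remain.
    for _ in range(1000):
        peaks = [i for i in range(1, 255) if hist[i - 1] < hist[i] >= hist[i + 1]]
        if len(peaks) <= 2:
            break
        hist = np.convolve(hist, [1 / 3, 1 / 3, 1 / 3], mode="same")
    if len(peaks) < 2:
        return 128                                     # fallback (assumption)
    lo, hi = peaks[0], peaks[-1]
    return lo + int(np.argmin(hist[lo:hi + 1]))        # valley = threshold

def lane_binary_image(gray: np.ndarray) -> np.ndarray:
    """Binary image highlighting pixels brighter than the road surface."""
    return (gray > bimodal_threshold(gray)).astype(np.uint8)
```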
S202, detecting, in the binary image and according to a preset detection algorithm, all edge pixel positions of the solid lane line of a straight road or all edge pixel positions of the solid lane line of a curve.
Specifically, after the binary image of the front highway lane lines is obtained, note that the curvature radius of a highway lane line cannot be too small and that, by the camera projection principle, the nearby portion of a lane line occupies more imaging pixels; hence, even on a curve, the pixels of a solid lane line that are arranged in a straight line account for most of its imaging pixels in the luminance image.
Therefore, all edge pixel positions of a solid lane line on a straight road, or most of the initial straight-line edge pixel positions of a solid lane line on a curve, can be detected in the binary image of the highlighted highway lane lines using a preset detection algorithm, such as a straight-line detection algorithm like the Hough transform.
Of course, without filtering, straight-line detection would also pick up most straight edge pixel positions of isolation barriers and telegraph poles in the binary image. A slope range for lane lines in the binary image can be set according to the aspect ratio of the image sensor, the focal length of the camera lens, the road-width range in road design specifications, and the mounting position of the image sensor on the subject vehicle, and straight lines that are not lane lines are then filtered out according to this slope range.
Since the edge pixel positions of a solid lane line on a curve change continuously, connected pixel positions adjoining the two ends of the detected initial straight line are searched for and merged into the initial straight-line edge pixel set; the search-and-merge is repeated until all edge pixel positions of the curved solid lane line are uniquely determined.
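A sketch of this detect-then-filter step using OpenCV's probabilistic Hough transform follows; the Hough parameters and the slope bound stand in for values that would be derived from the sensor geometry and the road design specifications:

```python
import cv2
import numpy as np

def detect_lane_edges(binary: np.ndarray, min_abs_slope=0.3):
    """Straight-line detection with slope filtering, as described above;
    min_abs_slope is an assumed stand-in for the range derived from the
    sensor aspect ratio, lens focal length, road width, and mount position."""
    lines = cv2.HoughLinesP(binary * 255, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=20)
    kept = []
    for x1, y1, x2, y2 in (lines[:, 0] if lines is not None else []):
        # Keep vertical lines and lines steeper than the slope bound;
        # drop near-horizontal edges (isolation barriers, poles, etc.).
        if x1 == x2 or abs((y2 - y1) / (x2 - x1)) >= min_abs_slope:
            kept.append((x1, y1, x2, y2))
    return kept
```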
S203, detecting, in the binary image and according to a preset detection algorithm, all edge pixel positions of the dashed lane line of a straight road or all edge pixel positions of the dashed lane line of a curve.
To fully explain the recognition of the front road lane line based on the brightness difference between the front road lane line and the road surface in the first image, the recognition of the broken line lane line will be first described as an example.
The straight line detection algorithm described in step S201 may also detect most of the initial straight line edge pixel positions of the dashed line lane line, and may connect edge pixels of other shorter lane lines belonging to the dashed line lane line by an extension line or a search and merge method of the most of the initial straight line edge pixel positions of the dashed line lane line, so as to obtain all the edge pixel positions of the dashed line lane line. The method for extending the line is used for obtaining all edge pixel positions of the straight line dashed line lane line, the method for searching and combining is used for obtaining all edge pixel positions of the curve dashed line lane line, and the priori knowledge of whether the dashed line lane line is a straight line or a curve needs to be obtained by selecting the method for extending the line or the method for searching and combining, and of course, the priori knowledge can be obtained by detecting the solid line lane line.
As one implementation, according to the prior knowledge of the solid lane line, the fact that lane lines are parallel to one another in reality, and the projection parameters of the image sensor and camera, all edge pixel positions of the solid lane line may be projected onto the initial straight-line edge pixel positions of the dashed lane line, so as to connect the initial straight-line edge pixel positions of the dashed lane line with the edge pixels of its shorter segments and thereby obtain all edge pixel positions of the dashed lane line.
As another implementation, no prior knowledge of straight road versus curved road is needed. While the vehicle cruises on a straight road, or on a curve at a constant steering angle, the lateral offset of the dashed lane line over a short continuous period is almost negligible while its longitudinal offset is large; several consecutive binary images of the highlighted highway lane lines taken at different times can therefore be superimposed so that the dashed lane line accumulates into a solid one, after which all its edge pixel positions are obtained by the solid-lane-line identification method.
Because the longitudinal offset of the dashed lane line depends on the speed of the host vehicle, the minimum number of consecutive binary images needed to superimpose the dashed lane line into a solid one can be determined dynamically from the vehicle speed obtained from the wheel speed sensor, after which all edge pixel positions of the dashed lane line are obtained.
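A sketch of this superposition, assuming the binary frames are NumPy arrays; the dash-gap length in min_frame_count is an assumed typical value, not one given in the patent:

```python
import numpy as np

def superimpose_dashed_lane(binary_frames):
    """Pixel-wise OR of consecutive binary lane images; the host vehicle's
    longitudinal motion fills the gaps of a dashed lane line so that it can
    then be identified like a solid one."""
    stacked = np.zeros_like(binary_frames[0])
    for frame in binary_frames:
        stacked |= frame
    return stacked

def min_frame_count(speed_mps, frame_period_s, dash_gap_m=9.0):
    """Minimum number of frames so the vehicle covers one dash gap;
    dash_gap_m is an assumed typical gap length."""
    travelled_per_frame = max(speed_mps * frame_period_s, 1e-6)
    return int(np.ceil(dash_gap_m / travelled_per_frame)) + 1
```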
In summary, in the automatic control method for vehicle driving according to the embodiment of the present invention, a binary image of the front highway lane line is created according to the luminance information of the first image and a preset luminance threshold, and all edge pixel positions of the straight-road solid lane line or of the curved-road solid lane line, as well as all edge pixel positions of the straight-road dashed lane line or of the curved-road dashed lane line, are detected in the binary image according to a preset detection algorithm. The dashed and solid lane lines of both straight roads and curves can thereby be identified accurately.
It should be noted that, depending on the application scenario, different techniques may be adopted to obtain the third image and the rear highway lane line from the imaging parameters of the first image and the front highway lane line. This is described more clearly below with reference to specific examples.
Fig. 3 is a flowchart of a vehicle running automatic control method according to a third embodiment of the present invention, and as shown in fig. 3, the above step S103 includes:
S301, projecting all pixel positions of the front road lane line into the physical world coordinate system of the host vehicle according to the imaging parameters of the first image to establish a third image.
S302, obtaining the position of the rear road lane line by accumulating the position of the front road lane line in the third image over continuous time and by its displacement relative to the origin of the physical world coordinate system of the host vehicle.
Specifically, the third image is created by projecting all pixel positions of the acquired front road lane line into the physical world coordinate system of the host vehicle; the third image may be a top view of these projected positions, so that the position of the front road lane line in the third image is the position of the road lane line ahead of the host vehicle relative to the origin of the host vehicle's physical world coordinate system.
Since a front road lane line acquired at a certain time will lie behind the subject vehicle after some time has elapsed, the position of the rear road lane line of the subject vehicle is obtained by accumulating the position of the front road lane line in the third image over continuous time and displacing it relative to the origin of the subject vehicle's physical world coordinate system.
For example, suppose that at time T1 the Y-axis distance of point A of the front road lane line from the origin of the host vehicle's physical world coordinate system is D1 (its X-axis distance being D2). The host vehicle travels along the Y-axis at a constant speed V for a time T, so at time T2 = T1 + T the displacement of point A relative to the origin is V × T (say V × T = 2 × D1). At time T2 the Y-axis distance of point A from the origin is therefore D1 - V × T = -D1, i.e. the position of the road lane line behind the host vehicle is obtained (the X-axis distance is still D2).
For variable-speed driving of the subject vehicle, the curve of V against T may be acquired from the wheel speed sensor, and the displacement obtained as the integral of V over T. For the host vehicle travelling on an arc-shaped curve, the curvature radius of the curve is computed from the coordinates of the front road lane line in the host vehicle's physical world coordinate system; the coordinates of the front road lane line relative to the origin after an elapsed time T can then be computed from the curvature radius and the arc displacement given by integrating the speed over the travel time T, which again yields the position of the rear road lane line of the host vehicle.
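As a sketch of the straight-road case (the curved-road case would additionally rotate the points about the curve centre derived from the lane-line curvature radius), with the point layout and helper name as assumptions:

```python
import numpy as np

def propagate_lane_points(points_xy, speeds_mps, dt_s):
    """Shift previously observed lane-line points (host-vehicle frame,
    X lateral / Y longitudinal) backwards by the distance the host vehicle
    has travelled, integrating wheel-speed samples over time."""
    travelled = float(np.sum(np.asarray(speeds_mps) * dt_s))  # integral of V dt
    shifted = np.array(points_xy, dtype=float)
    shifted[:, 1] -= travelled  # negative Y: the point is now behind the origin
    return shifted

# Example: a point 30 m ahead lies 10 m behind after 4 s at 10 m/s.
print(propagate_lane_points([[2.0, 30.0]], [10.0] * 4, 1.0))  # [[ 2. -10.]]
```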
In summary, according to the automatic control method for vehicle driving of the embodiment of the present invention, all pixel positions of the front road lane line are projected into the physical world coordinate system of the host vehicle according to the imaging parameters of the first image to create a third image, and the position of the rear road lane line is obtained by accumulating the position of the front road lane line in the third image over continuous time and by its displacement relative to the origin of that coordinate system. The position of the rear lane line is thus known accurately, the vehicle can conveniently be controlled relative to it, and a foundation is laid for driving safety.
In practical applications, a target vehicle, whether ahead of or behind the subject vehicle, may be in any of several driving states and positions, and these are directly related to the specific control operations performed on the subject vehicle. For example, as long as a target vehicle other than the host vehicle remains in a non-own lane, its acceleration and deceleration have no influence on the driving safety of the subject vehicle; but once it changes lane into the own lane, the subject vehicle needs to perform a deceleration operation or the like.
The following describes in detail how to recognize the preceding target vehicle and the following target vehicle, respectively.
Fig. 4 is a flowchart of a vehicle running automatic control method according to a fourth embodiment of the present invention, and as shown in fig. 4, the above step S105 includes:
S401, marking the front own lane and the front non-own lanes in all the front vehicle recognition ranges.
Specifically, based on the equal-proportion front road lane lines acquired in the second image, the slope of the initial straight line of each front road lane line is obtained by comparing the number of rows and the number of columns occupied by its initial straight-line portion. The front vehicle recognition range created from the two road lane lines whose initial straight lines have the largest absolute slopes is marked with the front own-lane label, and the other front vehicle recognition ranges are marked with front non-own-lane labels.
Thus, the front road lane lines are mapped into the second image according to the interleaved mapping relationship between the first image and the second image to generate a number of front vehicle recognition ranges in the second image, and the labels of the front own lane and the front non-own lanes are marked for all the front vehicle recognition ranges.
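One hypothetical way to organize this labelling, with the data layout assumed purely for illustration:

```python
def label_front_ranges(lane_lines):
    """lane_lines: per front road lane line, the row/column extent of its
    initial straight segment, ordered left to right (assumed layout).
    A recognition range is bounded by two adjacent lines; the range whose
    bounding lines have the largest |rows/cols| slopes is the own lane."""
    def abs_slope(line):
        return line["rows"] / max(line["cols"], 1)  # rows per column ~ |slope|

    ranges = []
    for left, right in zip(lane_lines, lane_lines[1:]):
        ranges.append({"bounds": (left, right),
                       "score": abs_slope(left) + abs_slope(right)})
    own = max(ranges, key=lambda r: r["score"])
    for r in ranges:
        r["label"] = "front own lane" if r is own else "front non-own lane"
    return ranges
```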
S402, identifying the front own-lane target vehicle according to the vehicle recognition range marked with the front own-lane label.
Specifically, after the front own-lane label is acquired, the front own-lane target vehicle can be identified within the front own-lane recognition range according to the recognition range marked with that label.
Specifically, while the distance and position of a target vehicle relative to the TOF sensor always change over time, the distance and position of the road surface and the isolation belt relative to the TOF sensor remain approximately unchanged. A time-differential depth image can therefore be created from two depth images acquired at different times to detect these changes in distance and position, so that the front own-lane target vehicle is actually identified within the recognition range marked with the own-lane label.
S403, identifying the front non-own-lane target vehicle according to the vehicle recognition range marked with the front non-own-lane label.
Specifically, after the front non-own-lane label is acquired, the front non-own-lane target vehicle can be identified within the front non-own-lane recognition range according to the recognition range marked with that label.
As in step S402, a time-differential depth image created from two depth images acquired at different times detects the change of distance and position, so that the front non-own-lane target vehicle is actually identified within the recognition range marked with the non-own-lane label.
S404, identifying the front lane-changing target vehicle according to the pairwise-combined front vehicle recognition ranges.
Specifically, since the front own-lane target vehicle and the front non-own-lane target vehicle can both be recognized, the front lane-changing target vehicle is recognized from the pairwise-combined front vehicle recognition ranges by the same recognition method: a target vehicle detected across two adjacent recognition ranges is changing lanes.
Here too, a time-differential depth image created from two depth images acquired at different times detects the change of distance and position, after which the front lane-changing target vehicle is actually identified from the pairwise-combined front vehicle recognition ranges.
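The time-differential idea used in steps S402 to S404 might be sketched as follows, with the noise threshold as an illustrative assumption:

```python
import numpy as np

def time_differential_depth(depth_t0, depth_t1, min_change_m=0.3):
    """Difference of two depth images taken at different times. Static
    structure (road surface, isolation belt) roughly cancels; moving target
    vehicles leave regions of significant change. min_change_m is an
    illustrative noise threshold."""
    diff = np.abs(depth_t1.astype(float) - depth_t0.astype(float))
    return diff > min_change_m  # boolean mask of moving-object pixels

def targets_in_range(moving_mask, range_mask):
    """Restrict detections to one labelled vehicle recognition range."""
    return moving_mask & range_mask
```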
Fig. 5 is a flowchart of a vehicle running automatic control method according to a fifth embodiment of the present invention, and as shown in fig. 5, the above step S108 includes:
and S501, marking labels of a rear own lane and a rear non-own lane for all rear vehicle identification ranges.
Specifically, according to the rear road lane lines in the third image, the slope of the initial straight line of each rear road lane line is obtained by comparing the number of rows and the number of columns occupied by the initial straight line portion of each rear road lane line, the created rear vehicle identification range marks the label of the rear road lane according to the rear road lane line where the initial straight lines of the two road lane lines with the largest absolute value of the slope are located, and the other created rear vehicle identification ranges mark the labels of the rear non-road lanes.
Thus, the rear highway lane lines can be mapped into the second image according to the interweaving mapping relation between the first image and the second image so as to generate a plurality of rear vehicle identification ranges in the second image, and labels of the rear lane and the rear non-own lane are marked for the rear vehicle identification ranges.
S502, identifying the rear own-lane target vehicle according to the vehicle recognition range marked with the rear own-lane label and the rear target point cloud data.
Specifically, after the rear own-lane label is acquired, the rear own-lane target vehicle can be identified within the rear own-lane recognition range according to the recognition range marked with that label.
As with the front targets, a time-differential depth image created from two depth images acquired at different times detects the change of distance and position, so that the rear own-lane target vehicle is actually identified within the recognition range marked with the rear own-lane label.
S503, identifying the rear non-own-lane target vehicles according to the vehicle recognition ranges marked with the rear non-own-lane labels and the rear target point cloud data.
Specifically, after the rear non-own-lane label is acquired, the rear non-own-lane target vehicle can be identified within the rear non-own-lane recognition range according to the recognition range marked with that label.
Again, a time-differential depth image created from two depth images acquired at different times detects the change of distance and position, so that the rear non-own-lane target vehicle is actually identified within the recognition range marked with the non-own-lane label.
S504, identifying the rear lane-changing target vehicle according to the pairwise-combined rear vehicle recognition ranges and the rear target point cloud data.
Specifically, since the rear own-lane target vehicle and the rear non-own-lane target vehicle can both be recognized, the rear lane-changing target vehicle is recognized from the pairwise-combined rear vehicle recognition ranges by the same recognition method.
Here too, a time-differential depth image created from two depth images acquired at different times detects the change of distance and position, after which the rear lane-changing target vehicle is actually identified from the pairwise-combined rear vehicle recognition ranges.
Of course, besides the identification method described above, other methods may be used to obtain the target vehicle. As one possible implementation, after the target boundary of the front target vehicle has been identified following step S109, the target boundaries detected in each vehicle recognition range are projected onto the row coordinate axis of the image and a one-dimensional search is performed along that axis; this determines the number of rows and the row-coordinate range occupied by the longitudinal target boundaries of all front target vehicles in the recognition range, as well as the number of columns and the row-coordinate positions occupied by the transverse target boundaries.
A longitudinal target boundary is one that occupies a large number of pixel rows and a small number of columns; a transverse target boundary is one that occupies a small number of pixel rows and a large number of columns.
Further, according to the number of columns and the row-coordinate positions occupied by all the transverse target boundaries in the recognition range, the column-coordinate positions of all the longitudinal target boundaries (i.e. the column-coordinate start and end positions of the corresponding transverse target boundaries) are searched within the recognition range, and the boundaries of different target vehicles are distinguished by the principle that the boundaries of one target carry consistent distance information, thereby determining the positions and distance information of all front target vehicles in the recognition range.
Detecting the target boundary of a front target vehicle thus uniquely determines the position, within the depth image, of the depth sub-image formed by that vehicle, and hence uniquely determines its distance information.
The boundary detection method of this example can detect a plurality of front target vehicles and their distance information simultaneously, identifying the front own-lane target vehicle within the recognition range marked with the own-lane label, the front non-own-lane target vehicle within the ranges marked with non-own-lane labels, and the front lane-changing target vehicle within the pairwise-combined recognition ranges.
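A rough sketch of this projection-and-search idea, under the assumption that boundary pixels and a per-pixel depth map are available; the transverse-row heuristic and tolerance are illustrative:

```python
import numpy as np

def locate_front_targets(boundary_mask, depth, depth_tol_m=0.5):
    """Project boundary pixels of one recognition range onto the row axis
    and do a 1-D search: rows where many columns are occupied correspond to
    transverse target boundaries; boundaries belonging to one vehicle carry
    consistent depth. depth_tol_m is an illustrative consistency tolerance."""
    rows_hist = boundary_mask.sum(axis=1)
    targets = []
    if rows_hist.any():
        transverse_rows = np.flatnonzero(rows_hist > 0.5 * rows_hist.max())
        for r in transverse_rows:
            cols = np.flatnonzero(boundary_mask[r])
            d = depth[r, cols]
            if d.size and float(d.max() - d.min()) < depth_tol_m:
                targets.append({"row": int(r),
                                "col_span": (int(cols[0]), int(cols[-1])),
                                "distance_m": float(np.median(d))})
    return targets
```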
Based on the same principle, the rear target vehicle can also be identified, and the details are not repeated herein.
As another implementation of acquiring the rear target vehicle, after the point cloud data are acquired in step S107 above, the point cloud is projected into the third image according to the installation parameters of the lidar to obtain the rear target point cloud data, for example X, Y, Z coordinates relative to the origin of the subject vehicle's physical world coordinate system. The rear vehicle recognition ranges in the third image are expressed as X, Y coordinates relative to the same origin, so the data lying outside the rear vehicle recognition ranges may first be discarded to reduce the amount of rear target point cloud data.
In practice, the reduced rear target point cloud data still contain points produced by laser partially reflected from the ground; these can be discarded using the Z coordinates of the rear target point cloud data, so that essentially only the points reflected from the several rear target vehicles remain, yielding the rear target vehicle point cloud data.
Further, within the recognition range marked with the rear own-lane label, the two-dimensional contour of the rear target vehicle point cloud, for example its X, Y coordinates, may be clustered, for example with a method such as k-means familiar to those skilled in the art, to obtain the point cloud set of each individual target vehicle, so that the rear own-lane target vehicle is identified according to the recognition range marked with the rear own-lane label and the rear target point cloud data.
Similarly, by the same method, the rear non-own-lane target vehicles can be identified according to the recognition ranges marked with rear non-own-lane labels and the rear target point cloud data, and the rear lane-changing target vehicle according to the pairwise-combined rear vehicle recognition ranges and the rear target point cloud data.
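A sketch of this point-cloud reduction and clustering, assuming scikit-learn's k-means; the ground-height threshold and the fixed cluster count are illustrative simplifications (a real system would estimate the number of vehicles):

```python
import numpy as np
from sklearn.cluster import KMeans  # any clustering method would do

def rear_targets_from_cloud(points_xyz, range_mask, ground_z_max=0.2,
                            n_targets=2):
    """Reduce a lidar cloud (N x 3, host-vehicle frame) to rear-vehicle
    clusters: drop points outside the recognition range, drop near-ground
    returns via Z, then cluster the remaining X,Y contour."""
    pts = points_xyz[range_mask]                 # keep in-range points only
    pts = pts[pts[:, 2] > ground_z_max]          # discard ground reflections
    if len(pts) < n_targets:
        return []
    labels = KMeans(n_clusters=n_targets, n_init=10).fit_predict(pts[:, :2])
    return [pts[labels == k] for k in range(n_targets)]
```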
In summary, the vehicle driving automatic control method of the embodiment of the present invention accurately identifies the front and rear target vehicles, so that the driving of the subject vehicle can be controlled according to the motion parameters and turn signals of the front target vehicle and the motion parameters of the rear target vehicle, safeguarding driving safety.
Based on the above embodiments, to describe more clearly the driving control of the host vehicle in a tunnel, the control implementation processes under different working conditions of the host vehicle are described below, taking the tunnel information to be tunnel entrance information and tunnel exit information respectively.
Fig. 6 is a flowchart of a vehicle running automatic control method according to a sixth embodiment of the present invention, and as shown in fig. 6, the above step S113 includes:
S601, when the tunnel information is tunnel entrance information, executing deceleration control after the settings of the cruise speed upper limit and the cruise safe distance of the host vehicle have been changed according to the tunnel information and the speed limit information.
Specifically, before the subject vehicle enters a tunnel, the vehicle navigation system can generally provide the distance from the current position of the subject vehicle to the tunnel entrance, the speed limit of the tunnel, and the distance from the current position to the speed limit sign. The cruising speed of the subject vehicle before entering the tunnel is usually higher than the tunnel speed limit, i.e. the vehicle speed needs to be controlled before entering the tunnel.
For example, the tunnel entrance information and speed limit information may be acquired periodically from the navigation system through the bus system of the subject vehicle, and the difference between the current vehicle speed and the tunnel speed limit calculated. If the difference is positive, the comfortable coasting distance for the subject vehicle to decelerate to the tunnel speed limit is calculated; when the distance from the subject vehicle to the tunnel entrance or to the speed limit sign reaches, for example, 1.2 times this comfortable coasting distance, the setting of the cruise speed upper limit may be updated (to the tunnel speed limit) and the setting of the cruise safe distance updated (for example, halved), and comfortable coasting deceleration performed by reducing the power output.
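A minimal sketch of this trigger logic, assuming a constant comfortable deceleration; the deceleration value and the returned control intents are illustrative:

```python
def tunnel_entry_control(v_now, v_limit, dist_to_entrance_m,
                         coast_decel=0.8, trigger_factor=1.2):
    """Decide when to start comfortable coasting before a tunnel.
    coast_decel (m/s^2) and trigger_factor mirror the '1.2 times the
    comfortable coasting distance' rule; both values are illustrative."""
    if v_now <= v_limit:
        return None  # already at or below the tunnel limit
    # distance needed to coast from v_now down to v_limit: (u^2 - v^2) / 2a
    coast_dist = (v_now ** 2 - v_limit ** 2) / (2.0 * coast_decel)
    if dist_to_entrance_m <= trigger_factor * coast_dist:
        return {"cruise_limit": v_limit,
                "safe_distance_scale": 0.5,  # e.g. halve the cruise gap
                "action": "reduce power output and coast"}
    return None
```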
It should be noted that, as another implementation form, it is also possible to identify tunnel entrance information and speed limit information based on a first image and a second image, which are a color image and a depth image, respectively, change the settings of the cruise vehicle speed upper limit and the cruise safe distance according to the identified tunnel entrance information and speed limit information, and perform necessary deceleration control.
For example, the entrance of a highway tunnel is a cross-section of the tunnel that always intersects the lanes and lane lines. Inside the tunnel entrance the lane line images darkly and is not obvious in the first image, whereas outside the tunnel it images clearly; the farthest pixel position at which the lane line is imaged in the first image therefore corresponds to the imaging position of the tunnel entrance in the first image.
The imaging of the tunnel entrance in the first image is affected by illumination and brightness, but the depth imaging in the second image is not. The highway lane lines identified in the first image are therefore mapped into the second image to generate a plurality of vehicle recognition ranges there, and a depth value at the tunnel entrance, say A (the distance from the host vehicle to the tunnel entrance), is obtained from the farthest pixel position of a recognition range; the tunnel entrance (the tunnel cross-section) has an approximately uniform depth value.
For example, the complete shape of the tunnel entrance cross-section can be obtained by taking the pixel positions whose depth values lie within A ± 1 m, i.e. the shape formed by the depth pixels of the outer tunnel wall around the entrance. Within the A ± 1 m depth range the tunnel opening itself produces no reflection, so the depth pixels of the outer wall contrast strongly with the pixel positions of the opening; the opening's pixel positions are therefore easily extracted to determine the height, width and shape of the tunnel entrance, and the tunnel entrance information is thereby identified.
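This opening extraction might look as follows, assuming that pixels with no laser return are encoded as zero or NaN (an assumed convention):

```python
import numpy as np

def tunnel_opening(depth, entrance_depth_a, band_m=1.0):
    """Pixels of the tunnel wall lie within A ± band_m of the entrance
    depth; the opening itself returns no depth. The hole enclosed by the
    wall band gives the entrance height and width in pixels."""
    wall = np.abs(depth - entrance_depth_a) <= band_m
    no_return = ~np.isfinite(depth) | (depth <= 0)
    rows, cols = np.nonzero(no_return)
    if rows.size == 0:
        return wall, None
    extent = {"height_px": int(rows.max() - rows.min() + 1),
              "width_px": int(cols.max() - cols.min() + 1)}
    return wall, extent
```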
It will be understood that, since the outer ring of the speed limit sign is red, the speed limit information is suitably recognized using the color image, i.e. the first image. For example, most non-red image information is filtered out by red chroma in the first image; in the filtered red image, the pixel position of the speed limit sign is determined by a circular or elliptical Hough transform algorithm familiar to those skilled in the art; that pixel position can be projected into the second image to determine the depth value, i.e. the distance, from the speed limit sign to the host vehicle; finally, the speed limit value inside the red outer ring is recognized at the sign's pixel position by a digit template matching method familiar to those skilled in the art, so that the speed limit information is recognized.
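A sketch of the red-chroma filtering and circle search, assuming OpenCV; the HSV thresholds and Hough parameters are illustrative, and the digit template matching inside the rim is omitted for brevity:

```python
import cv2
import numpy as np

def find_speed_limit_sign(bgr_image):
    """Keep predominantly red pixels, then look for the circular sign rim
    with a Hough circle transform. Returns candidate (x, y, r) circles."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    red = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 80, 80), (180, 255, 255))  # red wraps hue 0
    circles = cv2.HoughCircles(red, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=100, param2=25, minRadius=8, maxRadius=60)
    return None if circles is None else circles[0]
```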
Further, for example, the cruise system calculates the difference between the current vehicle speed and the identified tunnel speed limit; if the difference is positive, the comfortable coasting distance for the host vehicle to decelerate to the tunnel speed limit is calculated. When the distance from the host vehicle to the speed limit sign reaches, for example, 1.2 times this comfortable coasting distance, the cruise system may start updating the setting of the cruise speed upper limit (i.e. updating it to the tunnel speed limit) and the setting of the cruise safe distance (e.g. updating it to half its original value), and reduce the power output (or, for an electric vehicle, start braking-energy recovery) to perform comfortable coasting deceleration.
S602, when the tunnel information is tunnel entrance information, reducing the focal length of the front 3D camera, and identifying the working condition in which a front non-own-lane target vehicle decelerates and changes lane into the own lane according to the motion parameters and turn signal of the front target vehicle, so that the motion parameter control system of the host vehicle performs braking adjustment in the tunnel in advance and the lamp system of the host vehicle alerts the rear target vehicle.
It will be understood that the host vehicle usually travels at medium-to-low speed inside a tunnel, where the road environment near the host vehicle has the greater influence on its control. The focal length of the front 3D camera is therefore reduced to obtain a wider-angle image of the environment near the front of the host vehicle; the focal-length reduction can be realized, for example, by an electrically focused lens, and the light source assisting the 3D camera imaging can be switched to a wide, near-field illumination type matched with the 3D camera, further improving driving safety.
It should be noted that, with the left and right lane lines of the front own lane as reference, the lane change of a front target vehicle can be identified accurately whether it occurs on a straight road or a curve, and whether the vehicle changes lane to the left or to the right, providing an accurate motion control basis for the adaptive cruise system of the subject vehicle.
Fig. 7 is a flowchart of a vehicle running automatic control method according to a seventh embodiment of the present invention, and as shown in fig. 7, the above step S113 includes:
S701, identifying tunnel exit information and speed limit information according to the first image and the second image, and changing the settings of the cruise speed upper limit and the cruise safe distance of the host vehicle according to the tunnel exit information, the speed limit information and the user setting information.
Specifically, the tunnel exit information and speed limit information are identified from the first image and the second image. As described above, the exit of a highway tunnel is a cross-section of the tunnel that always intersects the lanes and lane lines. Inside the tunnel exit the lane line images clearly in the first image, whereas outside the tunnel it is usually overexposed and not obvious; identifying the farthest pixel position at which the lane line is imaged in the first image is therefore equivalent to identifying the imaging position of the tunnel exit in the first image.
Likewise, the road lane lines identified in the first image are mapped into the second image to generate a plurality of vehicle recognition ranges there, and a depth value at the tunnel exit, say B (i.e. the distance from the host vehicle to the tunnel exit), is obtained from the farthest pixel position of a recognition range; the tunnel exit (i.e. the tunnel cross-section) has an approximately uniform depth value.
For example, the complete shape of the tunnel exit cross-section can be obtained by taking the pixel positions whose depth values lie within B ± 0.5 m, i.e. the shape formed by the depth pixels of the inner tunnel wall around the exit. Within the B ± 0.5 m depth range the exit opening itself produces no reflection, so the depth pixels of the inner wall contrast strongly with the pixel positions of the opening; the opening's pixel positions are therefore easily extracted to determine the height, width and shape of the tunnel exit, and the tunnel exit information is thereby identified.
It should be noted that the recognition of the speed limit information has been described above and is not repeated here.
It should also be noted that the cruising speed of the subject vehicle after exiting the tunnel is usually higher than the tunnel speed limit; that is, the speed of the subject vehicle after exiting the tunnel may be controlled according to the setting made by the user of the subject vehicle.
S702, increasing the focal length of the front 3D camera, and identifying the working condition in which the front own-lane target vehicle decelerates and changes lane into a front non-own lane according to the motion parameters and turn signal of the front target vehicle, so that the motion parameter control system of the host vehicle performs no unnecessary braking adjustment.
Specifically, the motion parameters of the host vehicle are controlled according to the identified motion parameters of the front own-lane target vehicle and its tail turn signal, improving the running economy of the host vehicle.
It will be understood that, after the host vehicle exits the tunnel, the road environment far ahead of the host vehicle has the greater influence on its control. The focal length of the front 3D camera is therefore increased to capture the imaging details of the environment far ahead; the focal-length increase can be realized, for example, by an electrically focused lens, and the light source assisting the 3D camera imaging can be switched to a high-beam illumination type matched with the 3D camera.
Similarly, when driving on the road outside the tunnel, the adaptive cruise system of the vehicle maintains the constant-speed cruise condition to obtain good running economy for the host vehicle; the more often the variable-speed cruise condition occurs, the poorer the running economy. For example, while a front vehicle on the own lane decelerates and changes lane to an emergency stop lane or a ramp, the constant-speed cruise of a conventional vehicle adaptive cruise system is interrupted: the subject vehicle first decelerates and only accelerates again after the front vehicle has left the subject vehicle's own lane, resulting in uneconomical variable-speed cruising.
Therefore, the continuous process of a front own-lane target vehicle from turning on its turn signal to completing the lane change into a non-own lane can be recognized and monitored without an integrated navigation system; the duration, the distance and the lateral displacement of the front target vehicle during the continuous lane change are easily monitored, and these motion parameters of the target vehicle can be used to control the motion parameters of the host vehicle so as to reduce unnecessary variable-speed cruising.
For example, when the right turn signal of the front target vehicle is recognized as switched on, the pixel distance from the left target boundary of the front target vehicle to the left lane line of the own lane is converted through the camera projection relationship into a lateral distance P. N first images and N second images at different times are then acquired continuously (acquiring one first image or one second image takes a time T), and the change of the distance R to the target vehicle is identified and recorded during this period. When the target vehicle is recognized as having just completed the lane change into the non-own lane on the right of the own lane, its left target boundary coincides with the right lane line of the own lane; with the own-lane width D, the motion parameters of the front target vehicle during the continuous lane change are: duration N × T, distance to the host vehicle R, and lateral displacement (D - P).
Thus, according to the distance R recorded during the front target vehicle's lane change, the adaptive cruise system of the host vehicle can keep cruising at constant speed as long as R is determined to remain greater than the set safe cruise braking distance. Even when the front target vehicle is recognized as having just completed the lane change into the right non-own lane, with its left target boundary coinciding with the right lane line of the own lane and R smaller than the safe cruise braking distance, the adaptive cruise system may merely reduce the power output briefly, wait until the target vehicle is recognized as continuing to move to the right, and then restore the power output to keep cruising at constant speed.
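The cruise decision described here might be sketched as follows; the inputs and returned control intents are illustrative assumptions:

```python
def cruise_decision(distance_r_m, safe_brake_dist_m,
                    lane_change_done, still_moving_right):
    """Hold constant-speed cruise while the lane-changing front vehicle
    keeps R above the safe braking distance; otherwise ease power briefly
    instead of braking, restoring it once the target keeps moving away."""
    if distance_r_m > safe_brake_dist_m:
        return "hold constant-speed cruise"
    if lane_change_done and still_moving_right:
        return "restore power, resume constant-speed cruise"
    return "reduce power output briefly"
```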
To sum up, the vehicle driving automatic control method of the embodiment of the present invention can acquire tunnel entrance information and speed limit information from a navigation system, change the settings of the cruise speed upper limit and the cruise safe distance accordingly, execute the necessary deceleration control, and reduce the focal length of the front 3D camera; it performs in-tunnel cruise control on the motion parameters of the host vehicle according to the motion parameters of the front and rear target vehicles; it can also recognize tunnel exit information and speed limit information from the first image and the second image and change the settings of the cruise speed upper limit and the cruise safe distance according to the recognized tunnel exit information, the speed limit information and the user setting information. By recognizing the motion parameters of a target vehicle together with its correspondingly identified tail turn signal, the working condition in which an own-lane target vehicle decelerates and changes lane out of the subject vehicle's own lane is identified, so that the motion parameter control system of the subject vehicle can reduce unnecessary braking adjustment, thereby reducing the risk of a rear-end collision caused by unnecessary braking adjustment of the subject vehicle.
Based on the above embodiments, to describe more clearly how in-tunnel cruise control of the motion parameters of the subject vehicle is performed according to the motion parameters and turn signal of the front target vehicle and the motion parameters of the rear target vehicle, the following describes, in conjunction with a specific application scenario, how this embodiment recognizes and monitors the continuous process of a front own-lane target vehicle from turning on its turn signal to completing the lane change into a non-own lane.
Specifically, the front own-lane target vehicle is identified according to the recognition range marked with the front own-lane label, the front lane-changing target vehicle according to the pairwise-combined front vehicle recognition ranges, and the turn signals of the corresponding target vehicles according to the vehicle-light recognition areas. The continuous process of the front own-lane target vehicle from turning on its turn signal to completing the lane change into a non-own lane can thus be recognized and monitored, and motion parameters such as the duration, the distance to the host vehicle, the relative speed and the lateral displacement of the target vehicle during the continuous lane change are easily monitored, so that these motion parameters of the target vehicle can be used to control the host vehicle.
When the right turn signal of the front own-lane target vehicle is recognized as switched on, the pixel distance from the left target boundary of the target vehicle to the left lane line of the front own lane is converted through the camera projection relationship into the lateral distance P. N first images and N second images at different times are acquired continuously (acquiring one first image or one second image takes a time T), the change of the distance R to the target vehicle is identified and recorded during this period, and the relative speed V of the target vehicle can be calculated from the change of R with respect to T.
When the target vehicle is recognized as having just completed the lane change into the non-own lane on the right of the front own lane, its left target boundary coincides with the right lane line of the front own lane; with the own-lane width D, the motion parameters of the front target vehicle during the continuous lane change are: duration N × T, distance to the host vehicle R, relative speed V and lateral displacement (D - P).
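A sketch of how these motion parameters could be computed from the monitored quantities, with the inputs assumed already converted from pixels via the camera projection:

```python
def lane_change_parameters(p_m, r_history_m, frame_time_s, lane_width_d_m):
    """Motion parameters of a front target vehicle over one monitored lane
    change: duration N*T, final distance R, relative speed from the change
    of R over time, and lateral displacement (D - P)."""
    n = len(r_history_m)
    duration_s = n * frame_time_s                                 # N x T
    rel_speed = (r_history_m[-1] - r_history_m[0]) / duration_s   # V = dR/dt
    return {"duration_s": duration_s,
            "distance_m": r_history_m[-1],
            "relative_speed_mps": rel_speed,
            "lateral_disp_m": lane_width_d_m - p_m}               # D - P
```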
It should be emphasized that the lateral displacement identified above is referenced to the left and right lane lines of the lane; it can be identified accurately whether the target vehicle changes lane on a straight road or a curve, and whether it changes lane to the left or to the right, providing an accurate control basis for the adaptive cruise system of the host vehicle.
By contrast, the lateral displacement of a target vehicle identified by a conventional vehicle adaptive cruise system relying only on lidar is referenced to the subject vehicle, and such a lateral displacement sometimes cannot provide the vehicle adaptive cruise system with an accurate basis for motion control.
Fig. 8 is a scene diagram of a vehicle running automatic control method according to an embodiment of the present invention.
As shown in fig. 8, when the target vehicle ahead of the host vehicle has just completed a lane change out of the own lane to the right on a curve turning left, the lidar of a conventional vehicle still located on the straight section may still recognize the front target vehicle as partially within the own lane. The curvature radius of the curve is 250 meters and the front target vehicle has travelled 25 meters along the curve during the lane change, so the right lane line of the own lane, which coincides with the left target boundary of the front target vehicle, has shifted, 25 meters into the curve, about 1.25 meters to the left of the straight-road extension line of the lane.
If the conventional vehicle's lidar recognizes the distance to the target vehicle as 50 to 80 meters, i.e. the lidar is on the straight section still 25 to 55 meters from the curve entrance, then without prior knowledge of the curve it recognizes the front target vehicle as still having a body about 1.25 meters wide within the own lane; and as the target vehicle continues to decelerate along the leftward curve, the lidar recognizes an ever wider portion of its body within the own lane. That is, the conventional vehicle's lidar produces inaccurate recognition and causes the conventional vehicle adaptive cruise system to perform continuous, inaccurate and unnecessary braking, increasing the risk of a rear-end collision between the conventional vehicle and the target vehicle behind it.
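As a check on the 1.25-meter figure, the lateral offset of a 250-meter-radius arc after 25 meters of travel relative to its straight extension is R(1 - cos(s/R)):

```python
import math

R = 250.0  # curve curvature radius in meters
s = 25.0   # distance travelled along the curve in meters
theta = s / R                               # subtended angle in radians
lateral_offset = R * (1 - math.cos(theta))  # offset from the straight extension
print(round(lateral_offset, 2))             # -> 1.25 meters
```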
Similarly, the conventional vehicle's lidar is inaccurate in recognizing a target vehicle's lane change from the left lane to the right lane on a right-hand curve.
To resolve this inaccuracy, such conventional-vehicle lidar either adds a camera to help recognize the lane lines or increases the accuracy of its azimuth recognition, either of which increases the complexity and cost of the system.
Therefore, according to the above example, the working condition in which an own-lane target vehicle decelerates and changes lane out of the subject vehicle's own lane can be identified from the motion parameters of the target vehicle identified by the present invention together with the correspondingly identified turn signal, so that the motion parameter control system of the subject vehicle can reduce unnecessary braking adjustment, thereby reducing the risk of a rear-end collision caused by unnecessary braking adjustment of the subject vehicle.
Similarly, according to the above example, the present invention can also recognize and monitor the continuous process of a non-own-lane target vehicle from turning on its turn signal to completing the lane change into the own lane. Motion parameters such as the duration, the distance to the host vehicle, the relative speed and the lateral displacement of the front target vehicle during the continuous lane change are likewise easily monitored, so that they can be used to control the motion parameters of the host vehicle to make braking adjustments earlier and improve driving safety, and to control the lamps to warn the rear target vehicle earlier and reduce the rear-end collision risk.
Fig. 9 is a scene diagram of a vehicle running automatic control method according to another embodiment of the present invention.
For example, as shown in fig. 9, the subject vehicle travels in constant-speed mode on a straight section of the own lane, still 55 meters (or as little as 25 meters) from the entrance of a curve that turns right with a curvature radius of 250 meters. A front non-own-lane target vehicle in the lane to the right of the own lane, 25 meters past the curve entrance, has switched on its left turn signal to change into the own lane, and its left target boundary already coincides with the right lane line of the own lane.
According to the above example, the present invention can accurately recognize that the front target vehicle is changing into the own lane. Since that target vehicle is about 80 meters (or 50 meters) from the host vehicle, the invention can control the power system of the host vehicle to perform the action of reducing power output or even braking accurately, and switch on the brake lights in time, so as to maintain safe distances between the host vehicle and the front and rear target vehicles, improving the driving safety of the host vehicle and reducing the risk of rear-end collision.
However, the lateral displacement of a target vehicle identified by a conventional vehicle adaptive cruise system relying only on lidar is referenced to the subject vehicle. Lacking prior knowledge of the curve, the lidar still measures a lateral distance of about 1.25 meters from the front target vehicle to the straight-road extension line of the own lane's right lane line; that is, the front target vehicle must move about 1.25 meters further to the left laterally before the lidar confirms that it has begun entering the own lane.
If the lateral displacement speed of the front target vehicle is 1 meter per second, the conventional lidar-only adaptive cruise system will perform the action of reducing power output or even braking only about 1.25 seconds after the front target vehicle has actually begun entering the own lane, which undoubtedly reduces the safe distances between the host vehicle and the front and rear target vehicles, lowering the driving safety of the host vehicle and increasing the risk of rear-end collision.
Therefore, according to the above example, the working condition in which a non-own-lane target vehicle decelerates and changes lane into the own lane can be identified from the identified motion parameters of the target vehicle and its correspondingly identified turn signal, so that the motion parameter control system and the safety system of the host vehicle can make adjustments earlier, improving the driving safety of the host vehicle and its passengers, and the lamp system of the host vehicle can react earlier to alert the rear target vehicle, giving it more braking or adjustment time and reducing the rear-end collision risk more effectively.
In conclusion, the vehicle driving automatic control method of the embodiment of the present invention improves the driving safety of the host vehicle and its passengers, enables the lamp system of the host vehicle to react earlier to alert the rear target vehicle, gives the rear target vehicle more braking or adjustment time, and thus reduces the rear-end collision risk more effectively.
In order to achieve the purpose, the invention also provides an automatic vehicle running control device.
Fig. 10 is a schematic configuration diagram of an automatic control device for vehicle running according to a first embodiment of the present invention.
As shown in fig. 10, the vehicle travel automatic control device may include: a first obtaining module 1010, a second obtaining module 1020, a third obtaining module 1030, a first generating module 1040, a first identifying module 1050, a second generating module 1060, a fourth obtaining module 1070, a second identifying module 1080, a fifth obtaining module 1090, a first adjusting module 1100, a second adjusting module 1110 and a control module 1120.
The first acquiring module 1010 is configured to acquire a first image and a second image of an environment in front of the subject vehicle from the front-facing 3D camera, where the first image is a color or brightness image and the second image is a depth image.
In one embodiment of the invention, the first acquisition module 1010 acquires a first image of the environment in front of the subject vehicle from an image sensor of the front 3D camera and a second image of the environment in front of the subject vehicle from a time-of-flight sensor of the front 3D camera.
And a second obtaining module 1020, configured to obtain a front highway lane line according to the first image.
In an embodiment of the present invention, when the first image is a luminance image, the second obtaining module 1020 identifies the front road lane line according to the luminance difference between the front road lane line and the road surface in the first image; or,
when the first image is a color image, converts the color image into a luminance image and identifies the front road lane line according to the luminance difference between the front road lane line and the road surface in the first image.
And a third obtaining module 1030, configured to obtain a third image and a rear road lane line according to the imaging parameter of the first image and the front road lane line.
The first generating module 1040 is configured to map the front road lane line into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle identification ranges.
The first recognition module 1050 is configured to recognize the preceding target vehicle according to all the preceding vehicle recognition ranges.
A second generating module 1060 for generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane line.
A fourth obtaining module 1070, configured to obtain point cloud data from a laser radar, and project the point cloud data into the third image according to the installation parameters of the laser radar to obtain rear target point cloud data.
And the second identification module 1080 is used for identifying rear target vehicles according to all the rear vehicle identification ranges and the rear target point cloud data.
And a fifth obtaining module 1090 configured to obtain the tunnel information and the speed limit information.
In an embodiment of the present invention, the fifth obtaining module 1090 identifies the tunnel entrance information and the speed limit information according to the first image and the second image; or,
identifies the tunnel exit information and the speed limit information according to the first image and the second image.
And the first adjusting module 1100 is used for changing the settings of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel information and the speed limit information.
And a second adjusting module 1110, configured to adjust a focal length of the front 3D camera according to the tunnel information.
And a control module 1120, configured to perform intra-tunnel cruise control on the motion parameters of the host vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle.
Fig. 11 is a schematic configuration diagram of an automatic control device for vehicle running according to a second embodiment of the present invention. As shown in fig. 11, in addition to the modules shown in fig. 10, the vehicle driving automatic control device further includes: a third generation module 1130 and a third recognition module 1140.
A third generating module 1130, configured to generate a front target vehicle range according to the front target vehicle, and map the front target vehicle range into the first image according to the interleaved mapping relationship between the first image and the second image to generate a front vehicle-light recognition area.
In one embodiment of the present invention, the third generation module 1130 detects the target boundary of the front target vehicle for recognition by using a boundary detection method in an image processing algorithm.
In one embodiment of the invention, the third generating module 1130 generates the front target vehicle range from a closed area enclosed by target boundaries of the front target vehicle; or,
generating a front target vehicle range according to a closed area enclosed by the extended target boundary of the front target vehicle; or,
and generating a front target vehicle range according to a closed area formed by connecting a plurality of pixel positions of the front target vehicle.
A third identifying module 1140, configured to identify the turn signal of the corresponding front target vehicle according to the front vehicle-light recognition area.
In one embodiment of the present invention, the third identifying module 1140 identifies the turn signal of the corresponding front target vehicle according to the color, flashing frequency or flashing sequence of the tail lights in the front vehicle-light recognition area.
The control module 1120 identifies the working condition in which a front non-own-lane target vehicle decelerates and changes lane into the own lane according to the motion parameters and turn signal of the front target vehicle, so that the motion parameter control system of the host vehicle performs braking adjustment in the tunnel in advance and the lamp system of the host vehicle alerts the rear target vehicle.
In one embodiment of the present invention, the control module 1120 identifies the working condition in which the front own-lane target vehicle decelerates and changes lane into a front non-own lane according to the motion parameters and turn signal of the front target vehicle, so that the motion parameter control system of the host vehicle does not perform braking adjustment in the tunnel.
It should be noted that the foregoing explanation of the vehicle driving automatic control method is also applicable to the vehicle driving automatic control device according to the embodiment of the present invention, and the implementation principle thereof is similar and will not be described herein again.
To sum up, the automatic vehicle driving control apparatus according to the embodiment of the present invention obtains a first image and a second image of the environment in front of the host vehicle from the front 3D camera, obtains the front road lane line, obtains a third image and the rear road lane line according to the imaging parameters of the first image and the front road lane line, maps the front road lane line into the second image according to the interleaved mapping relationship between the first image and the second image to generate a plurality of front vehicle recognition ranges, identifies the front target vehicle according to the front vehicle recognition ranges, generates a plurality of rear vehicle recognition ranges according to the third image and the rear road lane line, obtains rear target point cloud data from the lidar, and identifies the rear target vehicle according to the rear target point cloud data; finally, it obtains the tunnel information and speed limit information, changes the settings of the cruise speed upper limit and the cruise safe distance of the host vehicle according to the tunnel information and the speed limit information, adjusts the focal length of the front 3D camera according to the tunnel information, and performs in-tunnel cruise control on the motion parameters of the host vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle. The host vehicle can thus control its braking according to the environmental information of the highway lane lines, reducing unnecessary braking adjustment, effectively lowering the risk of rear-end collision, and improving the driving safety of the host vehicle in the tunnel.
Fig. 12 is a schematic configuration diagram of an automatic control device for vehicle running according to a third embodiment of the present invention. As shown in Fig. 12, on the basis of the embodiment shown in Fig. 11, the second obtaining module 1020 includes a creating unit 1021, a first detecting unit 1022 and a second detecting unit 1023.
The creating unit 1021 is configured to create a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold.
The first detecting unit 1022 is configured to detect, according to a preset detection algorithm, all edge pixel positions of a straight-road solid-line lane line or of a curved solid-line lane line in the binary image.
The second detecting unit 1023 is configured to detect, according to a preset detection algorithm, all edge pixel positions of a straight-road dashed-line lane line or of a curved dashed-line lane line in the binary image.
It should be noted that the foregoing explanation of the automatic vehicle driving control method is also applicable to the automatic vehicle driving control device of this embodiment; the implementation principle is similar and is not repeated here.
In summary, the automatic vehicle driving control apparatus according to the embodiment of the present invention creates a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold, then detects, according to a preset detection algorithm, all edge pixel positions of the straight-road or curved solid-line lane line in the binary image, as well as all edge pixel positions of the straight-road or curved dashed-line lane line. Therefore, the dashed and solid lane lines of both straight roads and curves among the highway lane lines can be accurately identified.
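For illustration only, the following sketch shows one plausible realization of this unit chain with OpenCV: binarize the brightness image against a preset threshold, extract all edge pixel positions, and fit straight segments. The threshold value and Hough parameters are assumptions; a curved lane line would be fitted with a polynomial over the same edge pixels rather than with straight segments.

```python
# Sketch of the creating/detecting unit chain with OpenCV. The brightness
# threshold and Hough parameters below are illustrative assumptions.
import cv2
import numpy as np

def detect_lane_edge_pixels(first_image_gray: np.ndarray,
                            brightness_threshold: int = 180):
    # Binary image: bright lane markings against the darker road surface.
    _, binary = cv2.threshold(first_image_gray, brightness_threshold,
                              255, cv2.THRESH_BINARY)
    # All edge pixel positions of the lane-line contours.
    edges = cv2.Canny(binary, 50, 150)
    edge_pixels = np.column_stack(np.nonzero(edges))  # (row, col) pairs
    # Straight solid/dashed segments; a large maxLineGap lets dashed
    # segments merge. Curves would be fitted with a polynomial instead.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=20,
                               maxLineGap=120)
    return edge_pixels, segments
```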
Fig. 13 is a schematic configuration diagram of an automatic control device for vehicle running according to a fourth embodiment of the present invention. As shown in Fig. 13, on the basis of the embodiment shown in Fig. 11, the third obtaining module 1030 includes a projection unit 1031 and an obtaining unit 1032.
The projection unit 1031 is configured to project all pixel positions of the front road lane line into the physical world coordinate system of the host vehicle according to the imaging parameters of the first image, so as to establish a third image.
The obtaining unit 1032 is configured to accumulate the position of the front road lane line in the third image over continuous time, acquire the displacement relative to the origin of the physical world coordinate system of the host vehicle, and obtain the position of the rear road lane line according to the displacement.
It should be noted that the foregoing explanation of the automatic vehicle driving control method is also applicable to the automatic vehicle driving control device of this embodiment; the implementation principle is similar and is not repeated here.
In summary, the automatic vehicle driving control apparatus according to the embodiment of the present invention projects all pixel positions of the front road lane line into the physical world coordinate system of the host vehicle according to the imaging parameters of the first image to create a third image, accumulates the position of the front road lane line in the third image over continuous time, and obtains the position of the rear road lane line from the displacement relative to the origin of the physical world coordinate system of the host vehicle. The position of the rear lane line is thus known accurately, the host vehicle can be controlled with respect to it, and a foundation is laid for driving safety.
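The following sketch illustrates the underlying geometry under simplifying assumptions (a flat road, known camera intrinsics K and mounting rotation R, and an ego displacement supplied per frame); none of the symbols are taken from the disclosure itself. Lane pixels are projected onto the ground plane of the host vehicle's coordinate system, and accumulated points that the displacement has carried behind the origin trace the rear lane line.

```python
# Ground-plane projection sketch under a flat-road assumption. K_inv is the
# inverse camera intrinsic matrix, R the camera-to-vehicle rotation, and
# camera_height the mounting height above the road; all are assumed inputs.
import numpy as np

def pixel_to_ground(u, v, K_inv, R, camera_height):
    # Back-project the pixel to a viewing ray in vehicle coordinates and
    # intersect it with the road plane z = 0.
    ray = R @ (K_inv @ np.array([u, v, 1.0]))
    s = -camera_height / ray[2]
    ground = s * ray + np.array([0.0, 0.0, camera_height])
    return ground[:2]  # (x forward, y lateral) in the vehicle frame

def update_rear_lane(history, new_front_points, displacement_xy):
    # Shift every accumulated lane point by the host displacement since the
    # previous frame; points with negative x now lie behind the vehicle and
    # describe the rear lane line.
    history = [p - displacement_xy for p in history] + list(new_front_points)
    rear_lane = [p for p in history if p[0] < 0.0]
    return history, rear_lane
```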
Fig. 14 is a schematic configuration diagram of an automatic control device for vehicle running according to a fifth embodiment of the present invention. As shown in Fig. 14, on the basis of the embodiment shown in Fig. 11, the first recognition module 1050 includes a first marking unit 1051, a first recognition unit 1052, a second recognition unit 1053 and a third recognition unit 1054.
The first marking unit 1051 is used for marking the labels of the front own lane and the front non-own lane for all the front vehicle identification ranges.
A first recognition unit 1052 for recognizing the front own-lane target vehicle according to the vehicle recognition range marking the front own-lane tag.
A second recognition unit 1053 for recognizing the front non-own-lane target vehicle from the vehicle recognition range marking the front non-own-lane tag.
A third identifying unit 1054 is configured to identify the front lane-changing target vehicle according to the pairwise-combined front vehicle identification ranges, as sketched below.
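A pairwise combination can be read as an overlap test: a vehicle whose extent intersects two adjacent identification ranges at once is a lane-change candidate. The sketch below illustrates this with one-dimensional lateral extents, a deliberate simplification of the full image-plane ranges and not a construction specified by this disclosure.

```python
# One-dimensional sketch of the pairwise-combined range test. The lateral
# (x_min, x_max) extents are an assumed simplification of the full ranges.
def overlaps(extent_a, extent_b) -> bool:
    return extent_a[0] < extent_b[1] and extent_b[0] < extent_a[1]

def is_lane_changing(vehicle_extent, lane_range_a, lane_range_b) -> bool:
    # A front target straddling two adjacent identification ranges is
    # treated as a lane-change candidate.
    return (overlaps(vehicle_extent, lane_range_a)
            and overlaps(vehicle_extent, lane_range_b))

# Example: a vehicle spanning x = 1.2..2.4 m across two 3.5 m-wide lanes.
print(is_lane_changing((1.2, 2.4), (-1.75, 1.75), (1.75, 5.25)))  # True
```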
Fig. 15 is a schematic configuration diagram of an automatic control device for vehicle running according to a sixth embodiment of the present invention. As shown in Fig. 15, on the basis of the embodiment shown in Fig. 11, the second recognition module 1080 includes a second marking unit 1081, a fourth recognition unit 1082, a fifth recognition unit 1083 and a sixth recognition unit 1084.
The second marking unit 1081 is configured to mark the labels of the rear own lane and the rear non-own lane for all the rear vehicle identification ranges.
A fourth recognition unit 1082 is configured to identify the rear own-lane target vehicle according to the vehicle identification range marked with the rear own-lane label and the rear target point cloud data.
A fifth recognition unit 1083 is configured to identify the rear non-own-lane target vehicle according to the vehicle identification range marked with the rear non-own-lane label and the rear target point cloud data.
A sixth recognition unit 1084 is configured to identify the rear lane-changing target vehicle according to the pairwise-combined rear vehicle identification ranges and the rear target point cloud data.
It should be noted that the foregoing explanation of the automatic vehicle driving control method is also applicable to the automatic vehicle driving control device of this embodiment; the implementation principle is similar and is not repeated here.
In summary, the automatic vehicle driving control device according to the embodiment of the present invention accurately identifies the front target vehicle and the rear target vehicle, so that the driving of the host vehicle can be controlled according to the motion parameters and turn signal of the front target vehicle and the motion parameters of the rear target vehicle, providing a guarantee of driving safety.
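One plausible way to combine a labelled identification range with the rear target point cloud data is a point-in-polygon test over the ground plane. The sketch below uses matplotlib's Path purely as a convenient geometric helper; this pairing, and the label names, are implementation assumptions rather than something specified by the disclosure.

```python
# Associate rear target point cloud data (already projected into the third
# image's ground plane) with labelled rear vehicle identification ranges.
# matplotlib's Path is used only as a point-in-polygon helper.
import numpy as np
from matplotlib.path import Path

def assign_points_to_ranges(points_xy: np.ndarray, labelled_ranges: dict):
    # labelled_ranges: label -> (N, 2) polygon vertices, e.g. a hypothetical
    # "rear_own_lane" -> corners of that identification range.
    hits = {}
    for label, polygon in labelled_ranges.items():
        mask = Path(polygon).contains_points(points_xy)
        hits[label] = points_xy[mask]  # points supporting a rear target here
    return hits
```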
Fig. 16 is a schematic configuration diagram of an automatic control device for vehicle running according to a seventh embodiment of the present invention. As shown in Fig. 16, on the basis of the embodiment shown in Fig. 11, the automatic vehicle running control apparatus further includes a third adjustment module 1150.
The third adjusting module 1150 is configured to execute deceleration control.
In an embodiment of the present invention, the first adjusting module 1100 is further configured to change the settings of the cruising speed upper limit and the cruising safety distance of the host vehicle according to the tunnel exit information, the speed limit information, and the user setting information.
The second adjusting module 1110 is further configured to decrease the focal length of the front-facing 3D camera when the tunnel information is tunnel entrance information, or increase the focal length of the front-facing 3D camera when the tunnel information is tunnel exit information.
It should be noted that the foregoing explanation of the automatic vehicle driving control method is also applicable to the automatic vehicle driving control device of this embodiment; the implementation principle is similar and is not repeated here.
To sum up, the automatic vehicle driving control apparatus according to the embodiment of the present invention acquires tunnel entrance information and speed limit information from a navigation system, changes the settings of the cruising speed upper limit and the cruising safety distance accordingly, executes any necessary deceleration control, and reduces the focal length of the front-facing 3D camera. It then performs in-tunnel cruise control on the motion parameters of the host vehicle according to the motion parameters of the front target vehicle and the rear target vehicle. Tunnel exit information and speed limit information are recognized from the first image and the second image, and the settings of the cruising speed upper limit and the cruising safety distance are changed according to the recognized tunnel exit information, the speed limit information and the user setting information. By recognizing, from the motion parameters and the corresponding rear turn light of a target vehicle, the condition in which the front own-lane target vehicle decelerates and changes lane into a non-own lane, the motion parameter control system of the host vehicle can avoid unnecessary braking adjustment, thereby reducing the risk of rear-end collision caused by such adjustment.
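The tunnel entrance/exit handling can be pictured as a small state update over the cruise settings. The sketch below is a minimal illustration; the concrete scale factors and the tie-breaking between the posted limit and the user setting are assumptions, not values from this disclosure.

```python
# Minimal sketch of tunnel cruise-parameter adjustment. All numbers
# (scale factors, limit arbitration) are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CruiseSettings:
    speed_cap_kph: float      # cruising speed upper limit
    safe_distance_m: float    # cruising safety distance
    focal_length_mm: float    # front-facing 3D camera focal length

def on_tunnel_event(s: CruiseSettings, event: str, posted_limit_kph: float,
                    user_limit_kph: Optional[float] = None) -> CruiseSettings:
    if event == "entrance":
        # Entering the tunnel: cap the cruise speed at the posted limit,
        # widen the safety margin, and shorten the camera focal length.
        return CruiseSettings(min(s.speed_cap_kph, posted_limit_kph),
                              s.safe_distance_m * 1.5,
                              s.focal_length_mm * 0.7)
    if event == "exit":
        # Exiting: restore from the posted limit and the user setting,
        # relax the safety margin, and lengthen the focal length again.
        cap = posted_limit_kph if user_limit_kph is None else min(
            posted_limit_kph, user_limit_kph)
        return CruiseSettings(cap, s.safe_distance_m / 1.5,
                              s.focal_length_mm / 0.7)
    return s
```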
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (32)
1. An automatic control method for vehicle running, characterized by comprising:
acquiring a first image and a second image of an environment in front of a subject vehicle from a front-facing 3D camera, wherein the first image is a color or brightness image, and the second image is a depth image;
acquiring a front road lane line according to the first image;
acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line;
mapping the front road lane lines into the second image according to the interweaving mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges;
identifying a front target vehicle according to all the front vehicle identification ranges;
generating a plurality of rear vehicle identification ranges according to the third image and the rear road lane lines;
acquiring point cloud data from a laser radar, and projecting the point cloud data into the third image according to the installation parameters of the laser radar to acquire rear target point cloud data;
identifying rear target vehicles according to all rear vehicle identification ranges and the rear target point cloud data;
acquiring tunnel information and speed limit information;
changing the settings of the cruising speed upper limit and the cruising safety distance of the subject vehicle according to the tunnel information and the speed limit information;
adjusting the focal length of the front 3D camera according to the tunnel information, and performing in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle, wherein adjusting the focal length of the front 3D camera according to the tunnel information comprises:
when the tunnel information is tunnel entrance information, reducing the focal length of the front 3D camera;
and when the tunnel information is tunnel exit information, increasing the focal length of the front 3D camera.
2. The method of claim 1, wherein the acquiring the first image and the second image of the environment in front of the subject vehicle from the front-facing 3D camera comprises:
acquiring a first image of an environment in front of a subject vehicle from an image sensor of a front-facing 3D camera;
a second image of the environment in front of the subject vehicle is acquired from a time-of-flight sensor of the front-facing 3D camera.
3. The method of claim 1, wherein said obtaining a front highway lane line from said first image comprises:
when the first image is a brightness image, identifying a front road lane line according to the brightness difference between the front road lane line and the road surface in the first image; or,
and when the first image is a color image, converting the color image into a brightness image, and identifying the front highway lane line according to the brightness difference between the front highway lane line and the road surface in the first image.
4. The method of claim 3, wherein identifying the front highway lane line based on a difference in luminance of the front highway lane line and a road surface in the first image comprises:
creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold;
detecting, according to a preset detection algorithm, all edge pixel positions of a straight-road solid-line lane line or all edge pixel positions of a curved solid-line lane line in the binary image;
and detecting, according to a preset detection algorithm, all edge pixel positions of a straight-road dashed-line lane line or all edge pixel positions of a curved dashed-line lane line in the binary image.
5. The method of claim 1, wherein said obtaining a third image and a rear highway lane line based on imaging parameters of the first image and the front highway lane line comprises:
projecting all pixel positions of the front road lane line to the physical world coordinate system of the main vehicle according to the imaging parameters of the first image to establish a third image;
and after accumulating the position of the front road lane line in the third image over continuous time, acquiring the displacement relative to the origin of the physical world coordinate system of the subject vehicle, and acquiring the position of the rear road lane line according to the displacement.
6. The method of claim 1, wherein identifying a preceding target vehicle based on all preceding vehicle identification ranges comprises:
marking the labels of the front own lane and the front non-own lane for all the front vehicle identification ranges;
identifying a front own-lane target vehicle according to the vehicle identification range of the mark front own-lane label;
identifying the front non-own-lane target vehicle according to the vehicle identification range marking the front non-own-lane label;
and identifying the front lane-changing target vehicle according to the pairwise-combined front vehicle identification ranges.
7. The method of claim 1, wherein identifying rear target vehicles from all rear vehicle identification ranges and the rear target point cloud data comprises:
marking labels of a rear own lane and a rear non-own lane for all rear vehicle identification ranges;
identifying a rear own-lane target vehicle according to the vehicle identification range marked with the rear own-lane label and the rear target point cloud data;
identifying a rear non-own-lane target vehicle according to the vehicle identification range marked with the rear non-own-lane label and the rear target point cloud data;
and identifying a rear lane-changing target vehicle according to the pairwise-combined rear vehicle identification ranges and the rear target point cloud data.
8. The method of claim 1, further comprising:
generating a front target vehicle range according to the front target vehicle, and mapping the front target vehicle range to the first image according to the interleaving mapping relation between the first image and the second image to generate a front vehicle lamp identification area;
identifying a steering lamp of a corresponding front target vehicle according to the front vehicle lamp identification area;
the performing of the in-tunnel cruise control on the motion parameter of the subject vehicle according to the motion parameter of the front target vehicle and the motion parameter of the rear target vehicle includes:
and adjusting the focal length of the front 3D camera according to the tunnel information, and performing in-tunnel cruise control on the motion parameters of the main vehicle according to the motion parameters and the steering lamp of the front target vehicle and the motion parameters of the rear target vehicle.
9. The method of claim 8, wherein the generating a forward target vehicle range from the forward target vehicle comprises:
and detecting, for identification, the target boundary of the front target vehicle by adopting a boundary detection method in an image processing algorithm.
10. The method of claim 8, wherein the generating a forward target vehicle range from the forward target vehicle comprises:
generating a front target vehicle range according to a closed area defined by the target boundary of the front target vehicle; or,
generating a front target vehicle range according to a closed area enclosed by the extended target boundary of the front target vehicle; or,
and generating a front target vehicle range according to a closed area formed by connecting a plurality of pixel positions of the front target vehicle.
11. The method of claim 8, wherein identifying the turn signals of the respective forward target vehicles based on the forward vehicle light identification regions comprises:
and identifying the steering lamp of the corresponding front target vehicle according to the color, the flashing frequency or the flashing sequence of the tail lamp in the front vehicle lamp identification area.
12. The method of claim 1, wherein the obtaining of the tunnel information and the speed limit information comprises:
acquiring tunnel entrance information and speed limit information from a navigation system; or,
and acquiring tunnel exit information and speed limit information from a navigation system.
13. The method of claim 1, wherein the obtaining of the tunnel information and the speed limit information comprises:
identifying tunnel entrance information and speed limit information according to the first image and the second image; or,
and identifying tunnel exit information and speed limit information according to the first image and the second image.
14. The method according to claim 1, wherein when the tunnel information is tunnel entrance information, after the changing of the settings of the cruising speed upper limit and the cruising safety distance of the subject vehicle according to the tunnel information and the speed limit information, the method further comprises:
deceleration control is executed.
15. The method according to claim 1, wherein when the tunnel information is tunnel exit information, the changing the settings of the cruising speed upper limit and the cruising safety distance of the subject vehicle according to the tunnel information and the speed limit information includes:
and changing the settings of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel exit information, the speed limit information and the user setting information.
16. The method according to claim 8, wherein the performing of the in-tunnel cruise control on the motion parameters of the subject vehicle according to the motion parameters and the steering lamp of the front target vehicle and the motion parameters of the rear target vehicle comprises:
identifying, according to the motion parameters and the steering lamp of the front target vehicle, the working condition that a front non-own-lane target vehicle decelerates and changes lane into the own lane, so that the motion parameter control system of the subject vehicle performs braking adjustment in the tunnel in advance and the lamp system of the subject vehicle alerts the rear target vehicle;
or,
and identifying, according to the motion parameters and the steering lamp of the front target vehicle, the working condition that the front own-lane target vehicle decelerates and changes lane into a front non-own lane, so that the motion parameter control system of the subject vehicle does not perform braking adjustment in the tunnel.
17. An automatic vehicle travel control device, characterized by comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a first image and a second image of the environment in front of a main vehicle from a front 3D camera, the first image is a color or brightness image, and the second image is a depth image;
the second acquisition module is used for acquiring a front highway lane line according to the first image;
the third acquisition module is used for acquiring a third image and a rear road lane line according to the imaging parameters of the first image and the front road lane line;
the first generation module is used for mapping the front road lane lines into the second image according to the interweaving mapping relation between the first image and the second image to generate a plurality of front vehicle identification ranges;
the first identification module is used for identifying the front target vehicles according to all the front vehicle identification ranges;
a second generating module, configured to generate a plurality of rear vehicle identification ranges according to the third image and the rear road lane line;
the fourth acquisition module is used for acquiring point cloud data from a laser radar and projecting the point cloud data into the third image according to the installation parameters of the laser radar so as to acquire rear target point cloud data;
the second identification module is used for identifying rear target vehicles according to all the rear vehicle identification ranges and the rear target point cloud data;
the fifth acquisition module is used for acquiring tunnel information and speed limit information;
the first adjusting module is used for changing the settings of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel information and the speed limit information;
a second adjusting module, configured to adjust a focal length of the front-facing 3D camera according to the tunnel information, where the second adjusting module is specifically configured to: when the tunnel information is tunnel entrance information, reducing the focal length of the front 3D camera;
when the tunnel information is tunnel exit information, increasing the focal length of the front 3D camera;
and the control module is used for carrying out in-tunnel cruise control on the motion parameters of the main vehicle according to the motion parameters of the front target vehicle and the motion parameters of the rear target vehicle.
18. The apparatus of claim 17, wherein the first obtaining module is to:
acquiring a first image of an environment in front of a subject vehicle from an image sensor of a front-facing 3D camera;
a second image of the environment in front of the subject vehicle is acquired from a time-of-flight sensor of the front-facing 3D camera.
19. The apparatus of claim 17, wherein the second obtaining module is to:
when the first image is a brightness image, identifying a front road lane line according to the brightness difference between the front road lane line and the road surface in the first image; or,
and when the first image is a color image, converting the color image into a brightness image, and identifying the front road lane line according to the brightness difference between the front road lane line and the road surface in the first image.
20. The apparatus of claim 17, wherein the second obtaining module comprises:
the creating unit is used for creating a binary image of the front road lane line according to the brightness information of the first image and a preset brightness threshold value;
the first detection unit is used for detecting, according to a preset detection algorithm, all edge pixel positions of a straight-road solid-line lane line or all edge pixel positions of a curved solid-line lane line in the binary image;
and the second detection unit is used for detecting, according to a preset detection algorithm, all edge pixel positions of a straight-road dashed-line lane line or all edge pixel positions of a curved dashed-line lane line in the binary image.
21. The apparatus of claim 17, wherein the third obtaining module comprises:
the projection unit is used for projecting all pixel positions of the front road lane line to the physical world coordinate system of the main vehicle according to the imaging parameters of the first image to establish a third image;
and the acquisition unit is used for accumulating the position of the front road lane line in the third image over continuous time, acquiring the displacement relative to the origin of the physical world coordinate system of the main vehicle, and acquiring the position of the rear road lane line according to the displacement.
22. The apparatus of claim 17, wherein the first identification module comprises:
a first marking unit for marking the labels of the front own lane and the front non-own lane for all the front vehicle identification ranges;
a first recognition unit configured to recognize a preceding own-lane target vehicle on the basis of a vehicle recognition range in which a preceding own-lane tag is marked;
a second recognition unit for recognizing a non-own-lane target vehicle ahead according to the vehicle recognition range marking the non-own-lane tag ahead;
and the third identification unit is used for identifying the front lane-changing target vehicle according to the pairwise-combined front vehicle identification ranges.
23. The apparatus of claim 17, wherein the second identification module comprises:
the second marking unit is used for marking the labels of the rear lane and the rear non-local lane for all the rear vehicle identification ranges;
the fourth identification unit is used for identifying the rear own-lane target vehicle according to the vehicle identification range marked with the rear own-lane label and the rear target point cloud data;
the fifth identification unit is used for identifying the rear non-own-lane target vehicle according to the vehicle identification range marked with the rear non-own-lane label and the rear target point cloud data;
and the sixth identification unit is used for identifying the rear lane-changing target vehicle according to the pairwise-combined rear vehicle identification ranges and the rear target point cloud data.
24. The apparatus of claim 17, further comprising:
the third generation module is used for generating a front target vehicle range according to the front target vehicle and mapping the front target vehicle range to the first image according to the interweaving mapping relation between the first image and the second image to generate a front vehicle lamp identification area;
the third identification module is used for identifying the steering lamp of the corresponding front target vehicle according to the front vehicle lamp identification area;
the control module is used for:
and adjusting the focal length of the front 3D camera according to the tunnel information, and performing in-tunnel cruise control on the motion parameters of the main vehicle according to the motion parameters and the steering lamp of the front target vehicle and the motion parameters of the rear target vehicle.
25. The apparatus of claim 24, wherein the third generation module is to:
and detecting, for identification, the target boundary of the front target vehicle by adopting a boundary detection method in an image processing algorithm.
26. The apparatus of claim 24, wherein the third generation module is to:
generating a front target vehicle range according to a closed area defined by the target boundary of the front target vehicle; or,
generating a front target vehicle range according to a closed area enclosed by the extended target boundary of the front target vehicle; or,
and generating a front target vehicle range according to a closed area formed by connecting a plurality of pixel positions of the front target vehicle.
27. The apparatus of claim 24, wherein the third identification module is to:
and identifying the steering lamp of the corresponding front target vehicle according to the color, the flashing frequency or the flashing sequence of the tail lamp in the front vehicle lamp identification area.
28. The apparatus of claim 17, wherein the fifth obtaining module is to:
acquiring tunnel entrance information and speed limit information from a navigation system; or,
and acquiring tunnel exit information and speed limit information from a navigation system.
29. The apparatus of claim 17, wherein the fifth obtaining module is to:
identifying tunnel entrance information and speed limit information according to the first image and the second image; or,
and identifying tunnel exit information and speed limit information according to the first image and the second image.
30. The apparatus of claim 17, further comprising:
and the third adjusting module is used for executing deceleration control.
31. The apparatus of claim 17, wherein the first adjustment module is to:
and changing the settings of the cruising speed upper limit and the cruising safety distance of the main vehicle according to the tunnel exit information, the speed limit information and the user setting information.
32. The apparatus of claim 24, wherein the control module is to:
identifying, according to the motion parameters and the steering lamp of the front target vehicle, the working condition that a front non-own-lane target vehicle decelerates and changes lane into the own lane, so that the motion parameter control system of the main vehicle performs braking adjustment in the tunnel in advance and the lamp system of the main vehicle alerts the rear target vehicle;
or,
and identifying, according to the motion parameters and the steering lamp of the front target vehicle, the working condition that the front own-lane target vehicle decelerates and changes lane into a front non-own lane, so that the motion parameter control system of the main vehicle does not perform braking adjustment in the tunnel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710120713.5A CN108528432B (en) | 2017-03-02 | 2017-03-02 | Automatic control method and device for vehicle running |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710120713.5A CN108528432B (en) | 2017-03-02 | 2017-03-02 | Automatic control method and device for vehicle running |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108528432A CN108528432A (en) | 2018-09-14 |
CN108528432B true CN108528432B (en) | 2020-11-06 |
Family
ID=63489295
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710120713.5A Active CN108528432B (en) | 2017-03-02 | 2017-03-02 | Automatic control method and device for vehicle running |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108528432B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109532831B (en) * | 2018-11-16 | 2021-03-16 | 广东工业大学 | Vehicle speed control method and related device thereof |
CN109878517B (en) * | 2019-01-04 | 2020-09-25 | 江苏大学 | System and method for automatically and forcibly decelerating and limiting speed of vehicle |
CN111645705A (en) * | 2020-06-17 | 2020-09-11 | 广州小鹏车联网科技有限公司 | Method for issuing driving route adjustment and server |
CN111885792B (en) * | 2020-08-10 | 2022-09-20 | 招商局重庆交通科研设计院有限公司 | Method for optimizing lighting design speed of highway tunnel in alpine region |
CN114475665B (en) * | 2022-03-17 | 2024-07-02 | 北京小马睿行科技有限公司 | Control method and control device for automatic driving vehicle and automatic driving system |
CN115591698B (en) * | 2022-10-31 | 2024-08-23 | 重庆长安汽车股份有限公司 | Intelligent vehicle body release control method and system based on automobile coating production double lines |
CN118468233B (en) * | 2024-07-11 | 2024-11-01 | 驭动科技(宁波)有限公司 | Multi-sensor target data fusion detection method and system for intelligent driving |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101767539A (en) * | 2008-12-30 | 2010-07-07 | 比亚迪股份有限公司 | Automobile cruise control method and cruise device |
CN103754221A (en) * | 2014-01-24 | 2014-04-30 | 清华大学 | Vehicle adaptive cruise control system |
CN104477168A (en) * | 2014-11-28 | 2015-04-01 | 长城汽车股份有限公司 | Automotive adaptive cruise system and method |
CN104952254A (en) * | 2014-03-31 | 2015-09-30 | 比亚迪股份有限公司 | Vehicle identification method and device and vehicle |
CN106463064A (en) * | 2014-06-19 | 2017-02-22 | 日立汽车系统株式会社 | Object recognition apparatus and vehicle travel controller using same |
Also Published As
Publication number | Publication date |
---|---|
CN108528432A (en) | 2018-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108528432B (en) | Automatic control method and device for vehicle running | |
CN108528433B (en) | Automatic control method and device for vehicle running | |
CN108528448B (en) | Automatic control method and device for vehicle running | |
CN108528431B (en) | Automatic control method and device for vehicle running | |
CN106909152B (en) | Automobile-used environmental perception system and car | |
WO2018059585A1 (en) | Vehicle identification method and device, and vehicle | |
CN108536134B (en) | Automatic control method and device for vehicle running | |
JP5680573B2 (en) | Vehicle driving environment recognition device | |
JP5863536B2 (en) | Outside monitoring device | |
JP5363921B2 (en) | Vehicle white line recognition device | |
CN109204311B (en) | Automobile speed control method and device | |
JP5313638B2 (en) | Vehicle headlamp device | |
EP2863374A1 (en) | Lane partition marking detection apparatus, and drive assist system | |
US9886773B2 (en) | Object detection apparatus and object detection method | |
WO2020259284A1 (en) | Obstacle detection method and device | |
CN111937002A (en) | Obstacle detection device, automatic braking device using obstacle detection device, obstacle detection method, and automatic braking method using obstacle detection method | |
JP5363920B2 (en) | Vehicle white line recognition device | |
JP5361901B2 (en) | Headlight control device | |
CN107886729B (en) | Vehicle identification method and device and vehicle | |
CN108528449B (en) | Automatic control method and device for vehicle running | |
CN108528450B (en) | Automatic control method and device for vehicle running | |
JP5643877B2 (en) | Vehicle headlamp device | |
US11749002B2 (en) | Lane line recognition apparatus | |
JP5452518B2 (en) | Vehicle white line recognition device | |
JP7486556B2 (en) | Lane marking recognition device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||