CN114397877A - Intelligent automobile automatic driving system - Google Patents

Intelligent automobile automatic driving system

Info

Publication number
CN114397877A
CN114397877A
Authority
CN
China
Prior art keywords
module
data
automatic driving
algorithm
driving system
Prior art date
Legal status
Pending
Application number
CN202110714865.4A
Other languages
Chinese (zh)
Inventor
李贵炎
赵魏维
耿禹
Current Assignee
Nanjing Communications Institute of Technology
Original Assignee
Nanjing Communications Institute of Technology
Priority date: 2021-06-25
Filing date: 2021-06-25
Publication date: 2022-04-26
Application filed by Nanjing Communications Institute of Technology
Priority to CN202110714865.4A
Publication of CN114397877A


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an intelligent automobile automatic driving system. The system consists of an inter-module communication library, a positioning navigation module, an environment perception module, a data fusion module, a high-precision map module, a path planning module and a terminal display module. The modules exchange data in real time through the communication library, coordinate conversion is performed with the position information from the positioning navigation module, and visualization results are displayed by the terminal display module. In an automatic driving scene, environment information is collected by the environment perception module, processed by a perception algorithm and transmitted to the data fusion module; after grid projection, voting confirmation and road-edge fitting, the fused information is supplied to the path planning module, which plans the driving route in real time and controls the vehicle with the assistance of a high-precision map. The invention can automatically complete tasks such as target identification, detection and avoidance in an intelligent driving scene, offers high safety and reliability, effectively reduces potential safety hazards caused by improper driving, and improves road safety.

Description

Intelligent automobile automatic driving system
Technical Field
The invention relates to the field of unmanned driving, and in particular to an intelligent automobile automatic driving system.
Background
Driven by favorable conditions such as strong national policy support and the rapid development of artificial intelligence technology, automatic driving has become one of the fastest-growing industries in China in recent years. Demand for automatic driving systems is particularly urgent in industries such as the continuously expanding logistics and distribution sector, unmanned taxi services in smart cities, refined automated sanitation operations, and high-throughput port and wharf loading and unloading. Automatic driving is the product of the deep integration of the automobile industry with a new generation of information technology represented by artificial intelligence, and is mainly used to help cities build a safe and efficient future transportation structure. The basic situation in scenes with a high demand for automatic driving systems is as follows:
1. In the logistics and distribution industry, operations such as loading, unloading, transportation, receiving and warehousing require high labor costs; manual operation easily leads to fatigue, reduced working efficiency and subjective errors. Automating these processes helps the whole industrial chain reduce cost, improve efficiency and upgrade.
2. In the taxi industry, drivers who operate vehicles for long periods are fatigued most of the time, which poses great potential safety hazards; unmanned taxi services based on an automatic driving system greatly reduce these hazards, improve road safety and lower the accident rate.
3. In the environmental sanitation industry, high cost, disordered processes and difficult deep cleaning are industry pain points; most practitioners are older and, limited by physical condition, find it hard to work for long periods in harsh environments such as high temperature and severe cold. An unmanned sweeper based on an automatic driving system plans its route by automatically identifying the road environment and can achieve refined, efficient, all-day cleaning operations.
4. At ports and wharfs, cargo handling volumes are large, loading and unloading are difficult, the demand for truck drivers is high and the technical requirements are strict. Developing an automatic driving system to automate container loading, unloading and transportation is a necessary path toward building a first-class port, and effectively solves problems such as inaccurate driving lines and large turning blind spots.
In recent years, deep learning has developed rapidly in the field of automatic driving. With its accurate recognition rate and efficient computation speed, deep learning has broken through the bottlenecks of many computer vision problems; it can classify and detect multi-modal data such as two-dimensional images and three-dimensional point clouds in real time and has been applied in many industrial fields.
Disclosure of Invention
The invention aims to provide an intelligent automobile automatic driving system which provides reliable reference for automatic driving under multiple scenes.
The technical solution for realizing the purpose of the invention is as follows: an intelligent automobile automatic driving system composed of an inter-module communication library, a positioning navigation module, an environment perception module, a data fusion module, a high-precision map module, a path planning module and a terminal display module, wherein:
the inter-module communication library is responsible for transmitting data among the modules in real time.
The positioning navigation module provides position information for coordinate conversion;
the environment perception module collects multi-modal data based on the vehicle-mounted sensor, and after the multi-modal data are processed by a perception algorithm, the environment perception module packs and sends the preliminary filtering data and the detection result to the data fusion module and the terminal display module;
the data fusion module performs raster projection on the preliminary filtering data from the environment sensing module, performs road side fitting after voting confirmation, and sends a fusion result to the path planning module and the terminal display module;
the high-precision map module assists path planning to make decisions in an environment with poor real-time perception;
the path planning module plans a driving route according to the road side fitting data of the data fusion module, avoids obstacles by using an obstacle avoidance algorithm, controls a vehicle and sends the planned route to the terminal display module;
and the terminal display module is used for visually displaying the data of each module.
Further, the specific functions of the inter-module communication library for transmitting data among the modules in real time include signal processing, service initialization, multi-process communication, shared memory management, concurrent execution and synchronization, and the like.
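For illustration only, a minimal sketch of one way such an inter-module communication library could be organized is given below: a small publish/subscribe channel built on Python's standard multiprocessing primitives. The names Channel, subscribe, publish and fusion_worker are assumptions made for the sketch; the patent does not disclose the actual API of its communication library.

```python
# Illustrative sketch only: a tiny publish/subscribe channel for real-time,
# multi-process communication between modules. The names Channel, subscribe,
# publish and fusion_worker are assumptions; the patent does not disclose
# the API of its inter-module communication library.
import multiprocessing as mp

class Channel:
    """One named topic; every subscriber gets its own queue (fan-out)."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, maxsize=16):
        queue = mp.Queue(maxsize=maxsize)
        self._subscribers.append(queue)
        return queue

    def publish(self, message):
        for queue in self._subscribers:
            if not queue.full():        # drop the message if a consumer lags
                queue.put(message)

def fusion_worker(queue):
    # A downstream module process blocks here waiting for real-time data.
    detection = queue.get(timeout=1.0)
    print("fusion received:", detection)

if __name__ == "__main__":
    perception_to_fusion = Channel()
    sub = perception_to_fusion.subscribe()
    worker = mp.Process(target=fusion_worker, args=(sub,))
    worker.start()
    perception_to_fusion.publish({"class": "car", "bbox": [1.0, 2.0, 3.0, 4.0]})
    worker.join()
```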
Furthermore, the position information of the positioning navigation module is provided by a differential GPS and an inertial navigation system, and can be further selected and matched according to different precision requirements.
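As a hedged illustration of how this position information can support coordinate conversion, the sketch below projects latitude/longitude from a differential GPS into a local east-north frame using an equirectangular approximation; both the approximation and the function name geodetic_to_local are assumptions, since the patent does not specify the conversion method.

```python
# Illustrative sketch: convert differential-GPS latitude/longitude into a local
# east-north frame for coordinate conversion between modules. The equirectangular
# approximation and the name geodetic_to_local are assumptions; the patent does
# not specify how the conversion is performed.
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def geodetic_to_local(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Return (east, north) offsets in metres from a reference point."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    ref_lat, ref_lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    east = (lon - ref_lon) * math.cos(ref_lat) * EARTH_RADIUS_M
    north = (lat - ref_lat) * EARTH_RADIUS_M
    return east, north

# Example: a point slightly north-east of the reference position.
print(geodetic_to_local(32.0601, 118.7969, 32.0600, 118.7968))
```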
Further, the vehicle-mounted sensors of the environment perception module comprise a visible light camera, an infrared camera, a laser radar, a millimeter-wave radar and the like; the specific sensor models can be further selected according to different requirements so as to meet the data acquisition needs of different scenes.
Further, the perception algorithm of the environment perception module is composed of the two-dimensional target detection algorithm YOLO V4 and the three-dimensional target detection algorithm PointPillars, whose specific functions are to give the category and bounding box of targets in two dimensions and three dimensions respectively;
YOLO V4 integrates several innovative methods from other algorithm models and has obvious advantages in both speed and precision. The network structure is constructed as follows: the backbone network adopts CSPDarknet53, the SPP idea is adopted to enlarge the receptive field, a path aggregation module is used to shorten the information path between the lower layers and the highest-level features, and the head of YOLO V3 is used. The specific measures of the improvement strategy are as follows: the Mosaic data enhancement method mixes four pictures with different semantic information to enhance the robustness of the model; self-adversarial training is adopted; cross mini-batch normalization (CmBN) is used in training; and improved SAM and PANet are adopted, in which the feature map obtained by convolution is activated directly with a Sigmoid and multiplied point by point, changing spatial-wise attention into point-wise attention, while PANet changes the original additive fusion into element-wise multiplication. Data from the vehicle-mounted sensors are preprocessed and input into the model to obtain the category and bounding box results of two-dimensional target detection;
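A minimal PyTorch sketch of the modified SAM just described is given below: the feature map produced by a convolution is activated with a Sigmoid and multiplied element by element with the input, which turns spatial-wise attention into point-wise attention. It follows the published YOLO V4 idea; the 1×1 kernel size and the tensor sizes are assumptions rather than details from the patent.

```python
# Illustrative PyTorch sketch of the modified SAM described above: the feature
# map produced by a convolution is activated with Sigmoid and multiplied with
# the input element by element (point-wise attention). The 1x1 kernel size and
# the tensor sizes below are assumptions, not details taken from the patent.
import torch
import torch.nn as nn

class ModifiedSAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Keep spatial size and channel count unchanged so the product is valid.
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        attention = torch.sigmoid(self.conv(x))  # one weight per point (C, H, W)
        return x * attention                     # element-wise (point-wise) gating

feature_map = torch.randn(1, 64, 76, 76)         # e.g. one neck feature map
print(ModifiedSAM(64)(feature_map).shape)        # torch.Size([1, 64, 76, 76])
```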
PointPillars greatly improves the detection efficiency of point cloud data. The specific method is as follows: on the basis of voxelization, the top view is divided into H × W uniform grids, and all points of each grid in the height direction form a pillar, so the number of pillars P can be calculated by the following formula:
P=H×W
PointNet is then used to extract point cloud features, yielding a three-dimensional feature representation (C, P, N), where C is the number of channels, P is the number of pillars and N is the number of points in each pillar. After max pooling over the points, the feature dimension becomes (C, P); since P = H × W, it can be reshaped to (C, H, W), so a two-dimensional backbone can be used to further extract features, which greatly reduces the computational complexity. The loss L of the detection head is composed of the classification loss Lcls, the regression loss Lloc and the direction loss Ldir, with the formula:
L = (1/Np) · (βcls·Lcls + βloc·Lloc + βdir·Ldir)
where Np denotes the number of positive sample boxes and βcls, βloc, βdir are the weighting parameters of the three losses. Specifically, the classification loss adopts the focal loss, the regression loss adopts the SmoothL1 loss and the direction loss adopts the softmax loss; the focal loss and regression loss are calculated as:
Lcls = -α_a · (1 - p_a)^γ · log(p_a)
Lloc = Σ SmoothL1(Δb), summed over b ∈ (x, y, z, w, l, h, θ)
The data of the vehicle-mounted laser radar are input into the model to obtain the detection category and bounding box results of three-dimensional targets.
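The tensor flow described above can be illustrated with the following sketch, in which per-pillar point features of shape (C, P, N) are max-pooled over the points to (C, P) and, since P = H × W, reshaped into a (C, H, W) pseudo-image for the two-dimensional backbone; the sizes and random values are placeholders, not parameters from the patent.

```python
# Illustrative numpy sketch of the pillar feature flow described above: per-pillar
# point features of shape (C, P, N) are max-pooled over the points to (C, P) and,
# because P = H x W, reshaped into a (C, H, W) pseudo-image for a 2D backbone.
# The sizes and random values are placeholders chosen only for the sketch.
import numpy as np

C, H, W, N = 16, 100, 100, 20      # channels, grid height/width, points per pillar
P = H * W                          # number of pillars, P = H x W

point_features = np.random.rand(C, P, N).astype(np.float32)   # PointNet output
pillar_features = point_features.max(axis=2)                  # max pool -> (C, P)
pseudo_image = pillar_features.reshape(C, H, W)               # -> (C, H, W)

print(pillar_features.shape, pseudo_image.shape)  # (16, 10000) (16, 100, 100)
```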
Further, the specific functions of the data fusion module are: voting confirmation, which removes random objects that appear in only a single frame and forms a real-time voting queue; and road-edge fitting, which subscribes to the voting confirmation queue and fits the road-edge line with the RANSAC algorithm based on the newly enqueued road-edge points.
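For illustration, a minimal sketch of fitting the road-edge line with RANSAC on the newly enqueued road-edge points is given below; the iteration count, inlier threshold and synthetic point layout are assumptions rather than values from the patent.

```python
# Illustrative sketch of fitting the road-edge line with RANSAC: random pairs of
# road-edge points propose a line, inliers are counted against a distance
# threshold, and the best-supported line is kept. The iteration count, threshold
# and synthetic point layout are assumptions, not values from the patent.
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.2, seed=0):
    """points: (N, 2) array of (x, y) road-edge points; returns (slope, intercept)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, (0.0, 0.0)
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = points[rng.choice(len(points), 2, replace=False)]
        if abs(x2 - x1) < 1e-6:
            continue                            # skip degenerate vertical samples
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        residuals = np.abs(points[:, 1] - (slope * points[:, 0] + intercept))
        inliers = int((residuals < threshold).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (slope, intercept)
    return best_model

# Newly enqueued road-edge candidates: a nearly straight edge plus spurious returns.
xs = np.linspace(0.0, 30.0, 60)
ys = 0.02 * xs + 3.5
ys[::10] += 1.5                                 # outliers that RANSAC should reject
slope, intercept = ransac_line(np.stack([xs, ys], axis=1))
print(f"road edge: y = {slope:.3f} x + {intercept:.3f}")
```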
Further, the specific function of the high-precision map module is to construct a real-time map with the position information provided by the differential GPS and the inertial navigation system, and to assist the path planning module in making decisions in highly complex environments.
Further, the specific function of the obstacle avoidance algorithm in the path planning module is to let the vehicle bypass obstacles as quickly as possible on the premise of avoiding them safely; according to different application scenes, the APF algorithm or the VFH algorithm can be selected.
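As an illustration of the APF option mentioned above, the sketch below takes artificial-potential-field steps in which the goal attracts the vehicle and nearby obstacles repel it; all gains, radii and positions are assumed values for the sketch.

```python
# Illustrative sketch of the APF (artificial potential field) idea mentioned
# above: the goal exerts an attractive force, nearby obstacles exert repulsive
# forces, and the vehicle steps along the combined direction. All gains, radii
# and positions are assumed values for the sketch only.
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=2.0, rho0=5.0, step=0.2):
    """Return the next 2-D position after one potential-field step."""
    force = k_att * (goal - pos)                       # attractive term
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < rho0:                          # only nearby obstacles repel
            force += k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos = np.array([0.0, 0.0])
goal = np.array([20.0, 0.0])
obstacles = [np.array([10.0, 0.5])]                    # one obstacle near the path
for _ in range(150):
    pos = apf_step(pos, goal, obstacles)
print(pos)   # ends near the goal after skirting below the obstacle at (10, 0.5)
```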
Furthermore, the data displayed visually by the terminal display module consist of the raw data acquired by the vehicle-mounted sensors, the detection results of the environment perception module, the grid projection data and road-edge fitting results of the data fusion module, and the planned route of the path planning module.
The system is a highly intelligent automatic driving system: it automatically identifies and detects the positions of vehicles, pedestrians and other targets within the perception range, plans a driving route with the assistance of a high-precision map based on the fused data information, reasonably avoids obstacles and achieves safe passage.
The invention has the following beneficial effects: the position of a target in the driving process of the vehicle can be timely and accurately identified; the potential safety hazard caused by improper driving is effectively reduced, the accident rate is reduced, and the road safety is improved; the labor cost is greatly reduced, the all-weather efficient work is realized, and reliable basis and reference are provided for automatic driving under multiple scenes.
Drawings
Fig. 1 is a schematic diagram of an implementation of the intelligent automobile automatic driving system.
Fig. 2 is a schematic network structure diagram of a two-dimensional target detection algorithm YOLO V4 in an environment sensing module in the intelligent automobile automatic driving system of fig. 1.
Fig. 3 is a schematic diagram of the network structure of the three-dimensional target detection algorithm PointPillars in the environment perception module of the intelligent automobile automatic driving system of fig. 1.
FIG. 4 is a schematic diagram of the improved SAM structure employed in YOLO V4 in FIG. 2.
Fig. 5 is a schematic diagram of the modified PANet structure employed in YOLO V4 in fig. 2.
Detailed Description
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It should be noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The invention discloses an intelligent automobile automatic driving system based on the organic combination of a positioning navigation module, an environment perception module, a data fusion module, a high-precision map module, a path planning module, a terminal display module, an inter-module communication library and other components. The system automatically identifies and detects the positions of targets such as vehicles and pedestrians within the perception range, plans a driving route with the assistance of a high-precision map based on the fused data information, reasonably avoids obstacles and achieves safe passage, thereby providing a reliable basis and reference for automatic driving in multiple scenes.
Referring to fig. 1, further, the inter-module communication library is responsible for transmitting data among the modules in real time;
Further, the differential GPS and inertial navigation system in the positioning navigation module provide position information for coordinate conversion.
Furthermore, vehicle-mounted sensors such as a visible light camera, an infrared camera, a laser radar and a millimeter wave radar in the environment perception module collect data in the vehicle running process and send the data to the internal perception algorithm part, and the perception algorithm packages and sends two-dimensional and three-dimensional detection results and preliminary filtering data to the data fusion module and the terminal display module.
Further, the data fusion module performs raster projection on the preliminary filtering data from the environment sensing module, performs road side fitting after voting confirmation, and sends a fusion result to the path planning module and the terminal display module;
further, the high-precision map module constructs a real-time map under the environment with poor real-time perception and sends the real-time map to the path planning module;
further, the path planning module plans a driving route according to the roadside fitting data of the data fusion module with the aid of a high-precision map, avoids obstacles to control the vehicle and sends the planned route to the terminal display module;
further, the terminal display module performs visual display on data of each module, including original data acquired by the vehicle-mounted sensor, a detection result of the environment sensing module, grid projection data of the fusion module, a road side fitting result, a planned route of the path planning module and the like.

Claims (9)

1. An intelligent automobile automatic driving system is characterized by comprising an inter-module communication library, a positioning navigation module, an environment sensing module, a data fusion module, a high-precision map module, a path planning module and a terminal display module;
the inter-module communication library is responsible for transmitting data among the modules in real time;
the positioning navigation module provides position information for coordinate conversion;
the environment perception module collects multi-modal data based on the vehicle-mounted sensor, and after the multi-modal data are processed by a perception algorithm, the environment perception module packs and sends the preliminary filtering data and the detection result to the data fusion module and the terminal display module;
the data fusion module performs raster projection on the preliminary filtering data from the environment sensing module, performs road side fitting after voting confirmation, and sends a fusion result to the path planning module and the terminal display module;
the high-precision map module assists path planning to make decisions in an environment with poor real-time perception;
the path planning module plans a driving route according to the road side fitting data of the data fusion module, avoids obstacles by using an obstacle avoidance algorithm, controls a vehicle and sends the planned route to the terminal display module;
and the terminal display module is used for visually displaying the data of each module.
2. The intelligent automobile automatic driving system according to claim 1, wherein the specific functions of the inter-module communication library for transmitting data among the modules in real time include: signal processing, service initialization, multi-process communication, shared memory management, concurrent execution and synchronization, and the like.
3. The intelligent automobile automatic driving system according to claim 1, wherein the position information of the positioning navigation module is provided by a differential GPS and an inertial navigation system, which can be further selected and matched according to different precision requirements.
4. The intelligent automobile automatic driving system according to claim 1, wherein the vehicle-mounted sensors of the environment perception module comprise a visible light camera, an infrared camera, a laser radar, a millimeter-wave radar and the like, and the specific sensor models can be further selected according to different requirements so as to meet the data acquisition needs of different scenes.
5. The intelligent automobile automatic driving system according to claim 1, wherein the perception algorithm of the environment perception module is composed of the two-dimensional target detection algorithm YOLO V4 and the three-dimensional target detection algorithm PointPillars, whose specific functions are to give the category and bounding box of targets in two dimensions and three dimensions respectively; YOLO V4 integrates several innovative methods from other algorithm models and has obvious advantages in both speed and precision, and its network structure is constructed as follows: the backbone network adopts CSPDarknet53, the SPP idea is adopted to enlarge the receptive field, a path aggregation module is used to shorten the information path between the lower layers and the highest-level features, and the head of YOLO V3 is used; the specific measures of the improvement strategy are as follows: the Mosaic data enhancement method mixes four pictures with different semantic information to enhance the robustness of the model, self-adversarial training is adopted, cross mini-batch normalization (CmBN) is used in training, and improved SAM and PANet are adopted, in which the feature map obtained by convolution is activated directly with a Sigmoid and multiplied point by point, changing spatial-wise attention into point-wise attention, while PANet changes the original additive fusion into element-wise multiplication; data from the vehicle-mounted sensors are preprocessed and input into the model to obtain the category and bounding box results of two-dimensional target detection;
the detection efficiency of the point cloud data is improved to a great extent by the PointPillars, and the specific method comprises the following steps: on the basis of voxelization, dividing the top view into H multiplied by W uniform squares, wherein all points of each square in the height direction form a pilar, namely the number P of the pilars can be calculated by the following formula:
P=H×W
PointNet is then used to extract point cloud features, yielding a three-dimensional feature representation (C, P, N), where C is the number of channels, P is the number of pillars and N is the number of points in each pillar. After max pooling over the points, the feature dimension becomes (C, P); since P = H × W, it can be reshaped to (C, H, W), so a two-dimensional backbone can be used to further extract features, which greatly reduces the computational complexity. The loss L of the detection head is composed of the classification loss Lcls, the regression loss Lloc and the direction loss Ldir, with the formula:
L = (1/Np) · (βcls·Lcls + βloc·Lloc + βdir·Ldir)
where Np denotes the number of positive sample boxes and βcls, βloc, βdir are the weighting parameters of the three losses. Specifically, the classification loss adopts the focal loss, the regression loss adopts the SmoothL1 loss and the direction loss adopts the softmax loss; the focal loss and regression loss are calculated as:
Lcls = -α_a · (1 - p_a)^γ · log(p_a)
Lloc = Σ SmoothL1(Δb), summed over b ∈ (x, y, z, w, l, h, θ)
The data of the vehicle-mounted laser radar are input into the model to obtain the detection category and bounding box results of three-dimensional targets.
6. The intelligent automobile automatic driving system according to claim 1, wherein the specific functions of the data fusion module are: voting confirmation, which removes random objects that appear in only a single frame and forms a real-time voting queue; and road-edge fitting, which subscribes to the voting confirmation queue and fits the road-edge line with the RANSAC algorithm based on the newly enqueued road-edge points.
7. The intelligent automobile automatic driving system according to claim 1, wherein the high-precision map module is specifically used for constructing a real-time map with the position information provided by the differential GPS and the inertial navigation system, and for assisting the path planning module in making decisions in highly complex environments.
8. The intelligent automobile automatic driving system according to claim 1, wherein the obstacle avoidance algorithm in the path planning module is specifically used to let the vehicle bypass obstacles as quickly as possible on the premise of avoiding them safely; according to different application scenes, the APF algorithm or the VFH algorithm can be selected.
9. The intelligent automobile automatic driving system according to claim 1, wherein the data displayed visually by the terminal display module consist of the raw data collected by the vehicle-mounted sensors, the detection results of the environment perception module, the grid projection data and road-edge fitting results of the data fusion module, and the planned route of the path planning module.
CN202110714865.4A 2021-06-25 2021-06-25 Intelligent automobile automatic driving system Pending CN114397877A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110714865.4A CN114397877A (en) 2021-06-25 2021-06-25 Intelligent automobile automatic driving system


Publications (1)

Publication Number Publication Date
CN114397877A 2022-04-26

Family

ID=81225684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110714865.4A Pending CN114397877A (en) 2021-06-25 2021-06-25 Intelligent automobile automatic driving system

Country Status (1)

Country Link
CN (1) CN114397877A (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184852A (en) * 2015-08-04 2015-12-23 百度在线网络技术(北京)有限公司 Laser-point-cloud-based urban road identification method and apparatus
CN107577996A (en) * 2017-08-16 2018-01-12 中国地质大学(武汉) A kind of recognition methods of vehicle drive path offset and system
CN110287779A (en) * 2019-05-17 2019-09-27 百度在线网络技术(北京)有限公司 Detection method, device and the equipment of lane line
CN111612059A (en) * 2020-05-19 2020-09-01 上海大学 Construction method of multi-plane coding point cloud feature deep learning model based on pointpilars
CN111860695A (en) * 2020-08-03 2020-10-30 上海高德威智能交通系统有限公司 Data fusion and target detection method, device and equipment
CN112418212A (en) * 2020-08-28 2021-02-26 西安电子科技大学 Improved YOLOv3 algorithm based on EIoU
CN112612287A (en) * 2020-12-28 2021-04-06 清华大学 System, method, medium and device for planning local path of automatic driving automobile

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Alexey Bochkovskiy et al.: "YOLOv4: Optimal Speed and Accuracy of Object Detection", arXiv *
Zhan Weiqin et al.: "PointPillars+ three-dimensional object detection based on an attention mechanism", Journal of Jiangsu University *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114862957A (en) * 2022-07-08 2022-08-05 西南交通大学 Subway car bottom positioning method based on 3D laser radar
CN114862957B (en) * 2022-07-08 2022-09-27 西南交通大学 Subway car bottom positioning method based on 3D laser radar
CN115290069A (en) * 2022-07-22 2022-11-04 清华大学 Multi-source heterogeneous sensor data fusion and collaborative perception handheld mobile platform
CN115187964A (en) * 2022-09-06 2022-10-14 中诚华隆计算机技术有限公司 Automatic driving decision-making method based on multi-sensor data fusion and SoC chip
CN116052420A (en) * 2023-01-05 2023-05-02 北京清丰智行科技有限公司 Vehicle-road cloud collaborative big data management system for park
CN116052420B (en) * 2023-01-05 2023-09-22 北京清丰智行科技有限公司 Vehicle-road cloud collaborative big data management system for park

Similar Documents

Publication Publication Date Title
CN114397877A (en) Intelligent automobile automatic driving system
WO2022141910A1 (en) Vehicle-road laser radar point cloud dynamic segmentation and fusion method based on driving safety risk field
CN109556615B (en) Driving map generation method based on multi-sensor fusion cognition of automatic driving
Aycard et al. Intersection safety using lidar and stereo vision sensors
CN102819263B (en) Multi-camera visual perception system for UGV (Unmanned Ground Vehicle)
CN105210128B (en) The active and sluggish construction ground band of map structuring is for autonomous driving
CN113313154A (en) Integrated multi-sensor integrated automatic driving intelligent sensing device
Rawashdeh et al. Collaborative automated driving: A machine learning-based method to enhance the accuracy of shared information
CN111880174A (en) Roadside service system for supporting automatic driving control decision and control method thereof
Liu et al. Deep learning-based localization and perception systems: approaches for autonomous cargo transportation vehicles in large-scale, semiclosed environments
CN115552200A (en) Method and system for generating importance occupancy grid map
US20220146277A1 (en) Architecture for map change detection in autonomous vehicles
CN102608998A (en) Vision guiding AGV (Automatic Guided Vehicle) system and method of embedded system
CN113071518B (en) Automatic unmanned driving method, minibus, electronic equipment and storage medium
CN113791619B (en) Airport automatic driving tractor dispatching navigation system and method
CN116022657B (en) Path planning method and device and crane
CN111459172A (en) Autonomous navigation system of boundary security unmanned patrol car
EP4134769A1 (en) Method and apparatus for vehicle to pass through boom barrier
CN110435541A (en) A kind of the vehicle lane change method for early warning and system of view-based access control model identification and ranging
Dong et al. A vision-based method for improving the safety of self-driving
Johari et al. Comparison of autonomy and study of deep learning tools for object detection in autonomous self driving vehicles
US11884268B2 (en) Motion planning in curvilinear coordinates for autonomous vehicles
CN112820097A (en) Truck fleet longitudinal hierarchical control method based on 5G-V2X and unmanned aerial vehicle
Diab et al. Experimental lane keeping assist for an autonomous vehicle based on optimal PID controller
CN115027506B (en) Logistics luggage tractor driving control system and method

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2022-04-26)