CN112883820B - Road target 3D detection method and system based on laser radar point cloud - Google Patents
- Publication number
- CN112883820B (application CN202110107066.0A)
- Authority
- CN
- China
- Prior art keywords: point cloud, cloud data, point, points, data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F18/214—Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/2321—Pattern recognition: non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06T5/70—Image enhancement or restoration: denoising; smoothing
- G06V20/56—Scenes: context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/64—Scenes: three-dimensional objects
- G06T2207/10028—Image analysis indexing scheme: range image; depth image; 3D point clouds
Abstract
The invention provides a road target 3D detection method and system based on laser radar point cloud, comprising the following steps: receiving point cloud data acquired by a laser radar on a road; disassembling the point cloud data packet to obtain each frame of point cloud data and storing the frames separately in time order; screening each frame of point cloud data against a threshold on the number of points in the selected target area, so as to discard any frame whose point count falls below the threshold; performing a statistical filtering operation on the point cloud data; filtering the preprocessed point cloud data to determine the target object point cloud data; clustering the target object point cloud data to merge densely distributed points into a point cloud target candidate set; and judging the type of each point cloud candidate set with a pre-trained deep learning network model and calculating the spatial position information of the target object. The method and system avoid the loss of data precision that occurs when three-dimensional point cloud data are mapped onto a two-dimensional plane.
Description
Technical Field
The invention relates to the field of computer vision, in particular to a road target 3D detection method and system based on laser radar point cloud.
Background
With the development of unmanned vehicle technology, a pipeline of multi-sensor information fusion, high-precision mapping and positioning, environment perception, decision making, path planning and low-level vehicle control has by now become the standard architecture. Environment perception is the basis and precondition of safe driving for an unmanned platform: driving strategy formulation, path planning and low-level vehicle control all depend directly on highly robust and accurate perception.
In early research in the three-dimensional field, computing power was limited, so three-dimensional data were usually converted into two-dimensional images before processing: the point cloud was mapped to a two-dimensional image, and targets were then detected using established image processing techniques. Although such two-dimensional methods can detect target information well, depth and coordinate information are lost during the mapping, so the detected target information is inaccurate.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a road target 3D detection method and system based on laser radar point cloud. It addresses the problem that existing schemes, which project the three-dimensional point cloud to two dimensions and then apply two-dimensional image processing, cannot meet the target detection requirements of complex road scenes, thereby improving target detection precision and reducing the false detection rate in an environment sensing system.
The road target 3D detection method based on the laser radar point cloud provided by the invention comprises the following steps:
Step S1: receiving point cloud data acquired by a laser radar on a road, and storing a section of the point cloud data intercepted according to time sequence in the form of a point cloud data packet;
Step S2: disassembling the point cloud data packet to obtain point cloud data of each frame, and respectively storing the point cloud data of each frame according to time sequence;
Step S3: screening the point cloud data of each frame according to the number threshold value of the points in the selected target area so as to discard any frame of point cloud data of which the number of the points is smaller than the number threshold value;
step S4: performing statistical filtering operation on the screened point cloud data to remove noise points and outliers and generate preprocessed point cloud data;
Step S5: filtering the preprocessed point cloud data to remove the ground point cloud data so as to determine target object point cloud data;
step S6: clustering the target object point cloud data to merge densely distributed points into a point cloud target candidate set;
Step S7: judging the type of each point cloud candidate set according to the pre-trained deep learning network model and, when a candidate set is judged to be a preset target object, calculating the spatial position information of that target object.
Preferably, the point cloud data packet includes a plurality of frames of three-dimensional point cloud spatial data per second, and the features of each point include its X, Y and Z spatial coordinates and its reflection intensity.
Preferably, the step S4 includes the steps of:
step S401: calculating the average distance from each point to the nearest K points to obtain an average distance result between each point and the adjacent K points;
step S402: determining a standard distance range between adjacent points according to the average distance result and a 3 sigma criterion;
Step S403: the average distance value of each point and the adjacent K points is compared with the standard distance range to remove points outside the standard range.
Preferably, the step S5 includes the steps of:
Step S501: dividing the preprocessed point cloud data into N sections of areas along the road direction;
Step S502: selecting the lowest height point of each section of the point cloud data of the area as a priori condition, and fitting a plane;
Step S503: calculating the distance between each point in each section of area and the plane, judging whether the point belongs to the plane or not according to the distance value of each point and a set threshold value, and further generating a plane point set;
step S504: repeatedly executing the step S503 to generate a fitting plane, wherein the fitting plane is a plane containing ground point cloud data;
Step S505: and removing the ground point cloud data of each section of area, and splicing the point cloud data of the N sections of areas to generate target object point cloud data from which the ground point cloud data is removed.
Preferably, the step S6 includes the steps of:
step S601: randomly selecting a point P that has not been accessed in the target object point cloud data, searching for the points within radius R of P, and judging whether their number exceeds a preset point-count threshold;
Step S602: when the number of points in the range exceeds the threshold, establishing a cluster C with P as its center point and marking P as accessed; when the number of points in the range is below the threshold, marking P as a noise point and marking it as accessed;
Step S603: traversing all points within the radius R and marking them as class C; taking every point within R as a core point and expanding the cluster outward from it (analogous to a morphological dilation), marking the newly covered points as class C, and stopping when no further points can be found;
Step S604: steps S601 to S603 are repeatedly performed so that all points are accessed, and finally, a point cloud target candidate set is generated.
Preferably, in step S7, the deep learning network model is generated by training an improved pointnet network, which can be used to extract local features of the target object.
Preferably, the spatial position information of the target object includes a length, a width, a height, and a center point of the target object.
Preferably, in step S4, a RANSAC algorithm is used when performing the statistical filtering operation on the screened point cloud data.
Preferably, a DBSCAN algorithm is adopted when clustering the target object point cloud data in step S6.
The invention provides a road target 3D detection system based on laser radar point cloud, which comprises the following modules:
the data receiving module is used for receiving point cloud data acquired by a laser radar on a road and storing a section of the point cloud data intercepted according to time sequence in a form of a point cloud data packet;
The data decomposition module is used for decomposing the point cloud data packet into point cloud data of each frame, and respectively storing the point cloud data of each frame according to time sequence;
the data screening module is used for screening the point cloud data of each frame according to the number threshold value of the points in the selected target area so as to discard any frame of point cloud data of which the number of the points is smaller than the number threshold value;
The data filtering module is used for carrying out statistical filtering operation on the screened point cloud data so as to remove noise points and outliers and generate preprocessed point cloud data;
The data filtering module is used for filtering the preprocessed point cloud data to remove the ground point cloud data so as to determine the point cloud data of the target object;
The data clustering module is used for clustering the target object point cloud data to merge densely distributed points into a point cloud target candidate set;
The data identification module is used for judging the type of each point cloud candidate set according to the pre-trained deep learning network model and, when a candidate set is judged to be a preset target object, calculating the spatial position information of that target object.
Compared with the prior art, the invention has the following beneficial effects:
According to the method, the spatial position information of the target object is determined by directly processing the original point cloud data, so that the loss of data precision that occurs when three-dimensional point cloud data are mapped onto a two-dimensional plane is avoided, and the problem that traditional image-based target detection lacks coordinate position information in road target detection is solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art. Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of steps of a method for detecting a road target 3D based on a laser radar point cloud in an embodiment of the invention;
fig. 2 is a schematic block diagram of a road target 3D detection system based on a laser radar point cloud according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical scheme of the invention is described in detail below by specific examples. The following embodiments may be combined with each other, and some embodiments may not be repeated for the same or similar concepts or processes.
The invention provides a road target 3D detection method based on a laser radar point cloud, which aims to solve the problems in the prior art.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a step flowchart of a road target 3D detection method based on a laser radar point cloud in an embodiment of the present invention, and as shown in fig. 1, the road target 3D detection method based on a laser radar point cloud provided by the present invention includes the following steps:
Step S1: receiving point cloud data acquired by a laser radar on a road, and storing a section of the point cloud data intercepted according to time sequence in the form of a point cloud data packet;
In an embodiment of the invention, the point cloud data packet includes 20 frames of three-dimensional point cloud spatial data per second, and the features of each point include its X, Y and Z spatial coordinates and its reflection intensity. The spatial position information of the target object includes the length, width, height and center point of the target object.
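As an illustration only (not the patent's implementation), the length, width, height and center point can be derived from a clustered point set as a simple axis-aligned bounding box; the function and variable names below are hypothetical:

```python
import numpy as np

def box_from_cluster(points):
    """Axis-aligned 3D bounding box of a point cluster.

    points: (N, 3) array of X, Y, Z coordinates.
    Returns (length, width, height, center): one simple way to obtain
    the spatial position information described above.
    """
    pts = np.asarray(points, dtype=float)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    length, width, height = maxs - mins      # extents along X, Y, Z
    center = (mins + maxs) / 2.0             # midpoint of the box
    return length, width, height, center

# A toy 2 m x 1 m x 1.5 m cluster given by its two extreme corners:
corners = np.array([[-1.0, -0.5, -0.75], [1.0, 0.5, 0.75]])
l, w, h, c = box_from_cluster(corners)
```

For rotated road targets an oriented bounding box would be more faithful; the axis-aligned version is kept here for brevity.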
Step S2: disassembling the point cloud data packet to obtain point cloud data of each frame, and respectively storing the point cloud data of each frame according to time sequence;
Step S3: screening the point cloud data of each frame according to the number threshold value of the points in the selected target area so as to discard any frame of point cloud data of which the number of the points is smaller than the number threshold value;
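The frame-screening rule of step S3 can be sketched as follows; the region bounds, threshold value and function name are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def keep_frame(points, region_min, region_max, count_threshold):
    """Frame screening (step S3): keep the frame only if at least
    `count_threshold` points lie inside the axis-aligned target region."""
    pts = np.asarray(points, dtype=float)
    # a point is inside when every coordinate is within the region bounds
    inside = np.all((pts >= region_min) & (pts <= region_max), axis=1)
    return int(inside.sum()) >= count_threshold

# two of the three points fall inside the unit cube, so the frame is kept
frame = np.array([[0.5, 0.5, 0.5], [5.0, 5.0, 5.0], [0.2, 0.9, 0.1]])
kept = keep_frame(frame, np.zeros(3), np.ones(3), count_threshold=2)
```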
step S4: performing statistical filtering operation on the screened point cloud data to remove noise points and outliers and generate preprocessed point cloud data;
In the embodiment of the present invention, a RANSAC algorithm is adopted in step S4 when performing the statistical filtering operation on the screened point cloud data. The outliers are often caused by problems such as occlusion of the acquisition equipment.
In an embodiment of the present invention, the step S4 includes the following steps:
step S401: calculating the average distance from each point to the nearest K points to obtain an average distance result between each point and the adjacent K points;
step S402: determining a standard distance range between adjacent points according to the average distance result and a 3 sigma criterion;
Step S403: the average distance value of each point and the adjacent K points is compared with the standard distance range to remove points outside the standard range.
In the embodiment of the invention, an invalid point cloud is defined as one whose point cloud density at a point is below a threshold; the point cloud data at such points are removed.
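A minimal sketch of the statistical filtering in steps S401 to S403, assuming K-nearest-neighbour mean distances and an upper 3-sigma bound on the distance distribution (the patent describes a two-sided "standard distance range"; the one-sided cut here is a simplification, and the parameter names are illustrative):

```python
import numpy as np

def statistical_filter(points, k=2, n_sigma=3.0):
    """Statistical outlier removal: for each point compute the mean
    distance to its k nearest neighbours (S401), derive a distance
    range from the 3-sigma criterion (S402), and drop points whose
    mean neighbour distance falls outside that range (S403)."""
    pts = np.asarray(points, dtype=float)
    # all pairwise distances (fine for small clouds; use a KD-tree at scale)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                    # ignore self-distance
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    mu, sigma = knn_mean.mean(), knn_mean.std()
    keep = knn_mean <= mu + n_sigma * sigma        # upper 3-sigma cut
    return pts[keep]

# a 6 x 5 planar grid of points 0.1 m apart, plus one far outlier
cluster = np.array([[i * 0.1, j * 0.1, 0.0] for i in range(6) for j in range(5)])
cloud = np.vstack([cluster, [[50.0, 50.0, 50.0]]])
filtered = statistical_filter(cloud, k=2)
```

The grid points all survive (their mean neighbour distance is 0.1 m) while the far outlier is rejected.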
Step S5: filtering the preprocessed point cloud data to remove the ground point cloud data so as to determine target object point cloud data;
in an embodiment of the present invention, the step S5 includes the following steps:
Step S501: for the preprocessed point cloud data, dividing the preprocessed point cloud data into N sections of areas along the road direction, wherein N is a natural number larger than 2;
Step S502: selecting the lowest height point of each section of the point cloud data of the area as a priori condition, and fitting a plane;
Step S503: calculating the distance between each point in each section of area and the plane, judging whether the point belongs to the plane or not according to the distance value of each point and a set threshold value, and further generating a plane point set;
step S504: repeatedly executing the step S503 to generate a fitting plane, wherein the fitting plane is a plane containing ground point cloud data;
Step S505: and removing the ground point cloud data of each section of area, and splicing the point cloud data of the N sections of areas to generate target object point cloud data from which the ground point cloud data is removed.
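The segmented ground removal of steps S501 to S505 can be sketched as below. The least-squares plane fit, segment count, seed count and distance threshold are illustrative choices; the patent does not fix these details:

```python
import numpy as np

def remove_ground(points, n_segments=3, seed_count=10, dist_threshold=0.2, n_iter=3):
    """Split the cloud into N segments along the road (x) axis (S501),
    seed a plane from each segment's lowest points (S502), iteratively
    refit the plane against its inliers (S503/S504), and keep only the
    points above the fitted ground plane (S505)."""
    pts = np.asarray(points, dtype=float)
    edges = np.linspace(pts[:, 0].min(), pts[:, 0].max() + 1e-9, n_segments + 1)
    kept = []
    for i in range(n_segments):
        seg = pts[(pts[:, 0] >= edges[i]) & (pts[:, 0] < edges[i + 1])]
        if len(seg) == 0:
            continue
        # seed the plane with the lowest points of the segment (prior: ground is lowest)
        inliers = seg[np.argsort(seg[:, 2])[:min(seed_count, len(seg))]]
        for _ in range(n_iter):
            # fit z = a*x + b*y + c to the current inliers by least squares
            A = np.c_[inliers[:, :2], np.ones(len(inliers))]
            coef, *_ = np.linalg.lstsq(A, inliers[:, 2], rcond=None)
            dist = np.abs(seg[:, 2] - np.c_[seg[:, :2], np.ones(len(seg))] @ coef)
            inliers = seg[dist < dist_threshold]    # plane point set
        kept.append(seg[dist >= dist_threshold])    # non-ground points of this segment
    return np.vstack(kept) if kept else np.empty((0, 3))

# flat ground plus a small raised object
ground = np.array([[x, y, 0.0] for x in range(10) for y in range(5)], dtype=float)
obj = np.array([[4.0, 2.0, 1.0], [4.5, 2.0, 1.2], [5.0, 2.0, 1.5]])
above = remove_ground(np.vstack([ground, obj]))
```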
Step S6: clustering the target object point cloud data to merge densely distributed points into a point cloud target candidate set;
in the embodiment of the invention, a DBSCAN algorithm based on density is adopted when clustering is performed on the target object point cloud data in step S6.
In an embodiment of the present invention, the step S6 includes the following steps:
step S601: randomly selecting a point P that has not been accessed in the target object point cloud data, searching for the points within radius R of P, and judging whether their number exceeds a preset point-count threshold;
Step S602: when the number of points in the range exceeds the threshold, establishing a cluster C with P as its center point and marking P as accessed; when the number of points in the range is below the threshold, marking P as a noise point and marking it as accessed;
Step S603: traversing all points within the radius R and marking them as class C; taking every point within R as a core point and expanding the cluster outward from it (analogous to a morphological dilation), marking the newly covered points as class C, and stopping when no further points can be found;
Step S604: steps S601 to S603 are repeatedly performed so that all points are accessed, and finally, a point cloud target candidate set is generated.
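The expansion procedure of steps S601 to S604 corresponds to classical DBSCAN. A compact sketch, with illustrative parameter names:

```python
import numpy as np

def dbscan(points, radius=0.5, min_pts=3):
    """Density clustering in the spirit of steps S601-S604: pick an
    unvisited point, count its neighbours within `radius`; if the count
    meets `min_pts`, start a cluster and expand it through every core
    point's neighbourhood; otherwise mark the point as noise.
    Returns one integer label per point (-1 = noise)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    labels = np.full(n, -1)                 # -1 = noise / unassigned
    visited = np.zeros(n, dtype=bool)
    cluster_id = 0
    for p in range(n):
        if visited[p]:
            continue
        visited[p] = True
        neigh = np.flatnonzero(np.linalg.norm(pts - pts[p], axis=1) <= radius)
        if len(neigh) < min_pts:
            continue                        # noise (S602, sparse case)
        labels[p] = cluster_id              # new cluster C with centre P (S602)
        queue = list(neigh)
        while queue:                        # expand the cluster (S603)
            q = queue.pop()
            if labels[q] == -1:
                labels[q] = cluster_id
            if visited[q]:
                continue
            visited[q] = True
            q_neigh = np.flatnonzero(np.linalg.norm(pts - pts[q], axis=1) <= radius)
            if len(q_neigh) >= min_pts:     # q is a core point too
                queue.extend(q_neigh)
        cluster_id += 1
    return labels

# two tight blobs and one isolated noise point
blob_a = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0.1, 0.1, 0]], dtype=float)
blob_b = blob_a + 10.0
labels = dbscan(np.vstack([blob_a, blob_b, [[5.0, 5.0, 5.0]]]))
```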
Step S7: judging the type of each point cloud candidate set according to the pre-trained deep learning network model and, when a candidate set is judged to be a preset target object, calculating the spatial position information of that target object.
In the embodiment of the present invention, the deep learning network model in step S7 is generated by training an improved pointnet network and is used to judge the type of each point cloud candidate set; the improved pointnet network can extract local features of the target object.
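The key property that lets a pointnet-style network consume raw, unordered point sets is a shared per-point transform followed by a symmetric (order-invariant) aggregation such as max-pooling. The sketch below illustrates only this idea with a single random linear layer; it is not the trained, improved pointnet model of the patent:

```python
import numpy as np

def pointnet_global_feature(points, weights, bias):
    """Shared per-point transform (a stand-in for pointnet's shared MLP)
    plus a symmetric max-pool, so the global feature does not depend on
    the order of the input points."""
    pts = np.asarray(points, dtype=float)               # (N, 3) input points
    per_point = np.maximum(pts @ weights + bias, 0.0)   # shared linear layer + ReLU
    return per_point.max(axis=0)                        # order-invariant max-pool

rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 8)), np.zeros(8)             # illustrative random weights
cloud = rng.normal(size=(16, 3))
feat = pointnet_global_feature(cloud, W, b)
feat_perm = pointnet_global_feature(cloud[::-1], W, b)  # same points, reversed order
```

Because the aggregation is a max over points, `feat` and `feat_perm` are identical, which is why such a network can classify the unordered candidate sets of step S6 directly.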
Fig. 2 is a schematic block diagram of a road target 3D detection system based on a laser radar point cloud in an embodiment of the present invention, and as shown in fig. 2, the road target 3D detection system based on a laser radar point cloud provided by the present invention includes the following modules:
the data receiving module is used for receiving point cloud data acquired by a laser radar on a road and storing a section of the point cloud data intercepted according to time sequence in a form of a point cloud data packet;
The data decomposition module is used for decomposing the point cloud data packet into point cloud data of each frame, and respectively storing the point cloud data of each frame according to time sequence;
the data screening module is used for screening the point cloud data of each frame according to the number threshold value of the points in the selected target area so as to discard any frame of point cloud data of which the number of the points is smaller than the number threshold value;
The data filtering module is used for carrying out statistical filtering operation on the screened point cloud data so as to remove noise points and outliers and generate preprocessed point cloud data;
The data filtering module is used for filtering the preprocessed point cloud data to remove the ground point cloud data so as to determine the point cloud data of the target object;
The data clustering module is used for clustering the target object point cloud data to merge densely distributed points into a point cloud target candidate set;
The data identification module is used for judging the type of each point cloud candidate set according to the pre-trained deep learning network model and, when a candidate set is judged to be a preset target object, calculating the spatial position information of that target object.
In the embodiment of the invention, the spatial position information of the target object is determined by directly processing the original point cloud data, which avoids the loss of data precision that occurs when three-dimensional point cloud data are mapped onto a two-dimensional plane and solves the problem that traditional image-based target detection lacks coordinate position information in road target detection.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on what differs from the other embodiments, and identical or similar parts among the embodiments may be referred to one another. The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes and modifications may be made by one skilled in the art within the scope of the claims without affecting the spirit of the invention.
Claims (8)
1. The road target 3D detection method based on the laser radar point cloud is characterized by comprising the following steps of:
Step S1: receiving point cloud data acquired by a laser radar on a road, and storing a section of the point cloud data intercepted according to time sequence in the form of a point cloud data packet;
Step S2: disassembling the point cloud data packet to obtain point cloud data of each frame, and respectively storing the point cloud data of each frame according to time sequence;
Step S3: screening the point cloud data of each frame according to the number threshold value of the points in the selected target area so as to discard any frame of point cloud data of which the number of the points is smaller than the number threshold value;
step S4: performing statistical filtering operation on the screened point cloud data to remove noise points and outliers and generate preprocessed point cloud data;
Step S5: filtering the preprocessed point cloud data to remove the ground point cloud data so as to determine target object point cloud data;
step S6: clustering the target object point cloud data to merge densely distributed points into a point cloud target candidate set;
Step S7: judging the type of each point cloud candidate set according to the pre-trained deep learning network model and, when a candidate set is judged to be a preset target object, calculating the spatial position information of that target object;
The point cloud data packet comprises a plurality of frames of three-dimensional point cloud spatial data per second, and the features of each point include its X, Y and Z spatial coordinates and its reflection intensity;
the step S5 includes the steps of:
Step S501: dividing the preprocessed point cloud data into N sections of areas along the road direction;
Step S502: selecting the lowest height point of each section of the point cloud data of the area as a priori condition, and fitting a plane;
Step S503: calculating the distance between each point in each section of area and the plane, judging whether the point belongs to the plane or not according to the distance value of each point and a set threshold value, and further generating a plane point set;
step S504: repeatedly executing the step S503 to generate a fitting plane, wherein the fitting plane is a plane containing ground point cloud data;
Step S505: and removing the ground point cloud data of each section of area, and splicing the point cloud data of the N sections of areas to generate target object point cloud data from which the ground point cloud data is removed.
2. The method for 3D detection of a road target based on a laser radar point cloud according to claim 1, wherein the step S4 comprises the steps of:
step S401: calculating the average distance from each point to the nearest K points to obtain an average distance result between each point and the adjacent K points;
step S402: determining a standard distance range between adjacent points according to the average distance result and a 3 sigma criterion;
Step S403: the average distance value of each point and the adjacent K points is compared with the standard distance range to remove points outside the standard range.
3. The method for 3D detection of a road target based on a laser radar point cloud according to claim 1, wherein the step S6 comprises the steps of:
step S601: randomly selecting a point P that has not been accessed in the target object point cloud data, searching for the points within radius R of P, and judging whether their number exceeds a preset point-count threshold;
Step S602: when the number of points in the range exceeds the threshold, establishing a cluster C with P as its center point and marking P as accessed; when the number of points in the range is below the threshold, marking P as a noise point and marking it as accessed;
Step S603: traversing all points within the radius R and marking them as class C; taking every point within R as a core point and expanding the cluster outward from it (analogous to a morphological dilation), marking the newly covered points as class C, and stopping when no further points can be found;
Step S604: steps S601 to S603 are repeatedly performed so that all points are accessed, and finally, a point cloud target candidate set is generated.
4. The method for 3D detection of a road target based on a laser radar point cloud according to claim 1, wherein in step S7, the deep learning network model is generated by training a modified PointNet network, which can be used for local feature extraction of the target object.
5. The method for 3D detection of a road target based on a laser radar point cloud according to claim 1, wherein the spatial position information of the target object includes the length, width, height, and center point of the target object.
6. The method for 3D detection of a road target based on a laser radar point cloud according to claim 1, wherein in step S4, the RANSAC algorithm is used when performing the statistical filtering operation on the screened point cloud data.
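For reference, the standard RANSAC plane fit referenced in this claim can be sketched as follows. This is a generic illustration of the textbook algorithm, not the patented implementation; the function name, iteration count, and distance threshold are assumptions:

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.1, iters=100, rng=None):
    """Fit a plane with RANSAC: repeatedly sample 3 points, form the
    plane through them, and keep the model with the most inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p1)
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

On a road scene, the dominant plane found this way is typically the ground, and points far from it are candidate objects.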
7. The method for 3D detection of a road target based on a laser radar point cloud according to claim 1, wherein the DBSCAN algorithm is used when clustering the point cloud data of the target object in step S6.
8. The road target 3D detection system based on a laser radar point cloud is characterized by comprising the following modules:
the data receiving module is used for receiving point cloud data collected by a laser radar on a road and storing a segment of the point cloud data, intercepted in time order, in the form of a point cloud data packet;
the data decomposition module is used for decomposing the point cloud data packet into individual frames of point cloud data and storing each frame in time order;
the data screening module is used for screening each frame of point cloud data against a point-count threshold for the selected target area, discarding any frame whose point count falls below the threshold;
the statistical filtering module is used for performing a statistical filtering operation on the screened point cloud data to remove noise points and outliers, generating the preprocessed point cloud data;
the ground filtering module is used for filtering the preprocessed point cloud data to remove the ground point cloud data, thereby determining the point cloud data of the target object;
the data clustering module is used for clustering the point cloud data of the target object, merging densely distributed point cloud data to generate a point cloud target candidate set;
the data identification module is used for classifying the point cloud candidate sets with a pre-trained deep learning network model and, when a candidate set is judged to be a target object, calculating the spatial position information of the preset target object;
The point cloud data packet comprises several frames per second of three-dimensional point cloud spatial data, where the features of each point comprise the X, Y, and Z spatial coordinates and the reflection intensity. The ground filtering module is configured to: divide the preprocessed point cloud data into N regions along the road direction; select the lowest-height points of each region's point cloud data as the prior condition and fit a plane; calculate the distance from each point in each region to the plane, and judge from each point's distance value and a set threshold whether the point belongs to the plane, thereby generating a plane point set and the fitted plane, which is the plane containing the ground point cloud data; and remove the ground point cloud data from each region and splice the point cloud data of the N regions together to generate the target object point cloud data with the ground removed.
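The segmented ground-removal procedure described for this module can be sketched as follows. This is an illustrative approximation: segmenting by sorted X as the assumed road direction, using a least-squares plane per segment seeded from the lowest points, with the seed fraction and distance threshold chosen arbitrarily:

```python
import numpy as np

def remove_ground(points, n_segments=4, seed_frac=0.1, dist_thresh=0.2):
    """Split the cloud into N regions along X, fit a plane to each
    region's lowest points, and drop points near that plane."""
    order = np.argsort(points[:, 0])           # sort along road direction
    segments = np.array_split(points[order], n_segments)
    kept = []
    for seg in segments:
        # Lowest-height points serve as the prior for the ground plane.
        n_seed = max(3, int(len(seg) * seed_frac))
        seeds = seg[np.argsort(seg[:, 2])[:n_seed]]
        # Least-squares plane z = a*x + b*y + c fitted to the seeds.
        A = np.column_stack([seeds[:, 0], seeds[:, 1], np.ones(len(seeds))])
        coef, *_ = np.linalg.lstsq(A, seeds[:, 2], rcond=None)
        pred_z = seg[:, 0] * coef[0] + seg[:, 1] * coef[1] + coef[2]
        # Points near the fitted plane are ground; keep the rest.
        kept.append(seg[np.abs(seg[:, 2] - pred_z) > dist_thresh])
    # Splice the N regions back together.
    return np.vstack(kept)
```

Fitting one plane per segment, rather than one global plane, tolerates sloped or undulating roads where a single plane would misclassify distant ground as objects.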
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110107066.0A CN112883820B (en) | 2021-01-26 | 2021-01-26 | Road target 3D detection method and system based on laser radar point cloud |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110107066.0A CN112883820B (en) | 2021-01-26 | 2021-01-26 | Road target 3D detection method and system based on laser radar point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112883820A CN112883820A (en) | 2021-06-01 |
CN112883820B true CN112883820B (en) | 2024-04-19 |
Family
ID=76052426
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110107066.0A Active CN112883820B (en) | 2021-01-26 | 2021-01-26 | Road target 3D detection method and system based on laser radar point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112883820B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113655477B (en) * | 2021-06-11 | 2023-09-01 | 成都圭目机器人有限公司 | Method for automatically detecting geological diseases by adopting shallow layer ground radar |
CN113705617B (en) * | 2021-07-30 | 2024-06-21 | 武汉万集光电技术有限公司 | Processing method and device of point cloud data, computer equipment and storage medium |
CN113925479B (en) * | 2021-08-27 | 2023-10-03 | 上海赫千电子科技有限公司 | Life monitoring method and device based on intelligent vehicle-mounted box |
CN113744338A (en) * | 2021-10-29 | 2021-12-03 | 青岛影创信息科技有限公司 | Depth video space plane detection method and system |
CN114355381B (en) * | 2021-12-31 | 2022-09-09 | 安徽海博智能科技有限责任公司 | Laser radar point cloud quality detection and improvement method |
CN114359758B (en) * | 2022-03-18 | 2022-06-14 | 广东电网有限责任公司东莞供电局 | Power transmission line detection method and device, computer equipment and storage medium |
CN115100491B (en) * | 2022-08-25 | 2022-11-18 | 山东省凯麟环保设备股份有限公司 | Abnormal robust segmentation method and system for complex automatic driving scene |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018205119A1 (en) * | 2017-05-09 | 2018-11-15 | 深圳市速腾聚创科技有限公司 | Roadside detection method and system based on laser radar scanning |
CN110780305A (en) * | 2019-10-18 | 2020-02-11 | 华南理工大学 | Track cone bucket detection and target point tracking method based on multi-line laser radar |
CN110879401A (en) * | 2019-12-06 | 2020-03-13 | 南京理工大学 | Unmanned platform real-time target 3D detection method based on camera and laser radar |
CN111652926A (en) * | 2020-04-30 | 2020-09-11 | 南京理工大学 | Real-time three-dimensional target detection method based on sparse point cloud data |
Non-Patent Citations (1)
Title |
---|
3D point cloud object segmentation and collision detection based on depth projection; Wang Zhangfei; Liu Chunyang; Sui Xin; Yang Fang; Ma Xiqiang; Chen Lihai; Optics and Precision Engineering (No. 07); full text * |
Also Published As
Publication number | Publication date |
---|---|
CN112883820A (en) | 2021-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112883820B (en) | Road target 3D detection method and system based on laser radar point cloud | |
Chang et al. | Argoverse: 3d tracking and forecasting with rich maps | |
US11709058B2 (en) | Path planning method and device and mobile device | |
CN109087510B (en) | Traffic monitoring method and device | |
CN110032949A (en) | A kind of target detection and localization method based on lightweight convolutional neural networks | |
CN108898047B (en) | Pedestrian detection method and system based on blocking and shielding perception | |
CN110942449A (en) | Vehicle detection method based on laser and vision fusion | |
US9142011B2 (en) | Shadow detection method and device | |
CN111563442A (en) | Slam method and system for fusing point cloud and camera image data based on laser radar | |
CN115049700A (en) | Target detection method and device | |
CN110674705A (en) | Small-sized obstacle detection method and device based on multi-line laser radar | |
CN107025657A (en) | A kind of vehicle action trail detection method based on video image | |
CN115376109B (en) | Obstacle detection method, obstacle detection device, and storage medium | |
CN111295666A (en) | Lane line detection method, device, control equipment and storage medium | |
CN117274510B (en) | Vehicle body fault detection method based on three-dimensional modeling and structural dimension measurement | |
CN116309817A (en) | Tray detection and positioning method based on RGB-D camera | |
CN116978009A (en) | Dynamic object filtering method based on 4D millimeter wave radar | |
CN112528781B (en) | Obstacle detection method, device, equipment and computer readable storage medium | |
CN115100741B (en) | Point cloud pedestrian distance risk detection method, system, equipment and medium | |
CN116452977A (en) | Unmanned ship platform sea surface ship detection method, system and equipment | |
CN116573017A (en) | Urban rail train running clearance foreign matter sensing method, system, device and medium | |
CN108388854A (en) | A kind of localization method based on improvement FAST-SURF algorithms | |
CN115457130A (en) | Electric vehicle charging port detection and positioning method based on depth key point regression | |
CN114241448A (en) | Method and device for obtaining heading angle of obstacle, electronic equipment and vehicle | |
CN111899277B (en) | Moving object detection method and device, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||