CN113486963A - Density self-adaptive point cloud end-to-end sampling method - Google Patents

Density self-adaptive point cloud end-to-end sampling method

Info

Publication number
CN113486963A
CN113486963A (application CN202110785552.8A)
Authority
CN
China
Prior art keywords
point cloud
sampling
point
pointnet
preliminary
Prior art date
Legal status
Granted
Application number
CN202110785552.8A
Other languages
Chinese (zh)
Other versions
CN113486963B (en)
Inventor
林雅南
雷蕴奇
王其聪
Current Assignee
Xiamen University
Original Assignee
Xiamen University
Priority date
Filing date
Publication date
Application filed by Xiamen University
Priority to CN202110785552.8A
Publication of CN113486963A
Application granted
Publication of CN113486963B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 — Classification techniques relating to the classification model based on distances to training or reference patterns
    • G06F 18/24147 — Distances to closest patterns, e.g. nearest neighbour classification
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods


Abstract

A density-adaptive end-to-end point cloud sampling method, relating to computer vision. The method comprises the following steps: A. prepare a point cloud dataset for the classification task. B. Pre-train the classification network PointNet to obtain a PointNet pre-trained model. C. Input the original point cloud data into a point cloud simplification network to realize a preliminary sampling of the point cloud. D. For each preliminary sampling point, apply the density-adaptive K-nearest-neighbor algorithm DAKNN, dividing neighborhoods of different sizes according to the K value. E. Adopt a self-attention mechanism and perform a weighted update of the preliminary sampling points using neighborhood information. F. Input the updated sampling points into the pre-trained PointNet model and optimize a loss function to obtain the final sampling points. Task-related point cloud sampling is realized in an end-to-end learning manner, and the interference of point cloud density variation and noise on the sampling process is alleviated, improving the effectiveness of the point cloud sampling results.

Description

Density self-adaptive point cloud end-to-end sampling method
Technical Field
The invention relates to computer vision, and in particular to a density-adaptive end-to-end point cloud sampling method.
Background
As an important representative of 3D data, a point cloud is a set of unordered points that describes the geometry of an object. Since a point cloud typically contains a large number of input points, a down-sampling operation is usually performed before downstream tasks to reduce the point cloud size and improve computational efficiency.
The most widely used down-sampling method in point cloud tasks is farthest point sampling (FPS). By iteratively selecting the farthest point, FPS can cover the input point cloud as much as possible. However, the sampling results of FPS are task-independent and noise-sensitive. To address these problems, many researchers have proposed improved down-sampling algorithms based on deep learning frameworks. Nezhadarya et al. (Ehsan Nezhadarya, Ehsan Taghavi, Ryan Razani, Bingbing Liu, and Jun Luo. "Adaptive hierarchical down-sampling for point cloud classification." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. pp. 12956-12964.) proposed CP-Net, which enables deterministic sampling by measuring the contribution of the sampled points to the global pooling features. Dovrat et al. (Oren Dovrat, Itai Lang, and Shai Avidan. "Learning to sample." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. pp. 2760-2769.) proposed S-Net, which learns task-optimal sampling points through network training. Lang et al. (Itai Lang, Asaf Manor, and Shai Avidan. "SampleNet: Differentiable point cloud sampling." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. pp. 7578-7588.) designed SampleNet, which makes the sample-point matching step differentiable by optimizing projection weights. Yang et al. (Jiancheng Yang, Qiang Zhang, Bingbing Ni, Linguo Li, Jinxian Liu, Mengdie Zhou, and Qi Tian. "Modeling point clouds with self-attention and Gumbel subset sampling." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. pp. 3323-3332.) proposed Gumbel subset sampling, which realizes an end-to-end trainable sampling layer via Gumbel-Softmax. Yan et al. (Xu Yan, Chaoda Zheng, Zhen Li, Sheng Wang, and Shuguang Cui. "PointASNL: Robust point clouds processing using nonlocal neural networks with adaptive sampling." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. pp. 5589-5598.) designed the PointASNL network, which alleviates the influence of noise on the sampling result through adaptive sampling and a nonlocal module.
Although the above studies achieved improvements in down-sampling, several problems remain. First, in these studies the sampled points are typically limited to a subset of the original input, so the sampling result may be affected by outliers. Second, these methods fail to jointly consider task relevance and noise effects. In particular, the non-uniformity of point cloud density has received little attention in previous sampling methods.
Disclosure of Invention
The invention aims to solve the above problems in the prior art by providing a density-adaptive end-to-end point cloud sampling method that facilitates the extraction of salient points, alleviates the influence of outliers on the sampling process, effectively reduces the influence of noise, and improves the effectiveness of the sampling results.
The invention comprises the following steps:
A. preparing a point cloud dataset for a classification task;
B. pre-training the classification network PointNet to obtain a PointNet pre-trained model;
C. inputting the original point cloud data into a point cloud simplification network to realize a preliminary sampling of the point cloud;
D. for each preliminary sampling point, applying the density-adaptive K-nearest-neighbor algorithm DAKNN to divide neighborhoods of different sizes according to the K value;
E. adopting a self-attention mechanism and performing a weighted update of the preliminary sampling points using neighborhood information;
F. inputting the updated sampling points into the pre-trained PointNet model and optimizing a loss function to obtain the final sampling points.
In step A, preparing the point cloud dataset for the classification task further comprises the following sub-step:
A1. prepare the ModelNet40 dataset (Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. "3D ShapeNets: A deep representation for volumetric shapes." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. pp. 1912-1920.); the dataset contains 12,311 models from 40 different categories; following the official train/test split, it comprises 9,843 training models and 2,468 test models.
In step B, pre-training the classification network PointNet to obtain a PointNet pre-trained model further comprises the following sub-steps:
B1. pre-train the PointNet classification network on the ModelNet40 dataset, where the number of input points of each point cloud sample is 1024;
B2. obtain the pre-trained PointNet model, whose weights are kept fixed during the training of the sampling network.
In step C, inputting the original point cloud data into the point cloud simplification network to realize the preliminary sampling of the point cloud further comprises the following sub-steps:
C1. input the original point cloud data P ∈ R^{N×3} into the point cloud simplification network;
C2. acquire point cloud features through a series of multilayer perceptrons, extract a global feature via global max pooling, and finally obtain M preliminary simplified points Q ∈ R^{M×3} through a series of fully connected layers.
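As a rough illustration of step C, the forward pass of such a simplification network (shared per-point MLP, global max pooling, fully connected head) might look as follows. The layer widths and the helper name `simplify_forward` are assumptions of this sketch, and random weights stand in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def simplify_forward(P, M=32, feat_dim=64):
    """One forward pass of a simplification network of this kind:
    shared per-point MLP, global max pooling, and a fully connected
    head producing M preliminary points. Weights are random placeholders."""
    W1 = rng.standard_normal((3, feat_dim)) * 0.1      # shared MLP weights
    W2 = rng.standard_normal((feat_dim, M * 3)) * 0.1  # FC head weights
    feats = np.maximum(P @ W1, 0.0)   # per-point features + ReLU, (N, feat_dim)
    g = feats.max(axis=0)             # global max pooling, (feat_dim,)
    return (g @ W2).reshape(M, 3)     # M preliminary simplified points

P = rng.standard_normal((1024, 3))    # original point cloud, N x 3
Q = simplify_forward(P, M=32)
print(Q.shape)                        # (32, 3)
```

Because the M output points are regressed by fully connected layers rather than selected from the input, they need not be a subset of the original points, which is the property the invention relies on later.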
In step D, implementing the density-adaptive K-nearest-neighbor algorithm DAKNN and dividing neighborhoods of different sizes by the K value further comprises the following sub-steps:
D1. calculate the component K_0 of K; K_0 reflects the average density of the point cloud: it takes a larger value for sparse point clouds and a smaller value for dense ones; K_0 = U × (N_0 / N), where, by default, U = 32, N_0 = 1024, and N is the number of input points;
D2. calculate the component D of K; D is computed by a kernel density estimation algorithm as D = d / d_max, where d is the inverse density value;
D3. for each preliminary sampling point s_i, the value of K is computed as K = K_0 × D; according to the different K values, K nearest neighbors are selected around each preliminary sampling point s_i for region division, yielding M regions Ns_i = DAKNN(n_k | s_i, k = 1, 2, ..., K).
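A minimal sketch of the DAKNN neighborhood-size computation described in step D, assuming a Gaussian kernel density estimate as the density model; the bandwidth value and the helper name `adaptive_k` are illustrative choices, not from the source:

```python
import numpy as np

def adaptive_k(P, S, U=32, N0=1024, bandwidth=0.2):
    """Density-adaptive K for each preliminary sampling point:
    K0 = U * (N0 / N) reflects the average density, and D = d / d_max,
    where d is an inverse-density value from a Gaussian kernel density
    estimate, so sparse regions receive a larger K than dense regions."""
    N = P.shape[0]
    K0 = U * (N0 / N)
    sq = ((S[:, None, :] - P[None, :, :]) ** 2).sum(-1)       # (M, N) sq. dists
    density = np.exp(-sq / (2.0 * bandwidth**2)).sum(axis=1)  # KDE at each s_i
    d = 1.0 / density                                         # inverse density
    D = d / d.max()                                           # normalize, (0, 1]
    return np.clip(np.rint(K0 * D), 1, N).astype(int)         # K = K0 * D

rng = np.random.default_rng(0)
cluster = rng.normal(scale=0.05, size=(100, 3))   # dense region
outlier = np.array([[5.0, 5.0, 5.0]])             # sparse region
P = np.vstack([cluster, outlier])
S = np.vstack([cluster[:1], outlier])             # one sample from each region
K = adaptive_k(P, S)
print(K)   # the sparse point receives a larger K than the dense one
```

The point in the sparse region has the maximum inverse density, so its D is 1 and it keeps the full K_0, while the point inside the dense cluster gets a much smaller neighborhood.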
In step E, performing the weighted update of the preliminary sampling points using neighborhood information with a self-attention mechanism further comprises the following sub-steps:
E1. in the neighborhood Ns_i of each preliminary sampling point s_i, compute a feature representation of each member; the calculation formulas are:
φ(n_k) = W_φ n_k (1)
θ(n_j) = W_θ n_j (2)
where φ and θ denote different feature representations, which can be implemented with one-dimensional convolutions; n_k and n_j both denote members of Ns_i;
E2. compute the pairwise relationship f between the members of Ns_i by dot product; the calculation formula is:
f(n_k, n_j) = φ(n_k)^T θ(n_j) (3)
E3. update the features of the neighborhood members using the pairwise relationships and a self-attention mechanism; the update formula is:
n̂_k = (1/C) Σ_j f(n_k, n_j) γ(n_j) (4)
where n_k denotes the member whose information is currently being updated, n_j enumerates all members of Ns_i, γ is a linear transformation used to encode member features, and C is a normalization factor;
E4. compute the contribution of the K neighbors to each preliminary sampling point with a point-wise multilayer perceptron σ, and compute a normalized weight matrix with Softmax; the weights are computed as:
w_k = Softmax(σ(n̂_k)) (5)
E5. for each preliminary sampling point s_i, compute the updated sampling point s_i* from the weighted average of the neighbor features; the calculation formula is:
s_i* = Σ_{k=1}^{K} w_k n̂_k (6)
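The neighborhood update of step E can be sketched for a single sampling point as follows. The feature dimensions, the random weights, and the reading of the final step as a weighted average of the updated features are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def attention_update(neighbors):
    """Update one preliminary sampling point from its K neighbors:
    pairwise relations via feature maps phi/theta, a self-attention
    feature update with a linear map gamma, Softmax weights from a
    point-wise MLP sigma, and a final weighted average."""
    Wphi, Wtheta, Wgamma = (rng.standard_normal((3, 3)) * 0.1 for _ in range(3))
    Wsigma = rng.standard_normal((3, 1)) * 0.1
    phi = neighbors @ Wphi             # phi(n_k) = W_phi n_k
    theta = neighbors @ Wtheta         # theta(n_j) = W_theta n_j
    gamma = neighbors @ Wgamma         # gamma encodes member features
    f = phi @ theta.T                  # relations f(n_k, n_j) by dot product
    f = np.exp(f - f.max(axis=1, keepdims=True))
    f /= f.sum(axis=1, keepdims=True)  # row normalization (the factor C)
    updated = f @ gamma                # self-attention feature update
    logits = (updated @ Wsigma).ravel()
    w = np.exp(logits - logits.max())
    w /= w.sum()                       # Softmax weights from point-wise MLP
    return w @ updated                 # weighted average -> updated point

neighbors = rng.standard_normal((16, 3))   # K = 16 neighbors of one point
s_new = attention_update(neighbors)
print(s_new.shape)                         # (3,)
```

Because the updated point is a weighted combination of its whole neighborhood, a single noisy neighbor contributes only through its (small) weight, which is how the module dampens noise.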
In step F, inputting the updated sampling points into the pre-trained PointNet model and optimizing a loss function to obtain the final sampling points further comprises the following sub-steps:
F1. feed the sampling points s_i* into the pre-trained task network PointNet and compute the classification task loss function L_t;
F2. construct the closeness loss function L_c between the sampling points s_i* and their neighborhoods; L_c is defined as:
L_c = (1/M) Σ_{i=1}^{M} (1/K) Σ_{k=1}^{K} ‖s_i* − n_k‖² (7)
F3. jointly optimize the two loss functions L_t and L_c to generate task-related final sampling points; the total loss of the sampling network is defined as:
L_DA-Net = L_t + L_c (8).
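Under the assumption that the closeness loss averages the squared distances between each updated sampling point and its neighborhood members (the task loss L_t would come from the frozen PointNet and is passed in as a number here), the joint objective of step F can be sketched as:

```python
import numpy as np

def closeness_loss(S_new, neighborhoods):
    """L_c: average squared distance between each updated sampling point
    and the members of its neighborhood (assumed form of the loss)."""
    per_point = [np.mean(((Ns - s) ** 2).sum(axis=1))
                 for s, Ns in zip(S_new, neighborhoods)]
    return float(np.mean(per_point))

def total_loss(L_t, S_new, neighborhoods):
    """Total loss of the sampling network: L_DA-Net = L_t + L_c."""
    return L_t + closeness_loss(S_new, neighborhoods)

# A point sitting exactly on all its neighbors incurs zero closeness loss,
# so the total loss reduces to the task loss alone.
S_new = np.zeros((1, 3))
neighborhoods = [np.zeros((4, 3))]
print(total_loss(1.5, S_new, neighborhoods))   # 1.5
```

The closeness term anchors the freely regressed points to the input geometry, while the task term keeps them useful for classification.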
compared with the prior art, the invention has the following outstanding advantages and technical effects:
the invention provides a density self-adaptive point cloud end-to-end sampling method. Firstly, a point cloud simplified network is designed, so that preliminary sampling points are generated in a network learning mode. Secondly, aiming at the problem of density change of the point cloud, a density self-adaptive K nearest neighbor algorithm DAKNN is provided, and neighborhood division of different sizes can be realized according to the area density. Meanwhile, a point cloud information adjusting module is designed, the preliminary sampling point information is adjusted through a self-attention mechanism, and the influence of noise on a sampling result can be relieved. And finally, putting the sampling points into a pre-training task network, thereby realizing task-related sampling in an end-to-end mode.
The invention performs point cloud sampling by end-to-end learning, and the sampling result is not limited to a subset of the original points. This not only facilitates the extraction of salient points but also alleviates the influence of outliers on the sampling process. Second, to address point cloud density variation, the invention designs a density-adaptive neighborhood construction module, which achieves robust sampling even when the point cloud is unevenly distributed. Meanwhile, considering noise, the invention designs a sampling point adjustment module that updates the sampling points with neighborhood information, effectively reducing the influence of noise and improving the effectiveness of the sampling results.
Drawings
FIG. 1 is a schematic diagram of a point cloud sampling network framework according to the present invention.
FIG. 2 is a schematic diagram of the point cloud simplification network module according to the present invention.
FIG. 3 is a schematic diagram of a density adaptive sampling module according to the present invention.
FIG. 4 is a comparison of the performance of the method of the present invention with other sampling methods at different sampling ratios.
Detailed Description
The method of the present invention is described in detail below with reference to the accompanying drawings and embodiments; the embodiments illustrate the technical solution of the present invention without limiting it.
The invention first prepares a point cloud dataset for the classification task, and pre-trains the classification network PointNet on this dataset to obtain a PointNet pre-trained model. The original point cloud data are input into the point cloud simplification network to realize a preliminary sampling of the point cloud. For each preliminary sampling point, the density-adaptive K-nearest-neighbor algorithm DAKNN is applied to divide neighborhoods of different sizes according to the K value. A self-attention mechanism is then adopted to perform a weighted update of the preliminary sampling points using neighborhood information. Finally, the updated sampling points are input into the pre-trained PointNet model, and a loss function is optimized to obtain the final sampling points.
Referring to fig. 1 to 4, an embodiment of the present invention includes the following steps:
1. preparing a related data set of the point cloud classification task. A ModelNet40 data set was prepared (Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Lingueng Zhang, Xiaou Tang, and Jianxing Xiao. "3 d renderings: Adeep representation for volumetric maps." In Proceedings of the IEEE Conference Computer Vision and Pattern recognition.2015. pp.1912-1920.). The data set contains 12,311 models, including 40 object classes. According to the official training test split, the training set includes 843 models and the test set includes 2468 models.
2. The PointNet classification task network was pre-trained on a ModelNet40 dataset.
A. Pre-train the PointNet network on ModelNet40. The number of input points of each point cloud sample is 1024.
B. Save the PointNet pre-trained model; its weights are kept fixed during the training of the sampling network.
3. Obtain a preliminary sampling result with the point cloud simplification network.
A. Input the original point cloud data P ∈ R^{N×3} into the point cloud simplification network.
B. Acquire point cloud features through a series of multilayer perceptrons, extract a global feature via global max pooling, and finally obtain M preliminary simplified points Q ∈ R^{M×3} through a series of fully connected layers.
4. For each preliminary point cloud sampling point, apply the density-adaptive K-nearest-neighbor algorithm DAKNN to construct neighborhoods of different sizes.
A. Calculate the component K_0 of K. K_0 reflects the average density of the point cloud: it takes a larger value for sparse point clouds and a smaller value for dense ones. K_0 = U × (N_0 / N); by default, U = 32, N_0 = 1024, and N is the number of input points.
B. Calculate the component D of K. D is computed by a kernel density estimation algorithm as D = d / d_max, where d is the inverse density value.
C. For each preliminary sampling point s_i, the value of K is computed as K = K_0 × D. According to the different K values, K nearest neighbors are selected around each preliminary sampling point s_i for region division, yielding M regions Ns_i = DAKNN(n_k | s_i, k = 1, 2, ..., K).
5. Adjust the preliminary sampling points using neighborhood information.
A. In the neighborhood Ns_i of each preliminary sampling point s_i, compute a feature representation of each member. The calculation formulas are:
φ(n_k) = W_φ n_k (1)
θ(n_j) = W_θ n_j (2)
where φ and θ denote different feature representations, which can be implemented with one-dimensional convolutions; n_k and n_j both denote members of Ns_i.
B. Compute the pairwise relationship f between the members of Ns_i by dot product. The calculation formula is:
f(n_k, n_j) = φ(n_k)^T θ(n_j) (3)
C. Update the features of the neighborhood members using the pairwise relationships and a self-attention mechanism. The update formula is:
n̂_k = (1/C) Σ_j f(n_k, n_j) γ(n_j) (4)
Here, n_k denotes the member whose information is currently being updated, and n_j enumerates all members of Ns_i. γ is a linear transformation used to encode member features. C is a normalization factor.
D. Compute the contribution of the K neighbors to each preliminary sampling point with a point-wise multilayer perceptron σ, and compute a normalized weight matrix with Softmax:
w_k = Softmax(σ(n̂_k)) (5)
E. For each preliminary sampling point s_i, compute the updated sampling point s_i* from the weighted average of the neighbor features. The calculation formula is:
s_i* = Σ_{k=1}^{K} w_k n̂_k (6)
6. Input the sampling points into the pre-trained task network and obtain the final sampling result by optimizing the loss function.
A. Input the updated sampling points s_i* into the pre-trained task network to obtain the task loss function L_t.
B. Construct the closeness loss function L_c between the sampling points s_i* and their neighborhoods, defined as:
L_c = (1/M) Σ_{i=1}^{M} (1/K) Σ_{k=1}^{K} ‖s_i* − n_k‖² (7)
C. Jointly optimize L_t and L_c to finally generate a sampling result that is task-related and close to the neighborhoods. The total loss of the sampling network is defined as:
L_DA-Net = L_t + L_c (8)
the invention is compared with other sampling methods in the performance of different sampling ratios in tables 1 and 2. The evaluation index is classification accuracy (%). Table 1 shows the performance of the proposed method compared to other sampling methods under noisy conditions on a ModelNet40 dataset, where:
RS is random sampling, an abbreviation for random sampling;
FPS is farthentpointpampling, an abbreviation for farthest point sampling;
S-NET corresponds to the method proposed by Dovrat et al (Oren Dovrat, Itai Lang, and Shai Avidan. "Learning to sample." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern recognition.2019. pp.2760-2769.);
SampleNet corresponds to the method proposed by Lang et al (Itai Lang, Asaf Manor, and Shai Avidan. "SampleNet: differentiated point closed sampling." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern recognition.2020. pp.7578-7588.).
TABLE 1
[Table 1 is rendered as an image in the original document and is not reproduced here.]
TABLE 2
[Table 2 is rendered as an image in the original document and is not reproduced here.]
Table 2 compares the method of the present invention with other sampling methods on the ModelNet40 dataset under point cloud density variation; the compared methods are the same as in Table 1. FIG. 4 compares the performance of the method of the present invention with other sampling methods at different sampling ratios. The evaluation index is classification accuracy (%); the comparison shows that the classification accuracy of the proposed method is clearly superior to that of the other sampling methods.

Claims (7)

1. A density-adaptive end-to-end point cloud sampling method, characterized by comprising the following steps:
A. preparing a point cloud dataset for a classification task;
B. pre-training the classification network PointNet to obtain a PointNet pre-trained model;
C. inputting the original point cloud data into a point cloud simplification network to realize a preliminary sampling of the point cloud;
D. for each preliminary sampling point, applying the density-adaptive K-nearest-neighbor algorithm DAKNN to divide neighborhoods of different sizes according to the K value;
E. adopting a self-attention mechanism and performing a weighted update of the preliminary sampling points using neighborhood information;
F. inputting the updated sampling points into the pre-trained PointNet model and optimizing a loss function to obtain the final sampling points.
2. The method of claim 1, wherein in step A, the preparing of the point cloud dataset for the classification task further comprises the following sub-step:
A1. prepare the ModelNet40 dataset; the dataset contains 12,311 models from 40 different categories; following the official train/test split, it comprises 9,843 training models and 2,468 test models.
3. The method of claim 1, wherein in step B, pre-training the classification network PointNet to obtain a PointNet pre-trained model further comprises the following sub-steps:
B1. pre-train the PointNet classification network on the ModelNet40 dataset, where the number of input points of each point cloud sample is 1024;
B2. obtain the pre-trained PointNet model, whose weights are kept fixed during the training of the sampling network.
4. The method of claim 1, wherein in step C, inputting the original point cloud data into the point cloud simplification network to realize the preliminary sampling of the point cloud further comprises the following sub-steps:
C1. input the original point cloud data P ∈ R^{N×3} into the point cloud simplification network;
C2. acquire point cloud features through a series of multilayer perceptrons, extract a global feature via global max pooling, and finally obtain M preliminary simplified points Q ∈ R^{M×3} through a series of fully connected layers.
5. The method of claim 1, wherein in step D, implementing the density-adaptive K-nearest-neighbor algorithm DAKNN and dividing neighborhoods of different sizes by the K value further comprises the following sub-steps:
D1. calculate the component K_0 of K; K_0 reflects the average density of the point cloud: it takes a larger value for sparse point clouds and a smaller value for dense ones; K_0 = U × (N_0 / N), where, by default, U = 32, N_0 = 1024, and N is the number of input points;
D2. calculate the component D of K; D is computed by a kernel density estimation algorithm as D = d / d_max, where d is the inverse density value;
D3. for each preliminary sampling point s_i, the value of K is computed as K = K_0 × D; according to the different K values, K nearest neighbors are selected around each preliminary sampling point s_i for region division, yielding M regions Ns_i = DAKNN(n_k | s_i, k = 1, 2, ..., K).
6. The method of claim 1, wherein in step E, the weighted updating of the preliminary sampling points using neighborhood information with a self-attention mechanism further comprises the following sub-steps:
E1. in the neighborhood Ns_i of each preliminary sampling point s_i, compute a feature representation of each member; the calculation formulas are:
φ(n_k) = W_φ n_k (1)
θ(n_j) = W_θ n_j (2)
where φ and θ denote different feature representations, which can be implemented with one-dimensional convolutions; n_k and n_j both denote members of Ns_i;
E2. compute the pairwise relationship f between the members of Ns_i by dot product; the calculation formula is:
f(n_k, n_j) = φ(n_k)^T θ(n_j) (3)
E3. update the features of the neighborhood members using the pairwise relationships and a self-attention mechanism; the update formula is:
n̂_k = (1/C) Σ_j f(n_k, n_j) γ(n_j) (4)
where n_k denotes the member whose information is currently being updated, n_j enumerates all members of Ns_i, γ is a linear transformation used to encode member features, and C is a normalization factor;
E4. compute the contribution of the K neighbors to each preliminary sampling point with a point-wise multilayer perceptron σ, and compute a normalized weight matrix with Softmax; the weights are computed as:
w_k = Softmax(σ(n̂_k)) (5)
E5. for each preliminary sampling point s_i, compute the updated sampling point s_i* from the weighted average of the neighbor features; the calculation formula is:
s_i* = Σ_{k=1}^{K} w_k n̂_k (6)
7. The method of claim 1, wherein in step F, inputting the updated sampling points into the pre-trained PointNet model and optimizing a loss function to obtain the final sampling points further comprises the following sub-steps:
F1. feed the sampling points s_i* into the pre-trained task network PointNet and compute the classification task loss function L_t;
F2. construct the closeness loss function L_c between the sampling points s_i* and their neighborhoods; L_c is defined as:
L_c = (1/M) Σ_{i=1}^{M} (1/K) Σ_{k=1}^{K} ‖s_i* − n_k‖² (7)
F3. jointly optimize the two loss functions L_t and L_c to generate task-related final sampling points; the total loss of the sampling network is defined as:
L_DA-Net = L_t + L_c (8).
CN202110785552.8A 2021-07-12 2021-07-12 Point cloud end-to-end sampling method with self-adaptive density Active CN113486963B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110785552.8A CN113486963B (en) 2021-07-12 2021-07-12 Point cloud end-to-end sampling method with self-adaptive density


Publications (2)

Publication Number Publication Date
CN113486963A true CN113486963A (en) 2021-10-08
CN113486963B CN113486963B (en) 2023-07-07

Family

ID=77938803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110785552.8A Active CN113486963B (en) 2021-07-12 2021-07-12 Point cloud end-to-end sampling method with self-adaptive density

Country Status (1)

Country Link
CN (1) CN113486963B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113988164A (en) * 2021-10-21 2022-01-28 电子科技大学 Representative point self-attention mechanism-oriented lightweight point cloud target detection method
CN114445280A (en) * 2022-01-21 2022-05-06 太原科技大学 Point cloud down-sampling method based on attention mechanism
CN116542981A (en) * 2023-07-06 2023-08-04 无锡晨智物联科技有限公司 Quality assessment method for reference-point-free cloud

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027559A (en) * 2019-10-31 2020-04-17 湖南大学 Point cloud semantic segmentation method based on expansion point convolution space pyramid pooling
CN111783838A (en) * 2020-06-05 2020-10-16 东南大学 Point cloud characteristic space representation method for laser SLAM
CN111951196A (en) * 2020-08-14 2020-11-17 北京大学深圳研究生院 Progressive point cloud down-sampling method and device based on graph
CN112418235A (en) * 2020-11-20 2021-02-26 中南大学 Point cloud semantic segmentation method based on expansion nearest neighbor feature enhancement
US20210133463A1 (en) * 2019-10-31 2021-05-06 Aptiv Technologies Limited Multi-Domain Neighborhood Embedding and Weighting of Point Cloud Data
CN112907602A (en) * 2021-01-28 2021-06-04 中北大学 Three-dimensional scene point cloud segmentation method based on improved K-nearest neighbor algorithm
CN112949647A (en) * 2021-02-26 2021-06-11 中国科学院自动化研究所 Three-dimensional scene description method and device, electronic equipment and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027559A (en) * 2019-10-31 2020-04-17 湖南大学 Point cloud semantic segmentation method based on dilated point convolution spatial pyramid pooling
US20210133463A1 (en) * 2019-10-31 2021-05-06 Aptiv Technologies Limited Multi-Domain Neighborhood Embedding and Weighting of Point Cloud Data
CN111783838A (en) * 2020-06-05 2020-10-16 东南大学 Point cloud feature space representation method for laser SLAM
CN111951196A (en) * 2020-08-14 2020-11-17 北京大学深圳研究生院 Graph-based progressive point cloud downsampling method and device
CN112418235A (en) * 2020-11-20 2021-02-26 中南大学 Point cloud semantic segmentation method based on dilated nearest-neighbor feature enhancement
CN112907602A (en) * 2021-01-28 2021-06-04 中北大学 Three-dimensional scene point cloud segmentation method based on an improved k-nearest-neighbor algorithm
CN112949647A (en) * 2021-02-26 2021-06-11 中国科学院自动化研究所 Three-dimensional scene description method and device, electronic device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MEIXIA JIA; AIJIN LI; ZHAOYANG WU: "A Global Point-Sift Attention Network for 3D Point Cloud Semantic Segmentation", IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium *
QIANG LU; CHAO CHEN; WENJUN XIE; YUETONG LUO: "PointNGCNN: Deep convolutional networks on 3D point clouds with neighborhood graph filters", Computers & Graphics, vol. 86, no. 2 *
JING CHUAN: "Research on 3D Point Cloud Semantic Segmentation Based on Deep Learning", China Master's Theses Full-text Database (Information Science and Technology), vol. 138, no. 2020 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113988164A (en) * 2021-10-21 2022-01-28 电子科技大学 Lightweight point cloud object detection method based on a representative-point self-attention mechanism
CN113988164B (en) * 2021-10-21 2023-08-08 电子科技大学 Lightweight point cloud object detection method based on a representative-point self-attention mechanism
CN114445280A (en) * 2022-01-21 2022-05-06 太原科技大学 Attention-mechanism-based point cloud downsampling method
CN114445280B (en) * 2022-01-21 2024-03-29 太原科技大学 Attention-mechanism-based point cloud downsampling method
CN116542981A (en) * 2023-07-06 2023-08-04 无锡晨智物联科技有限公司 No-reference point cloud quality assessment method
CN116542981B (en) * 2023-07-06 2023-08-29 无锡晨智物联科技有限公司 No-reference point cloud quality assessment method

Also Published As

Publication number Publication date
CN113486963B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
Dai et al. Compressing neural networks using the variational information bottleneck
CN113486963B (en) Point cloud end-to-end sampling method with self-adaptive density
Fu et al. Robust volume minimization-based matrix factorization for remote sensing and document clustering
CA2913743C (en) Systems and methods for performing bayesian optimization
JP7076648B2 (en) Image processing methods and equipment, computer equipment and computer storage media
CN113191387A (en) Cultural relic fragment point cloud classification method combining unsupervised learning and data self-augmentation
CN112307714A (en) Character style transfer method based on a two-stage deep network
CN110991621A (en) Method for searching convolutional neural network based on channel number
CN111353534B (en) Graph data category prediction method based on adaptive fractional order gradient
Thakur et al. PReLU and edge‐aware filter‐based image denoiser using convolutional neural network
CN109151727B (en) WLAN fingerprint positioning database construction method based on improved DBN
CN108182316B (en) Electromagnetic simulation method based on artificial intelligence and electromagnetic brain thereof
CN112950505B (en) Image processing method, system and medium based on a generative adversarial network
Kakillioglu et al. 3D capsule networks for object classification with weight pruning
CN115272696A (en) Point cloud semantic segmentation method based on self-adaptive convolution and local geometric information
CN116993513A (en) Financial risk control model interpretation method and device, and computer equipment
EP4081953A1 (en) Domain generalization via batch normalization statistics
Kumar et al. Image Deconvolution using Deep Learning-based Adam Optimizer
WO2023086198A1 (en) Robustifying nerf model novel view synthesis to sparse data
CN113643336A (en) Three-dimensional image rigid matching method based on spherical polar coordinate system deep neural network
WO2024060839A1 (en) Object operation method and apparatus, computer device, and computer storage medium
CN117151998A (en) Image illumination correction method based on support vector regression
CN110717402A (en) Pedestrian re-identification method based on hierarchical optimization metric learning
CN117611838A (en) Multi-label image classification method based on self-adaptive hypergraph convolutional network
CN117349687A (en) Daily load curve clustering method based on a variable-convolution autoencoder

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant