CN112396655B - Point cloud data-based ship target 6D pose estimation method
- Publication number: CN112396655B (application CN202011290504.3A)
- Authority: CN (China)
- Prior art keywords: point, target, dimensional, proposal, points
- Prior art date: 2020-11-18
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T7/00—Image analysis > G06T7/70—Determining position or orientation of objects or cameras > G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06F—ELECTRIC DIGITAL DATA PROCESSING > G06F18/00—Pattern recognition > G06F18/20—Analysing > G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation > G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS > G06—COMPUTING; CALCULATING OR COUNTING > G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N3/00—Computing arrangements based on biological models > G06N3/02—Neural networks > G06N3/08—Learning methods
Abstract
The invention discloses a point cloud data-based ship target 6D pose estimation method, which comprises the following steps. Step 1: acquire a ship point cloud data set of a marine scene, where the data set labels comprise the target category, the three-dimensional coordinates of the target, the three-dimensional size of the target and the three-dimensional pose of the target. Step 2: construct a neural network and extract point-wise point cloud features with PointNet++ to obtain point-wise high-dimensional features. Step 3: generate 3D bounding box proposals in a bottom-up scheme, generate the ground-truth segmentation mask from the 3D bounding boxes, segment the foreground points, and simultaneously generate bounding box proposals with angle information from the segmented points as the input of the RCNN stage. Step 4: refine the proposals obtained in step 3 using the foreground segmentation features and spatial features, and output the final classification, 3D box and attitude angles. The method estimates the pose of three-dimensional targets in an end-to-end learning manner and improves the real-time performance of pose estimation.
Description
Technical Field
The invention relates to a point cloud data-based ship target 6D pose estimation method, and belongs to the fields of point clouds, ship target pose estimation in offshore scenes, deep learning and neural networks.
Background
Pose estimation plays a very important role in the field of computer vision. It is widely used for estimating the pose of a robot from a vision sensor for control, for robot navigation, for augmented reality, and so on. The pose estimation problem of a target is to determine the spatial position of the object in 3D space and its rotation angles about the coordinate axes: yaw (Yaw) about the Z-axis, pitch (Pitch) about the Y-axis, and roll (Roll) about the X-axis. In recent years, methods for 6D pose estimation can be divided into four broad categories: methods based on model corresponding points, methods based on template matching, methods based on voting, and methods based on regression. Methods based on model corresponding points mainly target objects with rich textures, while methods based on template matching mainly target images with weak or no textures. The particularities of the marine scene for a ship target, such as sea-surface lighting and changing weather, affect image quality. Traditional three-dimensional attitude estimation algorithms can generally only detect a single target and are time-consuming.
Disclosure of Invention
In view of the prior art, the technical problem to be solved by the invention is to provide a point cloud data-based ship target 6D pose estimation method that learns the relation between point cloud features and the 6D pose and, by improving a deep-learning-based target detection framework, further regresses the 6D pose of the target.
In order to solve the technical problem, the invention provides a ship target 6D pose estimation method based on point cloud data, which comprises the following steps:
Step 1: acquiring a ship point cloud data set of a marine scene, wherein the data set labels comprise the target category, the three-dimensional coordinates of the target, the three-dimensional size of the target and the three-dimensional pose of the target;
Step 2: constructing a neural network, and extracting point-wise point cloud features with PointNet++ to obtain point-wise high-dimensional features;
Step 3: generating 3D bounding box proposals in a bottom-up scheme, generating a ground-truth segmentation mask based on the 3D bounding boxes, segmenting foreground points and simultaneously generating bounding box proposals with angle information from the segmented points as the input of the RCNN stage;
Step 4: refining the proposals obtained in step 3 based on the foreground segmentation features and spatial features, so as to output the final classification, 3D box and attitude angles.
The invention also includes:
1. In step 2, PointNet++ is adopted to extract point cloud features point by point, and the point-wise high-dimensional features are obtained as follows: the feature extraction of the point set comprises three parts, namely a Sampling layer, a Grouping layer and a PointNet layer. The sampling algorithm of the Sampling layer uses iterative farthest point sampling: a series of points is selected from the input point cloud to define the centers of local regions; a local neighborhood is then constructed by searching for points within a given distance; features are extracted with fully connected layers; finally, a pooling operation yields high-level features, and the point set is upsampled back to the original number of points, giving the point-wise high-dimensional features of the input point set.
2. In step 3, the ground-truth segmentation mask is generated based on the 3D bounding boxes, foreground points are segmented, and bounding box proposals with angle information are generated from the segmented points, specifically as follows: each point is classified into two classes with foreground and background scores, which are normalized to 0-1 by a sigmoid function, and points whose score is higher than a threshold are regarded as foreground points. While the foreground points (N in total) are segmented, bounding box proposals with angle information are generated from them: with each foreground point as a center, an initial proposal is generated at each point from the regressed values and a preset average size, with shape (batch_size x N, 9), where the 9-dimensional vector [x, y, z, h, w, l, rx, ry, rz] denotes the center position of the target, its height, width and length, and its rotation angles about the x, y and z axes, respectively. The proposals are then sorted by classification score, the top 512 bounding boxes of each batch are selected by non-maximum suppression, and the bounding boxes, angle information and confidence scores are returned. In the segmentation stage, a cross-entropy loss function is adopted as the loss of the binary classification network; for the initial proposal prediction, x, z and the yaw angle are regressed with bin-based losses, while the size information (h, w, l) and the rotation angles (rx, rz) are regressed with smooth L1 loss.
The invention has the beneficial effects that: ship perception is one of the paths toward intelligent ship development, and identifying the position and attitude of a ship target is an indispensable part of ship perception and of ship tracking and identification. Adding attitude estimation of the ship target on top of target detection enhances perception accuracy. Traditional three-dimensional attitude estimation algorithms can generally only detect a single target and are time-consuming, so the algorithm adopts a deep-learning neural network trained on a prior data set, which greatly improves the real-time performance of the algorithm and makes pose estimation in complex scenes possible. The method estimates the pose of three-dimensional targets in an end-to-end learning manner and improves the real-time performance of pose estimation.
Drawings
FIG. 1 is a schematic diagram of the overall network structure;
FIG. 2 is a visualization of the pose estimation results of the algorithm.
Detailed Description
The following description of the embodiments of the present invention will be made with reference to the accompanying drawings.
The purpose of the invention is realized as follows:
1. acquiring a data set
The point cloud data set is produced manually and must contain ship point cloud data of offshore scenes, covering ship targets under single-ship, multi-ship, occluded and other conditions. At the same time, the point cloud data of each scene should carry the corresponding ship pose label, whose structure comprises the target category, the three-dimensional coordinates of the target, the three-dimensional size of the target and the three-dimensional pose of the target, i.e., the yaw, roll and pitch angles corresponding to the ship target.
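For illustration only, a minimal sketch of one such label record is given below, written as a Python dictionary; the field names and numeric values are hypothetical, since the invention specifies the content of the labels but not a storage format.

```python
# Hypothetical label record for one ship target in one scene.
# The patent fixes the content (category, 3D coordinates, 3D size,
# yaw/roll/pitch angles), not the field names or file format used here.
example_label = {
    "category": "cargo_ship",                   # target category
    "center_xyz": [52.3, -4.1, 1.8],            # three-dimensional coordinates of the target (m)
    "size_hwl": [12.0, 18.0, 95.0],             # three-dimensional size: height, width, length (m)
    "pose_yaw_roll_pitch": [1.47, 0.02, 0.01],  # yaw, roll and pitch angles of the ship target (rad)
}
```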
2. Constructing neural networks
The algorithm is based on the 3D target detection network PointRCNN. A 3D target detection algorithm only outputs a 3D box, so a network angle branch is added on this basis to estimate the 6D pose of the target. The network mainly comprises two parts: point cloud feature extraction based on PointNet++, and an improvement of the 3D target detection network PointRCNN. The PointNet++ network serves as the backbone network and is responsible for extracting features from the input point cloud data.
The improved 6D pose estimation network based on PointRCNN comprises two stages. The first stage generates 3D bounding box proposals in a bottom-up scheme and, on the basis of these proposals, simultaneously obtains the three-dimensional attitude angles of the target; the ground-truth segmentation mask is generated from the bounding boxes, foreground points are segmented, and a small number of bounding box proposals are generated from the segmented points. This strategy avoids using a large number of anchor boxes throughout the three-dimensional space. The second stage performs canonical box refinement with angles. After the bounding box and angle proposals are generated, the point representations from the first stage are pooled with a point cloud region pooling operation. Unlike methods that directly estimate global bounding box coordinates, the pooled points are converted to canonical coordinates and combined with the point features and the first-stage segmentation mask to complete coordinate refinement. This strategy makes full use of the information provided by the segmentation and proposal sub-networks of the first stage.
3. 6D pose estimation network main steps
(1) Generation of 3D bounding box and angle proposals
The backbone network adopts PointNet++ to extract point-wise point cloud features. Point cloud data of size N x 3 are input, and for each N x 3 point cloud input the feature extraction of the point set comprises three parts: a Sampling layer, a Grouping layer and a PointNet layer. The sampling algorithm of the Sampling layer uses iterative Farthest Point Sampling (FPS): a series of points is selected from the input point cloud, defining the centers of local regions. A local neighborhood is then constructed by searching for points within a given distance, features are extracted with fully connected layers, a pooling operation yields high-level features, and the point set is upsampled back to the original number of points, giving the point-wise high-dimensional features of the input point set.
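As a concrete illustration of the sampling step, the following is a minimal NumPy sketch of farthest point sampling for an unbatched (N, 3) point cloud; it is a sketch under these assumptions, not code published with the patent.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Iteratively select the point farthest from all points chosen so far.

    points: (N, 3) input point cloud; n_samples: number of local-region centers.
    Returns the indices of the selected center points.
    """
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=np.int64)
    dist = np.full(n, np.inf)     # squared distance to the nearest selected point
    farthest = 0                  # start from an arbitrary point (here: index 0)
    for i in range(n_samples):
        selected[i] = farthest
        diff = points - points[farthest]
        dist = np.minimum(dist, np.einsum("ij,ij->i", diff, diff))
        farthest = int(np.argmax(dist))   # next center: farthest from the selected set
    return selected

# Usage sketch: pick 1024 region centers from a raw scan before grouping.
# centers = points[farthest_point_sampling(points, 1024)]
```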
Objects in a 3D scene are naturally separated and do not overlap with each other, so the segmentation masks of all three-dimensional objects can be obtained directly from the 3D bounding box annotations, i.e., the 3D points inside a 3D box are regarded as foreground points. The point-wise features produced by the backbone network are fed to the foreground prediction branch, which performs classification for each point; the ground-truth segmentation mask of the point cloud is determined by the three-dimensional boxes.
In parallel with the foreground prediction branch, 3D box and three-dimensional angle prediction branches are attached, implemented with fully connected layers. Because the bounding box coordinates and yaw angle of a ship target vary over a large range while the roll and pitch angles usually vary only slightly, bin-based regression is adopted for the horizontal box coordinates x and z and for the yaw angle. Specifically, the area surrounding each foreground point is divided into a series of discrete bins along the X and Z axes, and bin-based cross-entropy classification plus residual regression is applied in these two directions instead of direct smooth L1 regression. In most offshore scenes the horizontal scale of a ship target changes greatly, whereas the vertical center coordinate usually changes little and the pitch and roll angles vary within a very small range, so the roll angle, the pitch angle and the height of the ship can be obtained accurately by direct regression with smooth L1 loss.
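To make the bin-based idea concrete, here is a minimal PyTorch sketch of encoding a horizontal offset into a bin label plus a residual, and of the corresponding loss; the search range and bin size are assumed hyper-parameters, not values fixed by the patent.

```python
import torch
import torch.nn.functional as F

# Assumed hyper-parameters (not specified in the patent text):
SEARCH_RANGE = 3.0   # meters searched on each side of a foreground point
BIN_SIZE = 0.5       # width of one bin along the X or Z axis
NUM_BINS = int(2 * SEARCH_RANGE / BIN_SIZE)

def encode_bin_target(offset: torch.Tensor):
    """Split a continuous offset (gt_center - foreground_point) into a
    discrete bin index and a normalized residual inside that bin."""
    shifted = torch.clamp(offset + SEARCH_RANGE, min=0.0, max=2 * SEARCH_RANGE - 1e-4)
    bin_idx = (shifted / BIN_SIZE).floor().long()
    residual = (shifted - bin_idx.float() * BIN_SIZE) / BIN_SIZE - 0.5
    return bin_idx, residual

def bin_based_loss(bin_logits, res_pred, offset_gt):
    """Cross-entropy over bins plus smooth L1 on the residual, as described for
    x, z and the yaw angle. bin_logits, res_pred: (N, NUM_BINS); offset_gt: (N,)."""
    bin_gt, res_gt = encode_bin_target(offset_gt)
    cls_loss = F.cross_entropy(bin_logits, bin_gt)
    # regress only the residual predicted for the ground-truth bin
    res_in_gt_bin = res_pred.gather(1, bin_gt.unsqueeze(1)).squeeze(1)
    reg_loss = F.smooth_l1_loss(res_in_gt_bin, res_gt)
    return cls_loss + reg_loss
```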
(2) Point cloud region pooling
After the three-dimensional bounding box proposals are obtained, the position and orientation of each box are refined based on the previously generated box and angle branches. Each point and its features are pooled according to the location of each three-dimensional box: the points and their features that fall inside a slightly enlarged bounding box are retained, and the segmentation mask is then used to distinguish foreground from background points within the slightly enlarged box. Proposals with no interior points are eliminated.
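The following is a minimal sketch of this pooling step on a single axis-aligned box; a full implementation would rotate the points into each proposal's frame, so treat the axis alignment, the axis-to-dimension mapping and the enlargement margin as assumptions rather than the patent's implementation.

```python
import numpy as np

def pool_points_in_box(points, feats, box, margin=0.5):
    """Keep the points (and their features) inside a slightly enlarged box.

    points: (N, 3); feats: (N, C); box: (9,) = [x, y, z, h, w, l, rx, ry, rz].
    Rotation is ignored here; margin is the assumed enlargement in meters.
    """
    center = box[0:3]
    h, w, l = box[3], box[4], box[5]
    half = np.array([l, w, h]) / 2.0 + margin   # assumed mapping: length~x, width~y, height~z
    inside = np.all(np.abs(points - center) <= half, axis=1)
    if not inside.any():
        return None   # a proposal with no interior points is eliminated
    return points[inside], feats[inside]
```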
(3) Proposal optimization by canonical 3D bounding box and angle regression
The pooled point sets and their associated features are fed into the second-stage sub-network to optimize the position and angle of the box and the confidence of the foreground object. The combined features pass through three of the Sampling (set abstraction) layers proposed by PointNet++ to obtain high-dimensional features, and classification and regression branches then predict and output the coordinate, size and attitude angle information of the target.
The loss function is as follows:
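The formula itself appeared as an image in the original publication; the expression below is a LaTeX reconstruction assumed from the symbols defined next and from the second-stage loss of PointRCNN, on which the network is based, with F_bin and F_res denoting the bin classification and residual regression terms described above.

L_refine = \frac{1}{\lVert B \rVert} \sum_{i \in B} F_{cls}(prob_i, label_i) + \frac{1}{\lVert B_{pos} \rVert} \sum_{i \in B_{pos}} \big( F_{bin}(i) + F_{res}(i) \big)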
where B is the proposal set of the first stage, B_pos is the set of positive proposals used for regression, prob_i is the predicted confidence, label_i is the corresponding label, and F_cls is the cross-entropy loss function that computes the confidence of the prediction.
With reference to fig. 1 and 2, the steps of the present invention are as follows:
Step 1: a pose data set of ship scenes is prepared, where the data set labels comprise the target category, the three-dimensional coordinates of the target, the three-dimensional size of the target and the three-dimensional pose of the target, i.e., the yaw, roll and pitch angles corresponding to the ship target.
Step 2: point cloud feature extraction based on PointNet++. The feature extraction part consists mainly of Set Abstraction sub-networks that perform feature extraction on the point set. For the input point cloud, the sampling algorithm, iterative farthest point sampling (FPS), repeatedly searches for the farthest point; the series of farthest points selected from the input point cloud defines the centers of local regions. A local neighborhood is then constructed by searching for points within a given distance, features are extracted with fully connected layers, a pooling operation yields high-level features, and the point set is upsampled back to the original number of points, giving point-wise high-dimensional features of the input point set. In this way the backbone network produces point-wise high-dimensional features of the input point cloud data.
Step 3: the RPN stage. 3D bounding box proposals are generated in a bottom-up scheme. The ground-truth segmentation mask is generated based on the 3D bounding boxes: each point is classified into two classes with foreground and background scores, which are normalized to 0-1 by a sigmoid function, and points whose score is higher than a threshold are regarded as foreground points. Foreground points are segmented, and a small number of bounding box proposals with angle information are generated from the segmented points: with each of the N foreground points as a center, an initial proposal is generated at each point from the regressed values and a preset average size, with shape (batch_size x N, 9), where the 9-dimensional vector [x, y, z, h, w, l, rx, ry, rz] denotes the center position of the target, its height, width and length, and its rotation angles about the x, y and z axes. The proposals are then sorted by classification score, the top 512 bounding boxes of each batch are selected by non-maximum suppression, and the bounding boxes, angle information and confidence scores are returned as the input of the RCNN stage. In the segmentation stage, a cross-entropy loss function is used as the loss of the binary classification network; for the initial proposal prediction, x, z and the yaw angle ry are regressed with bin-based losses, while the size information (h, w, l) and the rotation angles (rx, rz) are regressed with smooth L1 loss.
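To illustrate how the top 512 proposals per batch could be selected, here is a minimal sketch that sorts by score and suppresses overlaps with an axis-aligned bird's-eye-view IoU; the IoU routine and the suppression threshold are assumptions, since the text only states that score sorting and non-maximum suppression are used.

```python
import numpy as np

def bev_iou(box, boxes):
    """Axis-aligned IoU in the bird's-eye (x-z) view; rotation is ignored in this sketch.
    box: (4,) and boxes: (M, 4), rows are [x, z, l, w]."""
    x1a, z1a = box[0] - box[2] / 2, box[1] - box[3] / 2
    x2a, z2a = box[0] + box[2] / 2, box[1] + box[3] / 2
    x1b, z1b = boxes[:, 0] - boxes[:, 2] / 2, boxes[:, 1] - boxes[:, 3] / 2
    x2b, z2b = boxes[:, 0] + boxes[:, 2] / 2, boxes[:, 1] + boxes[:, 3] / 2
    inter = np.maximum(0.0, np.minimum(x2a, x2b) - np.maximum(x1a, x1b)) * \
            np.maximum(0.0, np.minimum(z2a, z2b) - np.maximum(z1a, z1b))
    union = box[2] * box[3] + boxes[:, 2] * boxes[:, 3] - inter
    return inter / np.maximum(union, 1e-9)

def select_top_proposals(boxes9, scores, keep=512, iou_thresh=0.7):
    """Greedy non-maximum suppression over score-sorted proposals (threshold assumed)."""
    order = np.argsort(-scores)
    bev = boxes9[:, [0, 2, 5, 4]]          # x, z, l, w taken from [x, y, z, h, w, l, rx, ry, rz]
    kept = []
    while order.size > 0 and len(kept) < keep:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        order = rest[bev_iou(bev[i], bev[rest]) < iou_thresh]
    return np.array(kept, dtype=np.int64)
```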
Step 4: the RCNN stage, whose purpose is to refine the proposals based on the small number of proposals obtained in the RPN stage together with the foreground segmentation features and spatial features, so as to output the final classification, 3D box and attitude angles. For the proposals from the RPN stage, the IoU between each ROI and the ground-truth boxes is computed; proposals whose IoU exceeds 0.55 are assigned the corresponding ground truth and are used as the predictions to be fine-tuned. The segmentation mask is concatenated and recombined with the point cloud coordinates and the high-dimensional features, the combined feature vectors are passed through a PointNet++ Set Abstraction sub-network to obtain high-level features, and prediction is performed by a classification layer and a regression layer, where the classification layer uses cross-entropy loss and the regression layer uses bin-based loss and smooth L1 loss.
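Below is a minimal sketch of the IoU-based assignment of proposals to ground-truth boxes; for brevity the IoU is computed on axis-aligned 3D boxes, which is an assumption (a rotated-box IoU would normally be used), while the 0.55 threshold comes from the description above.

```python
import numpy as np

def aabb_iou_3d(a, b):
    """IoU of two axis-aligned 3D boxes given as [x, y, z, h, w, l] (rotation ignored).
    Assumed mapping of extents to axes: length~x, width~y, height~z."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    half_a = np.array([a[5], a[4], a[3]]) / 2.0
    half_b = np.array([b[5], b[4], b[3]]) / 2.0
    lo = np.maximum(a[:3] - half_a, b[:3] - half_b)
    hi = np.minimum(a[:3] + half_a, b[:3] + half_b)
    inter = np.prod(np.maximum(hi - lo, 0.0))
    vol_a, vol_b = np.prod(2 * half_a), np.prod(2 * half_b)
    return inter / max(vol_a + vol_b - inter, 1e-9)

def assign_rois_to_gt(rois, gts, iou_thresh=0.55):
    """Return (roi_index, gt_index) pairs whose IoU exceeds the threshold;
    these proposals are the ones fine-tuned in the RCNN stage."""
    pairs = []
    for i, roi in enumerate(rois):
        if len(gts) == 0:
            break
        ious = np.array([aabb_iou_3d(roi, gt) for gt in gts])
        j = int(np.argmax(ious))
        if ious[j] > iou_thresh:
            pairs.append((i, j))
    return pairs
```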
Example:
1. A data set is made for training, containing the target category, the three-dimensional coordinates of the target, the three-dimensional size of the target and the three-dimensional pose of the target, i.e., the yaw, roll and pitch angles corresponding to the ship target.
2. Constructing the neural network.
The backbone network adopts PointNet++ to extract point-wise point cloud features. The feature extraction of the point set comprises three parts, namely a Sampling layer, a Grouping layer and a PointNet layer. The sampling algorithm of the Sampling layer uses iterative Farthest Point Sampling (FPS): a series of points is selected from the input point cloud, defining the centers of local regions. A local neighborhood is then constructed by searching for points within a given distance, features are extracted with fully connected layers, a pooling operation yields high-level features, and the point set is upsampled back to the original number of points, giving point-wise high-dimensional features of the input point set; the input is thus transformed from n x 3 to n x 128.
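For illustration, a minimal PyTorch sketch of one sampling-grouping-PointNet step is given below; the channel widths are illustrative, and k-nearest-neighbor grouping is used instead of the fixed-radius search described above, so this is an assumed simplification rather than the patent's implementation.

```python
import torch
import torch.nn as nn

class SetAbstractionSketch(nn.Module):
    """One Sampling + Grouping + PointNet step: for each sampled center, gather
    neighboring points, run a shared MLP on their local coordinates, and max-pool."""

    def __init__(self, in_ch=3, out_ch=128, n_neighbors=32):
        super().__init__()
        self.k = n_neighbors
        self.mlp = nn.Sequential(
            nn.Linear(in_ch, 64), nn.ReLU(),
            nn.Linear(64, out_ch), nn.ReLU(),
        )

    def forward(self, points, center_idx):
        # points: (N, 3); center_idx: (M,) indices chosen by farthest point sampling
        centers = points[center_idx]                      # (M, 3) region centers
        d = torch.cdist(centers, points)                  # (M, N) pairwise distances
        knn = d.topk(self.k, largest=False).indices       # (M, k) nearest neighbors per center
        grouped = points[knn] - centers.unsqueeze(1)      # (M, k, 3) local coordinates
        feats = self.mlp(grouped)                         # (M, k, out_ch) shared point-wise MLP
        return feats.max(dim=1).values                    # (M, out_ch) pooled region features
```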
3. 6D pose estimation network
The overall network structure is shown in FIG. 1. The point-wise high-dimensional features produced by the backbone network pass through three subsequent branches: a foreground prediction branch, a box regression branch and an angle regression branch. The foreground prediction branch performs classification for each point, and the ground-truth segmentation mask of the point cloud is determined by the three-dimensional boxes. In parallel with the foreground prediction branch, the 3D box and three-dimensional angle prediction branches are implemented with fully connected layers. Because the bounding box coordinates and yaw angle of a ship target vary over a large range while the roll and pitch angles usually vary only slightly, bin-based regression is adopted for the horizontal box coordinates x, z and the yaw angle. Specifically, the area surrounding each foreground point is divided into a series of discrete bins along the X and Z axes, and bin-based cross-entropy classification plus residual regression is applied in these two directions instead of direct smooth L1 regression. In most offshore scenes the horizontal scale of a ship target changes greatly, but the vertical center coordinate usually changes little and the pitch and roll angles vary within a very small range, so the algorithm obtains accurate values for them by direct regression with smooth L1 loss.
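As an illustration of these branches, here is a minimal sketch of the three per-point heads attached to the backbone features; the channel width and number of bins are assumptions, since the text only states that fully connected layers are used.

```python
import torch.nn as nn

class ProposalHeadsSketch(nn.Module):
    """Per-point heads on the backbone features: a foreground score, bin logits
    plus residuals for (x, z, yaw), and direct regression for the remaining values."""

    def __init__(self, feat_ch=128, num_bins=12):
        super().__init__()
        self.foreground = nn.Linear(feat_ch, 1)           # foreground score (sigmoid applied outside)
        self.bins = nn.Linear(feat_ch, 3 * num_bins * 2)  # bin logits + residuals for x, z, ry
        self.direct = nn.Linear(feat_ch, 6)               # y, h, w, l, rx, rz (smooth L1 targets)

    def forward(self, feats):                             # feats: (N, feat_ch)
        return self.foreground(feats), self.bins(feats), self.direct(feats)
```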
After the three-dimensional bounding box proposals are obtained, the position and orientation of each box are refined based on the previously generated box and angle branches. Each point and its features are pooled according to the location of each three-dimensional box: the points and their features inside a slightly enlarged bounding box are retained, the segmentation mask is used to distinguish foreground from background points within the slightly enlarged box, and proposals with no interior points are eliminated.
The pooled point sets and their associated features are fed into the second-stage sub-network to optimize the position and angle of the box and the confidence of the foreground object. The combined features pass through three of the Sampling (set abstraction) layers proposed by PointNet++ to obtain high-dimensional features, and classification and regression branches then predict the coordinate, size and attitude angle information of the output target; a visualization of the output is shown in FIG. 2.
The loss function is the same as given above, where B is the proposal set of the first stage, B_pos is the set of positive proposals used for regression, prob_i is the predicted confidence, label_i is the corresponding label, and F_cls is the cross-entropy loss function that computes the confidence of the prediction.
Claims (2)
1. A ship target 6D pose estimation method based on point cloud data is characterized by comprising the following steps:
step 1: acquiring a ship point cloud data set of a marine scene, wherein a data set label comprises a target category, a three-dimensional coordinate of a target, a three-dimensional size of the target and a three-dimensional pose of the target;
step 2: constructing a neural network, and extracting point-wise point cloud features with PointNet++ to obtain point-wise high-dimensional features;
step 3: generating 3D bounding box proposals in a bottom-up scheme, generating a ground-truth segmentation mask based on the 3D bounding boxes, segmenting foreground points and simultaneously generating bounding box proposals with angle information from the segmented points as the input of the RCNN stage; generating the ground-truth segmentation mask based on the 3D bounding boxes, segmenting the foreground points and generating the bounding box proposals with angle information from the segmented points specifically comprises: classifying each point into two classes with foreground and background scores, normalizing the scores to 0-1 by a sigmoid function, and regarding points whose score is higher than a threshold as foreground points; while the foreground points (N in total) are segmented, generating bounding box proposals with angle information from them, wherein, with each foreground point as a center, an initial proposal is generated at each point from the regressed values and a preset average size, with shape (batch_size x N, 9), and the 9-dimensional vector [x, y, z, h, w, l, rx, ry, rz] denotes the center position of the target, its height, width and length, and its rotation angles about the x, y and z axes, respectively; sorting the proposals by classification score, selecting the top 512 bounding boxes of each batch by non-maximum suppression, and returning the bounding boxes, angle information and confidence scores; in the segmentation stage, adopting a cross-entropy loss function as the loss of the binary classification network and predicting the initial proposal, wherein x, z and the yaw angle are regressed with bin-based losses, while the size information (h, w, l) and the rotation angles (rx, rz) are regressed with smooth L1 loss;
step 4: refining the proposals obtained in step 3 based on the foreground segmentation features and spatial features, so as to output the final classification, 3D box and attitude angles.
2. The point cloud data-based ship target 6D pose estimation method according to claim 1, characterized in that in step 2, PointNet++ is adopted to extract point cloud features point by point, and the point-wise high-dimensional features are obtained as follows: the feature extraction of the point set comprises three parts, namely a Sampling layer, a Grouping layer and a PointNet layer; the sampling algorithm of the Sampling layer uses iterative farthest point sampling, in which a series of points is selected from the input point cloud to define the centers of local regions; a local neighborhood is then constructed by searching for points within a given distance, features are extracted with fully connected layers, a pooling operation yields high-level features, and the point set is upsampled back to the original number of points, giving the point-wise high-dimensional features of the input point set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011290504.3A (CN112396655B) | 2020-11-18 | 2020-11-18 | Point cloud data-based ship target 6D pose estimation method
Publications (2)
Publication Number | Publication Date |
---|---|
CN112396655A CN112396655A (en) | 2021-02-23 |
CN112396655B true CN112396655B (en) | 2023-01-03 |
Family
ID=74606473
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011290504.3A (granted as CN112396655B, active) | Point cloud data-based ship target 6D pose estimation method | 2020-11-18 | 2020-11-18
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112396655B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112883979A (en) * | 2021-03-11 | 2021-06-01 | 先临三维科技股份有限公司 | Three-dimensional instance segmentation method, device, equipment and computer-readable storage medium |
CN113298163A (en) * | 2021-05-31 | 2021-08-24 | 国网湖北省电力有限公司黄石供电公司 | Target identification monitoring method based on LiDAR point cloud data |
CN114972968A (en) * | 2022-05-19 | 2022-08-30 | 长春市大众物流装配有限责任公司 | Tray identification and pose estimation method based on multiple neural networks |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109064514A (en) * | 2018-07-03 | 2018-12-21 | 北京航空航天大学 | A kind of six-freedom degree pose algorithm for estimating returned based on subpoint coordinate |
CN109086683A (en) * | 2018-07-11 | 2018-12-25 | 清华大学 | A kind of manpower posture homing method and system based on cloud semantically enhancement |
CN110473284A (en) * | 2019-07-29 | 2019-11-19 | 电子科技大学 | A kind of moving object method for reconstructing three-dimensional model based on deep learning |
CN110533721A (en) * | 2019-08-27 | 2019-12-03 | 杭州师范大学 | A kind of indoor objects object 6D Attitude estimation method based on enhancing self-encoding encoder |
CN110930454A (en) * | 2019-11-01 | 2020-03-27 | 北京航空航天大学 | Six-degree-of-freedom pose estimation algorithm based on boundary box outer key point positioning |
CN110930452A (en) * | 2019-10-23 | 2020-03-27 | 同济大学 | Object pose estimation method based on self-supervision learning and template matching |
CN111126269A (en) * | 2019-12-24 | 2020-05-08 | 京东数字科技控股有限公司 | Three-dimensional target detection method, device and storage medium |
CN111368733A (en) * | 2020-03-04 | 2020-07-03 | 电子科技大学 | Three-dimensional hand posture estimation method based on label distribution learning, storage medium and terminal |
CN111862201A (en) * | 2020-07-17 | 2020-10-30 | 北京航空航天大学 | Deep learning-based spatial non-cooperative target relative pose estimation method |
CN111915677A (en) * | 2020-07-08 | 2020-11-10 | 哈尔滨工程大学 | Ship pose estimation method based on three-dimensional point cloud characteristics |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107644121B (en) * | 2017-08-18 | 2020-07-31 | 昆明理工大学 | Reverse three-dimensional reconstruction and solid modeling method for pavement material skeleton structure |
US10977827B2 (en) * | 2018-03-27 | 2021-04-13 | J. William Mauchly | Multiview estimation of 6D pose |
US11010592B2 (en) * | 2018-11-15 | 2021-05-18 | Toyota Research Institute, Inc. | System and method for lifting 3D representations from monocular images |
CN109801337B (en) * | 2019-01-21 | 2020-10-02 | 同济大学 | 6D pose estimation method based on instance segmentation network and iterative optimization |
US11030766B2 (en) * | 2019-03-25 | 2021-06-08 | Dishcraft Robotics, Inc. | Automated manipulation of transparent vessels |
CN111080693A (en) * | 2019-11-22 | 2020-04-28 | 天津大学 | Robot autonomous classification grabbing method based on YOLOv3 |
CN111179324B (en) * | 2019-12-30 | 2023-05-05 | 同济大学 | Object six-degree-of-freedom pose estimation method based on color and depth information fusion |
CN111259934B (en) * | 2020-01-09 | 2023-04-07 | 清华大学深圳国际研究生院 | Stacked object 6D pose estimation method and device based on deep learning |
CN111275758B (en) * | 2020-01-15 | 2024-02-09 | 深圳市微埃智能科技有限公司 | Hybrid 3D visual positioning method, device, computer equipment and storage medium |
- 2020-11-18: Application CN202011290504.3A filed; granted as patent CN112396655B (status: Active)
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | 
| SE01 | Entry into force of request for substantive examination | 
| GR01 | Patent grant | 