WO2020237693A1 - Multi-source perception method and system for surface unmanned equipment (一种水面无人装备多源感知方法及系统)
- Publication number: WO2020237693A1 (PCT application PCT/CN2019/089748; CN2019089748W)
- Authority: WIPO (PCT)
- Prior art keywords: information, water surface, image, camera, coordinate system
- Prior art date: 2019-05-31
Classifications
- G01C 11/04: Photogrammetry or videogrammetry; interpretation of pictures
- G01S 17/89: Lidar systems specially adapted for mapping or imaging
- G06T 15/00: 3D [Three Dimensional] image rendering
- G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06T 7/194: Segmentation; edge detection involving foreground-background segmentation
- G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, loops, corners, strokes or intersections; connectivity analysis
- G06V 20/10: Scenes; scene-specific elements; terrestrial scenes
- G06T 2207/10028: Range image; depth image; 3D point clouds
- G06T 2207/10032: Satellite or aerial image; remote sensing
- G06T 2207/10044: Radar image
- G06T 2207/20081: Training; learning
- G06T 2207/30181: Earth observation
- Y02A 90/30: Technologies for adaptation to climate change; assessment of water resources
Description
- The invention relates to the technical field of intelligent surface unmanned equipment, and in particular to a multi-source perception method and system for surface unmanned equipment.
- Surface unmanned equipment is a new type of carrier with highly nonlinear dynamic characteristics that can perform tasks in complex and unknown water surface environments without human intervention. Being small, intelligent and autonomous, it is suited to tasks with high risk factors and harsh operating environments, and has a wide range of applications in military operations, maritime patrol, island and reef supply and other fields.
- The "13th Five-Year Plan" of the shipbuilding industry pointed out that by 2020 China's manufacturing capabilities for high-tech ships, marine engineering equipment and key supporting equipment, led by surface unmanned equipment, would be significantly enhanced. Surface unmanned equipment therefore holds an important strategic position, and its development can effectively promote the further development of the shipbuilding industry.
- The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art and to provide a multi-source perception method and system for surface unmanned equipment. Aiming at the multi-source perception problem of unmanned equipment on the water surface, the present invention constructs a water surface image data set and trains Deeplab and Faster RCNN network models on it, thereby realizing the recognition of water surface boundary lines and water surface obstacles.
- The 3D point cloud data obtained by the lidar are projected onto the image obtained by the camera to add depth information to the image; through the coordinate conversion between the camera coordinate system and the world coordinate system, the world coordinates of obstacles and water surface boundaries are finally obtained and transmitted in real time to the application module via the topic communication mechanism of ROS (Robot Operating System), providing a priori environmental information for the next decision of the unmanned equipment.
- A multi-source perception method for surface unmanned equipment includes the following steps:
- Step S1: use a camera to acquire visual information of the water surface image in real time, and use a three-dimensional lidar to scan the sector area ahead of the unmanned equipment in real time to obtain three-dimensional point cloud information of the water surface environment.
- Step S2: annotate the pre-collected water surface images at the pixel level, from top to bottom, into three categories (background, land and water surface) for Deeplab network model training.
- The obstacle candidate boxes in the water surface images are annotated into two categories, ships and floating objects, for Faster RCNN network model training, thereby constructing the water surface image data set.
- Step S3: input the water surface image collected in real time into the trained Deeplab network; see Figure 2.
- The input image passes through multiple convolutional and pooling layers to obtain a feature map; to produce an output of the same size as the input image, the feature map is enlarged by deconvolution, and finally a fully connected conditional random field (CRF) is used to improve the model's ability to capture details, ensuring pixel-level segmentation of land and water.
- The pixel coordinates of the water surface boundary line are then obtained through image processing, and the set of boundary pixel coordinates is transmitted to the information fusion node, as in the sketch below.
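By way of illustration, a minimal sketch of one way to extract the boundary pixel set from the segmentation output. It assumes the Deeplab output is available as a per-pixel label map and uses the hypothetical label convention 0 = background, 1 = land, 2 = water, which the patent does not fix:

```python
import numpy as np

WATER = 2  # assumed label convention: 0 = background, 1 = land, 2 = water

def extract_boundary_pixels(label_map: np.ndarray) -> list:
    """Return (u, v) pixel coordinates of the water surface boundary.

    For every image column u, the boundary is taken as the topmost row v
    labelled as water; columns containing no water pixels are skipped.
    """
    boundary = []
    for u in range(label_map.shape[1]):
        rows = np.flatnonzero(label_map[:, u] == WATER)
        if rows.size:
            boundary.append((u, int(rows[0])))
    return boundary
```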
- Step S4: input the water surface image collected in real time into the trained Faster RCNN network. Through forward propagation it passes through the shared convolutional layers, the RPN, the ROI pooling layer and the fully connected layers, and finally outputs prediction boxes classifying the obstacles in the input image as ships or floating objects; the intersection ratio between each prediction box output by the Faster RCNN network and the water surface region output by the image semantic segmentation network is then calculated.
- For prediction boxes classified as floating objects, the threshold is set to 0.8 and results below this threshold are eliminated; for prediction boxes classified as ships, the threshold is set to 0.1 and results below this threshold are eliminated, as in the filtering sketch below.
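A sketch of this class-dependent filtering, assuming the water surface region is available as a boolean mask and each detection is a (box, label) pair with pixel-aligned integer box corners; the names here are illustrative:

```python
import numpy as np

# Thresholds from the method: floating objects must lie mostly on water,
# while ships only need to touch the water region.
RATIO_THRESH = {"floating_object": 0.8, "ship": 0.1}

def water_overlap_ratio(box, water_mask):
    """Fraction of the box (x1, y1, x2, y2) covered by water pixels."""
    x1, y1, x2, y2 = box
    area = max((x2 - x1) * (y2 - y1), 1)
    return water_mask[y1:y2, x1:x2].sum() / area

def filter_detections(detections, water_mask):
    """Keep detections whose overlap with the water region reaches the
    class-specific threshold; the rest are meaningless results."""
    return [(box, label) for box, label in detections
            if water_overlap_ratio(box, water_mask) >= RATIO_THRESH[label]]
```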
- Step S5 specifically adopts a checkerboard calibration method: several corner points on the checkerboard are selected at different angles and positions, and the coordinates of these corner points in the camera coordinate system, the world coordinate system and the radar coordinate system are determined. The corresponding coordinates are substituted into the mathematical models of camera calibration and joint calibration and solved simultaneously to obtain the three rotation parameters (rotation matrix), three translation parameters (translation vector) and one scale factor of the camera-radar coordinate conversion equation, as well as the rotation matrix and translation vector of the camera-world coordinate conversion equation, thereby determining the specific form of the coordinate conversion equations. A least-squares sketch of recovering one such rotation and translation follows.
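By way of illustration, a minimal least-squares sketch of recovering the rotation and translation between two sensor frames from matched checkerboard corners, using the standard SVD-based (Kabsch) solution; this is one common way of solving such joint-calibration equations, not necessarily the exact formulation used here:

```python
import numpy as np

def rigid_transform(src_pts, dst_pts):
    """Least-squares R, t with dst ≈ R @ src + t (Kabsch/Umeyama).

    src_pts, dst_pts: (N, 3) matched checkerboard corners, e.g. the same
    corners expressed in the lidar frame and in the camera frame.
    """
    src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```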
- Step S6: in the information fusion node, the point cloud coordinates obtained by the lidar are converted into camera coordinates according to the conversion equation between the lidar coordinate system and the camera coordinate system, and the point cloud is then projected onto the imaging plane through the conversion relationship between the camera coordinate system and the pixel coordinate system, so that the image carries depth information.
- The prediction boxes output by Faster RCNN and the pixel coordinates of the water surface boundary line output by the Deeplab model are combined with the depth information to generate three-dimensional coordinates, which are converted into the corresponding world coordinates using the camera extrinsic parameters obtained by camera calibration, thereby determining the specific positions of obstacles and the water surface boundary in the world coordinate system. A projection sketch follows.
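A sketch of the projection step under a standard pinhole camera model, where R_lc, t_lc stand for the lidar-to-camera extrinsics from the joint calibration and K for the camera intrinsic matrix:

```python
import numpy as np

def project_points(points_lidar, R_lc, t_lc, K):
    """Project lidar points into the image; returns pixel coords and depths.

    points_lidar: (N, 3) points in the lidar frame.
    R_lc, t_lc:   lidar-to-camera rotation (3x3) and translation (3,).
    K:            3x3 camera intrinsic matrix.
    """
    pts_cam = points_lidar @ R_lc.T + t_lc   # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]     # keep points in front of the camera
    uv = pts_cam @ K.T                       # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]              # normalise by depth
    return uv, pts_cam[:, 2]                 # pixel coordinates, depth per point
```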
- A ROS-based multi-source perception system for surface unmanned equipment includes a perception part and an application part:
- The perception part establishes a point cloud information processing node, an image information processing node and an information fusion node through the node mechanism of ROS.
- The image information processing node contains two convolutional network models, the Faster RCNN model and the Deeplab model.
- Images processed by these convolutional neural networks yield the pixel coordinates of the obstacle prediction boxes and the water surface boundary line; this information is transmitted through the topic subscription mechanism of ROS to the information fusion node, where it awaits the next processing step. The point cloud information processing node converts the point cloud information into a standard coordinate format in the lidar coordinate system and transmits the point cloud coordinates to the information fusion node through the topic communication mechanism.
- In the information fusion node, the point cloud coordinates are converted into camera coordinates, and the point cloud is then projected onto the imaging plane through the conversion relationship between the camera coordinate system and the pixel coordinate system, giving the image depth information and thereby yielding the three-dimensional coordinates of image points; finally, these three-dimensional coordinates are converted into the corresponding world coordinates according to the camera extrinsic parameters, so as to determine the specific positions of obstacles and the water surface boundary in the world coordinate system. A wiring sketch of such a fusion node is given below.
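A minimal ROS 1 (rospy) wiring sketch of such a fusion node; the topic names and message types are illustrative assumptions, and the fusion computation itself is elided:

```python
# Minimal ROS 1 (rospy) wiring sketch for the information fusion node.
# Topic names and message types below are illustrative assumptions.
import rospy
from sensor_msgs.msg import Image, PointCloud2

class FusionNode:
    def __init__(self):
        rospy.init_node("information_fusion")
        self.latest_cloud = None
        rospy.Subscriber("/lidar/points", PointCloud2, self.on_cloud)
        rospy.Subscriber("/camera/image_raw", Image, self.on_image)
        # Fused result (points carrying world coordinates) republished for
        # the obstacle avoidance / tracking / path planning nodes.
        self.pub = rospy.Publisher("/fusion/world_points", PointCloud2, queue_size=1)

    def on_cloud(self, msg):
        self.latest_cloud = msg            # cache the newest point cloud

    def on_image(self, msg):
        if self.latest_cloud is None:
            return                         # nothing to fuse yet
        # ... project the cloud into the image, attach depth to the
        # detection boxes and boundary pixels, convert to the world frame ...
        self.pub.publish(self.latest_cloud)

if __name__ == "__main__":
    FusionNode()
    rospy.spin()
```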
- The application part includes different types of ROS functional nodes: obstacle avoidance nodes, tracking nodes and path planning nodes.
- The obstacle avoidance node obtains the world coordinates of obstacles and water surface boundaries by subscribing to the topics published by the information fusion node, and builds a vector field histogram with the VFH+ obstacle avoidance algorithm, through which the currently feasible obstacle avoidance direction is determined; a simplified histogram sketch follows.
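A simplified polar-histogram sketch in the spirit of VFH; the real VFH+ algorithm additionally smooths the histogram and accounts for vehicle width and dynamics, so this stand-in only illustrates the sector-selection idea:

```python
import numpy as np

def feasible_direction(obstacles_xy, n_sectors=72, max_density=1.0):
    """VFH-style sketch: bin obstacle points into angular sectors and pick
    the free sector closest to straight ahead (0 rad).

    obstacles_xy: (N, 2) obstacle positions relative to the vehicle.
    """
    angles = np.arctan2(obstacles_xy[:, 1], obstacles_xy[:, 0])
    dists = np.linalg.norm(obstacles_xy, axis=1)
    hist = np.zeros(n_sectors)
    sectors = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    np.add.at(hist, sectors, 1.0 / np.maximum(dists, 0.1))  # near obstacles weigh more
    centers = np.linspace(-np.pi, np.pi, n_sectors, endpoint=False) + np.pi / n_sectors
    free = np.flatnonzero(hist < max_density)
    if free.size == 0:
        return None                       # no feasible direction
    return centers[free[np.argmin(np.abs(centers[free]))]]
```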
- The tracking node obtains the video sequence and the pixel coordinates of the obstacle prediction box on the image by subscribing to the image topic and the target detection topic.
- The correlation filter (CF) target tracking algorithm is then activated; after feature matching and filtering, the coordinates of the box-selected target in each image frame are output in real time, realizing the tracking function. A single-channel correlation sketch is given below.
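A minimal single-channel correlation-filter response computation via the FFT; the patent does not specify which CF variant is used (e.g. MOSSE or KCF), so this only illustrates the core correlation step:

```python
import numpy as np

def cf_response(template, search_patch):
    """Correlation response map via the FFT (single-channel sketch).

    template:     grayscale patch of the target from a previous frame.
    search_patch: same-sized grayscale patch from the current frame.
    The peak of the response map gives the target's translation.
    """
    T = np.fft.fft2(template - template.mean())
    S = np.fft.fft2(search_patch - search_patch.mean())
    response = np.real(np.fft.ifft2(np.conj(T) * S))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    # Shifts beyond half the patch wrap around (circular correlation).
    h, w = response.shape
    if dy > h // 2: dy -= h
    if dx > w // 2: dx -= w
    return response, (dx, dy)
```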
- The path planning node subscribes to the semantic segmentation and information fusion topics, obtains the pixel coordinates of the water surface and obstacles from the segmented image, and then obtains the approximate world coordinate information from the information fusion topic.
- A local map can be established from this information, and the RRT search algorithm is used to obtain a feasible path on the current local map.
- Compared with the prior art, the present invention has the following advantages and beneficial effects:
- The invention adopts the Deeplab network model to extract the water surface boundary line. Compared with traditional sea-sky-line detection methods, it is less affected by changes in the water surface environment and generalizes better: it suits not only sea-sky-line detection, where the boundary has an obvious linear character, but also coastline detection, where the coastal geometry is complex. The Faster RCNN network model performs rough extraction of obstacle candidate boxes, which are fused with the three-dimensional point cloud data obtained in real time by the lidar; this enables a more accurate three-dimensional description of obstacles even when the sensed parameters are redundant. The distributed communication mechanism of ROS ensures that the fused sensor information is updated by the perception system and processed in real time. Finally, the joint calibration of the camera and the three-dimensional lidar establishes the correspondence between the visual recognition results and world coordinates, providing a priori information for the subsequent intelligent decision-making of surface unmanned equipment.
- The multi-source perception method and system proposed by the present invention thereby realize a complete description of the water surface environment around the unmanned equipment.
- Figure 1 is a flow chart of the multi-source perception method for surface unmanned equipment;
- Figure 2 is the Deeplab network architecture based on VGG16 in the embodiment;
- Figure 3 is the Faster RCNN network architecture based on AlexNet in the embodiment;
- Figure 4 is a schematic diagram of the ROS-based multi-source perception system for surface unmanned equipment.
- In the embodiment, the multi-source perception method for surface unmanned equipment includes the following steps:
- Step 10: collect in real time the sensing parameters of the multi-source perception system of the surface unmanned equipment, obtaining visual information of the water surface image and three-dimensional point cloud information of the water surface environment;
- Step 20: manually annotate the pre-collected water surface images, train the Deeplab model and the Faster RCNN model with the annotated data set, and save the network model parameters;
- Step 30: segment the water surface image input in real time into three categories (background, land and water surface) with the Deeplab model, and extract the water surface boundary line according to the outer contour of the water surface region;
- Step 40: extract prediction boxes of water surface obstacles with the Faster RCNN network model, calculate the intersection ratios between the ship and floating-object prediction boxes and the water surface region output by the image semantic segmentation network, and eliminate meaningless obstacle detection results;
- Step 50: perform camera calibration to obtain the camera intrinsic and extrinsic parameters, then perform joint calibration of the three-dimensional lidar and the camera, and obtain the coordinate conversion relationship between the radar and the camera from the calibration results;
- Step 60: project the three-dimensional point cloud data obtained by the lidar onto the image obtained by the camera according to the coordinate conversion relationship, add depth information to the image, and finally obtain the world coordinates of obstacles and the water surface boundary line through the coordinate conversion between the camera coordinate system and the world coordinate system.
- Step 20 specifically includes annotating the pre-collected water surface images at the pixel level, from top to bottom, into three categories (background, land and water surface) for Deeplab network model training.
- The obstacle candidate boxes in the water surface images are annotated into two categories, boats and floating objects, for Faster RCNN network model training, thereby constructing the water surface image data set.
- Step 30 specifically includes inputting the water surface image collected in real time into the trained Deeplab network; see Figure 2.
- The input image passes through the convolutional layers to extract image features and obtain the corresponding feature maps, which are then compressed by the pooling layers to extract the dominant features.
- Deeplab keeps the feature map size unchanged by replacing the fourth and fifth pooling layers with pooling layers that perform no downsampling, and at the same time changes the convolutional layers after these two pooling layers into atrous (dilated) convolutional layers, ensuring that the receptive field of the neurons after pooling does not change.
- The feature image is then enlarged to the size of the original input image through deconvolution, and a fully connected conditional random field (CRF) is used to improve the model's ability to capture details, ensuring pixel-level segmentation of land and water.
- The Deeplab network model is built on VGG16: first the downsampling of the last two pooling layers of VGG16 is removed, then the convolution kernels after these two pooling layers are changed to atrous convolutions, and finally the three fully connected layers of VGG16 are replaced with convolutional layers, realizing the fully convolutional structure of the Deeplab model.
- To obtain an output of the same size as the original image, deconvolution is applied to the feature map obtained after the pooling and convolution processing, yielding a segmented image with the same size as the input image; finally, a fully connected conditional random field is used to refine the details of the water-land segmentation, producing a segmented image with a fine water surface boundary edge. A PyTorch sketch of the VGG16 modification is given below.
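By way of illustration, a PyTorch sketch of the VGG16 modification described above; the layer indices assume torchvision's vgg16().features layout, and in practice the pretrained convolution weights would be carried over rather than re-initialised:

```python
# Sketch of the Deeplab-style VGG16 modification; layer indices assume
# torchvision's vgg16().features layout.
import torch.nn as nn
from torchvision.models import vgg16

features = vgg16(weights=None).features
# Remove downsampling in the last two pooling layers (feature map size kept).
features[23] = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)   # pool4
features[30] = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)   # pool5
# Dilate the convolutions after pool4 so the receptive field is preserved.
for idx in (24, 26, 28):                                          # conv5_1..conv5_3
    conv = features[idx]
    features[idx] = nn.Conv2d(conv.in_channels, conv.out_channels,
                              kernel_size=3, padding=2, dilation=2)
```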
- Step 40 specifically includes inputting the water surface image collected in real time into the trained Faster RCNN network; see Figure 3.
- The Faster RCNN network model is built on the AlexNet convolutional neural network and is composed of a Fast RCNN network and an RPN network.
- The convolutional layers shared by the Fast RCNN network and the RPN network consist of the first five convolutional layers of AlexNet.
- The third pooling layer of AlexNet is modified into an ROI pooling layer.
- The two fully connected layers of AlexNet are retained, and the final Softmax classifier layer is replaced with a linear regressor for box-selecting water surface obstacles and a linear regressor plus Softmax classifier layer for classifying ships and floating objects.
- The water surface image first passes through the shared convolutional layers to extract the feature map of the original image, which is then fed into the RPN network structure.
- A sliding window is generated by convolving a 3×3 kernel over the feature map, and 9 anchor boxes are generated at the centre point of each sliding window.
- The feature map of each anchor box is obtained from the original image, and these feature maps are forward-propagated through the fully connected layer to generate feature vectors.
- The feature vectors are sent to the Softmax classifier and the linear regressor for target classification and localization; the anchor boxes are then pruned, and those with high region scores are selected as proposal regions. An anchor-generation sketch follows.
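A sketch of generating the 9 anchor boxes (3 scales x 3 aspect ratios) at one sliding-window centre; the scale and ratio values are illustrative, as the patent does not specify them:

```python
import numpy as np

def make_anchors(center, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Return 9 anchor boxes as (x1, y1, x2, y2) around one centre point.

    Each ratio r is the width/height ratio; every anchor keeps the area
    implied by its scale.
    """
    cx, cy = center
    boxes = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)
```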
- Step 50 specifically includes adopting a checkerboard calibration method: several corner points on the checkerboard are selected at different angles and positions, and the coordinates of these corner points in the camera coordinate system, the world coordinate system and the radar coordinate system are determined. The corresponding coordinates are substituted into the mathematical models of camera calibration and joint calibration and solved simultaneously to obtain the three rotation parameters (rotation matrix), three translation parameters (translation vector) and one scale factor of the camera-radar coordinate conversion equation, as well as the rotation matrix and translation vector of the camera-world coordinate conversion equation, determining the specific form of the coordinate conversion equations.
- Step 60 specifically includes, in the information fusion node, converting the point cloud coordinates obtained by the lidar into camera coordinates according to the conversion equation between the lidar coordinate system and the camera coordinate system, and then projecting the point cloud onto the imaging plane through the conversion relationship between the camera coordinate system and the pixel coordinate system, so that the image carries depth information.
- The prediction boxes output by Faster RCNN and the pixel coordinates of the water surface boundary line output by the Deeplab model are combined with the depth information to generate three-dimensional coordinates, which are converted into the corresponding world coordinates using the camera extrinsic parameters obtained by camera calibration, thereby determining the specific positions of obstacles and the water surface boundary in the world coordinate system.
- The ROS information processing module includes two parts: perception and application.
- The perception part establishes three nodes through the node mechanism of ROS, namely a point cloud information processing node, an image information processing node and an information fusion node.
- The image information processing node contains two convolutional network models, the Faster RCNN model and the Deeplab model.
- Images processed by these convolutional neural networks yield the pixel coordinates of the obstacle prediction boxes and the water surface boundary line; this information is transmitted through the topic subscription mechanism of ROS to the information fusion node, where it awaits the next processing step. The point cloud information processing node converts the point cloud information into a standard coordinate format in the lidar coordinate system and transmits the point cloud coordinates to the information fusion node through the topic communication mechanism.
- In the information fusion node, the point cloud coordinates are converted into camera coordinates, and the point cloud is then projected onto the imaging plane through the conversion relationship between the camera coordinate system and the pixel coordinate system, giving the image depth information and thereby yielding the three-dimensional coordinates of image points; finally, these three-dimensional coordinates are converted into the corresponding world coordinates according to the camera extrinsic parameters, determining the specific positions of obstacles and the water surface boundary in the world coordinate system.
- The application part includes different types of ROS functional nodes: obstacle avoidance nodes, tracking nodes and path planning nodes.
- The obstacle avoidance node obtains the world coordinates of obstacles and water surface boundaries by subscribing to the topics published by the information fusion node, and builds a vector field histogram with the VFH+ obstacle avoidance algorithm, through which the currently feasible obstacle avoidance direction is determined.
- The tracking node obtains the video sequence and the pixel coordinates of the obstacle prediction box on the image by subscribing to the image topic and the target detection topic.
- The correlation filter (CF) target tracking algorithm is then activated; after feature matching and filtering, the coordinates of the box-selected target in each image frame are output in real time, realizing the tracking function.
- The path planning node subscribes to the semantic segmentation and information fusion topics, obtains the pixel coordinates of the water surface and obstacles from the segmented image, and then obtains the approximate world coordinate information from the information fusion topic.
- A local map can be established from this information, and the RRT search algorithm is used to obtain a feasible path on the current local map; a minimal RRT sketch follows.
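A minimal RRT sketch over a local 2D map; is_free stands in for a collision test against an occupancy grid built from the fused world coordinates, and all numeric parameters are illustrative:

```python
import numpy as np

def rrt(start, goal, is_free, n_iter=2000, step=1.0, goal_tol=1.5,
        bounds=((0, 50), (0, 50))):
    """Minimal RRT on a local 2D map; returns waypoints or None.

    is_free(p): True if point p is collision-free, e.g. a lookup into an
    occupancy grid built from the fused world coordinates.
    """
    rng = np.random.default_rng(0)
    nodes = [np.asarray(start, float)]
    parent = {0: None}
    for _ in range(n_iter):
        sample = np.array([rng.uniform(*bounds[0]), rng.uniform(*bounds[1])])
        near = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[near]
        new = nodes[near] + step * direction / (np.linalg.norm(direction) + 1e-9)
        if not is_free(new):
            continue                                 # step collides, resample
        nodes.append(new)
        parent[len(nodes) - 1] = near
        if np.linalg.norm(new - goal) < goal_tol:    # reached the goal region
            path, i = [], len(nodes) - 1
            while i is not None:                     # walk back to the root
                path.append(tuple(nodes[i]))
                i = parent[i]
            return path[::-1]
    return None
```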
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Computer Networks & Wireless Communication (AREA)
- Electromagnetism (AREA)
- Quality & Reliability (AREA)
- Computer Graphics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Traffic Control Systems (AREA)
Claims (9)
- A multi-source perception method for surface unmanned equipment, characterized by comprising the following steps: S1, collecting in real time the sensing parameters of the multi-source perception system of the surface unmanned equipment to obtain visual information of water surface images and three-dimensional point cloud information of the water surface environment; S2, manually annotating the pre-collected water surface images, training the Deeplab model and the Faster RCNN model with the annotated data set, and saving the network model parameters; S3, segmenting the water surface image input in real time into three categories (background, land and water surface) with the Deeplab model, and extracting the water surface boundary line according to the outer contour of the water surface region; S4, extracting prediction boxes of water surface obstacles with the Faster RCNN network model, calculating the intersection ratios between the ship and floating-object prediction boxes and the water surface region output by the image semantic segmentation network, and eliminating meaningless obstacle detection results; S5, performing camera calibration to obtain the camera intrinsic and extrinsic parameters, then performing joint calibration of the three-dimensional lidar and the camera, and obtaining the coordinate conversion relationship between the radar and the camera from the calibration results; S6, projecting the three-dimensional point cloud data obtained by the lidar onto the image obtained by the camera according to the coordinate conversion relationship, adding depth information to the image, and finally obtaining the world coordinates of obstacles and the water surface boundary line through the coordinate conversion from the camera coordinate system to the world coordinate system.
- The multi-source perception method for surface unmanned equipment according to claim 1, characterized in that the annotation in step S2 is specifically: annotating the water surface images at the pixel level, from top to bottom, into three categories (background, land and water surface) for Deeplab network model training; and annotating the obstacle candidate boxes in the water surface images into two categories (ships and floating objects) for Faster RCNN network model training.
- The multi-source perception method for surface unmanned equipment according to claim 1, characterized in that in step S3 the Deeplab network model is built on VGG16: first the downsampling of the last two pooling layers of VGG16 is removed, then the convolution kernels after these two pooling layers are changed to atrous convolutions, and finally the three fully connected layers of VGG16 are replaced with convolutional layers, realizing the fully convolutional structure of the Deeplab model; to obtain an output of the same size as the original image, deconvolution is applied to the feature map obtained after the pooling and convolution processing, yielding a segmented image of the same size as the input image; finally, a fully connected conditional random field is used to refine the details of the water-land segmentation image, obtaining a segmented image with a fine water surface boundary edge.
- The multi-source perception method for surface unmanned equipment according to claim 1, characterized in that in step S4 the Faster RCNN network model is built on the AlexNet convolutional neural network and is specifically composed of a Fast RCNN network and an RPN network, wherein the convolutional layers shared by the Fast RCNN network and the RPN network consist of the first five convolutional layers of AlexNet, the third pooling layer of AlexNet is modified into an ROI pooling layer, the two fully connected layers of AlexNet are retained, and the final Softmax classifier layer is modified into a linear regressor for box-selecting water surface obstacles and a linear regressor plus Softmax classifier layer for classifying ships and floating objects; in the RPN network, a convolutional layer with 3×3 kernels is added to extract the sliding windows, followed by a fully connected layer that extracts feature vectors, and finally a Softmax classifier layer that performs region scoring on the input feature vectors and a bounding-box regression layer.
- The multi-source perception method for surface unmanned equipment according to claim 1, characterized in that the elimination of meaningless detection results in step S4 is specifically: taking the ratio of the intersection of the obstacle prediction box with the water surface region to the whole rectangular box as the index for judging the plausibility of a detection result; for prediction boxes classified as floating objects, setting the threshold to 0.8 and eliminating results below this threshold; and for prediction boxes classified as ships, setting the threshold to 0.1 and eliminating results below this threshold.
- The multi-source perception method for surface unmanned equipment according to claim 1, characterized in that step S6 is specifically: converting the point cloud coordinates obtained by the lidar into camera coordinates according to the conversion equation between the lidar coordinate system and the camera coordinate system, and then projecting the point cloud onto the imaging plane through the conversion relationship between the camera coordinate system and the pixel coordinate system, so that the image carries depth information; finally combining the prediction boxes output by Faster RCNN and the pixel coordinates of the water surface boundary line output by the Deeplab model with the depth information to generate three-dimensional coordinates, and converting these into the corresponding world coordinates according to the camera extrinsic parameters obtained by camera calibration, thereby determining the specific positions of obstacles and the water surface boundary in the world coordinate system.
- A multi-source perception system for surface unmanned equipment, characterized in that the perception system takes a ROS processing module as its core and is an integrated module covering the information transmission, information fusion and information output functions of the surface unmanned equipment, the ROS information processing module comprising a perception part and an application part.
- The multi-source perception system for surface unmanned equipment according to claim 1, characterized in that the perception part establishes three nodes through the node mechanism of ROS, namely a point cloud information processing node, an image information processing node and an information fusion node; the point cloud information processing node acquires point cloud information through a network port, converts it into a standard coordinate format in the lidar coordinate system, and finally transmits the point cloud coordinates to the information fusion node through the topic communication mechanism; the image information processing node reads image information through a serial port and internally combines two convolutional network models, the Faster RCNN and Deeplab models; images processed by the convolutional neural networks yield the pixel coordinates of the obstacle prediction boxes and the water surface boundary line, which are transmitted to other nodes through the ROS topic subscription mechanism to await further processing; the information fusion node obtains the corresponding point cloud and image information by subscribing to the point cloud topic and the image topic, converts the point cloud coordinates into camera coordinates according to the conversion equation between the lidar and camera coordinate systems, projects the point cloud onto the imaging plane through the conversion relationship between the camera and pixel coordinate systems so that the image carries depth information, thereby obtaining the three-dimensional coordinates of the image, and finally converts these three-dimensional coordinates into the corresponding world coordinates according to the camera extrinsic parameters, determining the specific positions of obstacles and the water surface boundary in the world coordinate system.
- The multi-source perception system for surface unmanned equipment according to claim 1, characterized in that the application part covers different types of ROS functional nodes, including an obstacle avoidance node, a tracking node and a path planning node, the nodes communicating through the distributed communication mechanism of ROS; ROS obtains all node and topic information of the surface unmanned equipment system through the node manager, and the subscription and publication mechanism ensures that, once the fused information is updated, it is immediately perceived by the subscribing nodes so that the latest information is obtained, thereby meeting the real-time obstacle avoidance and path planning requirements of the surface unmanned equipment; by applying the ROS topic communication mechanism, the sensing information obtained by the perception part is fused and uploaded in real time to the corresponding topic and published; an application node subscribes to the topic and, by limiting the message queue to 1, obtains the fused information as soon as the topic's message file is updated, and performs the corresponding obstacle avoidance and path planning actions according to this information, ensuring that the unmanned equipment perceives changes in the environment immediately and reacts quickly.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910467501.3 | 2019-05-31 | ||
CN201910467501.3A CN110188696B (zh) | 2019-05-31 | 2019-05-31 | Multi-source perception method and system for surface unmanned equipment
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020237693A1 true WO2020237693A1 (zh) | 2020-12-03 |
Family
ID=67719245
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/089748 WO2020237693A1 (zh) | 2019-05-31 | 2019-06-03 | Multi-source perception method and system for surface unmanned equipment
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110188696B (zh) |
WO (1) | WO2020237693A1 (zh) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- KR20140141174A (ko) * | 2013-05-31 | 2014-12-10 | 한국과학기술원 | RGB-D image-based object segmentation and recognition method and apparatus for three-dimensional object recognition |
- CN106709568A (zh) * | 2016-12-16 | 2017-05-24 | 北京工业大学 | Object detection and semantic segmentation method for RGB-D images based on a deep convolutional network |
- CN106843209A (zh) * | 2017-01-10 | 2017-06-13 | 上海华测导航技术股份有限公司 | Unmanned ship based on an open-source control system |
- CN108171796A (zh) * | 2017-12-25 | 2018-06-15 | 燕山大学 | Inspection robot vision system and control method based on three-dimensional point clouds |
- CN108469817A (zh) * | 2018-03-09 | 2018-08-31 | 武汉理工大学 | Unmanned ship obstacle avoidance control system based on FPGA and information fusion |
- CN109444911A (zh) * | 2018-10-18 | 2019-03-08 | 哈尔滨工程大学 | Water surface target detection, recognition and positioning method for unmanned vessels based on monocular camera and lidar information fusion |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108932736B (zh) * | 2018-05-30 | 2022-10-11 | 南昌大学 | Two-dimensional lidar point cloud data processing method and dynamic robot pose calibration method |
- 2019-05-31: CN application CN201910467501.3A filed; granted as patent CN110188696B (status: active)
- 2019-06-03: PCT application PCT/CN2019/089748 filed as WO2020237693A1 (application filing)
Also Published As
Publication number | Publication date |
---|---|
CN110188696A (zh) | 2019-08-30 |
CN110188696B (zh) | 2023-04-18 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19930355; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19930355; Country of ref document: EP; Kind code of ref document: A1 |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 17.05.2022) |