CN112307917A - Indoor positioning method integrating visual odometer and IMU - Google Patents


Info

Publication number
CN112307917A
Authority
CN
China
Prior art keywords
scene
pose
indoor
scene image
imu
Prior art date
Legal status
Pending
Application number
CN202011131968.XA
Other languages
Chinese (zh)
Inventor
邵宇鹰
李新利
王孝伟
刘文杰
苏填
王一帆
彭鹏
陈怡君
陆启宇
张琪祁
Current Assignee
North China Electric Power University
State Grid Shanghai Electric Power Co Ltd
Original Assignee
North China Electric Power University
State Grid Shanghai Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by North China Electric Power University, State Grid Shanghai Electric Power Co Ltd filed Critical North China Electric Power University
Priority to CN202011131968.XA
Publication of CN112307917A
Legal status: Pending

Classifications

    • G06V 20/10: Scenes; scene-specific elements; terrestrial scenes
    • G01C 21/165: Navigation by dead reckoning (inertial navigation) combined with non-inertial navigation instruments
    • G01C 21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01C 22/00: Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers or pedometers
    • G01C 3/00: Measuring distances in line of sight; optical rangefinders
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
    • G06V 20/46: Extracting features or characteristics from video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an indoor positioning method integrating a visual odometer and an IMU (Inertial Measurement Unit), which comprises the following steps. Step 1: performing target analysis on an indoor scene to obtain a scene image. Step 2: extracting key frames from the scene image. Step 3: matching the feature points of two consecutive key frames to obtain pose constraint information. Step 4: based on a factor graph optimization algorithm, performing global pose optimization on the scene image according to the pose constraint information to obtain an optimized pose. Step 5: optimizing the camera pose in real time according to the pose constraint information and the optimized pose to obtain the scene trajectory and a global map, completing the indoor positioning. The method addresses the poor real-time performance and robustness of traditional robot positioning: it improves stereo matching precision by using a binocular camera together with a stereo matching method based on indoor scene structural features, and improves the real-time performance and robustness of robot positioning by combining back-end global optimization based on a factor graph.

Description

Indoor positioning method integrating visual odometer and IMU
Technical Field
The invention relates to the technical field of robot positioning, in particular to an indoor positioning method integrating a visual odometer and an IMU.
Background
With the rapid development of sensor and artificial-intelligence technologies, robotics research has attracted increasing attention. A robot acquires external environment information and its own state information through sensors, and uses this information to move autonomously and to complete operation tasks.
Autonomous positioning is the basis of intelligent navigation and environment exploration for a robot. Since a single sensor can hardly acquire all the information the system requires, multi-sensor information fusion is the key to autonomous robot positioning.
At present, the positioning accuracy and stability achievable with one or two sensors can hardly meet the requirements. Vision-based and odometry-based methods are relatively mature, but indoor motion and illumination conditions strongly affect their stability and accuracy.
An Inertial Measurement Unit (IMU) can therefore be used to obtain the instantaneous displacement increment of the robot, from which the robot trajectory is dead-reckoned to assist the positioning.
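As a hedged illustration of this dead-reckoning idea (a simplified sketch, not a description of the patented fusion itself; the gravity handling and the fixed time step are assumptions), the IMU acceleration can be integrated twice per step to obtain the displacement increment:

#include <array>

// One dead-reckoning step: integrate the world-frame acceleration over dt to update velocity and position.
// 'accWorld' is assumed to be the IMU acceleration already rotated into the world frame with gravity removed.
struct DeadReckonState { std::array<double, 3> p{}, v{}; };

void integrateImuStep(DeadReckonState& s, const std::array<double, 3>& accWorld, double dt)
{
    for (int i = 0; i < 3; ++i) {
        s.p[i] += s.v[i] * dt + 0.5 * accWorld[i] * dt * dt;   // displacement increment over this step
        s.v[i] += accWorld[i] * dt;                            // velocity update
    }
}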
Disclosure of Invention
The invention aims to provide an indoor positioning method integrating a visual odometer and an IMU. The method addresses the poor real-time performance and robustness of traditional robot positioning: stereo matching precision is improved by using a binocular camera together with a stereo matching method based on indoor scene structural features, and the real-time performance and robustness of robot positioning are improved by combining back-end global optimization based on a factor graph.
In order to achieve the above object, the present invention provides an indoor positioning method integrating a visual odometer and an Inertial Measurement Unit (IMU), comprising the following steps:
step 1: performing target analysis on an indoor scene by using a camera to obtain a scene image;
step 2: extracting key frames of the scene images to obtain the key frames of the scene images;
step 3: based on a random sample consensus (RANSAC) algorithm, performing feature point matching on the key frames of two consecutive scene images captured by the camera at different poses to obtain pose constraint information of the scene images;
step 4: based on a factor graph optimization algorithm, assigning initial values to the edges between pose nodes in the factor graph according to the pose constraint information obtained by feature matching of the two consecutive scene images, and performing global pose optimization on the scene images to obtain an optimized pose;
step 5: optimizing the camera pose in real time according to the pose constraint information and the optimized pose to obtain the scene trajectory and a global map of the indoor scene, thereby completing the indoor positioning.
Most preferably, the key frame extraction comprises the steps of:
step 2.1: based on the combination of the line segment characteristics and the binary line descriptors, extracting the line structure relationship of the scene image to obtain the scene space structure of the scene image;
step 2.2: based on an ORB feature point extraction algorithm, extracting feature points of the scene image to obtain a feature point matrix of the scene image;
step 2.3: and combining the scene space structure of the scene image with the characteristic point matrix to obtain a key frame of the scene image.
Most preferably, the feature point extraction includes the steps of:
step 2.2.1: constructing a multilayer Gaussian pyramid according to the scene image;
step 2.2.2: calculating the position of the feature point of the Gaussian pyramid of each layer according to the multi-layer Gaussian pyramid based on a FAST algorithm;
step 2.2.3: dividing the Gaussian pyramid of each layer into a plurality of areas according to the positions of the feature points of the Gaussian pyramid of each layer;
step 2.2.4: and extracting the interest points with the maximum response value in the Gaussian pyramid of each layer, and performing descriptor calculation to obtain a characteristic point matrix of the scene image.
Most preferably, the camera is a binocular camera.
Most preferably, the indoor scene image is either an indoor zenith (ceiling) texture image or a floor texture image.
By applying the method, the poor real-time performance and robustness of traditional robot positioning are addressed: stereo matching precision is improved by using a binocular camera together with a stereo matching method based on indoor scene structural features, and the real-time performance and robustness of robot positioning are improved by combining back-end global optimization based on a factor graph.
Compared with the prior art, the invention has the following beneficial effects:
1. according to the indoor positioning method fusing the visual odometer and the IMU, provided by the invention, the stereoscopic matching precision and the drawing effect are improved by utilizing a binocular camera in combination with the scene structural characteristics and a stereoscopic matching method based on the indoor scene structural characteristics, and the visual SLAM system is constructed in combination with the back-end global optimization based on the factor graph so as to improve the real-time property and the robustness of robot positioning.
2. According to the indoor positioning method fusing the visual odometer and the IMU, provided by the invention, the target scene is analyzed, the accurate information constraint condition of pose estimation is obtained based on the inherent characteristics of the indoor scene, and the pose is optimized by adopting a factor graph algorithm.
3. The indoor positioning method fusing the visual odometer and the IMU, provided by the invention, has the advantages that the visual odometer is arranged at the front end, the motion of the camera between adjacent images and a local map are estimated, the camera poses measured by the visual odometer at different moments are received by the back end through a factor graph, and the camera poses are optimized to obtain globally consistent tracks and maps.
Drawings
Fig. 1 is a flowchart of an indoor positioning method according to the present invention.
Detailed Description
The invention will be further described by the following specific examples in conjunction with the drawings, which are provided for illustration only and are not intended to limit the scope of the invention.
The invention provides an indoor positioning method integrating a visual odometer and an IMU (Inertial Measurement Unit), which, as shown in Fig. 1, comprises the following steps:
step 1: and performing target analysis on the indoor scene of the transformer substation by adopting a binocular camera to obtain a scene image of the indoor scene of the transformer substation.
In this embodiment, the binocular camera model is MYNT S1030-IR-120, and the scene images of the indoor substation scene include indoor zenith (ceiling) texture images, floor texture images, and the like.
Step 2: extracting key frames of the scene images of the indoor scene of the transformer substation to obtain the key frames of the scene images of the indoor scene of the transformer substation;
the key frame extraction method comprises the following steps:
step 2.1: and based on the combination of the line segment characteristics and the binary line descriptors, extracting the line structure relationship of the scene image of the indoor scene of the transformer substation, and acquiring the scene space structure of the scene image of the indoor scene of the transformer substation.
Step 2.2: based on the ORB (Oriented FAST and Rotated BRIEF) feature point extraction algorithm, feature point extraction is carried out on the scene image to obtain the feature point matrix of the scene image of the indoor scene of the transformer substation.
The feature point extraction method comprises the following steps:
step 2.2.1: according to the scene image, a multilayer Gaussian pyramid of the scene image is constructed to realize scale invariance transformation of the scene image and to realize rotation invariance transformation by calibrating the direction through a gray scale centroid.
In this embodiment, the pyramid-construction code is as follows (rendered here as a C++/OpenCV sketch of the original listing, operating on an input image):

// Inputs: cv::InputArray image; outputs: std::vector<cv::KeyPoint> keypoints, cv::OutputArray descriptors
cv::Mat blurred;
cv::GaussianBlur(image, blurred, cv::Size(7, 7), 2.0);        // Gaussian blur of the input image
const float scaleFactor = 1.2f;                                // scale change between pyramid levels
const int nLevels = 8;                                         // number of pyramid levels
std::vector<cv::Mat> pyramid(nLevels);
for (int level = 0; level < nLevels; ++level) {
    double s = std::pow(1.0 / scaleFactor, level);
    cv::resize(blurred, pyramid[level], cv::Size(), s, s, cv::INTER_LINEAR);  // downsample the image for this level
    if (level != 0)
        cv::copyMakeBorder(pyramid[level], pyramid[level], 19, 19, 19, 19,
                           cv::BORDER_REFLECT_101);            // add an edge (border) around the image
}
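As a hedged illustration of the rotation-invariance part of Step 2.2.1 (the patch radius and the helper name below are illustrative assumptions, not taken from the patent), the direction of a feature point can be computed from the grey-scale centroid of its neighbourhood:

// Orientation of a feature point from the intensity (grey-scale) centroid of the patch around it.
static float intensityCentroidAngle(const cv::Mat& grey, const cv::Point2f& pt, int radius = 15)
{
    double m01 = 0.0, m10 = 0.0;                          // first-order image moments
    for (int dy = -radius; dy <= radius; ++dy) {
        for (int dx = -radius; dx <= radius; ++dx) {
            int x = cvRound(pt.x) + dx, y = cvRound(pt.y) + dy;
            if (x < 0 || y < 0 || x >= grey.cols || y >= grey.rows) continue;
            double v = grey.at<uchar>(y, x);              // pixel intensity I(x, y)
            m10 += dx * v;                                // sum of x * I
            m01 += dy * v;                                // sum of y * I
        }
    }
    return cv::fastAtan2(static_cast<float>(m01), static_cast<float>(m10));   // direction in degrees
}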
Step 2.2.2: based on a FAST algorithm, calculating the feature point position of the Gaussian pyramid of each layer of the scene image according to the Gaussian pyramids of the layers of the scene image.
In this embodiment, the feature point positions are calculated as follows (C++/OpenCV sketch of the original listing):

const int iniThFAST = 20;                                      // default threshold of the FAST feature point detector
std::vector<std::vector<cv::KeyPoint>> keypointsPerLevel(nLevels);
for (int level = 0; level < nLevels; ++level)
    cv::FAST(pyramid[level], keypointsPerLevel[level], iniThFAST, true);  // FAST corners with non-maximum suppression
Step 2.2.3: dividing the Gaussian pyramid of each layer into a plurality of areas according to the position of the feature point of the Gaussian pyramid of each layer;
step 2.2.4: and extracting the interest points with the maximum response value in the Gaussian pyramid of each layer, and performing descriptor calculation to obtain a characteristic point matrix of the scene image.
In this embodiment, the descriptor-computation loop is as follows (C++ sketch; 'keypoints' denotes the retained feature points, and the loop body is omitted in the original listing):

for (size_t id = 0; id < keypoints.size(); ++id) {
    // compute the binary (rBRIEF) descriptor of feature point id from its oriented image patch
}
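As a hedged sketch of Steps 2.2.3 and 2.2.4 (the grid cell size and the use of OpenCV's cv::ORB for the descriptor step are illustrative assumptions, not details given in the patent), each pyramid level can be divided into cells, the interest point with the largest response kept in each cell, and descriptors then computed for the retained points:

#include <opencv2/features2d.hpp>
#include <map>
#include <utility>
#include <vector>

// Keep only the keypoint with the largest response in each cell of a regular grid.
std::vector<cv::KeyPoint> keepMaxResponsePerCell(const std::vector<cv::KeyPoint>& kps, int cellSize = 30)
{
    std::map<std::pair<int, int>, cv::KeyPoint> best;
    for (const cv::KeyPoint& kp : kps) {
        std::pair<int, int> cell(static_cast<int>(kp.pt.x) / cellSize,
                                 static_cast<int>(kp.pt.y) / cellSize);
        auto it = best.find(cell);
        if (it == best.end() || kp.response > it->second.response)
            best[cell] = kp;                               // keep the maximum-response interest point of the cell
    }
    std::vector<cv::KeyPoint> kept;
    for (const auto& entry : best) kept.push_back(entry.second);
    return kept;
}

// Usage sketch for one pyramid level:
//   std::vector<cv::KeyPoint> kept = keepMaxResponsePerCell(keypointsPerLevel[0]);
//   cv::Mat descriptors;
//   cv::ORB::create()->compute(pyramid[0], kept, descriptors);   // 256-bit rBRIEF descriptors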
Step 2.3: and combining the scene space structure of the scene image of the indoor scene of the transformer substation and the characteristic point matrix of the scene image to obtain the key frame of the scene image.
Step 3: based on Random Sample Consensus (RANSAC), feature point matching is performed on the key frames of two consecutive scene images captured by the camera at different poses, so that the two scene images that are consecutive in time are associated with each other and the pose constraint information of the scene images is obtained.
The matching effect of the feature points directly influences the accuracy and the real-time performance of the feature point tracking process, and further greatly influences the accuracy and the efficiency of the motion estimation result.
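A minimal sketch of this matching-and-constraint step is given below; it assumes ORB (binary) descriptors, known camera intrinsics K, and OpenCV's RANSAC-based essential-matrix estimation, none of which are prescribed in this exact form by the patent:

#include <opencv2/calib3d.hpp>
#include <opencv2/features2d.hpp>
#include <vector>

// Match the descriptors of two consecutive key frames and estimate their relative pose with RANSAC.
void relativePoseBetweenKeyframes(const cv::Mat& desc1, const std::vector<cv::KeyPoint>& kp1,
                                  const cv::Mat& desc2, const std::vector<cv::KeyPoint>& kp2,
                                  const cv::Mat& K, cv::Mat& R, cv::Mat& t)
{
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);   // brute-force matching of binary descriptors
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    std::vector<cv::Point2f> pts1, pts2;
    for (const cv::DMatch& m : matches) {
        pts1.push_back(kp1[m.queryIdx].pt);
        pts2.push_back(kp2[m.trainIdx].pt);
    }

    cv::Mat inlierMask;                                             // RANSAC rejects mismatched feature points
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inlierMask);
    cv::recoverPose(E, pts1, pts2, K, R, t, inlierMask);            // relative rotation R and unit translation t
}

// R and t form the pose constraint between the two key frames that is later fed to the factor graph.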
Step 4: based on the factor graph optimization algorithm, a trajectory-only factor graph is constructed; the initial values of the edges between pose nodes are assigned according to the pose constraint information obtained by feature matching between the key frames of two consecutive scene images, and global pose optimization is performed on the scene images to obtain the optimized poses of the scene images.
The global pose optimization works as follows: motion edges (Motion Arcs) and measurement edges (Measurement Arcs) are obtained from the camera poses and the map features, where a measurement edge connects a pose with the feature points measured at that pose. Each edge corresponds to a nonlinear pose constraint; the pose constraint information represents the negative log-likelihood of the measurement and motion models, and the objective function is the collection of these constraints. At the back end, the factor graph optimization linearizes the set of constraints to obtain an information matrix and an information vector, and maximizes the product of the factors by adjusting the value of each variable, yielding the maximum a posteriori map estimate.
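As a hedged sketch of how such a trajectory-only factor graph could be built and optimized (the GTSAM library, the Pose3/BetweenFactor types and the noise values below are assumptions; the patent does not name a specific library), consider:

#include <gtsam/geometry/Pose3.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>
#include <gtsam/nonlinear/Values.h>
#include <gtsam/nonlinear/LevenbergMarquardtOptimizer.h>
#include <gtsam/slam/BetweenFactor.h>
#include <gtsam/slam/PriorFactor.h>

using namespace gtsam;

int main() {
  NonlinearFactorGraph graph;
  auto priorNoise = noiseModel::Diagonal::Sigmas((Vector(6) << 0.01, 0.01, 0.01, 0.01, 0.01, 0.01).finished());
  auto odomNoise  = noiseModel::Diagonal::Sigmas((Vector(6) << 0.05, 0.05, 0.05, 0.1, 0.1, 0.1).finished());

  // Anchor the first pose, then add one edge per pose constraint obtained from key-frame matching (or the IMU).
  graph.add(PriorFactor<Pose3>(0, Pose3(), priorNoise));
  Pose3 deltaPose(Rot3::Yaw(0.1), Point3(0.5, 0.0, 0.0));        // illustrative relative motion between key frames
  graph.add(BetweenFactor<Pose3>(0, 1, deltaPose, odomNoise));

  // Initial values of the pose nodes come from the visual odometer estimates.
  Values initial;
  initial.insert(0, Pose3());
  initial.insert(1, Pose3(Rot3::Yaw(0.09), Point3(0.48, 0.02, 0.0)));

  Values optimized = LevenbergMarquardtOptimizer(graph, initial).optimize();   // globally consistent poses
  optimized.print("optimized poses:\n");
  return 0;
}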
Step 5: according to the camera motion between the key frames of consecutive scene images estimated by the front-end visual odometer and the pose constraint information of the scene images, together with the optimized poses of the scene images obtained at different times by the back end through the factor graph, the camera pose is optimized in real time to obtain a globally consistent scene trajectory and map, completing the indoor positioning.
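As a further hedged sketch (this specific correction rule is an assumption rather than a detail stated in the patent), the real-time camera pose can be re-anchored to the latest optimized key-frame pose while keeping the visual odometer's relative motion since that key frame:

#include <opencv2/core.hpp>

// Re-anchor the current pose: keep the odometer's relative motion since the last key frame,
// but express it relative to that key frame's optimized pose from the factor graph back end.
cv::Matx44d correctedCurrentPose(const cv::Matx44d& optimizedKeyframePose,   // T_world_kf (after optimization)
                                 const cv::Matx44d& odomKeyframePose,        // T_world_kf (odometer estimate)
                                 const cv::Matx44d& odomCurrentPose)         // T_world_cur (odometer estimate)
{
    cv::Matx44d relative = odomKeyframePose.inv() * odomCurrentPose;         // T_kf_cur from the visual odometer
    return optimizedKeyframePose * relative;                                  // corrected T_world_cur
}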
The working principle of the invention is as follows:
performing target analysis on an indoor scene by using a camera to obtain a scene image; extracting key frames of the scene images to obtain the key frames of the scene images; based on a random sampling consistency algorithm, performing feature point matching on key frames in two continuous scene images of the camera under different poses to obtain pose constraint information of the scene images; based on a factor graph optimization algorithm, according to pose constraint information obtained by matching the characteristics of two continuous scene images, giving an initial value of edges between pose nodes in a factor graph, and performing pose global optimization on the scene images to obtain an optimized pose; and optimizing the pose of the camera in real time according to the pose constraint information and the optimized pose to obtain a scene track and a global map of an indoor scene, so as to complete indoor positioning.
In conclusion, the indoor positioning method fusing the visual odometer and the IMU addresses the poor real-time performance and robustness of traditional robot positioning, improves the stereo matching precision by using a binocular camera together with a stereo matching method based on indoor scene structural features, and improves the real-time performance and robustness of robot positioning by combining factor-graph-based back-end global optimization.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (5)

1. An indoor positioning method integrating a visual odometer and an IMU (Inertial Measurement Unit), characterized by comprising the following steps:
step 1: performing target analysis on an indoor scene by using a camera to obtain a scene image;
step 2: extracting key frames of the scene image to obtain the key frames of the scene image;
step 3: based on a random sample consensus (RANSAC) algorithm, performing feature point matching on the key frames of two consecutive scene images captured by the camera at different poses to obtain pose constraint information of the scene images;
step 4: based on a factor graph optimization algorithm, assigning initial values to the edges between pose nodes in the factor graph according to the pose constraint information, and performing global pose optimization on the scene image to obtain an optimized pose;
step 5: optimizing the camera pose in real time according to the pose constraint information and the optimized pose to obtain the scene trajectory and a global map of the indoor scene, thereby completing the indoor positioning.
2. The indoor positioning method integrating a visual odometer and an IMU according to claim 1, wherein the key frame extraction comprises the following steps:
step 2.1: based on the combination of line segment characteristics and binary line descriptors, extracting the line structure relationship of the scene image to obtain the scene space structure of the scene image;
step 2.2: based on an ORB feature point extraction algorithm, extracting feature points of the scene image to obtain a feature point matrix of the scene image;
step 2.3: and combining the scene space structure of the scene image with the characteristic point matrix to obtain a key frame of the scene image.
3. The indoor positioning method integrating a visual odometer and an IMU according to claim 2, wherein the feature point extraction comprises the following steps:
step 2.2.1: constructing a multilayer Gaussian pyramid according to the scene image;
step 2.2.2: calculating the position of the feature point of the Gaussian pyramid of each layer according to the multi-layer Gaussian pyramid based on a FAST algorithm;
step 2.2.3: dividing the Gaussian pyramid of each layer into a plurality of areas according to the positions of the feature points of the Gaussian pyramid of each layer;
step 2.2.4: and extracting the interest points with the maximum response value in the Gaussian pyramid of each layer, and performing descriptor calculation to obtain a characteristic point matrix of the scene image.
4. The indoor positioning method integrating a visual odometer and an IMU according to claim 1, wherein the camera is a binocular camera.
5. The indoor positioning method integrating a visual odometer and an IMU according to claim 1, wherein the indoor scene image is either an indoor zenith (ceiling) texture image or a floor texture image.
CN202011131968.XA 2020-10-21 2020-10-21 Indoor positioning method integrating visual odometer and IMU Pending CN112307917A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011131968.XA CN112307917A (en) 2020-10-21 2020-10-21 Indoor positioning method integrating visual odometer and IMU

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011131968.XA CN112307917A (en) 2020-10-21 2020-10-21 Indoor positioning method integrating visual odometer and IMU

Publications (1)

Publication Number Publication Date
CN112307917A 2021-02-02

Family

ID=74328605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011131968.XA Pending CN112307917A (en) 2020-10-21 2020-10-21 Indoor positioning method integrating visual odometer and IMU

Country Status (1)

Country Link
CN (1) CN112307917A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991515A (en) * 2021-02-26 2021-06-18 山东英信计算机技术有限公司 Three-dimensional reconstruction method, device and related equipment
CN114088104A (en) * 2021-07-23 2022-02-25 武汉理工大学 Map generation method under automatic driving scene

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590827A (en) * 2017-09-15 2018-01-16 重庆邮电大学 A kind of indoor mobile robot vision SLAM methods based on Kinect
CN109558879A (en) * 2017-09-22 2019-04-02 华为技术有限公司 A kind of vision SLAM method and apparatus based on dotted line feature
CN110044354A (en) * 2019-03-28 2019-07-23 东南大学 A kind of binocular vision indoor positioning and build drawing method and device
CN110853100A (en) * 2019-10-24 2020-02-28 东南大学 Structured scene vision SLAM method based on improved point-line characteristics
CN111024066A (en) * 2019-12-10 2020-04-17 中国航空无线电电子研究所 Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN111489393A * 2019-01-28 2020-08-04 速感科技(北京)有限公司 VSLAM method, controller and mobile device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590827A (en) * 2017-09-15 2018-01-16 重庆邮电大学 A kind of indoor mobile robot vision SLAM methods based on Kinect
CN109558879A (en) * 2017-09-22 2019-04-02 华为技术有限公司 A kind of vision SLAM method and apparatus based on dotted line feature
CN111489393A * 2019-01-28 2020-08-04 速感科技(北京)有限公司 VSLAM method, controller and mobile device
CN110044354A (en) * 2019-03-28 2019-07-23 东南大学 A kind of binocular vision indoor positioning and build drawing method and device
CN110853100A (en) * 2019-10-24 2020-02-28 东南大学 Structured scene vision SLAM method based on improved point-line characteristics
CN111024066A (en) * 2019-12-10 2020-04-17 中国航空无线电电子研究所 Unmanned aerial vehicle vision-inertia fusion indoor positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘宏伟, 余辉亮, et al.: "ORB特征四叉树均匀分布算法" [Uniform distribution algorithm of ORB features based on a quadtree], 自动化仪表 (Process Automation Instrumentation), vol. 39, no. 5, pp. 218-219 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991515A (en) * 2021-02-26 2021-06-18 山东英信计算机技术有限公司 Three-dimensional reconstruction method, device and related equipment
CN114088104A (en) * 2021-07-23 2022-02-25 武汉理工大学 Map generation method under automatic driving scene
CN114088104B (en) * 2021-07-23 2023-09-29 武汉理工大学 Map generation method under automatic driving scene

Similar Documents

Publication Publication Date Title
CN108242079B (en) VSLAM method based on multi-feature visual odometer and graph optimization model
CN107301654B (en) Multi-sensor high-precision instant positioning and mapping method
CN110956651B (en) Terrain semantic perception method based on fusion of vision and vibrotactile sense
CN111311666B (en) Monocular vision odometer method integrating edge features and deep learning
CN109186606B (en) Robot composition and navigation method based on SLAM and image information
Long et al. PSPNet-SLAM: A semantic SLAM detect dynamic object by pyramid scene parsing network
CN102722697B (en) Unmanned aerial vehicle autonomous navigation landing visual target tracking method
CN111899280B (en) Monocular vision odometer method adopting deep learning and mixed pose estimation
CN112734765A (en) Mobile robot positioning method, system and medium based on example segmentation and multi-sensor fusion
Sujiwo et al. Monocular vision-based localization using ORB-SLAM with LIDAR-aided mapping in real-world robot challenge
Jia et al. A Survey of simultaneous localization and mapping for robot
Yusefi et al. LSTM and filter based comparison analysis for indoor global localization in UAVs
Cai et al. A comprehensive overview of core modules in visual SLAM framework
Li et al. Overview of deep learning application on visual SLAM
Chen et al. A stereo visual-inertial SLAM approach for indoor mobile robots in unknown environments without occlusions
Li et al. A binocular MSCKF-based visual inertial odometry system using LK optical flow
CN112307917A (en) Indoor positioning method integrating visual odometer and IMU
Yu et al. Accurate and robust stereo direct visual odometry for agricultural environment
Zhou et al. A state-of-the-art review on SLAM
Ding et al. Stereo vision SLAM-based 3D reconstruction on UAV development platforms
CN112432653A (en) Monocular vision inertial odometer method based on point-line characteristics
Wang et al. A survey of simultaneous localization and mapping on unstructured lunar complex environment
CN117152228A (en) Self-supervision image depth estimation method based on channel self-attention mechanism
CN113920194B (en) Positioning method of four-rotor aircraft based on visual inertia fusion
Xia et al. YOLO-Based Semantic Segmentation for Dynamic Removal in Visual-Inertial SLAM

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination