CN112461228B - IMU and vision-based secondary loop detection positioning method in similar environment - Google Patents


Info

Publication number
CN112461228B
CN112461228B (application number CN202011206955.4A; published as application CN112461228A)
Authority
CN
China
Prior art keywords
imu
image
loop
pose
binocular camera
Prior art date
Legal status
Active
Application number
CN202011206955.4A
Other languages
Chinese (zh)
Other versions
CN112461228A (en)
Inventor
吕婧
邹霞
岳定春
应旻
涂良辉
Current Assignee
Nanchang Hangkong University
Original Assignee
Nanchang Hangkong University
Priority date
Filing date
Publication date
Application filed by Nanchang Hangkong University filed Critical Nanchang Hangkong University
Priority to CN202011206955.4A priority Critical patent/CN112461228B/en
Publication of CN112461228A publication Critical patent/CN112461228A/en
Application granted granted Critical
Publication of CN112461228B publication Critical patent/CN112461228B/en

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/005 — Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C 21/165 — Dead reckoning by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments
    • G01C 21/20 — Instruments for performing navigational calculations
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters (camera calibration)
    • G06T 2207/10004 — Image acquisition modality: still image; photographic image
    • Y02T 10/40 — Engine management systems (climate change mitigation technologies related to transportation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a secondary loop detection and positioning method based on an IMU and vision for similar environments, comprising the following steps: step one, calibrating and synchronizing the parameters of a binocular camera and an IMU; step two, extracting and matching image features; step three, estimating the pose and forming the movement track; step four, loop detection; step five, a secondary loop detection mechanism; and step six, repositioning. The method applies a rough comparison constraint to the pose of the current image frame using the IMU pose information: the current IMU position and heading are compared with the position and heading recorded for the candidate closed-loop image, and a direction-consistency pre-judgment is made. This prevents the false closure that occurs when surrounding scenes are so similar that image similarity alone would declare a loop, even though the current IMU position cannot possibly be the same place; the IMU direction pre-judgment thus avoids the positioning errors caused by similar images in similar environments. After repositioning, the updated high-precision pose is used to correct the accumulated drift error of the IMU, improving the robustness of the secondary loop detection and positioning method.

Description

IMU and vision-based secondary loop detection positioning method in similar environment
Technical Field
The invention relates to a secondary loop detection and positioning method, in particular to a secondary loop detection and positioning method based on an IMU and vision for similar environments.
Background
Unmanned aerial vehicles (UAVs) are evolving into intelligent vehicles and intelligent systems, and their level of intelligence is determined by their autonomous positioning and navigation capability. Fields such as military reconnaissance, power-line inspection, geological exploration, and forest fire prevention all place urgent demands on autonomous positioning; in particular, with the emergence of satellite-navigation jamming and spoofing techniques, a UAV is required to navigate fully autonomously over large scenes, free of dependence on satellite navigation signals. SLAM (Simultaneous Localization and Mapping) combining vision with an inertial measurement unit (IMU) provides new technical support and an effective approach for autonomous positioning and navigation without GPS signals. Through the complementarity of the two sensors, the IMU corrects and compensates the errors of the visual positioning information, improving the real-time positioning capability of the UAV. However, the IMU accumulates drift error over long runs, and without an effective means of correction, positioning will still eventually fail.
Meanwhile, in combined vision-IMU positioning, existing back-end loop detection methods optimize the poses of earlier related image frames by judging whether the current position lies in a previously visited area. This judgment is error-prone, because real environments contain many similar scenes, such as clusters of buildings with similar appearance, undulating sand dunes, and dense forests. Without GPS, the large number of similar images makes loop detection ambiguous: an actual loop may be rejected as a non-loop, and a non-loop may be accepted as a loop, causing erroneous closures and positioning failure. Therefore, in similar environments without GPS, a critical issue to be solved is how to prevent the false loop closures, and resulting positioning failures, caused by large numbers of similar scene images.
Disclosure of Invention
The technical problem the invention aims to solve is to provide a secondary loop detection and positioning method based on an IMU and vision for similar environments, capable of correcting both the visual accumulated error and the IMU accumulated drift error, and of closing loops correctly in similar environments without GPS.
In order to achieve the above purpose, the present invention provides the following technical solutions:
the IMU- and vision-based secondary loop detection and positioning method for similar environments comprises the following steps:
step one, calibrating and synchronizing parameters of a binocular camera and an IMU:
unifying the time stamps of the binocular camera and the IMU; calibrating the parameters of the binocular camera and the IMU, including the start-up delay of the binocular camera, the start-up delay of the IMU, the accelerometer zero bias, and the gyroscope zero bias (the zero bias is the non-zero value the IMU outputs while stationary); synchronizing the camera and IMU clocks; and reading the images acquired by the binocular camera and the pose information of the IMU;
step two, image feature extraction and image matching:
detecting corner points in the images acquired by the binocular camera as feature points, describing the image patch around each corner to form a feature descriptor, matching the descriptors of two consecutive images using the Hamming distance, and screening the feature matching point pairs to prevent mismatches;
thirdly, pose estimation and movement track formation:
from the 3D information of the matched groups of feature point pairs, using the least squares method to compute the rotation matrix R and translation vector t that minimize the sum of squared errors over the matching pairs, and obtaining the pose of the unmanned aerial vehicle and the track of the moving map points;
when the camera moves too fast, causing image blur and loss of positioning information, obtaining the current binocular camera pose by integrating the IMU pose information under the unified timestamp, and updating the associated map points;
step four, loop detection:
judging whether the vehicle has returned to a previously visited position from the similarity of two images, so as to establish a loop detection relation; measuring the similarity between image frames by building a tree-structured cluster database to judge whether the positions coincide and a loop exists; clustering the feature points and descriptors extracted during image matching into a tree-branch structure, so that image features can be screened rapidly, candidate images output, and the similarity-judgment time reduced; computing the class-vector similarity between the current image and key frames that are not connected to it but are adjacent and share common feature classes, and judging whether a loop occurs;
step five, a secondary loop detection mechanism:
after a closed loop is confirmed by the preceding steps, not closing the loop immediately, but applying a rough comparison constraint to the current image frame pose using the IMU pose information, i.e. comparing the current IMU position and heading with the position and heading of the candidate closed-loop image and making a direction-consistency pre-judgment, thereby reconfirming the correctness of the loop and forming a secondary detection mechanism;
step six, repositioning:
using the error result after loop confirmation, applying global nonlinear optimization to distribute the loop error evenly over all key image frames, so that the poses of all key frames in the world coordinate system and the associated map points are optimized and updated, yielding the relocation; and at the same time differentiating the updated current map points to correct the accumulated drift error of the IMU.
Preferably, the least squares formulation in step three is:

$$(R^*, t^*) = \arg\min_{R,t} \frac{1}{2} \sum_{i=1}^{n} \left\| p_i - (R\,p'_i + t) \right\|_2^2$$

where R is the rotation matrix, t is the translation vector, $p_i$ is a feature point in the current image, and $p'_i$ is the corresponding matched feature point in the previous image.
Preferably, the class-vector similarity in step four is computed as:

$$s(v_1, v_2) = 1 - \frac{1}{2} \left\| \frac{v_1}{\|v_1\|} - \frac{v_2}{\|v_2\|} \right\|_1$$

where $v_1$ and $v_2$ are the feature-class vectors of the two images.
The invention has the beneficial effects that:
1. because the visual image acquisition frequency is low while the IMU acquisition frequency is high, after timestamp synchronization the IMU, in the initial stage before it has accumulated significant drift error, provides more accurate positioning information than the visual images, and this IMU information can be used to correct the visual accumulated error;
2. through the combined IMU, when the camera moves too fast, causing image blur and loss of positioning information, the current pose is obtained by integrating the IMU pose information under the synchronized timestamp and the associated map points are updated, solving the positioning failure caused by visual image failure;
3. the secondary loop detection and positioning method applies a rough comparison constraint to the current image frame pose through the IMU pose information: the current IMU position and heading are compared with those of the candidate closed-loop image and a direction-consistency pre-judgment is made. This prevents the case where surrounding scenes, such as similar buildings, are alike enough that image similarity alone would declare a closed loop, even though the actual IMU position cannot be the same place; that is, the IMU direction pre-judgment prevents the positioning errors caused by similar images in similar environments;
4. the high-precision pose updated after repositioning is used to correct the accumulated drift error of the IMU, improving the robustness of the IMU- and vision-based secondary loop detection and positioning method.
Drawings
Fig. 1 is a flow chart of a secondary loop detection positioning method based on IMU and vision in a similar environment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, an embodiment of the present invention is provided: an IMU- and vision-based secondary loop detection and positioning method for similar environments, comprising the following steps:
step one, calibrating and synchronizing parameters of a binocular camera and an IMU:
unifying the time stamps of the binocular camera and the IMU; calibrating the parameters of the binocular camera and the IMU, including the start-up delay of the binocular camera, the start-up delay of the IMU, the accelerometer zero bias, and the gyroscope zero bias (the zero bias is the non-zero value the IMU outputs while stationary); synchronizing the camera and IMU clocks; and reading the binocular camera images and the IMU pose information;
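The time alignment in step one can be sketched as follows — a minimal illustration, assuming linear interpolation of IMU samples onto camera timestamps and treating the calibrated start-up delays as known constants (the function names and values are hypothetical, not from the patent):

```python
def align_timestamps(cam_times, cam_delay, imu_times, imu_delay):
    """Shift both streams onto a common clock by removing each
    sensor's calibrated start-up delay."""
    cam = [t - cam_delay for t in cam_times]
    imu = [t - imu_delay for t in imu_times]
    return cam, imu

def interpolate_imu(imu_times, imu_values, t):
    """Linearly interpolate a scalar IMU reading at camera timestamp t
    (the IMU rate is higher than the camera rate, so a bracketing pair
    of samples normally exists)."""
    for i in range(len(imu_times) - 1):
        t0, t1 = imu_times[i], imu_times[i + 1]
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return imu_values[i] + a * (imu_values[i + 1] - imu_values[i])
    raise ValueError("timestamp outside the IMU record")
```

A real implementation would interpolate full orientation states (e.g. with quaternion slerp) rather than scalars; the sketch only shows the timestamp bookkeeping.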
step two, image feature extraction and image matching:
detecting corner points in the images as feature points, describing the image patch around each corner to form a feature descriptor, matching the descriptors of two consecutive images using the Hamming distance, and screening the feature matching point pairs to prevent mismatches;
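The Hamming-distance matching and screening of step two can be sketched as below. This is a minimal brute-force illustration with binary descriptors represented as plain integers; the `filter_matches` threshold follows the twice-the-minimum-distance rule mentioned in claim 2, with a small floor added as a common safeguard (the floor value is an assumption, not stated in the patent):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(desc1, desc2):
    """Brute-force nearest-neighbour matching on Hamming distance.
    Returns (index_in_desc1, index_in_desc2, distance) triples."""
    matches = []
    for i, d1 in enumerate(desc1):
        dists = [hamming(d1, d2) for d2 in desc2]
        j = min(range(len(dists)), key=dists.__getitem__)
        matches.append((i, j, dists[j]))
    return matches

def filter_matches(matches, floor=30):
    """Keep pairs whose distance is below twice the minimum distance,
    with a floor so a zero minimum does not reject everything."""
    d_min = min(d for _, _, d in matches)
    thresh = max(2 * d_min, floor)
    return [(i, j) for i, j, d in matches if d < thresh]
```

In practice the corner detection and 256-bit descriptors would come from an ORB-style pipeline; the screening logic is the same.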
thirdly, pose estimation and movement track formation:
from the 3D information of the matched groups of feature point pairs, using the least squares method to compute the rotation matrix R and translation vector t that minimize the sum of squared errors over the matching pairs, and obtaining the pose of the binocular camera and the track of the moving map points;
when the camera moves too fast, causing image blur and loss of positioning information, obtaining the current binocular camera pose by integrating the IMU pose information under the unified timestamp, and updating the associated map points;
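The IMU fallback described above can be sketched as a simple dead-reckoning integration. This assumes the accelerations have already been gravity-compensated and rotated into the world frame, which the patent does not spell out:

```python
def integrate_imu(pos, vel, accels, dt):
    """Propagate position and velocity by integrating a sequence of
    world-frame accelerations over fixed steps dt (dead reckoning).
    pos, vel and each accel are 3-element sequences."""
    for a in accels:
        # Second-order position update, then first-order velocity update.
        pos = [p + v * dt + 0.5 * ai * dt * dt
               for p, v, ai in zip(pos, vel, a)]
        vel = [v + ai * dt for v, ai in zip(vel, a)]
    return pos, vel
```

Any bias and noise in the accelerations integrate into drift, which is exactly why steps five and six of the method feed corrected poses back to the IMU.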
step four, loop detection:
judging whether the vehicle has returned to a previously visited position from the similarity of two images, so as to establish a loop detection relation; measuring the similarity between image frames by building a tree-structured cluster database to judge whether the positions coincide and a loop exists; clustering the feature points and descriptors extracted during image matching into a tree-branch structure, so that image features can be screened rapidly, candidate images output, and the similarity-judgment time reduced; computing the class-vector similarity between the current image and key frames that are not connected to it but are adjacent and share common feature classes, and judging whether a loop occurs;
step five, a secondary loop detection mechanism:
after a closed loop is confirmed by the preceding steps, not closing the loop immediately, but applying a rough comparison constraint to the current image frame pose using the IMU pose information, i.e. comparing the current IMU position and heading with the position and heading of the candidate closed-loop image and making a direction-consistency pre-judgment, thereby reconfirming the correctness of the loop and forming a secondary detection mechanism;
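A minimal sketch of the step-five direction-consistency check: the visual loop candidate is accepted only when the IMU pose roughly agrees with the pose stored for the candidate frame. The distance and yaw thresholds here are hypothetical placeholders, not values from the patent:

```python
import math

def confirm_loop(imu_pos, imu_yaw, loop_pos, loop_yaw,
                 max_dist=5.0, max_yaw_deg=30.0):
    """Secondary loop check: accept the visual candidate only if the
    current IMU position is near the candidate frame's position and the
    headings (yaw, in degrees) roughly agree."""
    dist = math.dist(imu_pos, loop_pos)
    # Wrap the yaw difference into [-180, 180] before comparing.
    dyaw = abs((imu_yaw - loop_yaw + 180.0) % 360.0 - 180.0)
    return dist <= max_dist and dyaw <= max_yaw_deg
```

A candidate that passes visual similarity but fails this coarse IMU constraint (e.g. two look-alike buildings far apart) is rejected instead of triggering a false closure.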
step six, repositioning:
using the error result after loop confirmation, applying global nonlinear optimization to distribute the loop error evenly over all key image frames, so that the poses of all key frames in the world coordinate system and the associated map points are optimized and updated, yielding the relocation; and at the same time differentiating the updated current map points to correct the accumulated drift error of the IMU.
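The idea of distributing the loop error over the key frames can be illustrated with a toy linear correction — a stand-in for the actual global nonlinear optimization, shown only to make the idea concrete (real systems solve a pose-graph problem instead):

```python
def distribute_loop_error(poses, correction):
    """Spread a loop-closure translation error linearly along the
    keyframe chain: the first frame receives no correction, the last
    frame the full correction. poses are position tuples."""
    n = len(poses) - 1
    out = []
    for k, p in enumerate(poses):
        w = k / n  # interpolation weight along the chain
        out.append(tuple(pi + w * ci for pi, ci in zip(p, correction)))
    return out
```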
In this embodiment, the least squares formulation in step three is:

$$(R^*, t^*) = \arg\min_{R,t} \frac{1}{2} \sum_{i=1}^{n} \left\| p_i - (R\,p'_i + t) \right\|_2^2$$

where R is the rotation matrix, t is the translation vector, $p_i$ is a feature point in the current image, and $p'_i$ is the corresponding matched feature point in the previous image.
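This least-squares problem has a closed-form solution via the Kabsch/SVD method — one standard way to solve it; the patent does not name a specific solver, so the following is a sketch, not the claimed implementation:

```python
import numpy as np

def estimate_pose(p, p_prev):
    """Least-squares R, t minimising sum_i ||p_i - (R p'_i + t)||^2
    for matched 3D point sets, via the Kabsch/SVD method."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(p_prev, dtype=float)
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (q - cq).T @ (p - cp)          # 3x3 cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cp - R @ cq
    return R, t
```

In a real pipeline this would run inside a RANSAC loop over the screened matches to reject remaining outliers.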
In this embodiment, the class-vector similarity in step four is computed as:

$$s(v_1, v_2) = 1 - \frac{1}{2} \left\| \frac{v_1}{\|v_1\|} - \frac{v_2}{\|v_2\|} \right\|_1$$

where $v_1$ and $v_2$ are the feature-class vectors of the two images.
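Assuming the class vectors are nonnegative bag-of-words histograms, the similarity score can be sketched as the L1-normalised comparison commonly used with tree vocabularies (an assumption about the exact normalisation, since the original formula image is not recoverable):

```python
def bow_similarity(v1, v2):
    """Similarity of two nonnegative feature-class (bag-of-words)
    vectors: s = 1 - 0.5 * || v1/|v1| - v2/|v2| ||_1.
    Returns 1.0 for identical histograms, 0.0 for disjoint ones."""
    n1, n2 = sum(v1), sum(v2)  # L1 norms (weights assumed nonnegative)
    return 1.0 - 0.5 * sum(abs(a / n1 - b / n2) for a, b in zip(v1, v2))
```

A loop candidate is kept when this score against a non-connected but adjacent key frame exceeds a tuned threshold.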
In summary, the embodiment of the invention provides a secondary loop detection and positioning method based on an IMU and vision for similar environments. It applies a rough comparison constraint to the current image frame pose through the IMU pose information: the current IMU position and heading are compared with those of the candidate closed-loop image, and a direction-consistency pre-judgment is made. This prevents the case where surrounding scenes, such as similar buildings, are alike enough that image similarity alone would declare a closed loop even though the actual IMU position cannot be the same place; that is, the IMU direction pre-judgment prevents the positioning errors caused by similar images in similar environments. The updated high-precision pose after repositioning is used to correct the accumulated drift error of the IMU, improving the robustness of the IMU- and vision-based secondary loop detection and positioning method.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (4)

1. An IMU- and vision-based secondary loop detection and positioning method for similar environments, characterized by comprising the following steps:
step one, calibrating and synchronizing parameters of a binocular camera and an IMU:
unifying the time stamps of the binocular camera and the IMU; calibrating the parameters of the binocular camera and the IMU, including the start-up delay of the binocular camera, the start-up delay of the IMU, the accelerometer zero bias, and the gyroscope zero bias; synchronizing the camera and IMU clocks; and reading the images acquired by the binocular camera and the IMU pose information;
step two, image feature extraction and image matching:
detecting corner points in the images acquired by the binocular camera as feature points, describing the image patch around each corner to form a feature descriptor, matching the descriptors of two consecutive images using the Hamming distance, and screening the feature matching point pairs to prevent mismatches;
thirdly, pose estimation and movement track formation:
from the 3D information of the matched groups of feature point pairs, using the least squares method to compute the rotation matrix R and translation vector t that minimize the sum of squared errors over the matching pairs, and obtaining the pose of the binocular camera and the track of the moving map points;
when the camera moves too fast, causing image blur and loss of positioning information, obtaining the current binocular camera pose by integrating the IMU pose information under the unified timestamp, and updating the associated map points;
step four, loop detection:
judging whether the vehicle has returned to a previously visited position from the similarity of two images, so as to establish a loop detection relation; measuring the similarity between image frames by building a tree-structured cluster database to judge whether the positions coincide and a loop exists; clustering the feature points and descriptors extracted during image matching into a tree-branch structure, so that image features can be screened rapidly, candidate images output, and the similarity-judgment time reduced; computing the class-vector similarity between the current image and key frames that are not connected to it but are adjacent and share common feature classes, and judging whether a loop occurs;
step five, a secondary loop detection mechanism:
after a closed loop is confirmed by the preceding steps, not closing the loop immediately, but applying a rough comparison constraint to the current image frame pose using the IMU pose information, i.e. comparing the current IMU position and heading with the position and heading of the candidate closed-loop image and making a direction-consistency pre-judgment, thereby reconfirming the correctness of the loop and forming a secondary detection mechanism;
step six, repositioning:
using the error result after loop confirmation, applying global nonlinear optimization to distribute the loop error evenly over all key image frames, so that the poses of all key frames in the world coordinate system and the associated map points are optimized and updated, yielding the relocation; and at the same time differentiating the updated current map points to correct the accumulated drift error of the IMU.
2. The IMU- and vision-based secondary loop detection and positioning method according to claim 1, wherein in step two a feature matching point pair is retained only if the Hamming distance between its descriptors is less than twice the minimum distance among all matches.
3. The IMU- and vision-based secondary loop detection and positioning method in a similar environment according to claim 1, wherein the least squares formulation in step three is:

$$(R^*, t^*) = \arg\min_{R,t} \frac{1}{2} \sum_{i=1}^{n} \left\| p_i - (R\,p'_i + t) \right\|_2^2$$

where R is the rotation matrix, t is the translation vector, $p_i$ is a feature point in the current image, and $p'_i$ is the corresponding matched feature point in the previous image.
4. The method according to claim 1, wherein the formula for computing the similarity of the class vectors in step four is:

$$s(v_1, v_2) = 1 - \frac{1}{2} \left\| \frac{v_1}{\|v_1\|} - \frac{v_2}{\|v_2\|} \right\|_1$$

where $v_1$ and $v_2$ are the feature-class vectors of the two images.
CN202011206955.4A 2020-11-03 2020-11-03 IMU and vision-based secondary loop detection positioning method in similar environment Active CN112461228B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011206955.4A CN112461228B (en) 2020-11-03 2020-11-03 IMU and vision-based secondary loop detection positioning method in similar environment


Publications (2)

Publication Number Publication Date
CN112461228A CN112461228A (en) 2021-03-09
CN112461228B (en) 2023-05-09

Family

ID=74834896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011206955.4A Active CN112461228B (en) 2020-11-03 2020-11-03 IMU and vision-based secondary loop detection positioning method in similar environment

Country Status (1)

Country Link
CN (1) CN112461228B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506342B (en) * 2021-06-08 2024-01-02 北京理工大学 SLAM omni-directional loop correction method based on multi-camera panoramic vision
CN113900517B (en) * 2021-09-30 2022-12-20 北京百度网讯科技有限公司 Route navigation method and device, electronic equipment and computer readable medium
CN115631319B (en) * 2022-11-02 2023-06-23 北京科技大学 Loop detection method based on cross attention network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110044354A (en) * 2019-03-28 2019-07-23 东南大学 A kind of binocular vision indoor positioning and build drawing method and device
CN110533722A (en) * 2019-08-30 2019-12-03 的卢技术有限公司 A kind of the robot fast relocation method and system of view-based access control model dictionary
CN110986968A (en) * 2019-10-12 2020-04-10 清华大学 Method and device for real-time global optimization and error loop judgment in three-dimensional reconstruction
CN111060101A (en) * 2018-10-16 2020-04-24 深圳市优必选科技有限公司 Vision-assisted distance SLAM method and device and robot
CN111462231A (en) * 2020-03-11 2020-07-28 华南理工大学 Positioning method based on RGBD sensor and IMU sensor
CN111693047A (en) * 2020-05-08 2020-09-22 中国航空工业集团公司西安航空计算技术研究所 Visual navigation method for micro unmanned aerial vehicle in high-dynamic scene
CN111767905A (en) * 2020-09-01 2020-10-13 南京晓庄学院 Improved image method based on landmark-convolution characteristics


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Optimized LOAM using ground plane constraints and SegMatch-based loop detection; Liu X et al.; Sensors; Vol. 19, No. 24; pp. 1-19 *
Research on SLAM technology based on monocular vision and an inertial measurement unit; Yu Wei; China Master's Theses Full-text Database, Information Science and Technology; No. 02; pp. I138-2043 *
A deep-learning-based loop closure detection method for visual SLAM; Yu Yu et al.; Computer Engineering and Design; Vol. 40, No. 02; pp. 529-536 *


Similar Documents

Publication Publication Date Title
CN111561923B (en) SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
Sola et al. Fusing monocular information in multicamera SLAM
CN110428467B (en) Robot positioning method combining camera, imu and laser radar
CN112461228B (en) IMU and vision-based secondary loop detection positioning method in similar environment
Sim et al. Integrated position estimation using aerial image sequences
CN110033489B (en) Method, device and equipment for evaluating vehicle positioning accuracy
CN112556719B (en) Visual inertial odometer implementation method based on CNN-EKF
CN108196285B (en) Accurate positioning system based on multi-sensor fusion
CN103411587B (en) Positioning and orientation method and system
CN114964276B (en) Dynamic vision SLAM method integrating inertial navigation
CN110794828A (en) Road sign positioning method fusing semantic information
Wei et al. GPS and Stereovision‐Based Visual Odometry: Application to Urban Scene Mapping and Intelligent Vehicle Localization
CN118135526B (en) Visual target recognition and positioning method for four-rotor unmanned aerial vehicle based on binocular camera
CN114485640A (en) Monocular vision inertia synchronous positioning and mapping method and system based on point-line characteristics
Hallquist et al. Single view pose estimation of mobile devices in urban environments
CN116380079A (en) Underwater SLAM method for fusing front-view sonar and ORB-SLAM3
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
CN116007609A (en) Positioning method and computing system for fusion of multispectral image and inertial navigation
CN117128954A (en) Complex environment-oriented combined positioning method for aircraft
CN117974766B (en) Multi-target identity judging method of distributed double infrared sensors based on space-time basis
DeFranco Detecting and tracking moving objects from a small unmanned air vehicle
Sheikh et al. Geodetic alignment of aerial video frames
Fong et al. Computer vision centric hybrid tracking for augmented reality in outdoor urban environments
Hide et al. An integrated IMU, GNSS and image recognition sensor for pedestrian navigation
CN115344033B (en) Unmanned ship navigation and positioning method based on monocular camera/IMU/DVL tight coupling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant