A curated list of SLAM resources
Last updated: Mar. 14th, 2021.
The repo is maintained by Youjie Xia. It mainly collects awesome repositories relevant to SLAM/VO on GitHub, including those for the PC, those for mobile devices, and some learner-friendly tutorials.
Regarding awesome SLAM papers, please refer to Awesome-SLAM-Papers.
If you want to know more about dependencies/packages of SLAM systems, please refer to Installing Dependencies on Ubuntu 16.04 LTS towards SLAM Projects (Updating).
If you find this repo useful, please watch, star or fork it!
If you are interested in SLAM, contributions are welcome! Feel free to create a pull request or contact me.
Note: entries follow the format
`repository name: one-sentence introduction`
(with a link to the corresponding repo)
- 1. Hot SLAM Repos on GitHub
- 2. Visual SLAM
- 3. Visual Inertial SLAM
- 4. LIDAR based SLAM
- 5. Learning based SLAM
- 6. Mobile End SLAM
- 7. Datasets
- 8. Tutorials
- 9. Selected Blogs
- 10. Research Groups
- 11. Community
## 1. Hot SLAM Repos on GitHub

- Awesome-SLAM: Resources and Resource Collections of SLAM
- awesome-slam: A curated list of awesome SLAM tutorials, projects and communities.
- SLAM: learning SLAM, courses, papers and others
- A list of current SLAM (Simultaneous Localization and Mapping) / VO (Visual Odometry) algorithms
- awesome-visual-slam: The list of vision-based SLAM / Visual Odometry open source, blogs, and papers
- Lee-SLAM-source: SLAM development learning resources and experience sharing (in Chinese)
- awesome-SLAM-list
- VIO-Resources
## 2. Visual SLAM

- OpenVSLAM: A Versatile Visual SLAM Framework
- OpenSfM: Open source Structure-from-Motion pipeline
- GSLAM (A General SLAM Framework and BenchMark)
- ScaViSLAM
- ORB_SLAM: A Versatile and Accurate Monocular SLAM
- LSD-SLAM: Large-Scale Direct Monocular SLAM
- DSO: Direct Sparse Odometry
- LDSO: Direct Sparse Odometry with Loop Closure
- SVO: Semi-direct Visual Odometry
- PTAM: Parallel Tracking and Mapping
- LPVO: Line and Plane based Visual Odometry
- LCSD_SLAM: Loosely-Coupled Semi-Direct Monocular SLAM
- CCM-SLAM: Robust and Efficient Centralized Collaborative Monocular SLAM for Robotic Teams
- ORB_SLAM2
- ORBSLAM2_with_pointcloud_map
- PL-SLAM: a Stereo SLAM System through the Combination of Points and Line Segments
- StVO-PL: Stereo Visual Odometry by combining point and line segment features
- PL-SVO
- stereo-dso: Direct Sparse Odometry with Stereo Cameras
- S-PTAM: Stereo Parallel Tracking and Mapping, Python implementation: stereo_ptam
- Robust Stereo Visual Odometry
- OV²SLAM: A Fully Online and Versatile Visual SLAM for Real-Time Applications
- Dense Visual Odometry and SLAM
- DVO: Dense Visual Odometry
- PlanarSLAM
- badslam: Bundle Adjusted Direct RGB-D SLAM
- RESLAM: A real-time robust edge-based SLAM system
- VDO-SLAM: A Visual Dynamic Object-aware SLAM System
- REVO: Robust Edge-based Visual Odometry
- CubeSLAM: Monocular 3D Object Detection and SLAM
- se2lam: Visual-Odometric On-SE(2) Localization and Mapping
- se2clam: SE(2)-Constrained Localization and Mapping by Fusing Odometry and Vision
- BreezySLAM: Simple, efficient, open-source package for Simultaneous Localization and Mapping in Python, Matlab, Java, and C++
- MultiCol-SLAM: a multi-fisheye camera SLAM
- Event-based Stereo Visual Odometry
## 3. Visual Inertial SLAM

- maplab: An open visual-inertial mapping framework.
- ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
- VINS-Fusion: An optimization-based multi-sensor state estimator
- Kimera: an open-source library for real-time metric-semantic localization and mapping
- OpenVINS: An open source platform for visual-inertial navigation research
- OKVIS: Open Keyframe-based Visual-Inertial SLAM (ROS Version)
- ROVIO: Robust Visual Inertial Odometry
- R-VIO: Robocentric Visual-Inertial Odometry
- LARVIO: A lightweight, accurate and robust monocular visual inertial odometry based on Multi-State Constraint Kalman Filter
- msckf_mono
- LearnVIORB: Visual Inertial SLAM based on ORB-SLAM2 (ROS Version), LearnViORB_NOROS (Non-ROS Version)
- PVIO: Robust and Efficient Visual-Inertial Odometry with Multi-plane Priors
- PL-VIO: monocular visual inertial system with point and line features
- PL-VINS: Real-Time Monocular Visual-Inertial SLAM with Point and Line Features
- Adaptive Line and Point Feature-based Visual Inertial Odometry for Robust Localization in Indoor Environments
- REBiVO: Realtime Edge Based Inertial Visual Odometry for a Monocular Camera
- Co-VINS: Collaborative Localization for Multiple Monocular Visual-Inertial Systems
- msckf_vio: Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight
- OKVIS: Open Keyframe-based Visual-Inertial SLAM
- Basalt: Visual-Inertial Mapping with Non-Linear Factor Recovery
- ICE-BA: Incremental, Consistent and Efficient Bundle Adjustment for Visual-Inertial SLAM
- ORBSLAM_DWO: stereo + inertial input based on ORB_SLAM
- VI-Stereo-DSO
- Semi-Dense Direct Visual Inertial Odometry
- LearnVIORBnorosgai2: Visual Inertial SLAM based on ORB-SLAM2 (Non-ROS Version)
- ygz-stereo-inertial: a stereo-inertial visual odometry
## 4. LIDAR based SLAM

- Cartographer
- LOAM-Livox: A robust LiDAR Odometry and Mapping (LOAM) package for Livox-LiDAR
- FAST-LIO
- LOL: Lidar-only Odometry and Localization in 3D point cloud maps
- PyICP SLAM: Full-python LiDAR SLAM using ICP and Scan Context
- LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
- LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain
- hdl_graph_slam: 3D LIDAR-based Graph SLAM
- A-LOAM: Advanced implementation of LOAM
- LIO-mapping: A Tightly Coupled 3D Lidar and Inertial Odometry and Mapping Approach
- SC-LeGO-LOAM: LiDAR SLAM: Scan Context + LeGO-LOAM
- Fast LOAM: Fast and Optimized Lidar Odometry And Mapping for indoor/outdoor localization
- SuMa: Surfel-based Mapping using 3D Laser Range Data
- LINS: LiDAR-inertial-SLAM
- ISCLOAM: Intensity Scan Context based full SLAM implementation for autonomous driving
- MULLS: Versatile LiDAR SLAM via Multi-metric Linear Least Square
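Many of the pipelines above (e.g. PyICP SLAM) are built around ICP scan registration. As background, here is the single alignment step at the heart of point-to-point ICP: the closed-form Kabsch/SVD solve for the rigid transform, assuming correspondences are already known. This is a from-scratch NumPy sketch for illustration, not code from any of the listed repos.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/SVD) mapping src -> dst.
    Assumes correspondences are already known (same row order in both arrays)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy 2D "scan": random points rotated by 30 degrees and shifted.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -1.0])
src = np.random.default_rng(0).uniform(-1, 1, size=(50, 2))
dst = src @ R_true.T + t_true

R, t = best_rigid_transform(src, dst)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

Full ICP alternates this solve with a nearest-neighbor correspondence search until the transform converges; scan-context methods like SC-LeGO-LOAM add a global descriptor on top to find loop-closure candidates.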
## 5. Learning based SLAM

The sections above list SLAM algorithms based on conventional methods. This section lists SLAM algorithms that use learning-based methods.
- A collection of deep learning based localization models
- 3D-Reconstruction-with-Deep-Learning-Methods
- TLIO: Tight Learned Inertial Odometry
- Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction
- SuperPoint + ORB_SLAM2
- VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem
- DeepSFM: Structure From Motion Via Deep Bundle Adjustment
- Unsupervised Monocular Visual-inertial Odometry Network
- Semantic SLAM
- CNN-DSO: Direct Sparse Odometry with CNN Depth Prediction
- CNN-SVO
- KFNet: Learning Temporal Camera Relocalization using Kalman Filtering
- Unsupervised Depth Completion from Visual Inertial Odometry
- The Perfect Match: 3D Point Cloud Matching with Smoothed Densities
- Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation
- M^3SNet: Unsupervised Multi-metric Multi-view Stereo Network
- Deep EKF VIO
- Active Neural SLAM
- DeepFactors
- OverlapNet: Loop Closing for 3D LiDAR-based SLAM
- SO-Net: Self-Organizing Network for Point Cloud Analysis
- Geometry-Aware Learning of Maps for Camera Localization
- DeepV2D: Video to Depth with Differentiable Structure from Motion
- PVN3D: A Deep Point-wise 3D Keypoints Voting Network for 6DoF Pose Estimation
- DeepMVS: Learning Multi-View Stereopsis
- Epipolar Transformers
- DF-VO: Depth and Flow for Visual Odometry
- DeepTAM: Deep Tracking and Mapping
- GCNv2 SLAM: Real-time SLAM system with deep features
- FCGF: Fully Convolutional Geometric Features: Fast and accurate 3D features for registration and correspondence
- Deep Image Retrieval
- Key.Net: Keypoint Detection by Handcrafted and Learned CNN Filters
- SuMa++: Efficient LiDAR-based Semantic SLAM
- DS-SLAM
- Probabilistic Data Association via Mixture Models for Robust Semantic SLAM
- SIVO: Semantically Informed Visual Odometry and Mapping
- orbslam_semantic_nav_ros (RGB-D)
- Pop-up SLAM: Semantic Monocular Plane SLAM for Low-texture Environments
- Semantic SLAM using ROS, ORB SLAM, PSPNet101
## 6. Mobile End SLAM

The sections above list SLAM algorithms that run on the PC. This section lists references and resources for SLAM development on mobile devices.
- ORB_SLAM-iOS
- ORB_SLAM2-iOS
- MobileSLAM: LSD SLAM on Mobile Phone
- SLAM_AR_Android
- VINS-Mobile: Monocular Visual-Inertial State Estimator on Mobile Phones
- Awesome-ARKit
- Awesome-ARCore
- MixedRealityToolkit-Unity
- arcore-android-sdk
- OpenARK
- opencv-markerless-AR-Mobile
- DepthAPISampleForiOS11
- AVDepthCamera
- ios11-depth-map-test
- ARCore Depth Lab: Depth API Samples for Unity
- AR-Depth: Fast Depth Densification for Occlusion-Aware Augmented Reality
- AR-Depth-cpp: C++ implementation of Fast Depth Densification for Occlusion-aware Augmented Reality (SIGGRAPH-Asia 2018)
- Microsoft Computer Vision API: Android Client Library & Sample
- GPUImage: An open source iOS framework for GPU-based image and video processing
## 7. Datasets

- Awesome Robotics Datasets: A collection of useful datasets for robotics and computer vision
- ADVIO: An Authentic Dataset for Visual-Inertial Odometry
## 8. Tutorials

- 14 Lectures on Visual SLAM (视觉SLAM十四讲), with English and Chinese versions
- Practice of the SlamBook
- GraphSLAM_tutorials_code
- SLAM development learning resources and experience sharing (in Chinese)
- Notes on Visual SLAM/VIO algorithms (in Chinese)
- opengv
- Geometry Central
- vilib: CUDA Visual Library by RPG
- Vitis Vision Library
- OpenGR: A C++ library for 3D Global Registration
- OpenCP: Computational photography library. The code is parallelized by using SIMD intrinsics and multi-threading.
- RoboticSystemsBook
- MATLABRobotics: MATLAB sample codes for mobile robot navigation
- Kindr: Kinematics and Dynamics for Robotics
- Sensor Fusion in ROS: An in-depth step-by-step tutorial for implementing sensor fusion with robot_localization
- fuse: The fuse stack provides a general architecture for performing sensor fusion live on a robot. Some possible applications include state estimation, localization, mapping, and calibration.
- GPU Computing in Robotics
- Lie groups for Computer Vision
- Lie groups for 2D and 3D Transformations
- Hermite Splines in Lie Groups as Products of Geodesics
- LieTransformer
- Sophus: C++ implementation of Lie Groups using Eigen
- manif: A small C++11 header-only library for Lie theory
- Gauss-Newton/Levenberg-Marquardt Optimization
- How a Kalman filter works, in pictures
- Kalman Filter (卡爾曼濾波): an introduction in Chinese
- Understanding the Basis of the Kalman Filter Via a Simple and Intuitive Derivation (Chinese translation)
- ceres-solver: A large scale non-linear optimization library
- g2o: A General Framework for Graph Optimization
- GTSAM: Georgia Tech Smoothing and Mapping Library
- miniSAM: A general and flexible factor graph non-linear least square optimization framework
- AprilSAM: Real-time Smoothing and Mapping
- GTSAM Tutorial Examples
- AMGCL: C++ library for solving large sparse linear systems with algebraic multigrid method
- Armadillo: fast C++ library for linear algebra & scientific computing
- IFOPT: An Eigen-based, light-weight C++ Interface to Nonlinear Programming Solvers (Ipopt, Snopt)
- LBFGS++: A header-only C++ library for L-BFGS and L-BFGS-B algorithms
- OptimLib: a lightweight C++ library of numerical optimization methods for nonlinear functions
- PoseLib: a collection of minimal solvers for camera pose estimation
- fpm: C++ header-only fixed-point math library
- kalibr: The Kalibr visual-inertial calibration toolbox
- kalibr_allan: IMU Allan standard deviation charts for use with Kalibr and inertial kalman filters
- Accurate geometric camera calibration with generic camera models
- LI-Calib: Targetless Calibration of LiDAR-IMU System Based on Continuous-time Batch Estimation
- Online Photometric Calibration
- IMU-TK: Inertial Measurement Unit ToolKit
- crisp: Camera-to-IMU calibration and synchronization toolbox
- VersaVIS: An Open Versatile Multi-Camera Visual-Inertial Sensor Suite
- RansacLib: Template-based implementation of RANSAC and its variants in C++
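The solvers listed above (Ceres, g2o, GTSAM, miniSAM, ...) all minimize nonlinear least-squares objectives of the kind the Gauss-Newton/Levenberg-Marquardt post explains. As a companion, here is a minimal Gauss-Newton iteration for a small curve-fitting problem, written from scratch with NumPy purely as an illustration; the function names are mine, not from any listed library.

```python
import numpy as np

def gauss_newton(residual_jac, x0, iters=20):
    """Generic Gauss-Newton: x <- x - (J^T J)^{-1} J^T r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual_jac(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)  # solve the normal equations
    return x

# Fit y = a * exp(b * t) to noise-free samples.
t = np.linspace(0.0, 1.0, 30)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)

def residual_jac(p):
    a, b = p
    e = np.exp(b * t)
    r = a * e - y                        # residual vector
    J = np.column_stack([e, a * t * e])  # Jacobian d r / d(a, b)
    return r, J

a_est, b_est = gauss_newton(residual_jac, x0=[1.0, 0.0])
assert np.allclose([a_est, b_est], [a_true, b_true])
```

Levenberg-Marquardt, used by most of the solvers above, damps the same normal-equations solve with `J.T @ J + lam * I` so that steps shrink toward gradient descent when the linearization is poor.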
## 9. Selected Blogs

- The Future of Real-Time SLAM and Deep Learning vs SLAM
- IMU Data Fusing: Complementary, Kalman, and Mahony Filter
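As a companion to the Kalman-filter posts above, here is a minimal scalar Kalman filter for a (nearly) constant state observed through noisy measurements. It is a from-scratch NumPy sketch; the noise variances are assumed values chosen for the toy example, not taken from any post.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.1**2, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant state.
    q: process-noise variance, r: measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q               # predict: state assumed constant, uncertainty grows
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update with the measurement innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# True value 1.25, measured with sigma = 0.1 Gaussian noise.
rng = np.random.default_rng(42)
z = 1.25 + 0.1 * rng.standard_normal(200)
xs = kalman_1d(z)

# Once the filter has settled, its estimates track the true value
# more closely than the raw readings do.
assert np.abs(xs[50:] - 1.25).mean() < np.abs(z[50:] - 1.25).mean()
```

The complementary and Mahony filters discussed in the IMU post trade the time-varying gain `k` for a fixed blend factor, which is cheaper but gives up the filter's uncertainty estimate `p`.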
## 10. Research Groups

TBA

## 11. Community

TBA