The following papers focus on SLAM in dynamic environments and life-long SLAM. For dynamic environments there are two main approaches to robust SLAM: detection & removal, and detection & tracking. Although mapping in dynamic environments is not my focus, I also include some interesting articles.
“Vision” indicates a pipeline built with a camera; the same convention applies to the other sensor tags, such as lidar, radar, and sensor fusion.
-
A survey: Which features are required for dynamic visual simultaneous localization and mapping? Zewen Xu, CAS. 2021
-
State of the Art in Real-time Registration of RGB-D Images. Stotko, Patrick. University of Bonn. 2016
-
Visual SLAM and Structure from Motion in Dynamic Environments: A Survey. University of Oxford. 2018
-
State of the Art on 3D Reconstruction with RGB-D Cameras. Michael Zollhöfer. Stanford University. 2018
-
https://github.com/KTH-RPL/DynamicMap_Benchmark
- benchmark
-
Dynablox: Real-time Detection of Diverse Dynamic Objects in Complex Environments
- Extension of Voxblox
-
(IROS 2022) CFP-SLAM: A Real-time Visual SLAM Based on Coarse-to-Fine Probability in Dynamic Environments
-
(IROS 2022) DRG-SLAM: A Semantic RGB-D SLAM using Geometric Features for Indoor Dynamic Scene
-
(IEEE RA-L'22) DynaVINS: A Visual-Inertial SLAM for Dynamic Environments, code: https://github.com/url-kaist/dynaVINS
- Non-deep learning approach, using constraints to remove feature points on moving objects
-
DeFlowSLAM: Self-Supervised Scene Motion Decomposition for Dynamic Dense SLAM
-
Efficient Spatial-Temporal Information Fusion for LiDAR-Based 3D Moving Object Segmentation
- Haomo.AI, code, dynamic detection
-
POCD: Probabilistic Object-Level Change Detection and Volumetric Mapping in Semi-Static Scenes
- RSS 2022, map updating in semi-static scenes
-
J. Schauer and A. Nüchter, “The Peopleremover—Removing Dynamic Objects From 3-D Point Cloud Data by Traversing a Voxel Occupancy Grid,” IEEE Robot. Autom. Lett., vol. 3, no. 3, pp. 1679–1686, Jul. 2018, doi: 10.1109/LRA.2018.2801797.
- A voxel-traversal method for removing dynamic objects. Although the basic approach has many shortcomings, the paper proposes a number of tricks to address them, and the results look quite good.
- code, video
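The core idea behind this family of methods, flagging previously occupied voxels that a later ray passes through, can be sketched as follows. This is an illustrative simplification, not the paper's exact traversal (which uses proper grid stepping and many special cases); the voxel size and class names are invented:

```python
import numpy as np

VOXEL = 0.5  # voxel edge length in metres (illustrative value)

def voxel_of(p):
    """Integer voxel index containing point p."""
    return tuple(np.floor(np.asarray(p, float) / VOXEL).astype(int))

def traversed_voxels(origin, endpoint):
    """Voxels crossed by the ray from origin up to (but excluding) the hit voxel.

    Sampling at half-voxel steps is a simplification of a proper grid
    traversal such as Amanatides-Woo, but it visits the right voxels for
    short, axis-aligned rays like the ones in the test below.
    """
    origin, endpoint = np.asarray(origin, float), np.asarray(endpoint, float)
    length = np.linalg.norm(endpoint - origin)
    if length == 0:
        return set()
    step = (endpoint - origin) / length * (VOXEL / 2)
    out, hit, p = set(), voxel_of(endpoint), origin.copy()
    while np.linalg.norm(p - origin) < length:
        v = voxel_of(p)
        if v != hit:
            out.add(v)
        p += step
    return out

class SeeThroughFilter:
    """Flag voxels that were occupied earlier but are later seen through."""

    def __init__(self):
        self.occupied, self.dynamic = set(), set()

    def integrate(self, origin, points):
        for pt in points:
            for v in traversed_voxels(origin, pt):
                if v in self.occupied:   # a ray now passes through a voxel
                    self.dynamic.add(v)  # that used to contain a surface
                    self.occupied.discard(v)
            self.occupied.add(voxel_of(pt))
```

Integrating a scan that sees *through* a previously occupied voxel moves that voxel from the static map into the dynamic set, which is the basic see-through test the paper builds on.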
-
N. Rufus, U. K. R. Nair, A. V. S. S. B. Kumar, V. Madiraju, and K. M. Krishna, “SROM: Simple Real-time Odometry and Mapping using LiDAR data for Autonomous Vehicles,” IV 2020
- Roughly removes possible moving objects, removes the ground, and then extracts the remaining parts
-
M. Schorghuber, D. Steininger, Y. Cabon, M. Humenberger, and M. Gelautz, “SLAMANTIC - Leveraging Semantics to Improve VSLAM in Dynamic Environments” ICCV 2019 workshop
- Visual SLAM in dynamic environments. Uses semantics to calculate confidence in points, uses high-confidence points to assist low-confidence points, and ultimately determines which parts are used for localization and mapping.
-
S. Gu, S. Yao, J. Yang, and H. Kong, “Semantics-Guided Moving Object Segmentation with 3D LiDAR,” arxiv 2022.05
- Dynamic object segmentation network, based on the ideas of RangeNet++.
-
Y. Pan, B. Gao, J. Mei, S. Geng, C. Li, and H. Zhao, “SemanticPOSS: A Point Cloud Dataset with Large Quantity of Dynamic Instances,” IV 2020
- Outdoor dataset of dynamic objects, Peking University, website
-
S. Pagad, D. Agarwal, S. Narayanan, K. Rangan, H. Kim, and G. Yalla, “Robust Method for Removing Dynamic Objects from Point Clouds,” ICRA 2020
- video, dynamic removal
-
L. Sun, Z. Yan, A. Zaganidis, C. Zhao, and T. Duckett, “Recurrent-OctoMap: Learning State-Based Map Refinement for Long-Term Semantic Mapping With 3-D-Lidar Data,” RAL
- Life-long SLAM
-
P. Egger, P. V. K. Borges, G. Catt, A. Pfrunder, R. Siegwart, and R. Dubé, “PoseMap: Lifelong, Multi-Environment 3D LiDAR Localization,” IROS 2018
- Lifelong SLAM, ETH SAL group
-
DynamicFilter: an Online Dynamic Objects Removal Framework for Highly Dynamic Environments, ICRA 2022
- By well-known researchers from HKUST and SUSTech; unfortunately not open source
-
X. Ma, Y. Wang, B. Zhang, H.-J. Ma, and C. Luo, “DynPL-SVO: A New Method Using Point and Line Features for Stereo Visual Odometry in Dynamic Scenes.” arXiv, May 17, 2022
- Stereo visual odometry using point and line features in dynamic scenes; Northeastern University; not yet open source
-
M. T. Lázaro, R. Capobianco, and G. Grisetti, “Efficient Long-term Mapping in Dynamic Environments,” IROS 2018
- Efficient ICP scheme that achieves map entity merging. Since it works on 2D maps there is not much to handle; dynamic points can be removed with visibility checks.
- code
-
T. Krajník, J. P. Fentanes, J. M. Santos, and T. Duckett, “FreMEn: Frequency Map Enhancement for Long-Term Mobile Robot Autonomy in Changing Environments,” TRO 2017
-
G. Kurz, M. Holoch, and P. Biber, “Geometry-based Graph Pruning for Lifelong SLAM.” IROS 2021
- Proposes a method that uses geometric criteria to select which vertices to prune; it is efficient, easy to implement, and yields uniformly distributed vertices that remain part of the robot trajectory. Also proposes a new marginalization method that is more robust to erroneous loop closures than existing ones. Mainly concerns SLAM back-end optimization: how to prune the factor graph as the map is updated.
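As a toy illustration of a geometric pruning criterion (not the paper's actual algorithm), one can keep at most one pose vertex per spatial cell, so the retained vertices stay roughly uniformly distributed along the trajectory:

```python
def prune_vertices(poses, cell=1.0):
    """Keep at most one pose vertex per `cell` x `cell` region.

    `poses` is a list of (vertex_id, (x, y)) ordered newest first, so more
    recent vertices win ties; returns the ids of the vertices to keep.
    All names here are made up for illustration.
    """
    kept, seen = [], set()
    for vid, (x, y) in poses:
        c = (int(x // cell), int(y // cell))
        if c not in seen:       # first (newest) vertex in this cell wins
            seen.add(c)
            kept.append(vid)
    return kept
```

The real method combines such spatial uniformity with trajectory-membership constraints and a robust marginalization step, but the grid-style selection above captures why the surviving vertices end up evenly spread.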
-
Quei-An Chen and Akihiro Tsukada, “Flow Supervised Neural Radiance Fields for Static-Dynamic Decomposition,” ICRA 2022
-
W. Ding, S. Hou, H. Gao, G. Wan, and S. Song, “LiDAR Inertial Odometry Aided Robust LiDAR Localization System in Changing City Scenes,” ICRA 2020
- Baidu's solution using LiDAR and IMU for localization in dynamic scenes, updating the map with new elements in the scene.
- Life-long SLAM
-
G. D. Tipaldi, D. Meyer-Delius, and W. Burgard, “Lifelong localization in changing environments,” IJRR 2013
- Life-long localization
-
S. Zhu, X. Zhang, S. Guo, J. Li, and H. Liu, “Lifelong Localization in Semi-Dynamic Environment,” ICRA 2021
- Tsinghua University, life-long localization
-
F. Pomerleau, P. Krüsi, F. Colas, P. Furgale, and R. Siegwart, “Long-term 3D map maintenance in dynamic environments,” ICRA 2014
- Map updating in dynamic environments
-
D. J. Yoon, T. Y. Tang, and T. D. Barfoot, “Mapless Online Detection of Dynamic Objects in 3D Lidar.” Conference on Computer and Robot Vision (CRV) 2019
- Point cloud dynamic detection
-
Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment. Robotics and Autonomous Systems 2019
-
M. Zhao et al., “A General Framework for Lifelong Localization and Mapping in Changing Environment,” IROS 2021
- Highseer Robotics' life-long localization paper
- Multi-session map representation and an efficient online map-update strategy. Three subsystems: local laser odometry (LLO), global laser matching (GLM), and pose-graph refinement (PGR). LLO builds a series of locally consistent submaps; GLM computes relative constraints between incoming scans and the global submaps; PGR collects the submaps and constraints from LLO and GLM, prunes old submaps from the historical map, and performs pose-graph sparsification and optimization.
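The dataflow between those three subsystems can be sketched structurally as below. All class and method names are hypothetical, and each subsystem is reduced to a trivial stand-in (the real LLO/GLM/PGR are full odometry, matching, and graph-optimization modules):

```python
class LifelongMapper:
    """Toy dataflow sketch: LLO builds submaps, GLM emits constraints,
    PGR prunes old submaps when the map grows past a budget."""

    def __init__(self, scans_per_submap=3, max_submaps=4):
        self.scans_per_submap = scans_per_submap
        self.max_submaps = max_submaps
        self.current, self.submaps, self.constraints = [], [], []

    def llo(self, scan):
        """Local laser odometry: group scans into locally consistent submaps."""
        self.current.append(scan)
        if len(self.current) == self.scans_per_submap:
            self.submaps.append(list(self.current))
            self.current = []
            self.pgr()

    def glm(self, scan):
        """Global laser matching: one relative constraint per existing submap."""
        self.constraints.extend((scan, i) for i in range(len(self.submaps)))

    def pgr(self):
        """Pose-graph refinement: drop the oldest submap when over budget
        (stand-in for real sparsification and optimization)."""
        while len(self.submaps) > self.max_submaps:
            self.submaps.pop(0)

    def process(self, scan):
        self.glm(scan)
        self.llo(scan)
```

The point of the sketch is only the wiring: every scan is matched against the global submaps, scans accumulate into submaps, and the back-end bounds map growth by pruning.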
-
D. Henning, T. Laidlow, and S. Leutenegger, “BodySLAM: Joint Camera Localisation, Mapping, and Human Motion Tracking,” arXiv:2205.02301
- Combines human body reconstruction with SLAM, similar to AirDOS
-
Pfreundschuh, Patrick, et al. “Dynamic Object Aware LiDAR SLAM Based on Automatic Generation of Training Data.” (ICRA 2021)
-
Canovas Bruce, et al. “Speed and Memory Efficient Dense RGB-D SLAM in Dynamic Scenes.” (IROS 2020)
-
Yuan Xun and Chen Song, “SaD-SLAM: A Visual SLAM Based on Semantic and Depth Information,” (IROS 2020)
- USTC, code, video
-
Dong, Erqun, et al. “Pair-Navi: Peer-to-Peer Indoor Navigation with Mobile Visual SLAM,” (ICCC 2019)
-
Ji Tete, et al. “Towards Real-Time Semantic RGB-D SLAM in Dynamic Environments,” (ICRA 2021)
-
Palazzolo Emanuele, et al. “ReFusion: 3D Reconstruction in Dynamic Environments for RGB-D Cameras Exploiting Residuals,” (IROS 2019)
-
Arora Mehul, et al. “Mapping the Static Parts of Dynamic Scenes from 3D LiDAR Point Clouds Exploiting Ground Segmentation.”
-
Chen Xieyuanli, et al. “Moving Object Segmentation in 3D LiDAR Data: A Learning-Based Approach Exploiting Sequential Data,” IEEE Robotics and Automation Letters, 2021
-
Zhang Tianwei, et al. “FlowFusion: Dynamic Dense RGB-D SLAM Based on Optical Flow,” (ICRA 2020)
- code, video.
-
Zhang Tianwei, et al. “AcousticFusion: Fusing Sound Source Localization to Visual SLAM in Dynamic Environments,” IROS 2021
- video. Combines sound signals
-
Liu Yubao and Miura Jun, “RDS-SLAM: Real-Time Dynamic SLAM Using Semantic Segmentation Methods,” IEEE Access 2021
-
Liu Yubao and Miura Jun, “RDMO-SLAM: Real-Time Visual SLAM for Dynamic Environments Using Semantic Label Prediction With Optical Flow,” IEEE Access, vol. 9, 2021, pp. 106981–97. IEEE Xplore.
- code, video.
-
Cheng Jiyu, et al. “Improving Visual Localization Accuracy in Dynamic Environments Based on Dynamic Region Removal,” IEEE Transactions on Automation Science and Engineering, vol. 17, no. 3, July 2020, pp. 1585–96. IEEE Xplore.
-
Soares João Carlos Virgolino, et al., “Crowd-SLAM: Visual SLAM Towards Crowded Environments Using Object Detection,” Journal of Intelligent & Robotic Systems 2021
- code, video
-
Visual Localization and Mapping in Dynamic and Changing Environments (2022). Earlier versions were based on ORB-SLAM2; the latest version is based on ORB-SLAM3.
-
Kaveti Pushyami and Singh Hanumant, “A Light Field Front-End for Robust SLAM in Dynamic Environments.”
-
Kuen-Han Lin and Chieh-Chih Wang, “Stereo-Based Simultaneous Localization, Mapping and Moving Object Tracking,” IROS 2010
-
Fu, H.; Xue, H.; Hu, X.; Liu, B., “LiDAR Data Enrichment by Fusing Spatial and Temporal Adjacent Frames,” Remote Sens. 2021, 13, 3640.
-
Qian, Chenglong, et al., “RF-LIO: Removal-First Tightly-Coupled Lidar Inertial Odometry in High Dynamic Environments,” IROS 2021, XJTU
-
K. Minoda, F. Schilling, V. Wüest, D. Floreano, and T. Yairi, “VIODE: A Simulated Dataset to Address the Challenges of Visual-Inertial Odometry in Dynamic Environments,” RAL 2021
- Dynamic-environment dataset with scenes graded by level of dynamics, from static to highly dynamic; well suited for evaluation.
- University of Tokyo, code
-
W. Dai, Y. Zhang, P. Li, Z. Fang, and S. Scherer, “RGB-D SLAM in Dynamic Environments Using Point Correlations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1–1, 2020
- Zhejiang University, using point correlations for removal.
-
C. Huang, H. Lin, H. Lin, H. Liu, Z. Gao, and L. Huang, “YO-VIO: Robust Multi-Sensor Semantic Fusion Localization in Dynamic Indoor Environments,” in 2021 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2021.
- Uses YOLO and optical flow to identify moving objects, removes feature points for localization
- Combines VIO
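The removal step described above might look roughly like this sketch: discard feature points that fall inside a detection box for a potentially dynamic object *and* whose optical flow deviates strongly from the dominant (camera-induced) flow. The data layout, threshold, and function name are invented, and the paper's actual criteria may differ:

```python
import numpy as np

def filter_dynamic_features(pts, flows, boxes, thresh=2.0):
    """Return a boolean keep-mask over feature points.

    pts:   (N, 2) pixel positions of tracked features
    flows: (N, 2) optical-flow vectors for those features
    boxes: list of (x0, y0, x1, y1) detections of potentially moving objects
    """
    pts, flows = np.asarray(pts, float), np.asarray(flows, float)
    med = np.median(flows, axis=0)              # dominant ego-motion flow
    dev = np.linalg.norm(flows - med, axis=1)   # deviation per feature
    keep = np.ones(len(pts), dtype=bool)
    for x0, y0, x1, y1 in boxes:
        inside = ((pts[:, 0] >= x0) & (pts[:, 0] <= x1) &
                  (pts[:, 1] >= y0) & (pts[:, 1] <= y1))
        # drop only features that are both inside a box and moving oddly,
        # so static background seen through a box is not discarded
        keep &= ~(inside & (dev > thresh))
    return keep
```

Requiring both cues (semantic box and inconsistent flow) is what lets such pipelines keep static points that merely fall inside a detection box.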
-
(IROS 2022) Dynamic-VINS: RGB-D Inertial Odometry for a Resource-restricted Robot in Dynamic Environments.
-
Youngjae Min, Do-Un Kim, and Han-Lim Choi, "Kernel-Based 3-D Dynamic Occupancy Mapping with Particle Tracking," 2021 IEEE International Conference on Robotics and Automation (ICRA)
-
DyOb-SLAM: Dynamic Object Tracking SLAM System (2022)
- Combination of VDO-SLAM and DynaSLAM
-
- Direct method for dynamic object tracking, page
-
(IROS 2022) MOTSLAM: MOT-assisted monocular dynamic SLAM using single-view depth estimation (2022)
-
TwistSLAM++: Fusing multiple modalities for accurate dynamic semantic SLAM (2022)
- SLAMMOT
-
(IROS 2022) Visual-Inertial Multi-Instance Dynamic SLAM with Object-level Relocalisation (2022)
- IROS 2022, lab website: https://mlr.in.tum.de/research/semanicobjectlevelanddynamicslam
-
Learning to Complete Object Shapes for Object-level Mapping in Dynamic Scenes (2022), by the same authors as above.
- Based on MID-Fusion.
-
Z. Wang, W. Li, Y. Shen, and B. Cai, “4-D SLAM: An Efficient Dynamic Bayes Network-Based Approach for Dynamic Scene Understanding,” IEEE Access
- Semantic recognition of dynamic objects with UKF-based tracking, but the results shown in the figures look poor.
-
T. Ma and Y. Ou, “MLO: Multi-Object Tracking and Lidar Odometry in Dynamic Environment,” arXiv, Apr. 29, 2022
- Based on LOAM for target tracking, separately estimates moving objects and self, then fuses the results. Seems loosely coupled.
-
(IROS 2022) R. Long, C. Rauch, T. Zhang, V. Ivan, T. L. Lam, and S. Vijayakumar, “RGB-D SLAM in Indoor Planar Environments with Multiple Large Dynamic Objects,”
- Performs dynamic removal first, followed by dynamic tracking. SLAM + MOT in structured environments (surfaces)
-
Qiu Yuheng, et al., “AirDOS: Dynamic SLAM benefits from Articulated Objects,” 2021 (arXiv)
-
Ballester, Irene, et al., “DOT: Dynamic Object Tracking for Visual SLAM,” ICRA 2021
- code, video, University of Zaragoza, vision
-
Kim Aleksandr, et al., “EagerMOT: 3D Multi-Object Tracking via Sensor Fusion,” ICRA 2021
-
Shan, Mo, et al., “OrcVIO: Object Residual Constrained Visual-Inertial Odometry,” IROS 2020
-
Rosen, David M., et al., “Towards Lifelong Feature-Based Mapping in Semi-Static Environments,” ICRA 2016.
-
Henein Mina, et al., “Dynamic SLAM: The Need For Speed,” ICRA 2020.
-
Zhang Jun, et al., “VDO-SLAM: A Visual Dynamic Object-Aware SLAM System,” ArXiv 2020.
-
“Robust Ego and Object 6-DoF Motion Estimation and Tracking,” Jun Zhang, Mina Henein, Robert Mahony, and Viorela Ila, IROS 2020 (code)
-
-
Vincent, Jonathan, et al., “Dynamic Object Tracking and Masking for Visual SLAM,” IROS 2020
- code, video
-
Huang, Jiahui, et al., “ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings,” CVPR 2020
-
Liu, Yuzhen, et al., “A Switching-Coupled Backend for Simultaneous Localization and Dynamic Object Tracking,” RAL 2021
- Tsinghua
-
Yang Charig, et al., “Self-Supervised Video Object Segmentation by Motion Grouping,” ICCV 2021
-
Long Ran, et al., “RigidFusion: Robot Localisation and Mapping in Environments with Large Dynamic Rigid Objects,” RAL 2021
- project page, code, video
-
Yang Bohong, et al., “Multi-Classes and Motion Properties for Concurrent Visual SLAM in Dynamic Environments,” IEEE Transactions on Multimedia, 2021
-
Yang Gengshan and Ramanan Deva, “Learning to Segment Rigid Motions from Two Frames,” CVPR 2021
-
Thomas Hugues, et al., “Learning Spatiotemporal Occupancy Grid Maps for Lifelong Navigation in Dynamic Scenes,”
- code.
-
Jung Dongki, et al., “DnD: Dense Depth Estimation in Crowded Dynamic Indoor Scenes,” ICCV 2021
- code, video.
-
Luiten Jonathon, et al., “Track to Reconstruct and Reconstruct to Track,” RAL+ICRA 2020
-
Grinvald, Margarita, et al., “TSDF++: A Multi-Object Formulation for Dynamic Object Tracking and Reconstruction,” ICRA 2021
-
Wang Chieh-Chih, et al., “Simultaneous Localization, Mapping and Moving Object Tracking,” The International Journal of Robotics Research, 2007
-
Ran Teng, et al., “RS-SLAM: A Robust Semantic SLAM in Dynamic Environments Based on RGB-D Sensor.”
-
Xu Hua, et al., “OD-SLAM: Real-Time Localization and Mapping in Dynamic Environment through Multi-Sensor Fusion,” (ICARM 2020) https://doi.org/10.1109/ICARM49381.2020.9195374.
-
Wimbauer Felix, et al., “MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera,” CVPR 2021
- Project page, code, video, video 2.
-
Liu Yu, et al., “Dynamic RGB-D SLAM Based on Static Probability and Observation Number,” IEEE Transactions on Instrumentation and Measurement, vol. 70, 2021, pp. 1–11. IEEE Xplore, https://doi.org/10.1109/TIM.2021.3089228.
-
P. Li, T. Qin, and S. Shen, “Stereo Vision-based Semantic 3D Object and Ego-motion Tracking for Autonomous Driving,” arXiv 2018
- Shen Shaojie’s group
-
G. B. Nair et al., “Multi-object Monocular SLAM for Dynamic Environments,” IV 2020
-
M. Rünz and L. Agapito, “Co-fusion: Real-time segmentation, tracking and fusion of multiple objects,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), May 2017, pp. 4471–4478.
- code
-
(IROS 2022) TwistSLAM: Constrained SLAM in Dynamic Environment
- Follow-up to S3LAM; uses panoptic segmentation as the detection front-end
-
3D VSG: Long-term Semantic Scene Change Prediction through 3D Variable Scene Graphs (2022)
- Semantic scene change detection
- code: https://github.com/ethz-asl/3d_vsg
-
CubeSLAM: Monocular 3D Object SLAM, IEEE Transactions on Robotics 2019, S. Yang, S. Scherer PDF
-
Salas-Moreno Renato F., et al., “SLAM++: Simultaneous Localisation and Mapping at the Level of Objects,” CVPR 2013
- code, video
-
Nicholson Lachlan, et al., “QuadricSLAM: Dual Quadrics From Object Detections as Landmarks in Object-Oriented SLAM,” RAL-2018
-
Wu Yanmin, et al., “EAO-SLAM: Monocular Semi-Dense Object SLAM Based on Ensemble Data Association,” 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2020, pp. 4966–73. arXiv.org, https://doi.org/10.1109/IROS45743.2020.9341757.
-
H. Osman, N. Darwish, and A. Bayoumi, “LoopNet: Where to Focus Detecting Loop Closures in Dynamic Scenes,” IEEE Robotics and Automation Letters, pp. 1–1, 2022, doi: 10.1109/LRA.2022.3142901.
- Loop detection network in dynamic environments, code, video
-
M. N. Finean, L. Petrović, W. Merkt, I. Marković, and I. Havoutis, “Motion Planning in Dynamic Environments Using Context-Aware Human Trajectory Prediction,” arXiv:2201.05058 [cs], Jan. 2022.
-
(IROS 2022) Extrinsic Camera Calibration from A Moving Person
-
(IROS 2022) ACEFusion: Accelerated and Energy-Efficient Semantic 3D Reconstruction of Dynamic Scenes
-
(IROS 2022) Efficient 2D LIDAR-Based Map Updating For Long-Term Operations in Dynamic Environments
-
(IROS 2022) Detecting Invalid Map Merges in Lifelong SLAM
-
(IROS 2022) Probabilistic Object Maps for Long-Term Robot Localization
-
(IROS 2022) ROLL: Long-Term Robust LiDAR-based Localization With Temporary Mapping in Changing Environments
TBD