Camera-based simultaneous localization and mapping: methods, camera types, and deep learning trends

Anak Agung Ngurah Bagus Dwimantara, Oskar Natan, Novelio Putra Indarto, Andi Dharmawan

Abstract


The development of simultaneous localization and mapping (SLAM) technology is crucial for advancing autonomous systems in robotics and navigation. However, camera-based SLAM systems face significant challenges in accuracy, robustness, and computational efficiency, particularly under environmental variability, dynamic scenes, and hardware limitations. This paper provides a comprehensive review of camera-based SLAM methodologies, focusing on their approaches to pose estimation and map reconstruction and on the camera types they employ. The application of deep learning is also discussed, along with how it is expected to improve SLAM performance. The objective of this paper is to advance the understanding of camera-based SLAM systems and to provide a foundation for future innovations in robust, efficient, and adaptable SLAM solutions. Additionally, it offers pertinent references and insights for the design and implementation of next-generation SLAM systems across various applications.
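
To make the pose-estimation step mentioned above concrete, the minimal sketch below is an illustrative assumption, not code from the reviewed paper: it estimates the relative pose between two monocular frames in the classical feature-based way by matching ORB features, fitting an essential matrix with RANSAC, and recovering the relative rotation and unit-scale translation. The intrinsic matrix values and image file names are hypothetical placeholders.

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics (fx, fy, cx, cy); real values come from calibration.
K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])

# Placeholder file names; substitute two consecutive frames from a monocular sequence.
img1 = cv2.imread("frame_0000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe features in both frames with ORB.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match binary descriptors by brute force with Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Estimate the essential matrix with RANSAC to reject outlier matches.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)

# 4. Recover the relative rotation R and unit-scale translation t between the frames.
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Relative rotation:\n", R)
print("Relative translation (up to scale):\n", t.ravel())
```

Note that monocular two-view geometry recovers translation only up to scale; stereo and RGB-D cameras resolve this ambiguity directly, which is one reason camera type matters for SLAM design.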

Keywords


Camera; Deep learning; Map reconstruction; Simultaneous localization and mapping; Visual odometry



DOI: http://doi.org/10.11591/ijra.v14i2.pp162-172



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Robotics and Automation (IJRA)
ISSN 2089-4856, e-ISSN 2722-2586
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
