Monocular Vision-Based Localization Using ORB-SLAM with LIDAR-Aided Mapping in Real-World Robot Challenge

Adi Sujiwo, Tomohito Ando, Eijiro Takeuchi, Yoshiki Ninomiya, Masato Edahiro

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

For the 2015 Tsukuba Challenge, we implemented vision-based localization built on ORB-SLAM. Our method combined mapping from ORB-SLAM and Velodyne LIDAR SLAM, and used the resulting maps for localization with only a monocular camera. We also applied a sensor fusion method that combines odometry with the ORB-SLAM estimates from all maps. The combined method delivered better accuracy than the original ORB-SLAM, which suffered from scale ambiguity and map distance distortion. This paper reports on our experience using ORB-SLAM for visual localization and describes the difficulties encountered.
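
The full method is described only in the paper itself, but a minimal sketch of one idea the abstract mentions, resolving the monocular scale ambiguity by aligning the ORB-SLAM trajectory against a metric reference (such as wheel odometry or a LIDAR-built map) and then fusing the corrected fix with odometry, could look like the following. The function names, the segment-length least-squares scale estimate, and the convex-combination fusion are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def estimate_scale(slam_positions, metric_positions):
    """Estimate the scale factor mapping an up-to-scale monocular SLAM
    trajectory onto metrically correct reference positions (e.g., wheel
    odometry or a LIDAR-SLAM map), assuming the two trajectories are
    time-aligned and differ only by a similarity transform.

    Closed-form least-squares scale from corresponding segment lengths:
        s = sum(|d_metric| * |d_slam|) / sum(|d_slam|^2)
    """
    d_slam = np.diff(np.asarray(slam_positions), axis=0)
    d_metric = np.diff(np.asarray(metric_positions), axis=0)
    slam_len = np.linalg.norm(d_slam, axis=1)
    metric_len = np.linalg.norm(d_metric, axis=1)
    return float(np.sum(metric_len * slam_len) / np.sum(slam_len ** 2))

def fuse_pose(odom_xy, slam_xy, slam_confidence):
    """Blend a 2-D odometry position with a scale-corrected visual fix.
    A simple convex combination stands in for the sensor-fusion step
    mentioned in the abstract; the real fusion scheme is not given here."""
    w = np.clip(slam_confidence, 0.0, 1.0)
    return (1.0 - w) * np.asarray(odom_xy) + w * np.asarray(slam_xy)

if __name__ == "__main__":
    # Synthetic example: a straight run observed by a monocular SLAM
    # system whose internal unit is about half a metre.
    metric = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0), (6.0, 0.0)]
    slam = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
    s = estimate_scale(slam, metric)              # ~2.0
    corrected = [s * np.asarray(p) for p in slam]
    print("scale:", s)
    print("fused:", fuse_pose(metric[2], corrected[2], 0.7))
```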

Original language: English
Pages (from-to): 479-490
Number of pages: 12
Journal: Journal of Robotics and Mechatronics
Volume: 28
Issue number: 4
DOIs
Publication status: Published - 2016 Aug

Keywords

  • Autonomous vehicle
  • Field robotics
  • Tsukuba Challenge
  • Visual localization

ASJC Scopus subject areas

  • Computer Science (all)
  • Electrical and Electronic Engineering
