Monocular Simultaneous Localization and Mapping

Simultaneous localization and mapping (SLAM) using the whole image data is an appealing framework for addressing the shortcomings of sparse feature-based methods, in particular their frequent failures in textureless environments. Hence, direct methods, which bypass feature extraction and matching, have recently become popular. Many of these methods alternate between pose estimation and the computation of (semi-)dense depth maps, and therefore do not fully exploit the advantages of joint optimization over depth and pose. In our work, we propose a framework for monocular SLAM, and its local model in particular, which optimizes simultaneously over depth and pose.
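The joint formulation can be illustrated with a toy direct two-frame problem: per-pixel inverse depths and the camera translation enter a single photometric least-squares objective, regularized by an inverse-depth smoothness term, rather than being estimated in alternation. The sketch below is a minimal illustration, not the method of the papers: it assumes an analytic target image (so sampling needs no interpolation), a translation-only motion model with unit focal length, and a gauge residual that pins the free monocular scale.

```python
import numpy as np
from scipy.optimize import least_squares

# Analytic stand-in for the target image: smooth and textured, so it can be
# sampled exactly at any (u, v) without interpolation.
def target_image(u, v):
    return np.sin(3.0 * u) + np.cos(2.0 * v)

def warp(px, py, inv_d, t):
    """Back-project reference pixels with inverse depth inv_d, translate by t
    (rotation omitted for brevity), and re-project with focal length 1."""
    z = 1.0 / inv_d
    X, Y, Z = px * z + t[0], py * z + t[1], z + t[2]
    return X / Z, Y / Z

# Reference pixels on an 8x8 grid; the ground-truth scene is a slanted plane.
side = 8
gx, gy = np.meshgrid(np.linspace(-0.4, 0.4, side), np.linspace(-0.4, 0.4, side))
px, py = gx.ravel(), gy.ravel()
inv_d_true = 1.0 / (2.0 + 0.3 * px + 0.2 * py)
t_true = np.array([0.1, 0.05, 0.02])

# Synthesize the reference intensities by warping into the target image.
I_ref = target_image(*warp(px, py, inv_d_true, t_true))

lam = 0.1  # weight of the first-order inverse-depth smoothness term

def residuals(params):
    # Depth and pose are stacked into ONE parameter vector: this is the
    # joint optimization, as opposed to alternating the two estimates.
    inv_d, t = params[:side * side], params[side * side:]
    u, v = warp(px, py, inv_d, t)
    photo = target_image(u, v) - I_ref
    d = inv_d.reshape(side, side)
    smooth = lam * np.concatenate([(d[:, 1:] - d[:, :-1]).ravel(),
                                   (d[1:, :] - d[:-1, :]).ravel()])
    # Monocular SLAM has a free global scale; pin it with one extra residual
    # (here, for the toy, against the known true value of one pixel).
    gauge = np.array([inv_d[0] - inv_d_true[0]])
    return np.concatenate([photo, smooth, gauge])

x0 = np.concatenate([np.full(side * side, 1.0), np.zeros(3)])  # flat init
cost0 = 0.5 * np.sum(residuals(x0) ** 2)
res = least_squares(residuals, x0, method="trf")
print(f"cost: {cost0:.4f} -> {res.cost:.6f}")
```

Because depth and pose share one objective, a depth error can be traded against a pose correction within a single solver iteration; an alternating scheme would have to propagate that coupling across outer iterations instead.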

S. Liwicki, C. Zach, O. Miksik, P. Torr. “Coarse-to-fine Planar Regularization for Dense Monocular Depth Estimation”, Proceedings of the 14th European Conference on Computer Vision (ECCV’16), Amsterdam, The Netherlands, pp. 398–405, 2016. [pdf | page]

S. Liwicki, C. Zach. “Scale Exploiting Minimal Solvers for Relative Pose with Calibrated Cameras”, Proceedings of the 28th British Machine Vision Conference (BMVC’17), in press. [page]