Accurate localization of moving sensors is essential for many fields, such as robot navigation and urban mapping. In this paper, we present a framework for GPS-supported visual Simultaneous Localization and Mapping with Bundle Adjustment (BA-SLAM) using a rigorous sensor model for a panoramic camera. The rigorous model does not introduce systematic errors and thus represents an improvement over the widely used ideal sensor model. The proposed SLAM method does not require additional constraints, such as loop closures, or additional sensors, such as expensive inertial measurement units. We first analyse the problems of the ideal sensor model for a panoramic camera and establish a rigorous sensor model. GPS data are then introduced for global optimization and georeferencing. Combining the rigorous sensor model with the geometric observation equations of BA, we establish a GPS-supported BA-SLAM approach that integrates ray observations and GPS observations. Finally, our method is applied to a set of vehicle-borne panoramic images captured in a campus environment, and several ground control points (GCPs) are used to check the localization accuracy. The results demonstrate that our method achieves an accuracy of several centimetres.
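As context for the combined adjustment described above, a minimal sketch of a joint least-squares objective follows; the residual forms, symbols and weight matrices here are illustrative assumptions, not the paper's exact formulation. Bundle adjustment minimises ray observation residuals over camera poses and 3-D points, while GPS observations constrain the exposure positions in a global frame:

\min_{\{R_i, t_i\},\, \{X_j\}} \; \sum_{i,j} \rho\!\left( \left\| v_{ij} - \pi\!\left( R_i \left( X_j - t_i \right) \right) \right\|^{2}_{\Sigma_v} \right) \; + \; \sum_i \left\| t_i - g_i \right\|^{2}_{\Sigma_g}

where v_{ij} is the observed ray direction from panoramic exposure i to point X_j, \pi(\cdot) maps a vector to a unit ray under the rigorous sensor model, (R_i, t_i) is the pose of exposure i, g_i is its GPS-measured position, \Sigma_v and \Sigma_g are the respective observation covariances, and \rho is a robust loss. In a formulation of this kind, the GPS term both georeferences the solution and suppresses accumulated drift without requiring loop closure.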