This work presents the development of an integrated hardware/software sensor system for moving object detection and distance calculation, based on a background subtraction algorithm. The sensor comprises a catadioptric system composed of a camera and a convex mirror that reflects the environment onto the camera from all directions, yielding a panoramic view. The sensor is used as an omnidirectional vision system, allowing for localization and navigation tasks of mobile robots. Several image processing operations, such as filtering, segmentation, and morphology, have been included in the processing architecture. To achieve distance measurement, an algorithm to determine the center of mass of a detected object was implemented. The overall architecture has been mapped onto a commercial low-cost FPGA device using a hardware/software co-design approach, comprising a Nios II embedded microprocessor and specific image processing blocks implemented in hardware. The background subtraction algorithm was also used to calibrate the system, allowing for accurate results. Synthesis results show that the system can achieve a throughput of 26.6 processed frames per second, and the performance analysis points out that the overall architecture achieves a speedup factor of 13.78 in comparison with a PC-based solution running on the real-time operating system xPC Target.

1. Introduction

Scientists predict that robots will play an important role in the future. In this scenario, robots will be able to assist humans in many tasks such as domestic labor, elderly care, cleaning, vehicle operation, and surveillance. Animals have mechanisms to interact with the environment provided by natural evolution. They are able to sense the surrounding environment and to move according to a defined objective, avoiding obstacles and performing dynamic path planning.
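The detection pipeline summarized in the abstract, background subtraction followed by segmentation and a center-of-mass computation, can be sketched in software as follows. This is a minimal illustrative sketch, not the paper's hardware implementation: the function names, the fixed threshold value, and the use of plain Python lists as grayscale frames are all assumptions made for clarity.

```python
def subtract_background(frame, background, threshold=30):
    """Segment the foreground: mask is 1 where the absolute
    difference between frame and background exceeds the threshold.
    The threshold value here is illustrative, not from the paper."""
    rows, cols = len(frame), len(frame[0])
    return [[1 if abs(frame[r][c] - background[r][c]) > threshold else 0
             for c in range(cols)]
            for r in range(rows)]

def center_of_mass(mask):
    """Centroid (row, col) of the foreground pixels in a binary mask,
    or None when no object was detected."""
    total = r_sum = c_sum = 0
    for r, row in enumerate(mask):
        for c, v in enumerate(row):
            if v:
                total += 1
                r_sum += r
                c_sum += c
    if total == 0:
        return None
    return (r_sum / total, c_sum / total)

# Toy example: a static background with a small bright "object".
background = [[10] * 4 for _ in range(4)]
frame = [row[:] for row in background]
frame[1][2] = 200
frame[2][2] = 200
mask = subtract_background(frame, background)
print(center_of_mass(mask))  # -> (1.5, 2.0)
```

In the paper's architecture this per-pixel differencing and the centroid accumulation are the kind of regular, data-parallel operations that map naturally onto dedicated FPGA blocks, while control tasks remain on the embedded Nios II processor.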
In the robotics field, one of the major challenges is providing robots with sensorial and rational capabilities, allowing them to assist, and possibly substitute, humans in activities requiring special skills. Autonomous mobile robot navigation involves the execution of three stages: (a) mapping, (b) localization, and (c) decision making. The first stage uses sensor information to create a map of the environment. The second relates the map to the sensor information, allowing the robot to localize itself in the environment. The third stage addresses the path-planning problem [1]. Different kinds of sensors can be used to provide environment information to the mobile robot. Such
References
[1] R. Siegwart and I. Nourbakhsh, Introduction to Autonomous Mobile Robots, MIT Press, Cambridge, Mass, USA, 2004.
[2] L. Spacek and C. Burbridge, “Instantaneous robot self-localization and motion estimation with omnidirectional vision,” Robotics and Autonomous Systems, vol. 55, no. 9, pp. 667–674, 2007.
[3] J. Yudi Mori, D. Muñoz Arboleda, J. N. Arias Garcia, C. Llanos Quintero, and J. Motta, “FPGA-based image processing for omnidirectional vision on mobile robots,” in Proceedings of the 24th Symposium on Integrated Circuits and Systems Design, pp. 113–118, João Pessoa, Brazil, 2011.
[4] E. Trucco and A. Verri, Introductory Techniques for 3-D Computer Vision, Prentice Hall, 1998.
[5] K. Daniilidis and C. Geyer, “Omnidirectional vision: theory and algorithms,” in Proceedings of the 15th International Conference on Pattern Recognition, vol. 1, pp. 89–96, 2000.
[6] C. Geyer and K. Daniilidis, “Catadioptric camera calibration,” in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), vol. 1, pp. 398–404, September 1999.
[7] G. Botella, M. Rodriguez, A. García, and E. Ros, “Neuromorphic configurable architecture for robust motion estimation,” International Journal of Reconfigurable Computing, vol. 2008, Article ID 428265, 9 pages, 2008.
[8] Z. Wei, D. Lee, N. Brent, J. Archibald, and B. Edwards, “FPGA-based embedded motion estimation sensor,” International Journal of Reconfigurable Computing, vol. 2008, Article ID 636145, 9 pages, 2008.
[9] K. Shimizu and S. Hirai, “CMOS+FPGA vision system for visual feedback of mechanical systems,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 2060–2065, May 2006.
[10] G. Saldaña-González and M. Arias-Estrada, “FPGA based acceleration for image processing applications,” in Image Processing, 2009.
[11] Y. Tu and M. Ho, “Design and implementation of robust visual servoing control of an inverted pendulum with an FPGA-based image co-processor,” Mechatronics, vol. 21, no. 7, pp. 1170–1182, 2011.
[12] T. Kryjak and M. Gorgoń, “Real-time implementation of moving object detection in video surveillance systems using FPGA,” Computer Science, vol. 12, pp. 149–162, 2011.
[13] R. Rodriguez-Gomez, E. Fernandez-Sanchez, J. Diaz, and E. Ros, “FPGA implementation for real-time background subtraction based on Horprasert model,” Sensors, vol. 12, pp. 585–611, 2012.
[14] R. Chojecki and B. Siemiatkowska, “Mobile robot navigation based on omnidirectional sensor,” in Proceedings of the European Conference on Mobile Robots (ECMR '03), pp. 101–106, Radziejowice, Poland, September 2003.
[15] M. A. Vega-Rodríguez, A. Gómez-Iglesias, J. A. Gómez-Pulido, and J. M. Sánchez-Pérez, “Reconfigurable computing system for image processing via the internet,” Microprocessors and Microsystems, vol. 31, pp. 498–515, 2007.
[16] F. Nava, D. Sciuto, M. D. Santambrogio et al., “Applying dynamic reconfiguration in the mobile robotics domain: a case study on computer vision algorithms,” ACM Transactions on Reconfigurable Technology and Systems, vol. 4, no. 3, 2011.
[17] L. Chen, M. Zhang, B. Wang, Z. Xiong, and G. Cheng, “Real-time FPGA-based panoramic unrolling of high-resolution catadioptric omnidirectional images,” in Proceedings of the International Conference on Measuring Technology and Mechatronics Automation (ICMTMA '09), pp. 502–505, Hunan, China, April 2009.
[18] A. Gardel, A. Hernández, R. Miota, I. Bravo, and R. Mateos, “Correction of omnidirectional camera images using reconfigurable hardware,” in Proceedings of the 32nd Annual Conference on IEEE Industrial Electronics, pp. 3403–3407, Paris, France, November 2006.
[19] B. Zhang, Z. Qi, J. Zhu, and Z. Cao, “Omnidirection image restoration based on spherical perspective projection,” in Proceedings of the IEEE Asia Pacific Conference on Circuits and Systems, pp. 922–925, Macao, China, December 2008.
[20] T. Shu-ren, Z. Mao-jun, X. Zhi-hui, L. Le, and C. L. Dong, “Design and implementation of high-resolution omnidirectional vision system,” Chinese Journal of Video Engineering, vol. 10, no. 1, pp. 1–6, 2008.
[21] A. Maeder, H. Bistry, and J. Zhang, “Towards intelligent autonomous vision systems—smart image processing for robotic applications,” in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO '07), pp. 1081–1086, Sanya, China, December 2007.
[22] R. Benosman, E. Deforas, and J. Devars, “A new catadioptric sensor for the panoramic vision of mobile robots,” in Proceedings of the IEEE Workshop on Omnidirectional Vision, pp. 112–116, 2000.
[23] J. Fabrizio, J.-P. Tarel, and R. Benosman, “Calibration of panoramic catadioptric sensors made easier,” in Proceedings of the 3rd Workshop on Omnidirectional Vision, pp. 45–52, 2002.
[24] S. Ramalingam, P. Sturm, and S. K. Lodha, “Towards complete generic camera calibration,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), vol. 1, pp. 1093–1098, June 2005.
[25] J. Y. Mori, C. Sánchez-Ferreira, D. M. Munoz, C. H. Llanos, and P. Berger, “An unified approach for convolution-based image filtering on reconfigurable systems,” in Proceedings of the 7th Southern Conference on Programmable Logic (SPL '11), pp. 63–68, Córdoba, Argentina, April 2011.
[26] J. Mori, Implementação de técnicas de processamento de imagens no domínio espacial em sistemas reconfiguráveis [Implementation of spatial-domain image processing techniques in reconfigurable systems], M.S. thesis, Universidade de Brasília, Brasília, Brazil, 2010.
[27] D. M. Muñoz, D. F. Sanchez, C. H. Llanos, and M. Ayala-Rincón, “Tradeoff of FPGA design of a floating-point library for arithmetic operators,” Journal of Integrated Circuits and Systems, vol. 5, no. 1, pp. 42–52, 2010.