%0 Journal Article
%T An FPGA-Based Omnidirectional Vision Sensor for Motion Detection on Mobile Robots
%A Jones Y. Mori
%A Janier Arias-Garcia
%A Camilo Sánchez-Ferreira
%A Daniel M. Muñoz
%A Carlos H. Llanos
%A J. M. S. T. Motta
%J International Journal of Reconfigurable Computing
%D 2012
%I Hindawi Publishing Corporation
%R 10.1155/2012/148190
%X This work presents the development of an integrated hardware/software sensor system for moving-object detection and distance calculation, based on a background subtraction algorithm. The sensor comprises a catadioptric system composed of a camera and a convex mirror that reflects the environment to the camera from all directions, producing a panoramic view. The sensor is used as an omnidirectional vision system, supporting localization and navigation tasks for mobile robots. Several image processing operations, such as filtering, segmentation, and morphology, have been included in the processing architecture. To achieve distance measurement, an algorithm to determine the center of mass of a detected object was implemented. The overall architecture has been mapped onto a commercial low-cost FPGA device using a hardware/software co-design approach, comprising a Nios II embedded microprocessor and dedicated image processing blocks implemented in hardware. The background subtraction algorithm was also used to calibrate the system, allowing for accurate results. Synthesis results show that the system achieves a throughput of 26.6 processed frames per second, and the performance analysis shows that the overall architecture achieves a speedup factor of 13.78 in comparison with a PC-based solution running on the real-time operating system xPC Target.
1. Introduction. Scientists predict that robots will play an important role in the future. In this scenario, robots will be able to assist humans in many tasks, such as domestic labor, elderly care, cleaning, vehicle operation, and surveillance. Animals have mechanisms, provided by natural evolution, to interact with the environment: they are able to sense their surroundings and to move according to a defined objective, avoiding obstacles and performing dynamic path planning. In the robotics field, one of the major challenges is providing robots with sensory and rational capabilities, allowing them to assist, and possibly replace, humans in activities requiring special skills. Autonomous mobile robot navigation involves the execution of three stages: (a) mapping, (b) localization, and (c) decision making. The first stage uses information from sensors to create a map of the environment. The second relates the map to the sensor information, allowing the robot to localize itself in the environment. The third stage addresses the path-planning problem [1]. Different kinds of sensors can be used to provide environment information to the mobile robot.
%U http://www.hindawi.com/journals/ijrc/2012/148190/
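
As a rough illustration of the pipeline the abstract describes (background subtraction, thresholded segmentation, and a center-of-mass computation used for distance estimation), a minimal software sketch in Python/NumPy follows. The function name, threshold value, and grayscale-input assumption are hypothetical; the paper's actual implementation runs in dedicated FPGA hardware blocks alongside a Nios II processor, which this sketch does not reproduce.

import numpy as np

def detect_object(frame, background, threshold=30):
    """Background-subtraction sketch (illustrative, not the paper's design).

    frame, background: 2-D uint8 grayscale arrays of equal shape.
    Returns the (row, col) center of mass of foreground pixels, or None.
    """
    # Absolute difference of the current frame against the background model
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # Binary segmentation: foreground where the scene changed enough
    mask = diff > threshold
    if not mask.any():
        return None  # no moving object detected
    # Center of mass of the foreground blob (first moment of the mask)
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

In the paper's catadioptric setup, the center of mass recovered this way would then be mapped through the mirror geometry to estimate the object's distance from the robot.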