
Grey-Level Cooccurrence Matrix Performance Evaluation for Heading Angle Estimation of Moveable Vision System in Static Environment

DOI: 10.1155/2013/624670


Abstract:

A method of extracting information for estimating the heading angle of a vision system is presented. A grey-level co-occurrence matrix (GLCM) is integrated into the area-of-interest selection to choose a region that is suitable for optical flow generation. Optical flow is then computed over the selected area using the Horn-Schunck method, and from the generated flow the heading angle is estimated and smoothed via a moving median filter (MMF). To ascertain the effectiveness of GLCM, the result is compared with an estimate obtained from optical flow generated directly from the unprocessed greyscale images. The estimated heading is compared against the true heading, and the error is evaluated through the mean absolute error (MAE). The results confirm that GLCM significantly improves the heading angle estimate of the vision system.

1. Introduction

Sensors are among the most important components in measurement. With the data gathered from a sensor, data analysis as well as control strategy implementation can be conducted. Owing to this importance, various works have been conducted to address issues arising in the field, for instance, new designs of force sensors [1, 2] and tactile displays [3]. In robotics in particular, sensors provide information about the variable being measured in order to control the robot, which is the most critical part. This information is processed to decide how the robot should act with good performance. Various kinds of sensors are available for this purpose, for example, sonar sensors, position sensors, infrared sensors, and cameras. Employing a camera as a sensor, also known as a robot vision system, is an attractive idea because of its ability to obtain useful information about the surroundings through image acquisition. Depending on the application, extraction of the information gathered by a robot vision system can be executed in various ways.
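The paper does not reproduce its implementation, but the pipeline it describes can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' code: the function names (`glcm`, `contrast`, `moving_median`, `mae`), the 8-level grey quantisation, and the single pixel offset for the co-occurrence matrix are assumptions made for brevity, and the Horn-Schunck optical flow step is omitted.

```python
import numpy as np

def glcm(patch, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for a single pixel offset (dx, dy).

    Counts how often grey level i co-occurs with grey level j at the
    given offset, then normalises the counts to joint probabilities.
    Assumes an 8-bit greyscale patch (values 0..255).
    """
    q = (patch.astype(np.float64) / 256.0 * levels).astype(int)  # quantise
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(m):
    """GLCM contrast feature: sum over i, j of p(i, j) * (i - j)^2.

    A high-contrast region is a candidate area of interest, since it
    carries texture that optical flow estimation can track.
    """
    i, j = np.indices(m.shape)
    return float((m * (i - j) ** 2).sum())

def moving_median(signal, window=5):
    """Moving median filter (MMF) to suppress spikes in the heading estimate."""
    pad = window // 2
    padded = np.pad(np.asarray(signal, dtype=float), pad, mode="edge")
    return np.array([np.median(padded[k:k + window])
                     for k in range(len(signal))])

def mae(estimate, truth):
    """Mean absolute error between estimated and true heading angles."""
    return float(np.mean(np.abs(np.asarray(estimate) - np.asarray(truth))))
```

In this sketch, candidate image regions would be ranked by `contrast(glcm(patch))`, the best-textured region fed to an optical flow routine, and the resulting per-frame heading estimates passed through `moving_median` before scoring with `mae` against the true heading.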
The extraction process commonly utilises image processing and analysis. Research in the field of vision systems ranges from hardware development and testing, through algorithm development (specifically image processing and analysis), to proposals on how to use a vision system for a specific task and their implementation. Previous works include manipulation of the image acquisition capabilities of the optical mouse for use as a sensor [4–6], utilisation of panoramic images in visual navigation [7], application of log-polar imaging in robotic vision [8], application of sparse visual

References

[1]  A. Song, J. Wu, G. Qin, and W. Huang, “A novel self-decoupled four degree-of-freedom wrist force/torque sensor,” Measurement, vol. 40, no. 9-10, pp. 883–891, 2007.
[2]  G. Song, H. Yuan, Y. Tang, Q. Song, and Y. Ge, “A novel three-axis force sensor for advanced training of shot-put athletes,” Sensors and Actuators A, vol. 128, no. 1, pp. 60–65, 2006.
[3]  J. Wu, Z. Song, W. Wu, A. Song, and D. Constantinescu, “A vibro-tactile system for image contour display,” in Proceedings of the IEEE International Symposium on Virtual Reality Innovations (ISVRI '11), pp. 145–150, March 2011.
[4]  J. Palacin, I. Valgañón, and R. Pernia, “The optical mouse for indoor mobile robot odometry measurement,” Sensors and Actuators A, vol. 126, no. 1, pp. 141–147, 2006.
[5]  M. Tresanchez, T. Pallejà, M. Teixidó, and J. Palacín, “The optical mouse sensor as an incremental rotary encoder,” Sensors and Actuators A, vol. 155, no. 1, pp. 73–81, 2009.
[6]  M. Tresanchez, T. Pallejà, M. Teixidó, and J. Palacín, “Using the image acquisition capabilities of the optical mouse sensor to build an absolute rotary encoder,” Sensors and Actuators A, vol. 157, no. 1, pp. 161–167, 2010.
[7]  F. Labrosse, “Short and long-range visual navigation using warped panoramic images,” Robotics and Autonomous Systems, vol. 55, no. 9, pp. 675–684, 2007.
[8]  V. Javier Traver and A. Bernardino, “A review of log-polar imaging for visual perception in robotics,” Robotics and Autonomous Systems, vol. 58, no. 4, pp. 378–398, 2010.
[9]  M. Kronfeld, C. Weiss, and A. Zell, “Swarm-supported outdoor localization with sparse visual data,” Robotics and Autonomous Systems, vol. 58, no. 2, pp. 166–173, 2010.
[10]  K. Souhila and A. Karim, “Optical flow based robot obstacle avoidance,” International Journal of Advanced Robotic Systems, vol. 4, no. 1, pp. 13–16, 2007.
[11]  L. Wang, H. Li, and R. Hartley, “Video local pattern based image matching for visual mapping,” in Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), vol. 2, pp. 67–70, 2006.
[12]  G. Medioni, A. R. J. François, M. Siddiqui, K. Kim, and H. Yoon, “Robust real-time vision for a personal service robot,” Computer Vision and Image Understanding, vol. 108, no. 1-2, pp. 196–203, 2007.
[13]  R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man and Cybernetics, vol. 3, no. 6, pp. 610–621, 1973.
[14]  V. Bino Sebastian, A. Unnikrishnan, and K. Balakrishnan, “Grey level co-occurrence matrices: generalisation and some new features,” International Journal of Computer Science, Engineering and Information Technology, vol. 2, no. 2, pp. 151–157, 2012.
[15]  K. Padmavasavi, N. U. Kumar, E. V. K. Rao, and M. Madhavilatha, “Performance evaluation of adaptive statistical thresholding based edge detection using GLCM in wavelet domain under noisy conditions,” ICGST International Journal on Graphics, Vision and Image Processing (GVIP), vol. 10, no. 3, pp. 35–44, 2010.
[16]  M. M. Mokji and S. A. R. S. A. Bakar, “Adaptive thresholding based on co-occurrence matrix edge information,” Journal of Computers, vol. 2, no. 8, pp. 44–52, 2007.
[17]  K. H. Ghazali, M. M. Mustafa, A. Hussain, and F. Engineering, “Machine vision system for automatic weeding strategy using image processing technique,” American-Eurasian Journal of Agricultural & Environmental Sciences, vol. 3, no. 3, pp. 451–458, 2008.
[18]  C. Lane, R. L. Burguete, and A. Shterenlikht, “An objective criterion for the selection of an optimum DIC pattern and subset size,” in Proceedings of the 11th International Congress and Exhibition on Experimental and Applied Mechanics, pp. 900–908, June 2008.
[19]  B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence, vol. 17, no. 1–3, pp. 185–203, 1981.
[20]  R. J. Hyndman and A. B. Koehler, “Another look at measures of forecast accuracy,” International Journal of Forecasting, vol. 22, no. 4, pp. 679–688, 2006.
