
Gesture Recognition Using Neural Networks Based on HW/SW Cosimulation Platform

DOI: 10.1155/2013/707248


Abstract:

Hardware/software (HW/SW) cosimulation integrates software simulation and hardware simulation simultaneously. An HW/SW cosimulation platform is usually used to ease debugging and verification in very large-scale integration (VLSI) design. To accelerate the computation of the gesture recognition technique, an HW/SW implementation using field programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are: (1) a novel memory controller designed in the Verilog Hardware Description Language (Verilog HDL) to reduce memory consumption and the load on the processor; (2) the testing part of the neural network algorithm is hardwired to improve speed and performance; and (3) the design takes only a few milliseconds to recognize a hand gesture, which makes it computationally more efficient. American Sign Language gesture recognition is chosen to verify the performance of the approach, and several experiments were carried out on four databases of gestures (alphabet signs A to Z).

1. Introduction

In today's world, field programmable gate array (FPGA) technology has advanced enough to model complex chips, replacing custom application-specific integrated circuits (ASICs) and processors for signal processing and control applications. FPGAs are preferred as higher-level tools evolve to deliver the benefits of reprogrammable silicon to engineers and scientists at all levels of expertise. Taking advantage of current FPGA technology, this paper proposes a hardware/software cosimulation methodology using hardware description language (HDL) simulations on FPGA in an effort to accelerate simulation time and improve performance [1, 2]. The conventional software simulation method offers more flexibility in terms of parameter variation: the desired simulation parameters can be changed to study the system behavior under various conditions. The major drawback of the conventional approach is the intolerable simulation time. On the other hand, a complete hardware-based approach can provide significant speedup in examining system behavior, but flexibility is compromised. In this paper, we attempt to combine the merits of software simulation and hardware emulation, retaining both flexibility and performance by adopting an HW/SW-based platform approach [1].

2. Background/Literature Review

Communication between humans and machines, or between people, can be carried out using gestures known as sign language [3]. The use of sign language plays an important role in the means …
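The "testing part" of the neural network that the abstract describes as hardwired is, in essence, a feed-forward pass over an already-trained network that produces one score per alphabet sign. The sketch below illustrates that computation in software only; the layer sizes, the sigmoid activation, and the randomly generated weights are illustrative assumptions, not the authors' actual topology or their fixed-point FPGA implementation.

```python
# Minimal software sketch of the neural network "testing" (feed-forward) phase.
# Layer sizes, sigmoid activation, and random weights are assumptions for
# illustration; the paper's trained weights and FPGA mapping are not shown.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(features, w_hidden, b_hidden, w_out, b_out):
    """Classify one gesture feature vector into one of 26 alphabet signs (A-Z)."""
    hidden = sigmoid(features @ w_hidden + b_hidden)  # hidden-layer activations
    output = sigmoid(hidden @ w_out + b_out)          # one score per class
    return int(np.argmax(output))                     # index 0..25 -> 'A'..'Z'

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_features, n_hidden, n_classes = 64, 32, 26      # assumed dimensions
    # In the paper's flow these weights would come from offline training.
    w_h = rng.standard_normal((n_features, n_hidden))
    b_h = rng.standard_normal(n_hidden)
    w_o = rng.standard_normal((n_hidden, n_classes))
    b_o = rng.standard_normal(n_classes)
    sample = rng.standard_normal(n_features)          # stand-in gesture features
    sign = feed_forward(sample, w_h, b_h, w_o, b_o)
    print("Recognized sign:", chr(ord('A') + sign))
```

Only this forward (recognition) pass needs to run at inference time, which is why it is the natural candidate to hardwire on the FPGA while training and parameter exploration remain on the software side.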

References

[1]  T. Suh, H. S. Lee, S. Lu, and J. Shen, “Initial observations of hardware/software co-simulation using FPGA in architecture research,” in Proceedings of the Workshop on Architecture Research Using FPGA Platforms in Conjunction with International Symposium on High-Performance Computer Architecture, Austin, Tex, USA, February 2006.
[2]  X. Ling, Z. Li, J. Hu, and S. Wu, “HW/SW co-simulation platforms for VLSI design,” in Proceedings of the IEEE Asia Pacific Conference on Circuits and Systems (APCCAS '08), pp. 578–581, 2008.
[3]  M. M. Hasan and P. K. Misra, “Brightness factor matching for gesture recognition system using scaled normalization,” International Journal of Computer Science & Information Technology, vol. 3, no. 2, pp. 35–46, 2011.
[4]  M. P. Paulraj, S. Yaacob, M. S. bin Zanar Azalan, and R. Palaniappan, “A phoneme based sign language recognition system using skin color segmentation,” in Proceedings of the 6th International Colloquium on Signal Processing and Its Applications (CSPA '10), pp. 1–5, May 2010.
[5]  P. Mekala, Y. Gao, J. Fan, and A. Davari, “Real-time sign language recognition based on neural network architecture,” in Proceedings of the IEEE International Conference on Industrial Technology & 43rd Southeastern Symposium on System Theory (SSST '11), pp. 195–199, Auburn, Ala, USA, March 2011.
[6]  S. Mitra and T. Acharya, “Gesture recognition: a survey,” IEEE Transactions on Systems, Man and Cybernetics, Part C, vol. 37, no. 3, pp. 311–324, 2007.
[7]  T. B. Moeslund and E. Granum, “A survey of computer vision-based human motion capture,” Computer Vision and Image Understanding, vol. 81, no. 3, pp. 231–268, 2001.
[8]  J. J. LaViola Jr., A survey of hand posture and gesture recognition techniques and technology [M.S. thesis], NSF Science and Technology Center for Computer Graphics and Scientific Visualization, Providence, RI, USA, 1999.
[9]  S. Meena, A study on hand gesture recognition technique [M.S. thesis], Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, India, 2011.
[10]  M. M. Hasan and P. K. Mishra, “HSV brightness factor matching for gesture recognition system,” International Journal of Image Processing, vol. 4, no. 5, pp. 456–467, 2010.
[11]  P. Garg, N. Aggarwal, and S. Sofat, “Vision based hand gesture recognition,” World Academy of Science, Engineering and Technology, vol. 49, pp. 972–977, 2009.
[12]  K. Murakami and H. Taguchi, “Gesture recognition using recurrent neural networks,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Reaching through Technology, pp. 237–242, 1999.
[13]  T. H. H. Maung, “Real-time hand tracking and gesture recognition system using neural networks,” World Academy of Science, Engineering and Technology, vol. 50, pp. 466–470, 2009.
[14]  M. Maraqa and R. Abu-Zaiter, “Recognition of Arabic Sign Language (ArSL) using recurrent neural networks,” in Proceedings of the 1st IEEE International Conference on the Applications of Digital Information and Web Technologies (ICADIWT '08), pp. 478–481, August 2008.
[15]  G. Bailador, D. Roggen, and G. Troster, “Real time gesture recognition using continuous time recurrent neural networks,” in Proceedings of the 2nd ICST International Conference on Body Area Networks, 2007.
[16]  E. Stergiopoulou and N. Papamarkos, “Hand gesture recognition using a neural network shape fitting technique,” Engineering Applications of Artificial Intelligence, vol. 22, no. 8, pp. 1141–1158, 2009.
[17]  X. Li, Gesture Recognition Based on Fuzzy C-Means Clustering Algorithm, Department of Computer Science, The University of Tennessee Knoxville, 2003.
[18]  C. Lien and C. Huang, “The model-based dynamic hand posture identification using genetic algorithm,” Machine Vision and Applications, vol. 11, no. 3, pp. 107–121, 1999.
[19]  R. Yang and S. Sarkar, “Gesture recognition using hidden Markov models from fragmented observations,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), pp. 766–773, June 2006.
[20]  M. Elmezain, A. Al-Hamadi, J. Appenrodt, and B. Michaelis, “A hidden Markov model-based isolated and meaningful hand gesture recognition,” International Journal of Electrical and Electronics Engineering, vol. 3, no. 3, pp. 156–163, 2009.
[21]  R. Verma and A. Dev, “Vision based hand gesture recognition using finite state machines and fuzzy logic,” in Proceedings of the IEEE International Conference on Ultra Modern Telecommunications and Workshops (ICUMT '09), pp. 1–6, Petersburg, Russia, October 2009.
[22]  B. Krose and P. van der Smagt, An Introduction to Neural Networks, The University of Amsterdam, 8th edition, 1996.
[23]  A. Chaudhary, J. L. Raheja, K. Das, and S. Raheja, “Intelligent approaches to interact with machines using hand gesture recognition in natural way: a survey,” International Journal of Computer Science & Engineering Survey, vol. 2, no. 1, 2011.
[24]  J. Wu, Neural Networks and Simulation Methods, Marcel Dekker, Inc., New York, NY, USA, 1994.
[25]  R. Z. Khan and N. A. Ibraheem, “Hand gesture recognition: a literature review,” International Journal of Artificial Intelligence & Applications, vol. 3, no. 4, 2012.
[26]  R. Z. Khan and N. A. Ibraheem, “Survey on gesture recognition for hand image postures,” International Journal of Computer and Information Science, vol. 5, no. 3, pp. 110–121, 2012.
[27]  W. T. Freeman and M. Roth, “Orientation histograms for hand gesture recognition,” in Proceedings of the International Workshop on Automatic Face and Gesture-Recognition, pp. 296–301, IEEE Computer Society, Zurich, Switzerland, June 1995.
[28]  http://www.ni.com/white-paper/6984/en.
[29]  https://www.mathworks.com/accesslogin/login.do?uri=http://www.mathworks.com/help/toolbox/edalink/ref/hdldaemon.html.
[30]  G. A. Carpenter and S. Grossberg, “The ART of adaptive pattern recognition by a self-organizing neural network,” Computer, vol. 21, no. 3, pp. 77–88, 1988.
[31]  A. R. Omondi and J. C. Rajapakse, FPGA Implementations of Neural Networks, Springer, Dordrecht, The Netherlands, 2006.
[32]  L. Fausett, Fundamentals of Neural Networks—Architecture, Algorithms and Applications, Prentice Hall Publishers, Upper Saddle River, NJ, USA, 1994.
[33]  J. Heaton, Introduction to Neural Networks for JAVA, Heaton Research, Inc., 2nd edition, 2008.
[34]  S. Geman, E. Bienenstock, and R. Doursat, “Neural networks and the bias/variance dilemma,” Neural Computation, vol. 4, no. 1, pp. 1–58, 1992.
[35]  S. Lawrence, C. L. Giles, and A. C. Tsoi, “What size neural network gives optimal generalization? Convergence properties of back propagation,” Tech. Rep. UMIACS-TR-96-22 and CS-TR-3617, Institute for Advanced Computer Studies, University of Maryland, 1996.
[36]  Xilinx, XST User Guide, Xilinx Inc., 2009.
[37]  T. Y. Young and K. Fu, Handbook of Pattern Recognition and Image Processing, Academic Press, Orlando, Fla, USA, 1986.
