Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

DOI: 10.1155/2013/831747


Abstract:

Recognition of human expression from facial images is an interesting research area that has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that performs consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated on the person-independent facial expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP descriptor effectively encodes the facial texture and thus achieves better recognition performance than several well-known appearance-based facial features.

1. Introduction

Over the last two decades, automated recognition of human facial expression has been an active research area with a wide variety of potential applications in human-computer interaction, data-driven animation, surveillance, and customized consumer products [1, 2]. Since the classification rate depends heavily on the information contained in the feature representation, an effective and discriminative feature set is the most important constituent of a successful facial expression recognition system [3]. Even the best classifier will fail to attain satisfactory performance if supplied with inconsistent or inadequate features.
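The GLTP encoding summarized in the abstract (gradient magnitudes of a 3×3 neighborhood quantized into three levels, with the resulting micropatterns histogrammed into a descriptor) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the Sobel operators, the threshold value `t`, and the split of the ternary code into "upper" and "lower" binary codes are assumptions borrowed from the standard LTP convention.

```python
import numpy as np

def gltp_codes(image, t=10):
    """Sketch of Gradient Local Ternary Pattern (GLTP) encoding.

    Assumptions (not taken from the paper itself): gradient magnitude is
    approximated with 3x3 Sobel operators, and the ternary pattern is
    split into 'upper' and 'lower' binary codes as in the LTP scheme.
    """
    img = image.astype(np.float64)
    # 3x3 Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            gx[1:-1, 1:-1] += kx[dy, dx] * img[dy:H - 2 + dy, dx:W - 2 + dx]
            gy[1:-1, 1:-1] += ky[dy, dx] * img[dy:H - 2 + dy, dx:W - 2 + dx]
    g = np.sqrt(gx ** 2 + gy ** 2)  # gradient magnitude image

    # 8 neighbours in a fixed order around each interior centre pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = g[1:-1, 1:-1]
    upper = np.zeros((H - 2, W - 2), np.int32)
    lower = np.zeros((H - 2, W - 2), np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        # ternary quantization: +1 if clearly brighter gradient,
        # -1 if clearly weaker, 0 within the +/- t tolerance band
        upper |= (nb > centre + t).astype(np.int32) << bit   # code +1 bits
        lower |= (nb < centre - t).astype(np.int32) << bit   # code -1 bits
    return upper, lower

def gltp_histogram(image, t=10):
    """Occurrence histogram of the micropatterns: concatenated
    256-bin histograms of the upper and lower 8-bit codes."""
    upper, lower = gltp_codes(image, t)
    hu = np.bincount(upper.ravel(), minlength=256)
    hl = np.bincount(lower.ravel(), minlength=256)
    return np.concatenate([hu, hl])
```

To retain the location information mentioned in the abstract, such histograms are typically computed per image region and concatenated (as in the LBP-based face descriptors of [16, 20]) before being fed to a classifier.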
However, in real-world applications, facial images can easily be affected by various factors, such as variations in lighting conditions, pose, aging, alignment, and occlusion [4]. Hence, designing a robust feature extraction method that performs consistently in changing environments remains a challenging task. Based on the types of features used, facial feature extraction methods can be roughly divided into two categories: geometric feature-based methods and appearance-based methods [1, 2]. In geometric feature-based methods, the feature vector is formed from geometric relationships, such as positions, angles, or distances between

References

[1]  F. Ahmed, H. Bari, and E. Hossain, “Person-independent facial expression recognition based on Compound Local Binary Pattern (CLBP),” International Arab Journal of Information Technology, vol. 11, no. 2, 2013.
[2]  T. Jabid, M. H. Kabir, and O. Chae, “Robust facial expression recognition based on local directional pattern,” ETRI Journal, vol. 32, no. 5, pp. 784–794, 2010.
[3]  F. Ahmed and M. H. Kabir, “Directional ternary pattern (DTP) for facial expression recognition,” in Proceedings of the IEEE International Conference on Consumer Electronics (ICCE '12), pp. 265–266, Las Vegas, Nev, USA, January 2012.
[4]  X. Tan and B. Triggs, “Enhanced local texture feature sets for face recognition under difficult lighting conditions,” in IEEE International Workshop on Analysis and Modeling of Faces and Gestures, vol. 4778 of Lecture Notes in Computer Science, pp. 168–182, 2007.
[5]  P. Ekman and W. Friesen, Facial Action Coding System: A Technique for Measurement of Facial Movement, Consulting Psychologists Press, Palo Alto, Calif, USA, 1978.
[6]  Z. Zhang, “Feature-based facial expression recognition: sensitivity analysis and experiments with a multilayer perceptron,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 13, no. 6, pp. 893–911, 1999.
[7]  G. D. Guo and C. R. Dyer, “Simultaneous feature selection and classifier training via linear programming: a case study for face expression recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 346–352, June 2003.
[8]  M. Valstar, I. Patras, and M. Pantic, “Facial action unit detection using probabilistic actively learned support vector machines on tracked facial point data,” in IEEE CVPR Workshop, vol. 3, pp. 76–84, 2005.
[9]  C. Padgett and G. Cottrell, “Representing face images for emotion classification,” Advances in Neural Information Processing Systems, vol. 9, pp. 894–900, 1997.
[10]  M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, “Face recognition by independent component analysis,” IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1450–1464, 2002.
[11]  C. C. Fa and F. Y. Shih, “Recognizing facial action units using independent component analysis and support vector machine,” Pattern Recognition, vol. 39, no. 9, pp. 1795–1798, 2006.
[12]  M. J. Lyons, “Automatic classification of single facial images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 12, pp. 1357–1362, 1999.
[13]  Y. Tian, “Evaluation of face resolution for expression analysis,” in IEEE Workshop on Face Processing in Video, 2004.
[14]  T. Jabid, M. H. Kabir, and O. Chae, “Local Directional Pattern (LDP) for face recognition,” in Proceedings of the International Conference on Consumer Electronics (ICCE '10), pp. 329–330, Las Vegas, Nev, USA, January 2010.
[15]  S. Zhao, Y. Gao, and B. Zhang, “Sobel-LBP,” in IEEE International Conference on Image Processing, pp. 2144–2147, 2008.
[16]  C. Shan, S. Gong, and P. W. McOwan, “Facial expression recognition based on Local Binary Patterns: a comprehensive study,” Image and Vision Computing, vol. 27, no. 6, pp. 803–816, 2009.
[17]  G. Zhao and M. Pietikäinen, “Boosted multi-resolution spatiotemporal descriptors for facial expression recognition,” Pattern Recognition Letters, vol. 30, no. 12, pp. 1117–1127, 2009.
[18]  T. Kanade, J. Cohn, and Y. Tian, “Comprehensive database for facial expression analysis,” in IEEE International Conference on Automated Face and Gesture Recognition, pp. 46–53, 2000.
[19]  T. Ojala, M. Pietikäinen, and T. Mäenpää, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
[20]  T. Ahonen, A. Hadid, and M. Pietikäinen, “Face description with local binary patterns: application to face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037–2041, 2006.
[21]  D. He and N. Cercone, “Local triplet pattern for content-based image retrieval,” in International Conference on Image Analysis and Recognition, pp. 229–238, 2009.
[22]  S. Gundimada and V. K. Asari, “Facial recognition using multisensor images based on localized kernel eigen spaces,” IEEE Transactions on Image Processing, vol. 18, no. 6, pp. 1314–1325, 2009.
[23]  F. Ahmed, “Gradient directional pattern: a robust feature descriptor for facial expression recognition,” IET Electronics Letters, vol. 48, no. 19, pp. 1203–1204, 2012.
[24]  C.-W. Hsu and C.-J. Lin, “A comparison of methods for multiclass support vector machines,” IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 415–425, 2002.
