%0 Journal Article
%T Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns
%A Faisal Ahmed
%A Emam Hossain
%J Chinese Journal of Engineering
%D 2013
%R 10.1155/2013/831747
%X Recognition of human expression from facial images is an interesting research area that has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that performs consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated on the person-independent facial expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode the facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
1. Introduction
Over the last two decades, automated recognition of human facial expression has been an active research area with a wide variety of potential applications in human-computer interaction, data-driven animation, surveillance, and customized consumer products [1, 2]. Since the classification rate is heavily dependent on the information contained in the feature representation, an effective and discriminative feature set is the most important constituent of a successful facial expression recognition system [3]. Even the best classifier will fail to attain satisfactory performance if supplied with inconsistent or inadequate features. However, in real-world applications, facial images can easily be affected by different factors, such as variations in lighting conditions, pose, aging, alignment, and occlusion [4]. Hence, designing a robust feature extraction method that performs consistently in changing environments is still a challenging task. Based on the types of features used, facial feature extraction methods can be roughly divided into two categories: geometric feature-based methods and appearance-based methods [1, 2]. In geometric feature-based methods, the feature vector is formed from geometric relationships, such as positions, angles, or distances between
%U http://www.hindawi.com/journals/cje/2013/831747/
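
The GLTP coding summarized in the abstract (gradient magnitudes of a local neighborhood quantized into three discrimination levels, with the location and occurrence of the resulting micropatterns collected as the descriptor) can be sketched roughly as follows. This is a minimal illustration only, not the paper's implementation: it assumes Sobel gradients, an LTP-style ternary threshold t, the common practice of splitting the ternary code into upper and lower binary patterns, and region-wise histogramming over a spatial grid. The function name, grid size, and threshold value are illustrative.

# Rough GLTP-style descriptor sketch (assumptions: Sobel gradient magnitude,
# ternary thresholding with margin t, upper/lower binary pattern split,
# region-wise histograms). Names and defaults are illustrative only.
import numpy as np
from scipy.ndimage import sobel

def gltp_descriptor(image, t=10.0, grid=(7, 6)):
    """Return a concatenated region-histogram feature for a grayscale image."""
    img = image.astype(np.float64)

    # 1. Gradient magnitude at every pixel (Sobel operators assumed).
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    mag = np.hypot(gx, gy)

    # 2. Ternary coding of the 8-neighborhood of each interior pixel:
    #    +1 if neighbor magnitude > center + t, -1 if < center - t, else 0.
    #    The +1 and -1 states are encoded as two separate 8-bit patterns.
    centre = mag[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(centre, dtype=np.int32)   # codes from the +1 states
    lower = np.zeros_like(centre, dtype=np.int32)   # codes from the -1 states
    for k, (dy, dx) in enumerate(offsets):
        nb = mag[1 + dy: mag.shape[0] - 1 + dy, 1 + dx: mag.shape[1] - 1 + dx]
        upper += (nb > centre + t).astype(np.int32) << k
        lower += (nb < centre - t).astype(np.int32) << k

    # 3. Histograms of the two micropattern maps over a spatial grid,
    #    concatenated so the feature keeps location + occurrence information.
    rows, cols = grid
    feats = []
    for patterns in (upper, lower):
        for row_block in np.array_split(patterns, rows, axis=0):
            for block in np.array_split(row_block, cols, axis=1):
                hist, _ = np.histogram(block, bins=256, range=(0, 256))
                feats.append(hist)
    return np.concatenate(feats).astype(np.float64)

A classifier trained on these concatenated histogram vectors (for example, an SVM, a common choice for appearance-based descriptors of this kind) would then carry out the person-independent recognition step the abstract evaluates on the Cohn-Kanade images; the paper itself does not prescribe this particular pipeline.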