%0 Journal Article
%T Text Entry by Gazing and Smiling
%A Outi Tuisku
%A Veikko Surakka
%A Ville Rantanen
%A Toni Vanhala
%A Jukka Lekkala
%J Advances in Human-Computer Interaction
%D 2013
%I Hindawi Publishing Corporation
%R 10.1155/2013/218084
%X Face Interface is a wearable prototype that combines the use of voluntary gaze direction and facial activations for pointing at and selecting objects on a computer screen, respectively. The aim was to investigate the functionality of the prototype for entering text. First, three on-screen keyboard layout designs were developed and tested ( ) to find a layout that would be more suitable for text entry with the prototype than the traditional QWERTY layout. The task was to enter one word ten times with each of the layouts by pointing at letters with gaze and selecting them by smiling. Subjective ratings showed that a layout with large keys on the edges and small keys near the center of the keyboard was rated as the most enjoyable, clearest, and most functional. Using this layout, the aim of the second experiment ( ) was to compare entering text with Face Interface to entering text with a mouse. The results showed that the text entry rate was 20 characters per minute (cpm) for Face Interface and 27 cpm for the mouse. For Face Interface, the keystrokes per character (KSPC) value was 1.1 and the minimum string distance (MSD) error rate was 0.12. These values compare especially well with those of other similar techniques.

1. Introduction

Recently, there have been several attempts to develop alternative human-computer interaction (HCI) methods that utilize eye tracking in combination with another measurement of human behavior. One line of investigation has been to measure signals that originate from the human facial expression system [1–4]. One reason for using facial muscle behavior in HCI is that the human facial system is versatile when used for communication purposes, and its functionality could serve as a potential solution in HCI systems as well [3]. Pointing and selecting, as well as text entry, are the most common tasks in HCI; thus, being able to carry them out with acceptable performance is important for an HCI solution to be considered fit for use. The potential of the human facial system has already been utilized in the context of eye tracking research. For example, eye blinks have been used for selecting objects when gaze direction has been used for pointing [5, 6]. The choice of eye blinks results from the fact that video-based eye trackers that image the eyes are able to recognize whether the eyes are open or closed [5, 7]. The relation to the facial muscle system comes from the fact that eye blinks result from the activation of the orbicularis oculi facial muscle [8]. While video-based eye trackers track the eyes, computer vision
%U http://www.hindawi.com/journals/ahci/2013/218084/
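
The abstract above reports three standard text-entry metrics: characters per minute (cpm), keystrokes per character (KSPC), and minimum string distance (MSD) error rate. The following is a minimal Python sketch of how such metrics are commonly computed, assuming the usual definitions (Levenshtein distance for MSD, and MSD error rate as the distance divided by the longer string's length); it is not the authors' code, and the function names and example strings are illustrative only.

# Minimal sketch of common text-entry metrics (assumed definitions, not the paper's code).

def minimum_string_distance(presented: str, transcribed: str) -> int:
    """Levenshtein (minimum string) distance via dynamic programming."""
    m, n = len(presented), len(transcribed)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n]

def msd_error_rate(presented: str, transcribed: str) -> float:
    """MSD error rate: edit distance normalized by the longer string's length."""
    return minimum_string_distance(presented, transcribed) / max(len(presented), len(transcribed))

def kspc(keystrokes: int, transcribed: str) -> float:
    """Keystrokes per character: total keystrokes divided by transcribed length."""
    return keystrokes / len(transcribed)

def chars_per_minute(transcribed: str, seconds: float) -> float:
    """Text entry rate in characters per minute."""
    return len(transcribed) * 60.0 / seconds

# Hypothetical example: one extra keystroke and one wrong character in a 10-character word.
print(kspc(11, "alphabetic"))                      # 1.1
print(msd_error_rate("alphabetic", "alphabetik"))  # 0.1
print(chars_per_minute("alphabetic", 30.0))        # 20.0 cpm

Under these assumed definitions, a KSPC of 1.1 corresponds to roughly one corrective or extra keystroke per ten characters entered, which is consistent with the kind of values reported in the abstract.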