%0 Journal Article
%T Testing Two Tools for Multimodal Navigation
%A Mats Liljedahl
%A Stefan Lindberg
%A Katarina Delsing
%A Mikko Polojärvi
%A Timo Saloranta
%A Ismo Alakärppä
%J Advances in Human-Computer Interaction
%D 2012
%I Hindawi Publishing Corporation
%R 10.1155/2012/251384
%X The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth hold potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also be guided to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, nonspeech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing them to interact directly with the surrounding environment.

1. Introduction

Visual maps have a number of advantages as a tool for navigation, for example, overview and high information density. In recent years, new technologies have radically broadened how and in what contexts visual maps can be used and displayed. This development has spawned a plethora of new tools for navigation. Many of these are based on graphics, are meant for the eye, and use traditional map metaphors; the Google Maps application included in, for example, iPhone and Android smartphones is one example. However, visual maps are usually abstract representations of the physical world and must be interpreted in order to be of use.
Interpreting a map and relating it to the current surroundings is a relatively demanding task [1]. Moreover, maps often require the users' full visual attention, disrupt other activities, and may weaken the users' perception of the surroundings. All in all, maps are in many ways demanding tools for navigation. One major challenge for developers of smartphone-based navigation services is handling the inaccuracy of sensor data, especially GPS location data. In urban environments, the location provided by GPS is often very inaccurate from a pedestrian's perspective. Furthermore, the accuracy is heavily influenced by nearby buildings as well as other factors such as the positions of the satellites and the weather. This paper addresses the problems described above. The problem of current navigation tools' demands on users' attentional and cognitive resources was addressed using multimodal user interfaces built on a mix of audio, pointing gestures, graphics, and
%U http://www.hindawi.com/journals/ahci/2012/251384/
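The point-to-query idea described in the abstract — forming a database query from a GPS position and a compass heading — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the ±30° sector half-angle, and the 500 m range are assumptions chosen here to absorb the compass and GPS noise the text mentions.

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in metres (Earth radius 6371 km)."""
    R = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def pois_in_pointed_sector(user_lat, user_lon, heading_deg, pois,
                           half_angle_deg=30.0, max_range_m=500.0):
    """Return (name, distance) for POIs inside the sector the user points at.

    pois is a list of (name, lat, lon). A wide half-angle and a capped range
    make the query tolerant of compass and GPS inaccuracy.
    """
    hits = []
    for name, lat, lon in pois:
        d = distance_m(user_lat, user_lon, lat, lon)
        if d > max_range_m:
            continue
        # Signed angular difference between POI bearing and heading, in (-180, 180].
        diff = (bearing_to(user_lat, user_lon, lat, lon) - heading_deg + 180) % 360 - 180
        if abs(diff) <= half_angle_deg:
            hits.append((name, d))
    return hits
```

In a real service, the resulting sector would parameterise a spatial database query rather than a linear scan over a Python list; the geometry, however, is the same.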