
Usability Testing for Serious Games: Making Informed Design Decisions with User Data

DOI: 10.1155/2012/369637


Abstract:

Usability testing is a key step in the successful design of new technologies and tools, ensuring that heterogeneous populations will be able to interact easily with innovative applications. While usability testing methods for productivity tools (e.g., text editors, spreadsheets, or management tools) are varied, widely available, and valuable, analyzing the usability of games, especially educational “serious” games, presents unique challenges. Because games are fundamentally different from general productivity tools, “traditional” usability instruments that are valid for productivity applications may fall short when used for serious games. In this work we present a methodology designed specifically to facilitate usability testing for serious games; it takes into account the specific needs of such applications and systematically produces a list of suggested improvements from large amounts of recorded gameplay data. This methodology was applied in a case study of a medical educational game, MasterMed, intended to improve patients’ medication knowledge. We present the results of applying this methodology to MasterMed, together with a summary of the central lessons learned, which should be useful for researchers who aim to tune and improve their own serious games before releasing them to the general public.

1. Introduction

As the complexity of new technologies increases, affecting wider portions of the population, usability testing is gaining even more relevance in the fields of human-computer interaction (HCI) and user interface (UI) design. Brilliant products run the risk of failing completely if end users cannot fully engage with them because of user interface failures. Consequently, product designers are increasingly focusing on usability testing during the prototype phase to identify design or implementation issues that might prevent users from successfully interacting with the final product. Prototype usability testing is especially important when the system is to be used by a heterogeneous population, or when that population includes individuals who are not accustomed to interacting with new technologies. In this sense, the field of serious games provides a good example of a domain where special attention should be paid to usability issues. Because educational serious games aim to engage players in meaningful learning activities, it is important to evaluate the dimensions of learning effectiveness, engagement, and the appropriateness of the design for a specific context and target audience [1]. Yet because serious games target broad audiences who may not play games
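As a purely illustrative aside: the pipeline the abstract describes, distilling large amounts of recorded gameplay data into a ranked list of suggested improvements, can be sketched in a few lines. The sketch below is not the paper's actual instrument; the event schema, flag names, and idle threshold are all assumptions invented for illustration.

    # Hypothetical sketch (assumed schema, not the paper's method): aggregate
    # recorded gameplay events into a frequency-ranked list of candidate
    # usability issues to review against the session recordings.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Event:
        player: str     # anonymized participant id
        step: str       # game task or screen where the event occurred
        kind: str       # e.g., "invalid_action", "help_opened", "quit"
        seconds: float  # time spent on the step before the event fired

    def candidate_issues(events, idle_threshold=60.0):
        """Count usability red flags per game step, most frequent first."""
        flags = Counter()
        for e in events:
            if e.kind in ("invalid_action", "help_opened", "quit"):
                flags[(e.step, e.kind)] += 1
            if e.seconds > idle_threshold:  # long hesitation suggests confusion
                flags[(e.step, "long_pause")] += 1
        return flags.most_common()

    # Toy log standing in for "large amounts of recorded gameplay data".
    log = [
        Event("p1", "assign_medication", "invalid_action", 12.0),
        Event("p2", "assign_medication", "invalid_action", 8.5),
        Event("p2", "assign_medication", "help_opened", 75.0),
        Event("p3", "review_results", "quit", 30.0),
    ]
    for (step, flag), n in candidate_issues(log):
        print(f"{step}: {flag} x{n}")

In a real study, each flagged (step, flag) pair would be cross-checked against the recorded gameplay before it becomes a suggested improvement.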

References

[1]  S. de Freitas and M. Oliver, “How can exploratory learning with games and simulations within the curriculum be most effectively evaluated?” Computers and Education, vol. 46, no. 3, pp. 249–264, 2006.
[2]  J. Nielsen, “Heuristic evaluation,” in Usability Inspection Methods, J. Nielsen and R. L. Mack, Eds., vol. 17, pp. 25–62, John Wiley & Sons, 1994.
[3]  M. Kessner, J. Wood, R. F. Dillon, and R. L. West, “On the reliability of usability testing,” in Proceedings of the Extended Abstracts on Human Factors in Computing Systems (CHI '01), p. 97, 2001.
[4]  M. Macleod and R. Rengger, “The development of DRUM: a software tool for video-assisted usability evaluation,” in Proceedings of the 5th International Conference on Human-Computer Interaction (HCI '93), pp. 293–309, August 1993.
[5]  R. J. Pagulayan, K. Keeker, D. Wixon, R. L. Romero, and T. Fuller, “User-centered design in games,” in The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, J. A. Jacko and A. Sears, Eds., pp. 883–906, Lawrence Erlbaum Associates, 2003.
[6]  R. Koster, A Theory of Fun for Game Design, Paraglyph, Scottsdale, Ariz, USA, 2004.
[7]  E. Ju and C. Wagner, “Personal computer adventure games: their structure, principles, and applicability for training,” Data Base for Advances in Information Systems, vol. 28, no. 2, pp. 78–92, 1997.
[8]  International Organization for Standardization, “ISO 9241-11: Guidance on usability,” in Ergonomic Requirements for Office Work with Visual Display Terminals, 1998, http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=16883.
[9]  A. R. Cooper, The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity, Macmillan Publishing, Indianapolis, Ind, USA, 1999.
[10]  J. Nielsen and R. Molich, “Heuristic evaluation of user interfaces,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Empowering People (CHI '90), pp. 249–256, 1990.
[11]  J. Brooke, “SUS: a ‘quick and dirty’ usability scale,” in Usability Evaluation in Industry, P. W. Jordan, B. Thomas, B. A. Weerdmeester, and I. L. McClelland, Eds., pp. 189–194, Taylor & Francis, London, UK, 1996.
[12]  J. Kirakowski and M. Corbett, “SUMI: the software usability measurement inventory,” British Journal of Educational Technology, vol. 24, no. 3, pp. 210–212, 1993.
[13]  B. D. Harper and K. L. Norman, “Improving user satisfaction: the questionnaire for user interaction satisfaction version 5.5,” in Proceedings of the 1st Annual Mid-Atlantic Human Factors Conference, pp. 224–228, 1993.
[14]  H. W. Jung, S. G. Kim, and C. S. Chung, “Measuring software product quality: a survey of ISO/IEC 9126,” IEEE Software, vol. 21, no. 5, pp. 88–92, 2004.
[15]  I. Wechsung and A. B. Naumann, “Evaluation methods for multimodal systems: a comparison of standardized usability questionnaires,” in Proceedings of the 4th IEEE Tutorial and Research Workshop on Perception and Interactive Technologies for Speech-Based Systems: Perception in Multimodal Dialogue Systems (PIT '08), vol. 5078 of Lecture Notes in Computer Science, pp. 276–284, 2008.
[16]  R. L. Boring, D. I. Gertman, J. C. Joe, and J. L. Marble, “Proof of concept for a human reliability analysis method for heuristic usability evaluation of software,” in Proceedings of the 49th Annual Meeting of the Human Factors and Ergonomics Society (HFES '05), pp. 676–680, Orlando, Fla, USA, September 2005.
[17]  E. L. C. Law and E. T. Hvannberg, “Analysis of strategies for improving and estimating the effectiveness of heuristic evaluation,” in Proceedings of the 3rd Nordic Conference on Human-Computer Interaction (NordiCHI '04), pp. 241–250, Tampere, Finland, October 2004.
[18]  P. Moreno-Ger, D. Burgos, and J. Torrente, “Digital games in eLearning environments: current uses and emerging trends,” Simulation & Gaming, vol. 40, no. 5, pp. 669–687, 2009.
[19]  J. Kirriemuir and A. McFarlane, Literature Review in Games and Learning, NESTA Futurelab, Bristol, UK, 2004.
[20]  R. Van Eck, “Digital game-based learning: it’s not just the digital natives who are restless,” EDUCAUSE Review, vol. 41, no. 2, pp. 16–30, 2006.
[21]  J. P. Gee, Good Videogames and Good Learning: Collected Essays on Video Games, Peter Lang Publishing, New York, NY, USA, 2007.
[22]  V. J. Shute, I. Masduki, and O. Donmez, “Conceptual framework for modeling, assessing, and supporting competencies within game environments,” Technology, Instruction, Cognition, and Learning, vol. 8, no. 2, pp. 137–161, 2010.
[23]  C. S. Loh, “Designing online games assessment as information trails,” in Games and Simulations in Online Learning: Research and Development Frameworks, D. Gibson, C. Aldrich, and M. Prensky, Eds., pp. 323–348, Information Science Publishing, Hershey, Pa, USA, 2007.
[24]  K. Squire, “Changing the game: what happens when video games enter the classroom,” Innovate, vol. 1, no. 6, 2005.
[25]  M. P. Eladhari and E. M. I. Ollila, “Design for research results: experimental prototyping and play testing,” Simulation & Gaming, vol. 43, no. 3, pp. 391–412, 2012.
[26]  E. Ollila, Using Prototyping and Evaluation Methods in Iterative Design of Innovative Mobile Games, Tampere University of Technology, Tampere, Finland, 2009.
[27]  J. A. Garcia Marin, E. Lawrence, K. Felix Navarro, and C. Sax, “Heuristic evaluation for interactive games within elderly users,” in Proceedings of the 3rd International Conference on eHealth, Telemedicine, and Social Medicine (eTELEMED '11), pp. 130–133, 2011.
[28]  D. Pinelle and N. Wong, “Heuristic evaluation of games,” in Game Usability: Advice from the Experts for Advancing the Player Experience, K. Isbister and N. Schaffer, Eds., pp. 79–89, ACM Press, 2008.
[29]  W. Ijsselsteijn, Y. De Kort, K. Poels, A. Jurgelionis, and F. Bellotti, “Characterising and measuring user experiences in digital games,” in Proceedings of Advances in Computer Entertainment (ACE '07), June 2007.
[30]  K. M. Gilleade and A. Dix, “Using frustration in the design of adaptive videogames,” in Proceedings of the ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACE '04), pp. 228–232, Singapore, June 2004.
[31]  G. Sim, S. MacFarlane, and J. Read, “All work and no play: measuring fun, usability, and learning in software for children,” Computers and Education, vol. 46, no. 3, pp. 235–248, 2006.
[32]  G. R. White, P. Mirza-Babaei, G. McAllister, and J. Good, “Weak inter-rater reliability in heuristic evaluation of video games,” in Proceedings of the 29th Annual CHI Conference on Human Factors in Computing Systems (CHI '11), pp. 1441–1446, May 2011.
[33]  R. A. Virzi, “Refining the test phase of usability evaluation: how many subjects is enough?” Human Factors, vol. 34, no. 4, pp. 457–468, 1992.
[34]  J. Nielsen and T. K. Landauer, “A mathematical model of the finding of usability problems,” in Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems, pp. 206–213, April 1993.

