%0 Journal Article
%T Examination of the Quality of Multiple-choice Items on Classroom Tests
%A David DiBattista
%A Laura Kurzawa
%J Canadian Journal for the Scholarship of Teaching and Learning
%D 2011
%I Society for Teaching and Learning in Higher Education
%X Because multiple-choice testing is so widespread in higher education, we assessed the quality of items used on classroom tests by carrying out a statistical item analysis. We examined undergraduates' responses to 1198 multiple-choice items on sixteen classroom tests in various disciplines. The mean item discrimination coefficient was +0.25, with more than 30% of items having unsatisfactory coefficients less than +0.20. Of the 3819 distractors, 45% were flawed either because less than 5% of examinees selected them or because their selection was positively rather than negatively correlated with test scores. In three tests, more than 40% of the items had an unsatisfactory discrimination coefficient, and in six tests, more than half of the distractors were flawed. Discriminatory power suffered dramatically when the selection of one or more distractors was positively correlated with test scores, but it was only minimally affected by the presence of distractors that were selected by less than 5% of examinees. Our findings indicate that there is considerable room for improvement in the quality of many multiple-choice tests. We suggest that instructors consider improving the quality of their multiple-choice tests by conducting an item analysis and by modifying distractors that impair the discriminatory power of items.
%K multiple choice
%K assessment
%K classroom testing
%K item discrimination
%K distractor analysis
%U http://ir.lib.uwo.ca/cjsotl_rcacea/vol2/iss2/4