
Feedback for Programming Assignments Using Software-Metrics and Reference Code

DOI: 10.1155/2013/805963


Abstract:

Giving feedback on the quality of student programming assignments is a tedious and laborious task for the instructor. In this paper, we use a few object-oriented software metrics, together with reference code provided by the instructor, to analyze student programs and provide feedback. The empirical study identifies which software metrics can be applied to the considered programming assignments and shows how the reference code helps the instructor assess them. This approach helps the instructor easily find quality issues in student programs, and feedback on such assignments can be given using the guidelines we discuss. To validate our approach, we also perform an experimental study on programming assignments of sophomore students enrolled in an object-oriented programming course.

1. Introduction

Assessment of students' programming assignments is mostly done using criteria such as functionality, design, and programming style. Giving feedback on students' assignments is a laborious task because the instructor needs to inspect every submission. Many computer-aided approaches (CAAs) have been proposed for assessing student assignments and giving feedback [1]. Using software metrics to assess student programs is one of the several approaches used in CAAs. Software metrics are measures of certain aspects of a program that help to analyze and control its quality. Researchers have proposed a large number of software metrics that capture different aspects of source code, and instructors can use these metrics to analyze students' programming assignments and give feedback. Most of the metrics used so far relate to the complexity and size of programs [2–5]. Cohesion and coupling are aspects of a program that measure its relatedness and dependency.
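The complexity metrics mentioned above include McCabe's cyclomatic complexity [12], which counts the independent decision paths through a piece of code. As an illustrative sketch only (a rough approximation over Python's AST, not the tooling used in this paper):

```python
import ast


def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 plus the number of
    branch points (if/for/while/except) found in the AST."""
    tree = ast.parse(source)
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler))
        for node in ast.walk(tree)
    )
    return decisions + 1


# A toy student submission with two decision points (if / elif),
# hence three independent paths through the function.
src = """
def grade(score):
    if score >= 90:
        return 'A'
    elif score >= 80:
        return 'B'
    else:
        return 'C'
"""
print(cyclomatic_complexity(src))  # → 3
```

A grader could flag submissions whose complexity greatly exceeds that of the instructor's reference code for the same task.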
In this research, we investigate whether metrics relating to cohesion and coupling are capable of assessing student programs, and we report the results of an experiment conducted on student assignments. As we proceed, we answer three questions pertaining to our research:

(1) Can object-oriented metrics assess student programs?
(2) How can these metrics, together with reference code, help the instructor analyze student programs?
(3) Will this approach help the students?

We have organized this paper into sections as follows. In Section 2, we review previous related research in this area. Section 3 explains our idea of using software metrics and reference
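One concrete cohesion measure from the Chidamber-Kemerer suite [14] is lack of cohesion in methods (LCOM), which compares pairs of methods that share no instance attributes against pairs that share at least one. A minimal sketch of the idea (the toy classes below are illustrative, not from the paper's data set):

```python
from itertools import combinations


def lcom(method_attrs: dict[str, set[str]]) -> int:
    """CK LCOM: P (method pairs sharing no attributes) minus
    Q (pairs sharing at least one), floored at zero."""
    p = q = 0
    for (_, a), (_, b) in combinations(method_attrs.items(), 2):
        if a & b:
            q += 1
        else:
            p += 1
    return max(p - q, 0)


# A cohesive class: every method touches the shared 'balance' field.
cohesive = {
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "report":   {"balance"},
}

# A non-cohesive class: two unrelated clusters of state, a hint
# that the student may have merged two responsibilities.
scattered = {
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "log":      {"logfile"},
    "rotate":   {"logfile"},
}

print(lcom(cohesive))   # → 0
print(lcom(scattered))  # → 2
```

In practice a tool such as ckjm [15] computes these values directly from compiled Java classes; the sketch above only shows what the number means.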

References

[1]  K. M. Ala-Mutka, “A survey of automated assessment approaches for programming assignments,” Computer Science Education, vol. 15, no. 2, pp. 83–102, 2005.
[2]  R. Cardell-Oliver, “How can software metrics help novice programmers?” in Proceedings of the 13th Australasian Computing Education Conference (ACE '11), vol. 114, pp. 55–62, January 2011.
[3]  S.-L. Hung, I.-F. Kwok, and R. Chan, “Automatic programming assessment,” Computers and Education, vol. 20, no. 2, pp. 183–190, 1993.
[4]  D. Jackson and M. Usher, “Grading student programs using ASSYST,” ACM SIGCSE Bulletin, vol. 29, no. 1, pp. 335–339, 1997.
[5]  F. Jurado, M. A. Redondo, and M. Ortega, “Using fuzzy logic applied to software metrics and test cases to assess programming assignments and give advice,” Journal of Network and Computer Applications, vol. 35, no. 2, pp. 695–712, 2012.
[6]  S. A. Mengel and V. Yerramilli, “A case study of the static analysis of the quality of novice student programs,” ACM SIGCSE Bulletin, vol. 31, no. 1, pp. 78–82, 1999.
[7]  T. Wang, X. Su, Y. Wang, and P. Ma, “Semantic similarity-based grading of student programs,” Information and Software Technology, vol. 49, no. 2, pp. 99–107, 2007.
[8]  B. Cheang, A. Kurnia, A. Lim, and W.-C. Oon, “On automated grading of programming assignments in an academic institution,” Computers and Education, vol. 41, no. 2, pp. 121–131, 2003.
[9]  M. Maxim and A. Venugopal, “FrontDesk: an enterprise class web-based software system for programming assignment submission, feedback dissemination, and grading automation,” in Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT '04), pp. 331–335, September 2004.
[10]  M. Wick, D. Stevenson, and P. Wagner, “Using testing and JUnit across the curriculum,” ACM SIGCSE Bulletin, vol. 37, no. 1, pp. 236–240, 2005.
[11]  B. J. Bowman and W. A. Newman, “Software metrics as a programming training tool,” The Journal of Systems and Software, vol. 13, no. 2, pp. 139–147, 1990.
[12]  T. J. McCabe, “A complexity measure,” IEEE Transactions on Software Engineering, vol. SE-2, no. 4, pp. 308–320, 1976.
[13]  R. J. Leach, “Using metrics to evaluate student programs,” ACM SIGCSE Bulletin, vol. 27, no. 2, pp. 41–43, 1995.
[14]  S. R. Chidamber and C. F. Kemerer, “A metrics suite for object oriented design,” IEEE Transactions on Software Engineering, vol. 20, no. 6, pp. 476–493, 1994.
[15]  http://gromit.iiar.pwr.wroc.pl/p_inf/ckjm/down.html.
