Evaluating the Effect of Control Flow on the Unit Testing Effort of Classes: An Empirical Analysis

DOI: 10.1155/2012/964064


Abstract:

The aim of this paper is to evaluate empirically the relationship between a new metric (Quality Assurance Indicator, Qi) and the testability of classes in object-oriented systems. The Qi metric captures the distribution of the control flow in a system. We addressed testability from the perspective of unit testing effort. We collected data from five open source Java software systems for which JUnit test cases exist. To capture the testing effort of classes, we used different metrics to quantify the corresponding JUnit test cases. Classes were classified, according to the required testing effort, into two categories: high and low. In order to evaluate the capability of the Qi metric to predict the testability of classes, we used the univariate logistic regression method. The performance of the prediction model was evaluated using Receiver Operating Characteristic (ROC) analysis. The results indicate that the univariate model based on the Qi metric is able to accurately predict the unit testing effort of classes.

1. Introduction

Software testing plays a crucial role in software quality assurance. It has, indeed, an important effect on the overall quality of the final product. Software testing is, however, a time- and resource-consuming process. The overall effort spent on testing depends, in fact, on many different factors, including [1–5] human factors, process issues, testing techniques, the tools used, and the characteristics of the software development artifacts.

Software testability is an important software quality attribute. IEEE [6] defines testability as the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. ISO [7] defines testability (a characteristic of maintainability) as attributes of software that bear on the effort needed to validate the software product. Dealing with software testability raises, in fact, several questions, such as [8, 9]: Why is one class easier to test than another? What makes a class hard to test? What contributes to the testability of a class? How can we quantify this notion?

Metrics (or models based on metrics) can be used to predict (assess) software testability and better manage the testing effort. Having quantitative data on the testability of a software system can, in fact, be used to guide the decision-making of software development managers seeking to produce high-quality software. In particular, it can help software managers, developers, and testers to [8, 9] plan and monitor testing activities, determine the critical parts of the
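The abstract outlines the core of the analysis: each class is labeled as requiring high or low unit testing effort based on metrics computed from its JUnit test cases, a univariate logistic regression model is fitted with the Qi metric as the single predictor, and the model is assessed with ROC analysis. The following is a minimal sketch of that kind of workflow in Python with scikit-learn; the synthetic data, the column names, and the median-split labeling rule are illustrative assumptions, not the paper's actual dataset or tooling.

```python
# Minimal sketch: univariate logistic regression + ROC evaluation,
# in the spirit of the analysis described above.
# The data and the median-split labeling rule are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical per-class data: a Qi value and a test-suite size metric
# (e.g., lines of code of the corresponding JUnit test class).
rng = np.random.default_rng(0)
qi = rng.uniform(0.0, 1.0, size=200)                     # Qi metric per class
test_loc = 400 * (1.0 - qi) + rng.normal(0, 40, 200)     # synthetic testing-effort proxy

# Label classes as high (1) / low (0) testing effort with a median split
# on the test-suite metric (one plausible thresholding rule).
high_effort = (test_loc > np.median(test_loc)).astype(int)

# Univariate logistic regression: Qi is the single explanatory variable.
model = LogisticRegression()
model.fit(qi.reshape(-1, 1), high_effort)

# ROC analysis: compare predicted probabilities against the actual labels.
probs = model.predict_proba(qi.reshape(-1, 1))[:, 1]
auc = roc_auc_score(high_effort, probs)
fpr, tpr, thresholds = roc_curve(high_effort, probs)
print(f"Coefficient: {model.coef_[0][0]:.3f}, AUC: {auc:.3f}")
```

In such a setup, the area under the ROC curve (AUC) quantifies how well the Qi-based model discriminates between high- and low-effort classes, which is the criterion the paper uses to judge predictive capability.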

References

[1]  P. L. Yeh and J. C. Lin, “Software testability measurement derived from data flow analysis,” in Proceedings of the 2nd Euromicro Conference on Software Maintenance and Reengineering, Florence, Italy, 1998.
[2]  B. Baudry, Y. Le Traon, and G. Sunyé, “Testability analysis of a UML class diagram,” in Proceedings of the 9th International Software Metrics Symposium (METRICS '03), IEEE CS, 2003.
[3]  B. Baudry, Y. Le Traon, G. Sunyé, and J. M. Jézéquel, “Measuring and improving design patterns testability,” in Proceedings of the 9th International Software Metrics Symposium (METRICS '03), IEEE Computer Society, 2003.
[4]  M. Bruntink and A. van Deursen, “An empirical study into class testability,” Journal of Systems and Software, vol. 79, no. 9, pp. 1219–1232, 2006.
[5]  L. Zhao, “A new approach for software testability analysis,” in Proceedings of the 28th International Conference on Software Engineering (ICSE '06), pp. 985–988, May 2006.
[6]  IEEE, IEEE Standard Glossary of Software Engineering Terminology, IEEE Computer Society Press, 1990.
[7]  ISO/IEC 9126: Software Engineering Product Quality, 1991.
[8]  M. Bruntink and A. Van Deursen, “Predicting class testability using object-oriented metrics,” in Proceedings of the 4th IEEE International Workshop on Source Code Analysis and Manipulation (SCAM '04), pp. 136–145, September 2004.
[9]  V. Gupta, K. K. Aggarwal, and Y. Singh, “A Fuzzy Approach for Integrated Measure of Object-Oriented Software Testability,” Journal of Computer Science, vol. 1, no. 2, pp. 276–282, 2005.
[10]  B. Henderson-Sellers, Object-Oriented Metrics Measures of Complexity, Prentice-Hall, 1996.
[11]  Y. Singh, A. Kaur, and R. Malhotra, “Predicting testability effort using artificial neural network,” in Proceedings of the World Congress on Engineering and Computer Science, San Francisco, Calif, USA, 2008.
[12]  Y. Singh, A. Kaur, and R. Malhotra, “Empirical validation of object-oriented metrics for predicting fault proneness models,” Software Quality Journal, vol. 18, no. 1, pp. 3–35, 2009.
[13]  L. Badri, M. Badri, and F. Touré, “Exploring empirically the relationship between lack of cohesion and testability in object-oriented systems,” in Advances in Software Engineering, T.-h. Kim, H.-K. Kim, M. K. Khan, et al., Eds., vol. 117 of Communications in Computer and Information Science, Springer, Berlin, Germany, 2010.
[14]  L. Badri, M. Badri, and F. Touré, “An empirical analysis of lack of cohesion metrics for predicting testability of classes,” International Journal of Software Engineering and Its Applications, vol. 5, no. 2, 2011.
[15]  M. Badri and F. Touré, “Empirical analysis for investigating the effect of control flow dependencies on testability of classes,” in Proceedings of the 23rd International Conference on Software Engineering and Knowledge Engineering (SEKE '11), 2011.
[16]  M. Badri, L. Badri, and F. Touré, “Empirical analysis of object-oriented design metrics: towards a new metric using control flow paths and probabilities,” Journal of Object Technology, vol. 8, no. 6, pp. 123–142, 2009.
[17]  N. Fenton and S. L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, PWS Publishing Company, 1997.
[18]  J. Gao and M. C. Shih, “A component testability model for verification and measurement,” in Proceedings of the 29th Annual International Computer Software and Applications Conference (COMPSAC '05), pp. 211–218, July 2005.
[19]  J. W. Sheppard and M. Kaufman, “Formal specification of testability metrics in IEEE P1522,” in Proceedings of the IEEE Systems Readiness Technology Conference Autotestcom (AUTOTESTCON '01), pp. 71–82, Valley Forge, Pa, USA, August 2001.
[20]  R. S. Freedman, “Testability of software components,” IEEE Transactions on Software Engineering, vol. 17, no. 6, pp. 553–564, 1991.
[21]  J. M. Voas, “PIE: a dynamic failure-based technique,” IEEE Transactions on Software Engineering, vol. 18, no. 8, pp. 717–727, 1992.
[22]  J. M. Voas and K. W. Miller, “Semantic metrics for software testability,” The Journal of Systems and Software, vol. 20, no. 3, pp. 207–216, 1993.
[23]  J. M. Voas and K. W. Miller, “Software testability: the new verification,” IEEE Software, vol. 12, no. 3, pp. 17–28, 1995.
[24]  R. V. Binder, “Design for testability in object-oriented systems,” Communications of the ACM, vol. 37, no. 9, 1994.
[25]  T. M. Khoshgoftaar, R. M. Szabo, and J. M. Voas, “Detecting program modules with low testability,” in Proceedings of the 11th IEEE International Conference on Software Maintenance, pp. 242–250, October 1995.
[26]  T. M. Khoshgoftaar, E. B. Allen, and Z. Xu, “Predicting testability of program modules using a neural network,” in Proceedings of the 3rd IEEE Symposium on Application-Specific Systems and Software Engineering Technology, 2000.
[27]  J. McGregor and S. Srinivas, “A measure of testing effort,” in Proceedings of the Conference on Object-Oriented Technologies, pp. 129–142, USENIX Association, June 1996.
[28]  A. Bertolino and L. Strigini, “On the use of testability measures for dependability assessment,” IEEE Transactions on Software Engineering, vol. 22, no. 2, pp. 97–108, 1996.
[29]  Y. Le Traon and C. Robach, “Testability analysis of co-designed systems,” in Proceedings of the 4th Asian Test Symposium (ATS '95), IEEE Computer Society, Washington, DC, USA, 1995.
[30]  Y. Le Traon and C. Robach, “Testability measurements for data flow designs,” in Proceedings of the 4th International Software Metrics Symposium, pp. 91–98, Albuquerque, NM, USA, November 1997.
[31]  Y. Le Traon, F. Ouabdesselam, and C. Robach, “Analyzing testability on data flow designs,” in Proceedings of the 11th International Symposium on Software Reliability Engineering (ISSRE '00), pp. 162–173, October 2000.
[32]  A. Petrenko, R. Dssouli, and H. Koenig, “On evaluation of testability of protocol structures,” in Proceedings of the International Workshop on Protocol Test Systems (IFIP '93), Pau, France, 1993.
[33]  K. Karoui and R. Dssouli, “Specification transformations and design for testability,” in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '96), 1996.
[34]  S. Jungmayr, “Testability measurement and software dependencies,” in Proceedings of the 12th International Workshop on Software Measurement, October 2002.
[35]  J. Gao, J. Tsao, and Y. Wu, Testing and Quality Assurance for Component-Based Software, Artech House, 2003.
[36]  T. B. Nguyen, M. Delaunay, and C. Robach, “Testability analysis applied to embedded data-flow software,” in Proceedings of the 3rd International Conference on Quality Software (QSIC ’03), 2003.
[37]  B. Baudry, Y. Le Traon, and G. Sunyé, “Improving the testability of UML class diagrams,” in Proceedings of the International Workshop on Testability Analysis (IWoTA '04), Rennes, France, 2004.
[38]  V. Chowdhary, “Practicing testability in the real world,” in Proceedings of the International Conference on Software Testing, Verification and Validation, IEEE Computer Society Press, 2009.
[39]  R. A. Khan and K. Mustafa, “Metric based testability model for object-oriented design (MTMOOD),” ACM SIGSOFT Software Engineering Notes, vol. 34, no. 2, 2009.
[40]  A. Kout, F. Touré, and M. Badri, “An empirical analysis of a testability model for object-oriented programs,” ACM SIGSOFT Software Engineering Notes, vol. 36, no. 4, 2011.
[41]  Y. Singh and A. Saha, “Predicting testability of eclipse: a case study,” Journal of Software Engineering, vol. 4, no. 2, 2010.
[42]  V. R. Basili, L. C. Briand, and W. L. Melo, “A validation of object-oriented design metrics as quality indicators,” IEEE Transactions on Software Engineering, vol. 22, no. 10, pp. 751–761, 1996.
[43]  Y. Zhou and H. Leung, “Empirical analysis of object-oriented design metrics for predicting high and low severity faults,” IEEE Transactions on Software Engineering, vol. 32, no. 10, pp. 771–789, 2006.
[44]  K. K. Aggarwal, Y. Singh, A. Kaur, and R. Malhotra, “Empirical analysis for investigating the effect of object-oriented metrics on fault proneness: a replicated case study,” Software Process Improvement and Practice, vol. 14, no. 1, pp. 39–62, 2009.
[45]  A. Mockus, N. Nagappan, and T. T. Dinh-Trong, “Test coverage and post-verification defects: a multiple case study,” in Proceedings of the 3rd International Symposium on Empirical Software Engineering and Measurement (ESEM '09), pp. 291–301, October 2009.
[46]  B. V. Rompaey and S. Demeyer, “Establishing traceability links between unit test cases and units under test,” in Proceedings of the 13th European Conference on Software Maintenance and Reengineering (CSMR '09), pp. 209–218, March 2009.
[47]  A. Qusef, G. Bavota, R. Oliveto, A. De Lucia, and D. Binkley, “SCOTCH: test-to-code traceability using slicing and conceptual coupling,” in Proceedings of the International Conference on Software Maintenance (ICSM '11), 2011.
[48]  M. H. Halstead, Elements of Software Science, Elsevier/North-Holland, New York, NY, USA, 1977.
[49]  L. C. Briand, J. W. Daly, and J. Wüst, “A unified framework for cohesion measurement in object-oriented systems,” Empirical Software Engineering, vol. 3, no. 1, pp. 65–117, 1998.
[50]  L. C. Briand, J. Wüst, J. W. Daly, and D. Victor Porter, “Exploring the relationships between design measures and software quality in object-oriented systems,” Journal of Systems and Software, vol. 51, no. 3, pp. 245–273, 2000.
[51]  T. Gyimóthy, R. Ferenc, and I. Siket, “Empirical validation of object-oriented metrics on open source software for fault prediction,” IEEE Transactions on Software Engineering, vol. 31, no. 10, pp. 897–910, 2005.
[52]  A. Marcus, D. Poshyvanyk, and R. Ferenc, “Using the conceptual cohesion of classes for fault prediction in object-oriented systems,” IEEE Transactions on Software Engineering, vol. 34, no. 2, pp. 287–300, 2008.
[53]  K. El Emam and W. Melo, “The prediction of faulty classes using object-oriented design metrics,” National Research Council of Canada NRC/ERB 1064, 1999.
[54]  D. Hosmer and S. Lemeshow, Applied Logistic Regression, Wiley-Interscience, 2nd edition, 2000.
[55]  K. El Emam, “A Methodology for validating software product metrics,” National Research Council of Canada NRC/ERB 1076, 2000.
