The Study of Resource Allocation among Software Development Phases: An Economics-Based Approach

DOI: 10.1155/2011/579292


Abstract:

This paper presents an economics-based approach for studying the problem of resource allocation among software development phases. Our approach is structured along two parallel axes: theoretical and empirical. We developed a general economic model for analyzing the allocation problem as a constrained profit maximization problem. The model, based on a novel concept of a software production function, considers the effects of different allocations of development resources on output measures of the resulting software product. An empirical environment for evaluating and refining the model is presented, and a first exploratory study for characterizing the model's components and developers' resource allocation decisions is described. The findings illustrate how the model can be applied, and they validate its underlying assumptions and usability. Future quantitative empirical studies can refine and substantiate various aspects of the proposed model and ultimately improve the productivity of software development processes.

1. Introduction

Fundamental disagreements often arise with regard to the “correct” allocation of resources to the various software development phases (SDPs). For example, persuasive arguments are made for devoting substantial effort to requirements analysis and design in order to avoid the costly consequences of modifications in later development stages [10–13]. However, the pressure to provide an executable product that can be tested and presented to the customer sooner rather than later can argue for shifting resources toward implementation instead. Yet another approach can be seen in Test-Driven Development methods [14, 15], where much emphasis is placed on testing, both before and after implementation. These tradeoffs are relevant when analyzing graphs that show the cost of detecting and correcting a fault as a function of the phase in which it is detected. Such graphs, prevalent in the literature, demonstrate the dramatic increase in cost when defects are detected later in the development process [16–18]. This observation is often used to support the claim that more resources should be allocated to early SDPs. Obviously, this claim can be pursued ad absurdum: dedicating all (or almost all) resources to the requirements and design phases would leave insufficient resources for implementation. Yet developers¹ cannot find any guidance as to how much of the available resources should be allocated to each of the various SDPs. Why are such fundamentally different approaches being advocated for allocating resources to SDPs? Does a correct allocation
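To make the constrained-maximization view concrete, here is a minimal numerical sketch (ours, not the paper's). It assumes a Cobb-Douglas form for the software production function, one of the candidate forms examined in the literature the model builds on (e.g., Hu [35]; Pendharkar et al. [36]); the four phase elasticities and the budget figure below are hypothetical. With the budget fixed, maximizing output is equivalent to maximizing profit whenever revenue increases with output, and the Cobb-Douglas form yields a closed-form optimum: each phase receives a budget share proportional to its output elasticity.

# A minimal sketch of phase-level resource allocation as constrained
# maximization. The Cobb-Douglas form and all parameter values below are
# illustrative assumptions, not taken from the paper.

PHASES = ["requirements", "design", "implementation", "testing"]

# Hypothetical output elasticities of the four development phases.
ALPHA = {"requirements": 0.15, "design": 0.25,
         "implementation": 0.40, "testing": 0.20}

def output(alloc, scale=1.0):
    """Cobb-Douglas production function: Q = scale * prod(r_p ** alpha_p)."""
    q = scale
    for phase in PHASES:
        q *= alloc[phase] ** ALPHA[phase]
    return q

def optimal_allocation(budget):
    """Maximize Q subject to sum(r_p) = budget.

    The Lagrangian first-order conditions for a Cobb-Douglas function
    assign each phase the budget share alpha_p / sum(alpha).
    """
    total = sum(ALPHA.values())
    return {phase: budget * ALPHA[phase] / total for phase in PHASES}

if __name__ == "__main__":
    budget = 100.0  # e.g., person-days; hypothetical
    allocation = optimal_allocation(budget)
    for phase in PHASES:
        print(f"{phase:>14}: {allocation[phase]:5.1f}")
    print(f"output measure Q = {output(allocation):.2f}")

Because every phase with a positive elasticity receives a positive share of the budget, the sketch also makes the ad absurdum argument above concrete: under any production function of this family, pouring all resources into requirements and design cannot be optimal.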

Notes and References

[1]  Throughout the paper we use the generic term “developer” for all hierarchical levels in the software organization: from the entire software house to the individual engineer.
[2]  Yiftachel and Hadar [45] explain the link between the problems of defining and measuring software development output, which is a critical component of the resource allocation problem, and what Brooks [26] defines as an essential problem.
[3]  From here on, the four phases are written with initial capitals when the phase itself is meant; for example, “Design” means “the design phase”.
[4]  Reference [37] proposes various System Dynamics models for effectively managing such processes and demonstrates their capabilities. However, it does not deal specifically with allocation of resources across software development phases, except to illustrate the concept of product peer inspection in Section .
[5]  The literature suggests additional phases omitted from this paper. Specifically, planning and management are not considered at all in this model, being orthogonal to the development phases we refer to. Integration and maintenance are considered within the existing phases (e.g., each maintenance process can be viewed as comprised of several activities, each associated with one of the four phases).
[6]  Unit testing is viewed by many as part of the implementation (e.g., [40]), although it is a checking operation.
[7]  Our examples refer to measures related to the object-oriented paradigm; when other development paradigms are used, the measures should be chosen accordingly.
[8]  CBO is a simple metric for evaluating the quality of the Design artifact in terms of modularity. The literature suggests more sophisticated metrics; for the sake of simplicity, we illustrate our approach using CBO.
[9]  In this study we did not measure the external maintainability factor. However, we evaluated the effects of design and implementation on maintainability. For details see [32].
[10]  S. R. Schach, Object-Oriented and Classical Software Engineering, McGraw-Hill, New York, NY, USA, 5th edition, 2002.
[11]  B. W. Boehm, C. Abts, A. W. Brown, et al., Software Cost Estimation with COCOMO II, Prentice-Hall, Englewood Cliffs, NJ, USA, 2000.
[12]  B. W. Boehm, Software Engineering Economics, Prentice-Hall, Englewood Cliffs, NJ, USA, 1981.
[13]  G. Smith and L. Wildman, “Model checking Z specifications using SAL,” in Proceedings of the International Conference of Z and B Users (ZB '05), H. Treharne, S. King, M. Henson, and S. Schneider, Eds., pp. 87–105, Springer, 2005.
[14]  K. Beck, Extreme Programming Explained: Embrace Change, Addison-Wesley, Boston, Mass, USA, 2000.
[15]  H. Erdogmus, M. Morisio, and M. Torchiano, “On the effectiveness of the test-first approach to programming,” IEEE Transactions on Software Engineering, vol. 31, no. 3, pp. 226–237, 2005.
[16]  G. Tassey, The Economic Impacts of Inadequate Infrastructure for Software Testing, National Institute of Standards and Technology, 2002.
[17]  P. Jalote and B. Vishal, “Optimal resource allocation for the quality control process,” in Proceedings of the 14th International Symposium on Software Reliability Engineering, Denver, Colo, USA, November 2003.
[18]  S. R. Schach, Introduction to Object-Oriented Analysis and Design, McGraw-Hill, New York, NY, USA, 2004.
[19]  B. Steece, S. Chulani, and B. Boehm, “Determining software quality using COQUALMO,” in Case Studies in Reliability and Maintenance, W. Blischke and D. Murthy, Eds., Wiley, Sydney, Australia, 2002.
[20]  K. El Emam, The ROI from Software Quality, Auerbach Publications, Boston, Mass, USA, 2005.
[21]  Y. Yang, M. He, M. Li, Q. Wang, and B. Boehm, “Phase distribution of software development effort,” in Proceedings of the 2nd ACM-IEEE International Symposium on Empirical Software Engineering and Measurement, pp. 61–69, Kaiserslautern, Germany, 2008.
[22]  W. Heijstek and M. R. V. Chaudron, “Effort distribution in model-based development,” in Proceedings of the 2nd Workshop on Model Size Metrics at the 10th International Conference on Model Driven Engineering Languages and Systems, Nashville, Tenn, USA, 2007.
[23]  A. J. G. Babu and N. Suresh, “Modelling and optimizing software quality,” International Journal of Quality and Reliability Management, vol. 13, no. 3, pp. 95–103, 1996.
[24]  S. Biffl, A. Aurum, B. W. Boehm, H. Erdogmus, and P. Grünbacher, Eds., Value-Based Software Engineering, Springer, New York, NY, USA, 2005.
[25]  L. Huang and B. Boehm, “Determining how much software assurance is enough: a value-based approach,” in Proceedings of the 7th Workshop on Economics-Driven Software Engineering Research, St. Louis, Mo, USA, 2005.
[26]  F. P. Brooks, “No silver bullet refired,” in The Mythical Man-Month, pp. 207–226, Addison-Wesley Longman, Boston, Mass, USA, 1995.
[27]  D. M. Berry, “The inevitable pain of software development: why there is no silver bullet,” in Proceedings of the 9th International Workshop on Radical Innovations of Software and Systems Engineering in the Future, October 2002.
[28]  B. A. Kitchenham, S. L. Pfleeger, L. M. Pickard, et al., “Preliminary guidelines for empirical research in software engineering,” IEEE Transactions on Software Engineering, vol. 28, no. 8, pp. 721–734, 2002.
[29]  M. Bassey, “Methods of enquiry and the conduct of case study research,” in Case Study Research in Educational Settings, chapter 7, pp. 65–91, Open University Press, Buckingham, UK, 1999.
[30]  C. B. Seaman, “Qualitative methods in empirical studies of software engineering,” IEEE Transactions on Software Engineering, vol. 25, no. 4, pp. 557–572, 1999.
[31]  N. K. Denzin and Y. S. Lincoln, Eds., Handbook of Qualitative Research, Sage, Thousand Oaks, Calif, USA, 2000.
[32]  P. Yiftachel, Resource Allocation among Software Development Phases, M.S. thesis, Computer Science Department, University of Haifa, 2006.
[33]  S. T. Hackman, Production Economics, chapter 2, Springer, Berlin, Germany, 2008.
[34]  ISO/IEC TR 9126, “Software engineering - product quality,” December 2000.
[35]  Q. Hu, “Evaluating alternative software production functions,” IEEE Transactions on Software Engineering, vol. 23, no. 6, pp. 379–387, 1997.
[36]  P. C. Pendharkar, J. A. Rodger, and G. H. Subramanian, “An empirical study of the Cobb-Douglas production function properties of software development effort,” Information and Software Technology, vol. 50, no. 12, pp. 1181–1188, 2008.
[37]  R. J. Madachy, Software Process Dynamics, Wiley-IEEE Press, Hoboken, NJ, USA, 2008.
[38]  P. Yiftachel, D. Peled, I. Hadar, and D. Goldwasser, “Resource allocation among development phases: an economic approach,” in Proceedings of the Economics-Driven Software Engineering Research Workshop at the 28th International Conference on Software Engineering, pp. 43–48, Shanghai, China, 2006.
[39]  W. Heijstek and M. R. V. Chaudron, “On early investments in software development: a relation between effort distribution and defects in RUP projects,” Tech. Rep., Leiden University, Leiden Institute of Advanced Computer Science, 2008.
[40]  I. Sommerville, Software Engineering, Pearson Education Limited, London, UK, 7th edition, 2005.
[41]  IEEE Std 610.12-1990, “IEEE Standard Glossary of Software Engineering Terminology,” Institute of Electrical and Electronics Engineers, 1990.
[42]  D. M. Berry, “What, Not How? When Is ‘How’ Really ‘What’? and Some Thoughts on Quality Requirements,” Tech. Rep., Computer Science Department, University of Waterloo, 2001.
[43]  P. Ralph and Y. Wand, “A proposal for a formal definition of the design concept,” in Design Requirements Engineering: A Ten-Year Perspective, K. Lyytinen, P. Loucopoulos, J. Mylopoulos, and B. Robinson, Eds., pp. 103–136, Springer, Berlin, Germany, 2008.
[44]  D. Paulson and Y. Wand, “An automated approach to information systems decomposition,” IEEE Transactions on Software Engineering, vol. 18, no. 3, pp. 174–189, 1992.
[45]  P. Yiftachel and I. Hadar, “Defining and measuring software development output: a light at the end of the tunnel for an essential problem,” presented at the workshop “No Silver Bullet: A Retrospective on the Essence and Accidents of Software Engineering,” at the International Conference on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA '07), Montreal, Canada, October 2007.
[46]  M. King, “Living up to standards,” in Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL '03), pp. 65–72, Budapest, Hungary, 2003.
[47]  V. Poladian, S. Butler, M. Shaw, and D. Garlan, “Time is not money: the case for multi-dimensional accounting in value-based software engineering,” in Proceedings of the 5th Workshop on Economics-Driven Software Engineering Research, pp. 19–24, Portland, Ore, USA, 2003.
[48]  M. A. Côté, W. Suryn, C. Y. Laporte, and R. A. Martin, “The evolution path for industrial software quality evaluation methods applying ISO/IEC 9126:2001 quality model: example of MITRE's SQAE method,” Software Quality Journal, vol. 13, no. 1, pp. 17–30, 2005.
[49]  D. P. Kelly and R. S. Oshana, “Improving software quality using statistical testing techniques,” Information and Software Technology, vol. 42, no. 12, pp. 801–807, 2000.
[50]  B. Anderson, A. Bajaj, and W. Gorr, “An estimation of the decision models of senior IS managers when evaluating the external quality of organizational software,” Journal of Systems and Software, vol. 61, no. 1, pp. 59–75, 2002.
[51]  J. Lindroos, “Code and design metrics for object-oriented systems,” in the Seminar on Quality Models for Software Engineering, Helsinki, Finland, 2004.
[52]  S. R. Chidamber and C. F. Kemerer, “A metrics suite for object oriented design,” IEEE Transactions on Software Engineering, vol. 20, no. 6, pp. 476–493, 1994.
[53]  Y. Wand and R. Weber, “An ontological evaluation of systems analysis and design methods,” in Information System Concepts: An In-Depth Analysis, E. D. Falkenberg and P. Lindgreen, Eds., pp. 79–107, North-Holland, Amsterdam, The Netherlands, 1989.
[54]  S. C. Misra, “Modeling design/coding factors that drive maintainability of software systems,” Software Quality Journal, vol. 13, no. 3, pp. 297–320, 2005.
[55]  Y. Cai and K. J. Sullivan, “A value-oriented theory of modularity in design,” in Proceedings of the 7th International Workshop on Economics-Driven Software Engineering Research, St. Louis, Mo, USA, 2005.
[56]  K. D. Welker and P. W. Oman, “Software maintainability metrics models in practice,” CrossTalk: The Journal of Defense Software Engineering, vol. 8, no. 11, pp. 19–23, 1995.
[57]  V. A. Batista, D. C. C. Peixoto, E. P. Borges, W. Pádua, R. F. Resende, and C. I. P. S. Pádua, “ReMoFP: a tool for counting function points from UML requirement models,” Advances in Software Engineering, vol. 2011, Article ID 495232, 7 pages, 2011.
[58]  J. M. van den Akker, S. Brinkkemper, G. Diepen, and J. Versendaal, “Determination of the next release of a software product: an approach using integer linear programming,” in Proceedings of the CAiSE '05 Forum, pp. 119–124, 2005.
