全部 标题 作者
关键词 摘要

OALib Journal期刊
ISSN: 2333-9721
费用:99美元

查看量下载量

相关文章

更多...

A New Look at Worst Case Complexity: A Statistical Approach

DOI: 10.1155/2014/840432

Full-Text   Cite this paper   Add to My Lib

Abstract:

We present a new and improved worst case complexity model for quick sort as , where the LHS gives the worst case time complexity, is the input size, is the frequency of sample elements, and is a function of both the input size and the parameter . The rest of the terms arising due to linear regression have usual meanings. We claim this to be an improvement over the conventional model; namely, , which stems from the worst case complexity for this algorithm. 1. Introduction Sometimes theoretical results on algorithms are not enough for predicting the algorithm’s behavior in real time implementation [1]. From research in parameterized complexity, we already know that for certain algorithms, such as sorting, the parameters of the input distribution must also be taken into account, apart from the input size, for a more precise evaluation of time complexity of the algorithm in question [2, 3]. Based on the results obtained, we present a new and improved worst case complexity model for quick sort as where the LHS gives the worst case time complexity, is the input size, is the frequency of sample elements, and is a function of both the input size and the parameter . The rest of the terms arising due to linear regression have usual meanings. We claim this to be an improvement over the conventional model; namely, , which stems from the worst case complexity for this algorithm. It is important to note that our results change the order of theoretical complexity of this algorithm as we get complexity in some situations. This new model in our opinion can be a guiding factor in distinguishing this algorithm from other sorting algorithms of similar order of theoretical average and/or worst case complexities. The dependence of basic operation(s) on the response is more prominent for discrete distributions rather than continuous ones for the probability of a tie is zero in a continuous case. However, presence of ties and their relative positions in the array is crucial for discrete cases. And this is precisely where the parameters, apart from characterizing the size of the input, of the input distribution come into play. We make a statistical case study on the robustness of theoretical worst case complexity measures for quick sort [4] over discrete uniform distribution inputs. The uniform distribution input parameters are related as . The runtime complexity of quick sort varies from to depending on the extent of tied elements present in a sample. For example, complexity is when all keys are distinct and when all keys are similar [5]. Apart from this result an important

References

[1]  D. S. Johnson, A Theoretician’s Guide to the Experimental Analysis of Algorithms, 2001, http://www.researchatt.com/~dsj/.
[2]  R. G. Downey and M. R. Fellows, Parameterized Complexity, Springer, New York, NY, USA, 1999.
[3]  H. M. Mahmoud, Sorting: A Distribution Theory, Wiley-Interscience Series in Discrete Mathematics and Optimization, John Wiley & Sons, New York, NY, USA, 2000.
[4]  C. A. Hoare, “Quicksort,” The Computer Journal, vol. 5, pp. 10–15, 1962.
[5]  R. Sedgewick, “Quicksort with equal keys,” SIAM Journal on Computing, vol. 6, no. 2, pp. 240–267, 1977.
[6]  N. K. Singh, S. Chakraborty, and D. K. Mallick, “A statistical peek into average case complexity,” International Journal of Experimental Design and Process Optimisation. In press.
[7]  S. Chakraborty and S. K. Sourabh, “On why an algorithmic time complexity measure can be system invariant rather than system independent,” Applied Mathematics and Computation, vol. 190, no. 1, pp. 195–204, 2007.
[8]  S. Chakraborty, D. N. Modi, and S. K. Panigrahi, “Will the weight-based statistical bounds revolutionize the IT,” International Journal of Computational Cognition, vol. 7, no. 3, pp. 16–22, 2009.
[9]  K.-T. Fang, R. Li, and A. Sudjianto, Design and Modeling for Computer Experiments, Chapman & Hall, New York, NY, USA, 2006.
[10]  J. Sacks, W. Weltch, T. Mitchel, and H. Wynn, “Design and analysis of experiments,” Statistical Science, vol. 4, no. 4, pp. 409–423, 1989.
[11]  S. Chakraborty and S. K. Sourabh, A Computer Experiment Oriented Approach to Algorithmic Complexity, Lambert Academic, 2010.
[12]  N. K. Singh and S. Chakraborty, “Partition sort and its empirical analysis,” in Proceedings of the International Conference on Computational Intelligence and Information Technology (CIIT '11), vol. 250 of Communications in Computer and Information Science, pp. 340–346, 2011.
[13]  P. Mathews, Design of Experiments with MINITAB, New Age International, New Delhi, India, 2010.
[14]  D. C. Montgomery, Design and Analysis of Experiments, John Wiley & Sons, New York, NY, USA, 8th edition, 2013.

Full-Text

comments powered by Disqus

Contact Us

service@oalib.com

QQ:3279437679

WhatsApp +8615387084133

WeChat 1538708413