%0 Journal Article
%T A New Look at Worst Case Complexity: A Statistical Approach
%A Niraj Kumar Singh
%A Soubhik Chakraborty
%A Dheeresh Kumar Mallick
%J International Journal of Analysis
%D 2014
%I Hindawi Publishing Corporation
%R 10.1155/2014/840432
%X We present a new and improved worst case complexity model for quick sort as y_worst = b0 + b1 f(n, m) + ε, where the LHS gives the worst case time complexity, n is the input size, m is the frequency of sample elements, and f(n, m) is a function of both the input size and the parameter m. The rest of the terms arising due to linear regression have their usual meanings. We claim this to be an improvement over the conventional model, namely y_worst = a0 + a1 n^2 + ε, which stems from the O(n^2) worst case complexity of this algorithm.

1. Introduction

Sometimes theoretical results on algorithms are not enough for predicting the algorithm's behavior in a real-time implementation [1]. From research in parameterized complexity, we already know that for certain algorithms, such as sorting, the parameters of the input distribution must also be taken into account, apart from the input size, for a more precise evaluation of the time complexity of the algorithm in question [2, 3]. Based on the results obtained, we present a new and improved worst case complexity model for quick sort as y_worst = b0 + b1 f(n, m) + ε, where the LHS gives the worst case time complexity, n is the input size, m is the frequency of sample elements, and f(n, m) is a function of both the input size and the parameter m. The rest of the terms arising due to linear regression have their usual meanings. We claim this to be an improvement over the conventional model, namely y_worst = a0 + a1 n^2 + ε, which stems from the O(n^2) worst case complexity of this algorithm. It is important to note that our results change the order of theoretical complexity of this algorithm, as we obtain a complexity of lower order in some situations. This new model, in our opinion, can be a guiding factor in distinguishing this algorithm from other sorting algorithms with a similar order of theoretical average and/or worst case complexity. The dependence of the basic operation(s) on the response is more prominent for discrete distributions than for continuous ones, since the probability of a tie is zero in the continuous case. The presence of ties and their relative positions in the array, however, is crucial in the discrete case. This is precisely where the parameters of the input distribution, beyond those characterizing the size of the input, come into play. We make a statistical case study of the robustness of theoretical worst case complexity measures for quick sort [4] over discrete uniform distribution inputs. The uniform distribution input parameters are related as n = m K, where K is the number of distinct key values. The runtime complexity of quick sort varies from O(n log n) to O(n^2) depending on the extent of tied elements present in a sample. For example, the complexity is O(n^2) when all keys are distinct and O(n log n) when all keys are similar [5]. Apart from this result an important …
%U http://www.hindawi.com/journals/ijanal/2014/840432/
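
The claim that ties lower quicksort's worst case order can be checked empirically. Below is a minimal sketch, not code from the paper: it counts key comparisons made by a classical quicksort (Hoare partition with the first element as pivot, which makes sorted input a worst case) on sorted inputs in which each of K = n/m distinct keys appears m times. The partition scheme, the input sizes, and sorted input as the worst-case proxy are all assumptions made for illustration.

```python
import sys

def quicksort_comparisons(a):
    """Sort a in place with Hoare-partition quicksort; return the comparison count."""
    count = 0

    def partition(lo, hi):
        nonlocal count
        pivot = a[lo]              # first-element pivot: sorted input is a worst case
        i, j = lo - 1, hi + 1
        while True:
            i += 1
            count += 1
            while a[i] < pivot:    # every key comparison is counted
                i += 1
                count += 1
            j -= 1
            count += 1
            while a[j] > pivot:
                j -= 1
                count += 1
            if i >= j:
                return j
            a[i], a[j] = a[j], a[i]  # equal keys also get exchanged, which
                                     # is what keeps the splits balanced

    def sort(lo, hi):
        if lo < hi:
            p = partition(lo, hi)
            sort(lo, p)
            sort(p + 1, hi)

    sort(0, len(a) - 1)
    return count

sys.setrecursionlimit(10000)       # sorted distinct input drives recursion depth to ~n
n = 1024
for m in (1, 4, 16, 64, 256, 1024):          # frequency of each key value
    K = n // m                               # number of distinct keys, so n = m * K
    data = [k for k in range(K) for _ in range(m)]   # sorted input with ties
    print(f"m = {m:4d}  comparisons = {quicksort_comparisons(data):8d}")
```

The count falls steadily as m grows (roughly quadratic at m = 1, near n log n when all keys are equal). Comparison counts collected this way, over a grid of n and m, are the kind of response variable to which a regression model of the form above could be fitted.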