

Monday 22 June 2015

Analysis of the shell sorting algorithm's worst case

Dear reader,

Before we get to the main part of this post, I would like to share my experience learning this algorithm. I am a computer science engineering student, and shell sorting is part of my first-year syllabus. I initially couldn't understand why shell sort runs faster than insertion sort, given that shell sort reduces to the insertion sorting algorithm once the separation used for the sorting passes reaches one. In every case the separation does reach one, and the same kind of iterations as in the insertion sorting algorithm take place. This question bothered me for two days after I first came across it, but now that doubt is completely resolved. Because I found it difficult to understand why the shell sorting algorithm proceeds faster than the insertion sorting algorithm when I was learning it, I hope I can present shell sorting in a way that spares you the problem I faced.
Yours,
 K Sreram.


Introduction 

 The shell sort algorithm, unlike many other algorithms, modifies the unsorted list in a way that makes the final sorting pass (which is the same as insertion sort) run faster. Most sorting algorithms' run-times depend upon the order in which the elements appear in the unsorted list. This fact can be ignored for many algorithms, but for the shell sort algorithm the order in which the elements appear plays a vital role. If we sort a presorted list using the insertion sorting algorithm, the algorithm runs with time complexity $\Theta (N)$. But if the list is sorted in reverse order, the time complexity will be $\Theta (N^2)$, which is the upper bound of this algorithm.
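To make the two extremes concrete, here is a minimal insertion sort sketch in C++ (my own illustration, not from the original post; the function name is arbitrary). On a presorted input the inner loop body never executes, giving $\Theta (N)$ work overall; on a reverse-sorted input every element is shifted all the way back, giving $\Theta (N^2)$.

```cpp
#include <vector>

// Minimal insertion sort. On a presorted list the while-loop never
// runs, so the total work is Theta(N); on a reverse-sorted list every
// element is shifted all the way to the front, giving Theta(N^2).
void insertionSort(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];  // shift the larger element one step right
            --j;
        }
        a[j] = key;           // insert the element at its position
    }
}
```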

 So, do you think it is possible to improve the run-time of the sorting algorithm by rearranging the unsorted list so that sorting the rearranged list with the insertion sorting algorithm takes less time than sorting the original? In the case of the insertion sorting algorithm, the average running time is the same as the worst-case running time, namely $\Theta (N^2)$. So randomly rearranging the original unsorted array will not improve the upper-bound run-time complexity of the insertion sorting algorithm. But there are algorithms for which the average case's run-time is less than the run-time of the worst case; one such algorithm is quick sort. For such algorithms, shuffling the unsorted array (with the shuffling process depending on the elements being shuffled; i.e., the shuffled list is a function of the order of the elements in the unsorted list) can improve the time-wise performance of the algorithm. Usually this is not a problem, because the run-time of the shuffling process is linear ($O(N)$). Still, the shuffled list's order should be a function of its previous order. This is vital because, if the transformation is constant, a particular input order might get transformed into exactly the order that causes the algorithm to run at its maximum time.
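As a side note, the common practical form of this idea uses a uniformly random shuffle rather than the deterministic, order-dependent transformation described above; the randomisation itself is only $O(N)$. The sketch below is my own illustration of that variant (std::sort stands in for any quicksort implementation; a textbook quicksort could be swapped in to match the discussion).

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Shuffle-then-sort: the O(N) random shuffle makes it unlikely that
// the sorting pass meets a pre-arranged worst-case order. std::sort
// is used here as a stand-in for a textbook quicksort.
void shuffledSort(std::vector<int>& a) {
    static std::mt19937 rng(std::random_device{}());
    std::shuffle(a.begin(), a.end(), rng);  // O(N) randomisation
    std::sort(a.begin(), a.end());          // stand-in for quicksort
}
```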


Usually, this "shuffling" process is not studied in detail, because the probability that the input arrives in the exact order that causes the maximum run-time is very low; in most cases it is so low that it can be completely ignored. But it is discussed here to introduce you to the effect that modifying the series on some basis has on the run-time of the sorting algorithm. If the original list of elements is shuffled before being sorted by an algorithm whose average run-time complexity is less than its worst-case run-time complexity, then we may theoretically show that the modified algorithm runs with a smaller upper-bound run-time complexity. Usually, this shuffling process also has an impact on the lower time bound: a list of elements that would be quickly sorted in its original form also gets shuffled, bringing it back to the algorithm's average-case run-time. This problem can be solved if the shuffling process does not disturb a list that is easily sorted in its original form. The shell sorting algorithm alters the original list, making it more easily sortable by the insertion sorting algorithm and, for a suitable increment series, achieving an upper-bound run-time complexity of $O(N^{3/2})$. Historically, this was the first sorting algorithm to falsify the original assumption that it is not theoretically possible to design an algorithm that runs with time complexity less than $O(N^2)$.

The algorithm's analysis

There is no theoretical proof for determining the average run-time complexity of the shell sorting algorithm, but we may prove a bound for the worst case. The shell sorting algorithm is given below:

Let there be a particular integral increment series defined as $a_1, a_2, ..., a_n$ where $a_1 = 1$. For each iteration, as $i$ ranges from $n$ to $1$, the unsorted list is made $a_i$ sorted. When the list is said to be $a_i$ sorted, it means that every set of elements that are $a_i$ positions apart gets sorted; this sorting is done by the insertion sorting algorithm. As $i$ varies from $n$ down to $1$, the list gets completely sorted (since a $1$ sorted list is a completely sorted list).
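The phrase "$a_i$ sorted" can be restated as a small check; the helper below is my own illustration of the definition (by transitivity, it is enough to compare each element with the one $a_i$ positions before it).

```cpp
#include <vector>

// Returns true when the list is gap-sorted with gap h, i.e. every
// pair of elements h positions apart is in non-decreasing order.
bool isGapSorted(const std::vector<int>& c, std::size_t h) {
    for (std::size_t k = h; k < c.size(); ++k)
        if (c[k - h] > c[k]) return false;  // some group is out of order
    return true;
}
```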

Finally, when the last iteration is processed (i.e., at $a_1 = 1$), the list gets sorted in a manner identical to the insertion sorting algorithm. Let the unsorted list be denoted $c_1, c_2, c_3, ..., c_m$, where $m$ is the size of the list. When the list gets $a_i$ sorted, the elements $c_1, c_{1+a_i}, c_{1+2a_i}, ...$ are sorted, the elements $c_2, c_{2+a_i}, c_{2+2a_i}, ...$ are also sorted, and so on, until all the independent groups of elements are sorted. The groups are sorted in place within the main list: if $c_k$ and $c_i$ belong to the same group and are to interchange positions, then $c_i$ takes the position of $c_k$ and $c_k$ takes the position of $c_i$ in the main list. For any iteration there are $a_i$ unique groups of elements to be sorted, each containing about $\frac{m}{a_i}$ elements. It is observed that if the list is $a_k$ sorted, and is then $a_{k-1}$ sorted, the list still remains $a_k$ sorted (this property will be proved in this article). We may take advantage of this property to form the "shuffling" algorithm. It can be shown that the upper-bound run-time complexity of this algorithm depends upon the increment series chosen; for some increment series it runs faster, and there is a large body of literature proposing newer increment series that run faster than the ones previously in use.
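Putting the description together, here is a sketch of shell sort in C++ (again my own illustration). The halving series $m/2, m/4, ..., 1$ is used purely as an example increment series; as noted above, other series give different upper bounds.

```cpp
#include <vector>

// Shell sort as described above: for each gap a_i, every group of
// elements a_i positions apart is insertion-sorted in place within
// the main list. The halving sequence used here is only one possible
// increment series; the worst-case bound depends on this choice.
void shellSort(std::vector<int>& c) {
    for (std::size_t gap = c.size() / 2; gap > 0; gap /= 2) {
        // Gapped insertion sort: c[i] is inserted among the earlier
        // elements of its own group (those gap, 2*gap, ... before it).
        for (std::size_t i = gap; i < c.size(); ++i) {
            int key = c[i];
            std::size_t j = i;
            while (j >= gap && c[j - gap] > key) {
                c[j] = c[j - gap];  // shift group element one slot up
                j -= gap;
            }
            c[j] = key;
        }
    }
}
```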


