I am currently revising sorting algorithms. Here is the question:

It is given that this sorting algorithm is written in C and treats 'a' and 'A' as equal. After running the algorithm, the results are as follows:

10000 random data -> 0.016 sec
100000 random data -> 0.304 sec

10000 ordered data -> 0.006 sec
100000 ordered data -> 0.108 sec

10000 reversed data -> 0.010 sec
100000 reversed data -> 0.138 sec

Question: In point form briefly state the conclusions that you can draw from the test results above.

What I have done
I know this sorting algorithm is not stable (as stated in the question), and I can guess it is quicksort. I know that quicksort has a worst case of O(n^2) and an average and best case of O(n log n), but I have no idea how to explain this from the results. I can't just say "it's because it's not stable and quicksort performs badly on reversed data" and conclude from that alone that it's quicksort.

Is there anything specific I can tell from the results? It would be nice if there were math calculations or some other important observations based on the results.

Comments:

  • "I know this sorting algorithm is non-stable (as stated in the question)" - where does the question state that it's non-stable? Commented Jun 12, 2015 at 8:27
  • @user2357112 It says it treats 'a' and 'A' the same, so I reckon it's unstable. Commented Jun 12, 2015 at 8:29
  • Not stable means that equal values may be swapped around; stable sorting algorithms don't do that. Commented Jun 12, 2015 at 8:29
  • You've misunderstood the concept of stability. Stability of a sort has nothing to do with what items it considers equal; rather, a stable sort will keep equal elements in the same order relative to each other in the output as they appear in the input. Commented Jun 12, 2015 at 8:33
  • @GiorgiNakeuri So, this is stable? Commented Jun 12, 2015 at 8:34

1 Answer

We can tell that this isn't a quadratic-time algorithm like selection sort or insertion sort: raising the input size by a factor of 10 raised the runtime by a factor of only 13-19, whereas a quadratic algorithm would have slowed down by a factor of about 100. This is the behavior we'd expect from an O(n*log(n)) average-case algorithm, like mergesort or a good quicksort.
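
To make the growth-factor argument concrete, here is a small sketch in C (chosen because the question says the sort is written in C; the timings are the ones quoted in the question, and the predicted factors are my own arithmetic). Compile with -lm:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Observed runtimes from the question, in seconds. */
        double random_10k = 0.016, random_100k = 0.304;
        double sorted_10k = 0.006, sorted_100k = 0.108;
        double rev_10k    = 0.010, rev_100k    = 0.138;

        /* Observed slowdown when n grows from 10,000 to 100,000. */
        printf("random:   x%.1f\n", random_100k / random_10k); /* ~19.0 */
        printf("sorted:   x%.1f\n", sorted_100k / sorted_10k); /* ~18.0 */
        printf("reversed: x%.1f\n", rev_100k / rev_10k);       /* ~13.8 */

        /* Predicted slowdown for candidate complexities. */
        double n1 = 1e4, n2 = 1e5;
        printf("O(n log n): x%.1f\n", (n2 * log(n2)) / (n1 * log(n1))); /* ~12.5 */
        printf("O(n^2):     x%.1f\n", (n2 * n2) / (n1 * n1));           /* 100.0 */
        return 0;
    }

The observed factors (13.8-19.0) sit near the ~12.5 predicted for O(n log n) and nowhere near the 100 predicted for a quadratic sort.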

We can tell that the algorithm isn't adaptive, or at least, not very adaptive. An adaptive algorithm would have performed much better on the sorted input, and probably on the reversed input, too. In particular, raising the size of the sorted input by a factor of 10 would have raised the runtime by a factor of about 10. While the algorithm did do better on sorted or reverse-sorted input than random input, this looks more like a result of, say, more effective branch prediction in those cases.
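
For contrast, here is a sketch of a genuinely adaptive algorithm, insertion sort (illustrative only, not the algorithm under test):

    #include <stddef.h>

    /* Insertion sort is adaptive: on already-sorted input the inner
     * while loop never runs, so the whole pass is O(n), and growing n
     * by a factor of 10 would grow the runtime by roughly 10 -- unlike
     * the ~18x slowdown observed on the sorted input above. */
    void insertion_sort(int *a, size_t n) {
        for (size_t i = 1; i < n; i++) {
            int key = a[i];
            size_t j = i;
            while (j > 0 && a[j - 1] > key) {
                a[j] = a[j - 1];  /* shift larger elements right */
                j--;
            }
            a[j] = key;
        }
    }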

We don't have any information that would indicate whether the sort is stable.
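
As the comments under the question point out, "treats 'a' and 'A' as equal" is a property of the comparison, not of stability. A minimal sketch of what stability actually means, assuming a case-insensitive comparator (note that strcasecmp is POSIX rather than ISO C, and C's qsort makes no stability guarantee):

    #include <stdio.h>
    #include <stdlib.h>
    #include <strings.h>  /* strcasecmp (POSIX) */

    struct rec { const char *key; int id; };

    /* Case-insensitive comparison: "a" and "A" compare equal. */
    static int cmp(const void *p, const void *q) {
        const struct rec *a = p, *b = q;
        return strcasecmp(a->key, b->key);
    }

    int main(void) {
        struct rec v[] = { { "a", 1 }, { "A", 2 } };
        qsort(v, 2, sizeof v[0], cmp);  /* qsort is not guaranteed stable */
        /* A stable sort must keep the input order and print a(1) A(2);
         * an unstable sort is free to print A(2) a(1). */
        printf("%s(%d) %s(%d)\n", v[0].key, v[0].id, v[1].key, v[1].id);
        return 0;
    }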

I don't see anything that would distinguish whether this is a quicksort, mergesort, heapsort, or other O(n*log(n)) algorithm. We can exclude certain types of pivot selection for quicksort - for example, a quicksort that always picks the first element as the pivot would run in quadratic time on sorted input - but beyond that, I can't tell.
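
For illustration, here is a sketch of that excluded variant, a quicksort that always takes the first element as the pivot (my own example, not the code under test):

    /* First-element-pivot quicksort. On already-sorted input every
     * partition is maximally lopsided (the pivot stays at one end),
     * so the recursion degenerates to depth n and total work O(n^2).
     * The question's sorted-input timings would then have been the
     * worst case rather than the best, which rules this variant out. */
    static void quicksort_first(int *a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[lo];                       /* always the first element */
        int i = lo, j = hi;
        while (i < j) {
            while (i < j && a[j] >= pivot) j--;  /* scan for element < pivot */
            a[i] = a[j];
            while (i < j && a[i] <= pivot) i++;  /* scan for element > pivot */
            a[j] = a[i];
        }
        a[i] = pivot;
        quicksort_first(a, lo, i - 1);
        quicksort_first(a, i + 1, hi);
    }

Call it as quicksort_first(a, 0, n - 1) for n > 0.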
