
I've been trying to compare the runtimes of selection sort, quicksort, and merge sort, and so far what I've got is this table:

n        S(n)      S(n)/n^2      Q(n)    Q(n)/(n log2 n)   M(n)    M(n)/(n log2 n)
1000     1.4       1.40000E-06   0.104   1.04357E-05       0.151   1.51518E-05
2000     5.41      1.35250E-06   0.223   1.01680E-05       0.327   1.49100E-05
4000     21.23     1.32688E-06   0.486   1.01540E-05       0.704   1.47086E-05
8000     84.15     1.31484E-06   1.036   9.98783E-06       1.496   1.44226E-05
16000    336.47    1.31434E-06   2.179   9.75151E-06       3.177   1.42178E-05
32000    1322.99   1.29198E-06   4.673   9.75767E-06       6.709   1.40090E-05

n is the size of the array being sorted.

I know that S(n) is the total runtime of the selection sort program. What I don't get is why I'm being asked to divide each total by its order of growth (e.g., S(n) is O(n^2), so it gets divided by n^2), or what the resulting values (the 3rd, 5th, and 7th columns) represent and what information I can infer from them.

If anyone can briefly explain this to me, that would be greatly appreciated.

  • This doesn't have anything to do with C++ specifically. Please remove the tag. Commented Feb 12, 2015 at 22:44
  • It looks to me as though the divided values are meant to demonstrate that the proposed runtime complexity of the algorithm matches the actual runtimes. That Q(n) / (n * log2(n)) remains roughly constant shows that Q(n) scales with O(n * log2(n)). (Several asterisks apply to this statement in reality, but I think that's what you're supposed to take away from it.) Commented Feb 12, 2015 at 22:44
  • To estimate the constant that sits in O(...)? To show that the complexity formula is correct? To get fancy numbers? This is rather a question for the person who is asking you to divide this. Commented Feb 12, 2015 at 22:47
  • The answer to this question greatly depends on what you are studying and your prerequisites. Commented Feb 12, 2015 at 22:54
  • @CaptainGiraffe I'm currently studying algorithms and data structures, (my course name, if that's what you mean) and I'm not exactly sure what you mean by prerequisites. Commented Feb 12, 2015 at 22:59

1 Answer


Knowing that something is O(n^2) is great, but it's not the whole story. As you probably know, big-O notation disregards pesky things like constant factors and lower-order terms. So it's a good guideline for how well something will scale, but it doesn't tell you a lot about how well it will perform on real data. Your professor is trying to show you the constant factors involved. As you can see, selection sort has a constant factor about 1/10th that of the other two. So for smaller inputs (probably up to about 50 elements, since 50^2 / 10 ~= 50 * log2(50)), that might outweigh the O(n log n) vs. O(n^2) difference and make selection sort the better choice.
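
To make that concrete, here's a minimal sketch of how you could use those estimated constants to predict runtimes at sizes you haven't measured. The rounded constants below come from the ratio columns in your table, and the predictions are in whatever time units your table uses:

    #include <cmath>
    #include <cstdio>

    // Rounded constants from the table: time(n) ~ c * f(n), where f(n)
    // is each algorithm's growth function.
    const double C_SELECTION = 1.3e-6;  // S(n) / n^2 settles near this
    const double C_QUICK     = 9.8e-6;  // Q(n) / (n log2 n)
    const double C_MERGE     = 1.4e-5;  // M(n) / (n log2 n)

    int main() {
        double n = 64000;  // one doubling past the last row of the table
        std::printf("predicted S(64000) = %.0f\n", C_SELECTION * n * n);
        std::printf("predicted Q(64000) = %.1f\n", C_QUICK * n * std::log2(n));
        std::printf("predicted M(64000) = %.1f\n", C_MERGE * n * std::log2(n));
        // Prints roughly 5325, 10.0, and 14.3 respectively.
    }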

Edit: This table doesn't conclusively show that the algorithms are actually in the expected complexity classes, but the results you got do match what you would expect to see from O(n^2) and O(n log2 n) algorithms. So if you did the math to work out the complexities of the algorithms, you could run a test like this one as a sanity check. But if there happened to be a log(log(log(n))) factor in one of them, you would never notice it unless you ran it on a mind-bogglingly large data set.
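
If you want to run that kind of sanity check yourself, here's a rough sketch of a timing harness. std::sort stands in for whichever algorithm you're checking, and the sizes and millisecond units are just illustrative:

    #include <algorithm>
    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Times a sort over doubling input sizes and prints t(n) / (n log2 n),
    // which should level off near a constant for an O(n log n) algorithm.
    int main() {
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> dist(0, 1000000);
        for (int n = 1000; n <= 32000; n *= 2) {
            std::vector<int> a(n);
            for (int& x : a) x = dist(rng);
            auto t0 = std::chrono::steady_clock::now();
            std::sort(a.begin(), a.end());   // stand-in for your sort
            auto t1 = std::chrono::steady_clock::now();
            double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
            std::printf("n=%6d  t=%8.3f ms  t/(n log2 n)=%.3e\n",
                        n, ms, ms / (n * std::log2(n)));
        }
    }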

(Even worse, an exponential function or a high-degree polynomial can approximate a low-degree polynomial for a while and then skyrocket, and without showing mathematically that your algorithm has a certain complexity, you can't really be sure you won't get bitten by something like that.)

These results should help by showing you that the n^2 and n log2 n estimates actually correspond to real runtimes; that you (probably) implemented the algorithms correctly, since they're scaling as you would expect them to; and that some naive algorithms, while they may not scale well, have good constant factors that make them well suited to smaller problems.

Your Quicksort is probably implemented with singleton arrays as the base case, but you could possibly speed it up a bit by making arrays with fewer than 50 elements the base case and running insertion sort on those.
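
In case it's useful, here's a rough sketch of what that might look like. The cutoff of 50 and the Lomuto partition scheme are assumptions for illustration, not necessarily your actual implementation:

    #include <algorithm>
    #include <vector>

    const int CUTOFF = 50;  // illustrative threshold, worth tuning

    // Sorts a[lo..hi] in place; cheap on tiny ranges thanks to its
    // small constant factor, despite being O(n^2) overall.
    void insertionSort(std::vector<int>& a, int lo, int hi) {
        for (int i = lo + 1; i <= hi; ++i) {
            int key = a[i], j = i - 1;
            while (j >= lo && a[j] > key) { a[j + 1] = a[j]; --j; }
            a[j + 1] = key;
        }
    }

    void quickSort(std::vector<int>& a, int lo, int hi) {
        if (hi - lo + 1 <= CUTOFF) {       // small subarray: hand off
            insertionSort(a, lo, hi);      // instead of recursing further
            return;
        }
        int pivot = a[hi], i = lo - 1;     // Lomuto partition
        for (int j = lo; j < hi; ++j)
            if (a[j] < pivot) std::swap(a[++i], a[j]);
        std::swap(a[i + 1], a[hi]);        // pivot lands at i + 1
        quickSort(a, lo, i);
        quickSort(a, i + 2, hi);
    }

    // Usage: quickSort(v, 0, (int)v.size() - 1);

The idea is the same effect your table shows for selection sort: insertion sort's O(n^2) growth doesn't matter on a 50-element subarray, but its small constant factor does.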


9 Comments

I don't get that from those results at all. Could you explain it a little better?
@MarkRansom Which part do you want explained better?
How did you use that table of results to infer a constant factor?
@MarkRansom If you have something that's asymptotically n^2 and you divide n^2 out of the results, and the quotients are all approximately equal, then that quotient is probably the constant factor of your function.
@MarkRansom or if you prefer, I've been through enough CS/Math courses that I can usually recognize what a professor is getting at.
