
Suppose I have some code and it is important for me that the computational complexity observe some (algebraic) upper bound.

For instance, I might have an algorithm that, when properly implemented, runs in O(n^2), but if a bug is introduced runs in O(n^3). The test would check whether the method is in fact running in O(n^2), and fail if not.

My question is, is it possible to accomplish this with MSTest?

I can see that after introducing a bunch of mathematical code, it would in principle be possible to fit given equations to empirical measurements and/or attempt to find the limit.

Alternatively, I imagine it might be possible to produce graphs, together with a best fit, and then ask the human for input on whether the test passes or not.

But are any of these actually realistic? Has anything similar ever been done?

2 Answers


It may not be possible to conduct such tests for practical purposes. Comparing two algorithms by mathematical analysis alone may not yield practically useful results, because you may not always prefer an O(n^2) algorithm over an O(n^3) one: you also have to consider the hidden constant. An algorithm that performs 1000000 n^2 operations is still O(n^2) from a mathematical point of view, but it will not fare better than a 10 n^3 algorithm, which is mathematically O(n^3), until n exceeds 100000. And we often deal with input sizes far below that limit.
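The crossover point in the example above can be checked with a few lines of arithmetic. This is a language-agnostic sketch (shown in Python rather than C#/MSTest) using the two hypothetical cost models from the answer; the function names are illustrative.

```python
# Hypothetical cost models from the example above: a "fast" algorithm
# costing 1_000_000 * n^2 operations and a "slow" one costing 10 * n^3.

def cost_quadratic(n):
    return 1_000_000 * n * n

def cost_cubic(n):
    return 10 * n ** 3

# The cubic algorithm is cheaper until n passes the crossover point.
crossover = next(n for n in range(1, 10_000_000)
                 if cost_quadratic(n) < cost_cubic(n))
print(crossover)  # first n where the O(n^2) algorithm actually wins
```

Solving 1000000 n^2 < 10 n^3 gives n > 100000, so the quadratic algorithm only wins from n = 100001 onward.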


First of all, you would need to be able to instrument your code so that you can measure the number of operations performed. For instance, if you were testing a sorting algorithm, you could increment a counter each time the comparison function is called. For other cases this might not be as easy, such as when the operation in question is integer multiplication.
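The sorting case is the easy one to sketch. This is an illustrative example (in Python rather than C#/MSTest); the `compare` wrapper and the global counter are stand-ins for whatever instrumentation hook your code exposes.

```python
import functools

# Count every comparison the sort makes by routing them through a
# wrapper function. The counter name is illustrative, not a library API.
comparisons = 0

def compare(a, b):
    global comparisons
    comparisons += 1
    return (a > b) - (a < b)  # standard three-way comparison

data = [5, 3, 8, 1, 9, 2]
sorted_data = sorted(data, key=functools.cmp_to_key(compare))
print(sorted_data)   # [1, 2, 3, 5, 8, 9]
print(comparisons)   # every comparison the sort made was observed
```

The same idea carries over to MSTest: inject a counting comparer into the method under test, then assert on the counter afterward.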

Then there's the issue of what "O(n^2)" describes. Some algorithms have different best-, worst- and average-case complexities. If that applies to your algorithm, you'll need to know which inputs lead to each of those situations and test them separately.

Verifying that the run time is O(n^2) will be difficult. Big-O describes asymptotic complexity, so you would need to run very long tests and, presumably, do some curve fitting to see whether the measurements are well approximated by some a*x^2. It would be better if you could get a tight bound on the actual complexity, for example knowing that the number of operations should always be below 4x^2 when x > 100. Then you pick some data points, including some reasonably large ones, and check that they fall under the bound. This does mean you might have to update the test for modifications that change the coefficient while leaving the big-O complexity the same, but honestly that sounds like a good thing.

