
I have to estimate the time complexity of Solve():

// These methods and the list<Element> Elements belong to the Solver class

void Solver::Solve()
{
    while(List is not empty)
        Recursive();
}

void Solver::Recursive(some parameters)
{

    Element found = WhatCanISolve(some parameters); //O(n) in List size. When called directly from Solve() it always returns a valid element; when called by the recursion it may or may not.
    if(found == null)
        return;

    //We reduce the GLOBAL problem size by one.
    List.remove(found); //found is pointed to by an iterator into the list, so removal is O(1)

    //Some simple O(1) operations


    //Now we call recursive function twice.
    Recursive(some other parameters 1);
    Recursive(some other parameters 2);
}

//This function performs search with given parameters
Element Solver::WhatCanISolve(some parameters)
{
    //Iterates through whole List, so O(n) in List size
    //Returns first element matching parameters
    //Returns single Element or null
}

My first thought was that it should be somewhere around O(n^2).

Then I thought of

T(n) = n + 2T(n-1)

which (according to WolframAlpha) expands to:

O(2^n)

However, I think that the second idea is wrong, since n is reduced between the recursive calls.

I also did some benchmarking with large sets. Here are the results:

N       t(N) in ms
10000   480
20000   1884
30000   4500
40000   8870
50000   15000
60000   27000
70000   44000
80000   81285
90000   128000
100000  204380
150000  754390
  • What language is this? Certainly not C++. Commented Jan 22, 2016 at 20:53
  • A lot of the time, when you reduce the cost by a certain amount at each step, you end up with some form of log(n). Commented Jan 22, 2016 at 20:57
  • It's hard to estimate without knowing how WhatCanISolve works, i.e. when it can return null (w.r.t. list size). Commented Jan 22, 2016 at 21:09
  • @SergeyA this is pseudo-c++, I simplified it a lot, to make it more concise. Commented Jan 22, 2016 at 21:15
  • @MikhailMaltsev this function iterates over Elements (std::list<Element>). In each iteration it compares 6 doubles. Commented Jan 22, 2016 at 21:17

2 Answers


Your algorithm is still O(2^n), even though it reduces the problem size by one item each time. Your difference equation

T(n) = n + 2T(n-1)

does not account for the removal of an item at each step. But it only removes one item, so at most the equation would become T(n) = n + 2T(n-1) - 1.

Saving the algebra by using WolframAlpha, the solution of T(n) = n + 2T(n-1) is T(n) = (c1 + 4)·2^(n-1) - n - 2, and the extra -1 only shifts the constant term; either way the result is still O(2^n). Removing one item per call is not a considerable amount given the other factors (especially the recursion).

A similar example that comes to mind is an n*n 2D matrix. Suppose you're only using it for a triangular matrix. Even though you remove one row to process for each column, iterating through every element still has complexity O(n^2), which is the same as if all elements were used (i.e. a square matrix).

For further evidence, I present a plot of your own collected running time data:

Plot of your running time data


10 Comments

"Your difference equation does not account for the removal of an item at each step." Huh? It does account for it by calling the recursion with n-1 and the work done is still theta(n) (which is also why you still get the same result with or without subtracting 1 or any other constant). The graph on the other hand very much looks like n^2 which is what you'd expect since the second recursion is always O(1).
Why is the second recursion always O(1)? We really don't know what it would do because the pseudocode doesn't specify.
The list isn't passed as a parameter, but a field of the class. Clearly a bug one way or the other, but looking at the graph it is there in the real code as well.
Now that I think about it, my addition of -1 doesn't make sense to me. Regardless, T(n) = n + 2T(n-1) is still O(2^n). I'll change the answer.
Yes, but the way the code is written means the second recursion never does work, so it's T(n) = n + T(n-1). Look at the graph - that's not how an exponential looks; that's classic quadratic behaviour!

Presumably the time is quadratic. If WhatCanISolve returns nullptr precisely when the list is empty, then all of the calls

Recursive(some other parameters 2);

will finish in O(1), because they run on an empty list. This means the correct recurrence is actually

T(n) = C*n + T(n-1)

This gives T(n) = O(n^2), which corresponds well to what we see on the plot.

4 Comments

We don't really know what some other parameters are. More clarification of what the code is doing would help. We don't know if either of the Recursive() calls would necessarily always return the base case (i.e. in constant time).
We call the recursion twice; this changes the formula.
@peku33 after the first recursion returns, we know that WhatCanISolve will return null on the second recursion as well, which makes it O(1). Certainly a bug in the code (recursion over a shared list is... weird to say the least; I can't think of any scenario where that makes sense in this form) but as it stands this is the right answer.
@Voo Why would it return null? WhatCanISolve may match a lot of elements for a single argument set. To clarify: this is a cube-packing problem. After placing one cube, the space left is divided into two parts, and each part is recursively filled.
