
I'm trying to calculate the time complexity of a recursive algorithm and I think I've almost got it. Here's the pseudocode I've been looking at:

long pow( long x, int n ) {
    if (n == 0)
        return 1;
    if (n == 1)
        return x;
    if (isEven(n))
        return pow(x, n / 2) * pow(x, n / 2);
    else
        return x * pow(x * x, n / 2);
}

isEven merely determines whether the integer passed to it is even, and for the purposes of this example, operates in constant time.

So, if n = 0 or n = 1, it's a constant-time operation: f(n) = C0. However, when n > 1, should it be f(n) = f(n-1) + f(n-1) + C1 when n is even and f(n) = f(n-1) + 1 when n is odd? Or should it be f(n) = f(n/2) + f(n/2) + C1 when n is even and f(n) = f(n/2) + 1 when n is odd?

I've been looking at a lot of examples. Here is one I've found very helpful. My problem stems from there being two recursive calls when n is even. I'm not fully sure what to do here. If anyone could point me in the right direction I'd really appreciate it.

  • This newbie question is well asked. Commented Feb 1, 2011 at 13:00
  • Is it intentional that you are making two recursive calls to pow(x, n / 2) with the same arguments? You could just do long r = pow(x, n / 2); return r * r; and then not only is it more efficient, but the analysis is simpler. Commented Jan 26, 2020 at 18:53

2 Answers


Take a look at the Master Theorem. You can treat this as a "divide and conquer" algorithm.

The end result is that with the two recursive calls in place, you end up with a worst-case O(n) runtime. E.g. pow(x, 4) calls pow(x, 2) twice, and pow(x, 1) four times; in general, when n is a power of two you end up with 2n - 1 calls in total.

Also note that by just calling pow(x, n/2) once and squaring the result in that branch, the algorithm becomes O(log n).
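Not part of the original answer, but you can see the difference directly by counting recursive calls in both versions. This is a small sketch; the method names powTwoCalls and powOneCall are made up here, and x = 1 is used so overflow doesn't muddy the picture (only the call counter matters):

```java
public class PowCount {
    static long calls;

    // Version from the question: two recursive calls in the even branch.
    static long powTwoCalls(long x, int n) {
        calls++;
        if (n == 0) return 1;
        if (n == 1) return x;
        if (n % 2 == 0)
            return powTwoCalls(x, n / 2) * powTwoCalls(x, n / 2);
        else
            return x * powTwoCalls(x * x, n / 2);
    }

    // Variant suggested above: recurse once, square the result.
    static long powOneCall(long x, int n) {
        calls++;
        if (n == 0) return 1;
        if (n == 1) return x;
        if (n % 2 == 0) {
            long r = powOneCall(x, n / 2);
            return r * r;
        } else {
            return x * powOneCall(x * x, n / 2);
        }
    }

    public static void main(String[] args) {
        calls = 0;
        powTwoCalls(1, 1024);
        System.out.println("two-call version: " + calls + " calls"); // 2n - 1 = 2047

        calls = 0;
        powOneCall(1, 1024);
        System.out.println("one-call version: " + calls + " calls"); // log2(n) + 1 = 11
    }
}
```

For n = 1024 the two-call version makes 2047 calls (linear in n) while the one-call version makes only 11 (logarithmic in n), matching the O(n) versus O(log n) analysis.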


2 Comments

Thanks. I've been looking at the Master Theorem a bit and I'm trying to wrap my head around it... guess I'll have to give it a bit more thought!
Don't feel too bad -- it's one of the hardest bits you need in the runtime analysis of most common-case code you see.

Let's define f(m) as the number of operations needed for a problem of size m. The 'problem' here is of course exponentiation (pow), i.e. x^n or pow(x, n). The function long pow( long x, int n ) does no more or less work if we change x, so the size of the problem does not depend on x. It does, however, depend on n. Let's say 2^4 has size 4 and 3^120 has size 120. (This makes sense once you see that 2^4 = 2*2*2*2 and 3^120 = 3*3*3*...*3.) The problem size is thus equal to n, the second parameter. We could, if you want, say the problem size is 2*log(n), but that would be silly.

Now f(m) is the number of operations needed to calculate pow(x, m) for any x, because pow(x, m) is exactly the problem of size m. So, if we call pow(x, y), then the number of operations is, by definition, f(y). For example, pow(3, 3*m/2) takes f(3*m/2) operations.

Finally, let's count the operations:

long pow( long x, int n ) {
    if (n == 0)        // 1
        return 1;
    if (n == 1)        // 1
        return x;
    if (isEven(n))     // 1
        return pow(x, n / 2) *  // that is f(n/2), with y = n/2
               pow(x, n / 2);   // also f(n/2)
    else
        return x * pow(x * x, n / 2); // 1 + 1 + f(n/2)
}

Taking it together: f(n) = 2*f(n/2) + c1 (n even) and f(n) = f(n/2) + c2 (n odd). If you are only interested in the worst case scenario, then note that the odd case is less work. Thus f(n) is bounded above by the even case: f(n) <= 2*f(n/2)+c.
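Not in the original answer, but you can sanity-check the recurrence by evaluating it directly. This sketch (class and method names are made up here) computes f(n) from the recurrence above with all constants assumed to be 1; for n a power of two the even branch is taken every time, and the recurrence solves to 2n - 1:

```java
public class RecurrenceCheck {
    // f(n) per the derived recurrence, taking c0 = c1 = c2 = 1:
    // f(0) = f(1) = 1, f(n) = 2*f(n/2) + 1 (n even), f(n) = f(n/2) + 1 (n odd).
    static long f(int n) {
        if (n <= 1) return 1;
        if (n % 2 == 0) return 2 * f(n / 2) + 1;
        return f(n / 2) + 1;
    }

    public static void main(String[] args) {
        // For powers of two, f(n) should equal 2n - 1, i.e. O(n).
        for (int n = 2; n <= 1024; n *= 2) {
            System.out.println(n + ": f(n) = " + f(n) + ", 2n-1 = " + (2 * n - 1));
        }
    }
}
```

Unrolling by hand gives the same picture: f(n) = 2*f(n/2) + c = 4*f(n/4) + 3c = ... = n*f(1) + (n-1)*c, which is linear in n.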

